Top-Hats and Dunce Caps...Honestly, Of Whom am I Thinkin' ?
Researchers decipher the enigma of how faces are encoded in the brain
June 1, 2017
[Image: 5-researchersd.png]
This figure shows eight different real faces that were presented to a monkey, together with reconstructions made by analyzing electrical activity from 205 neurons recorded while the monkey was viewing the faces. Credit: Courtesy of Doris Tsao

When you look at photos of faces, your brain is able to instantly identify the ones that you know—whether they're your mother or your favorite celebrity—and distinguish among those that you've never seen before. In recent years, neuroscientists have begun to peek inside the brain's black box to understand how the brain is able to recognize and perceive faces. Now, in a study published June 1 in the journal Cell, researchers report that they have cracked the code for facial identity in the primate brain.

[Image: 11988602235_0ea153ab19_o.jpg] Easy as 1-2-~3333 @ 33.3 degrees

"We've discovered that this code is extremely simple," says senior author Doris Tsao, a professor of biology and biological engineering at the California Institute of Technology. "A practical consequence of our findings is that we can now reconstruct a face that a monkey is seeing by monitoring the electrical activity of only 205 neurons in the monkey's brain. One can imagine applications in forensics where one could reconstruct the face of a criminal by analyzing a witness's brain activity."

Earlier research by Tsao and others used fMRI in humans and other primates to identify the areas of the brain that are responsible for identifying faces. They called these six areas, which are located in the inferior temporal (IT) cortex, face patches. Further investigations showed that these areas are packed with specific nerve cells that fire action potentials much more strongly when seeing faces than when seeing other objects. They called these neurons face cells.

Previously, some experts in the field believed that each face cell in the brain represents a specific face, but this presented a paradox, says Tsao, who is also a Howard Hughes Medical Institute investigator. "You could potentially recognize 6 billion people, but you don't have 6 billion face cells in the IT cortex. There had to be some other solution."

In the current study, Tsao and postdoctoral fellow Steven Le Chang, the paper's first author, found that rather than representing a specific identity, each face cell represents a specific axis within a multidimensional space, which they call the face space. 
[Image: 12191618366_1e5624e2b0_o.jpg]
In the same way that red, blue, and green light combine in different ways to create every possible color on the spectrum, these axes can combine in different ways to create every possible face.
[Image: 78-researchersd.jpg]
The image, inspired by the story of Alan Turing cracking the Enigma Code during World War 2, depicts a brain gazing at an infinite space of faces. 
[Image: 1133308568_bdb2a8a583_o.jpg]

The 50 dials on the "face decoding machine" illustrate the concept that face cells distinguish faces by projecting them onto axes spanning a 50-dimensional face space. Each axis is encoded by a single face cell, whose firing is shown on the monitor. Credit: Doris Tsao
[Image: 12191600556_827faf2ea3_o.jpg]
The researchers started by creating a 50-dimensional space that could represent all faces. They assigned 25 dimensions to the shape—such as the distance between eyes or the width of the hairline—and 25 dimensions to nonshape-related appearance features, such as skin tone and texture.

Using macaque monkeys as a model system, the researchers inserted electrodes into the brains that could record individual signals from single face cells within the face patches. They found that each face cell fired in proportion to the projection of a face onto a single axis in the 50-dimensional face space. Knowing these axes, the researchers then developed an algorithm that could decode additional faces from neural responses.
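The linear axis code described above can be sketched in a few lines. This is a minimal toy model, not the authors' actual pipeline: each cell's firing rate is taken to be the projection of a 50-dimensional face vector onto that cell's preferred axis, and both the axes and an unseen face are recovered by least squares. The dimension and cell counts follow the article; the random faces and axes are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: faces are points in a 50-D "face space"
# (25 shape + 25 appearance dimensions), and each of 205 cells
# fires in proportion to the face's projection onto one axis.
n_dims, n_cells, n_faces = 50, 205, 2000
axes = rng.normal(size=(n_cells, n_dims))   # one preferred axis per cell
faces = rng.normal(size=(n_faces, n_dims))  # training faces
rates = faces @ axes.T                      # linear firing-rate model

# Decoding: estimate the axes from (face, response) pairs by least
# squares, then reconstruct an unseen face from its responses alone.
axes_hat, *_ = np.linalg.lstsq(faces, rates, rcond=None)  # (n_dims, n_cells)

new_face = rng.normal(size=n_dims)
new_rates = axes @ new_face
reconstruction, *_ = np.linalg.lstsq(axes_hat.T, new_rates, rcond=None)

print(np.allclose(reconstruction, new_face, atol=1e-6))  # True
```

In this noiseless sketch the reconstruction is exact; with real spike counts the same least-squares machinery yields the approximate reconstructions shown in the figure.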

[Image: 12191614386_cbce977a34_o.jpg]

In other words, they could now show the monkey an arbitrary new face, and recreate the face that the monkey was seeing from electrical activity of face cells in the animal's brain. When placed side by side, the photos that the monkeys were shown and the faces that were recreated using the algorithm were nearly identical. Face cells from only two of the face patches—106 cells in one patch and 99 cells in another—were enough to reconstruct the faces. "People always say a picture is worth a thousand words," Tsao says. "But I like to say that a picture of a face is worth about 200 neurons."

The clinching piece of evidence that cells are coding axes and not specific faces was the finding that for every cell, Chang and Tsao could engineer a large set of faces that looked extremely different, but which all caused the cell to fire in exactly the same way. "This was completely shocking to us—we had always thought face cells were more complex. But it turns out each face cell is just measuring distance along a single axis of face space, and is blind to other features," Tsao says.
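That clinching test has a simple toy analogue under the projection model: faces that differ arbitrarily in directions orthogonal to a cell's preferred axis all drive the cell identically. The 50-dimensional vectors here are illustrative, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

axis = rng.normal(size=50)
axis /= np.linalg.norm(axis)          # one cell's preferred axis

base = rng.normal(size=50)            # an arbitrary face
# Build very different-looking faces by adding only components
# orthogonal to the cell's axis.
looks, responses = [], []
for _ in range(5):
    delta = rng.normal(size=50) * 3.0
    delta -= (delta @ axis) * axis    # remove the along-axis part
    face = base + delta
    looks.append(face)
    responses.append(face @ axis)     # the cell's modeled firing rate

print(np.allclose(responses, base @ axis))  # True: identical responses
```

The five faces are far apart in face space, yet the cell cannot tell them apart, which is exactly the "blind to other features" behavior Tsao describes.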

[Image: 12191606586_669d5268bd_o.jpg]

"The way the brain processes this kind of information doesn't have to be a black box," Chang explains. "Although there are many steps of computations between the image we see and the responses of face cells, the code of these face cells turned out to be quite simple once we found the proper axes. This work suggests that other objects could be encoded with similarly simple coordinate systems."
[Image: 2522018934_bddcbdc358_z.jpg?zz=1]

In addition to its implications for studying the neural code, the research also has artificial intelligence applications. "This could inspire new machine learning algorithms for recognizing faces," Tsao adds. "In addition, our approach could be used to figure out how units in deep networks encode other things, such as objects and sentences."

Explore further: How the electrical activity of the brain gives rise to the rich world of perception
More information: Chang and Tsao: "The code for facial identity in the primate brain." Cell (2017). DOI: 10.1016/j.cell.2017.05.011

Facial expressions can cause us problems in telling unfamiliar faces apart
June 2, 2017

[Image: facialexpres.jpg]
These images are of the actor Sterling Hayden, taken from the public domain movie "Suddenly" (Bassler & Allen, 1954). To clarify, these are not images actually used in the study, but an illustrative example of the sort of images used.
Using hundreds of faces of actors from movies, psychologists from the University of Bristol have shown how facial expressions can get in the way of our ability to tell unfamiliar faces apart.

People's faces change from moment to moment. Even over the course of a conversation with someone, changes are seen in their expressions and in the angle of their head.
[Image: 16848645400_f0c4bcefbf_b.jpg]
Over time there are still further changes in appearance, such as if someone grows a beard, changes their hairstyle or loses weight.

When we know someone we can still recognise them easily, despite these sorts of changes.

[Image: 27967088100_8299f445d8_z.jpg]

The story is different for unfamiliar faces; for example, studies have shown that we are generally very poor at matching together two pictures of the same face.
[Image: Hominid-Lion.JPG]
How our visual system manages to overcome the challenge of facial changes, enabling us to recognise people, is still largely unknown.

***  It IS NOW Known Gnosis as the process is axial. ***
Quote:Researchers decipher the enigma of how faces are encoded in the brain

June 1, 2017

This new study, published today in the journal i-Perception, shows how facial expressions can cause problems and difficulties in terms of telling unfamiliar faces apart.

Using an identification task, participants learned the identities of two actors from naturalistic (so-called 'ambient') face images taken from movies.

Training was either with neutral images or their expressive counterparts, perceived expressiveness having been determined experimentally.

Expressive training responses were slower and more erroneous than were neutral training responses.

When tested with novel images of the actors that varied in expressiveness, neutrally trained participants gave slower and less accurate responses to images of high compared to low expressiveness.

These findings clearly demonstrate that facial expressions impede the processing and learning of facial identity.

Because this expression-dependence is consistent with a two-part model of face processing, in which changeable facial aspects and identity are coded in a common framework, it suggests that expressions are a part of facial identity representation.
[Image: 1139741698_8063f61b92_z.jpg?zz=1]
Lead researcher Annabelle Redfern, from the School of Experimental Psychology, said: "Our approach was to use several hundred pictures of faces taken from movies, which meant that the images in these experiments resemble the sorts of faces that we see every day.

"We measured people's reaction times and their accuracy at telling unfamiliar faces apart, and how this differed when the faces were very expressive compared to when they had a neutral expression.

"The differences we found point to the idea that facial expressions and facial identity are not treated separately by our brains; and instead, we may mentally store someone's expressions along with their faces."

Explore further: Subliminal effect of facial color on fearful faces

More information: Annabelle S. Redfern et al. Expression Dependence in the Perception of Facial Identity, i-Perception (2017). DOI: 10.1177/2041669517710663 

Provided by: University of Bristol


Also Recall:

Researchers show that an iron bar is capable of decision-making
August 24, 2015 by Lisa Zyga

[Image: physicalobje.jpg]
In tug-of-war dynamics, an iron bar can decide which slot machine has the higher winning probability by moving to the left for each rewarded play and to the right for each non-rewarded play of Machine A. The bar’s movements are caused by physical fluctuations. Credit: Kim, et al.
(Phys.org)—Decision-making—the ability to choose one path out of several options—is generally considered a cognitive ability possessed by biological systems, but not by physical objects. Now in a new study, researchers have shown that any rigid physical (i.e., non-living) object, such as an iron bar, is capable of decision-making by gaining information from its surroundings accompanied by physical fluctuations.

The researchers, Song-Ju Kim, Masashi Aono, and Etsushi Nameda, from institutions in Japan, have published their paper on decision-making by physical objects in a recent issue of the New Journal of Physics.

"The most important implication that we wish to claim is that the proposed scheme will provide a new perspective for understanding the information-processing principles of certain lower forms of life," Kim, from the International Center for Materials Nanoarchitectonics at the National Institute for Materials Science in Tsukuba, Ibaraki, Japan, told Phys.org. "These lower lifeforms exploit their underlying physics without needing any sophisticated neural systems."

As the researchers explain in their study, the only requirement for a physical object to exhibit an efficient decision-making ability is that the object must be "volume-conserving." Any rigid object, such as an iron bar, meets this requirement and therefore is subject to a volume conservation law. This means that, when exposed to fluctuations, the object may move slightly to the right or left, but its total volume is always conserved. Because this displacement resembles a tug-of-war game with a rigid object, the researchers call the method "tug-of-war (TOW) dynamics."

Here's an example of how the idea works: Say there are two slot machines A and B with different winning probabilities, and the goal is to decide which machine offers the better winning probability, and to do so as quickly as possible based on past experiences.

The researchers explain that an ordinary iron bar can make this decision. Every time the outcome of a play of machine A ends in a reward, the bar moves to the left a specific distance, and every time the outcome ends in no reward, the bar moves to the right a specific distance. The same goes for a play of machine B, but the directions of the bar movements are reversed. After enough trials, the bar's total displacement reveals which slot machine offers the better winning probability.
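The slot-machine procedure described in the two paragraphs above can be simulated directly. This is a hedged sketch, not the authors' exact model: the bar's displacement accumulates left/right steps according to the reward rules quoted, and a small Gaussian term stands in for the physical fluctuations that drive exploration.

```python
import random

def tow_bandit(p_a, p_b, n_trials=10000, step=1.0, seed=0):
    """Tug-of-war (TOW) sketch: the 'bar' shifts left on rewarded plays
    of machine A and unrewarded plays of B, right otherwise. The sign
    of its final displacement picks the better machine."""
    rng = random.Random(seed)
    x = 0.0  # bar displacement; negative = left = evidence for A
    for _ in range(n_trials):
        # Play the machine the fluctuating bar currently favours.
        play_a = x + rng.gauss(0, 1) < 0
        if play_a:
            x += -step if rng.random() < p_a else +step
        else:
            x += +step if rng.random() < p_b else -step
    return "A" if x < 0 else "B"

print(tow_bandit(0.7, 0.3))  # A
```

Under both reward rules the expected displacement drifts toward the better machine, so after enough trials the bar's position alone answers the question.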

The researchers explain that the bar's movements occur due to physical fluctuations.

"The behavior of the physical object caused by operations in the TOW can be interpreted as a fluctuation," Kim said. "Other than this fluctuation, we added another fluctuation to our model. The important point is that fluctuations, which always exist in real physical systems, can be used to solve decision-making problems."

The researchers also showed that the TOW method implemented by physical objects can solve problems faster than other decision-making algorithms that solve similar problems. The scientists attribute the superior performance to the fact that the new method can update the probabilities on both slot machines even though it plays just one of them. This feature stems from the fact that the system knows the sum of the two reward probabilities in advance, unlike the other decision-making algorithms.

The researchers have already experimentally realized simple versions of a physical object that can make decisions using the TOW method in related work.

"The TOW is suited for physical implementations," Kim said. "In fact, we have already implemented the TOW in quantum dots, single photons, and atomic switches."

By showing that decision-making is not limited to biological systems, the new method has potential applications in artificial intelligence.

"The proposed method will introduce a new physics-based analog computing paradigm, which will include such things as 'intelligent nanodevices' and 'intelligent information networks' based on self-detection and self-judgment," Kim said. "One example is a device that can make a directional change so as to maximize its light-absorption." This ability is similar to how a young sunflower turns in the direction of the sun.

Another possibility that the researchers recently explored is an analogue computer that harnesses natural fluctuations in order to maximize the total rewards "without paying the conventionally required computational cost."

Explore further: Quantum dots make efficient decisions

More information: Song-Ju Kim, et al. "Efficient decision-making by volume-conserving physical object." New Journal of Physics. DOI: 10.1088/1367-2630/17/8/083023 

Journal reference: New Journal of Physics


Foreword: regurgitated surge updated by an ouroboros... walking backwards and looking forward.

Researchers investigate decision-making by physical phenomena

June 2, 2017 by Lisa Zyga

[Image: laserlearning.jpg]
Experimental configuration of laser chaos-based reinforcement learning. Credit: Naruse et al.
(Phys.org)—Decision-making is typically thought of as something done by intelligent living things and, in modern times, computers. But over the past several years, researchers have demonstrated that physical objects such as a metal bar, liquids, and lasers can also "make decisions" by responding to feedback from their environments. And they have shown that, in some cases, physical objects can potentially make decisions faster and more accurately than what both humans and computers are capable of.

In a new study, a team of researchers from Japan has demonstrated that the ultrafast, chaotic oscillatory dynamics in lasers makes these devices capable of decision making and reinforcement learning, which is one of the major components of machine learning. To the best of the researchers' knowledge, this is the first demonstration of ultrafast photonic decision making or reinforcement learning, and it opens the doors to future research on "photonic intelligence."

"In our demonstration, we utilize the computational power inherent in physical phenomena," coauthor Makoto Naruse at the National Institute of Information and Communications Technology in Tokyo told Phys.org. "The computational power of physical phenomena is based on 'infinite degrees of freedom,' and its resulting 'nonlocality of interactions' and 'fluctuations.' It contains completely new computational principles. Such systems provide huge potential for our future intelligence-oriented society. We call such systems 'natural Intelligence' in contrast to artificial intelligence."

In experiments, the researchers demonstrated that the optimal rate at which laser chaos can make decisions is 1 decision per 50 picoseconds (or about 20 decisions per nanosecond)—a speed that is unachievable by other mechanisms. With this fast speed, decision making based on laser chaos has potential applications in areas such as high-frequency trading, data center infrastructure management, and other high-end uses.

The researchers demonstrated the laser's ability by having it solve the multi-armed bandit problem, which is a fundamental task in reinforcement learning. In this problem, the decision-maker plays various slot machines with different winning probabilities, and must find the slot machine with the highest winning probability in order to maximize its total reward. In this game, there is a tradeoff between spending time exploring different slot machines and making a quick decision: exploring may waste time, but if a decision is made too quickly, the best machine may be overlooked.

A key to the laser's ability is combining laser chaos with a decision-making strategy known as "tug of war," so-called because the decision-maker is constantly being "pulled" toward one slot machine or another, depending on the feedback it receives from its previous play. In order to realize this strategy in a laser, the researchers combined the laser with a threshold adjustor whose value shifts so as to play the slot machine with the higher reward probability. As the researchers explain, the laser produces a different output value depending on the threshold value.

"Let us call one of the slot machines 'machine 0' and the other 'machine 1'," said coauthor Songju Kim, at the National Institute for Materials Science in Tsukuba, Japan. "The output of the laser-based decision maker is '0' or '1.' If the signal level of the chaotic oscillatory dynamics is higher than the threshold value (which is dynamically configured), then the output is '0,' and this directly means that the decision is to choose 'machine 0.' If the signal level of the chaotic oscillatory dynamics is lower than the threshold value (which is dynamically configured), then the output is '1,' and this directly means that the decision is to choose 'machine 1.'"
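The threshold scheme Kim describes can be mimicked in simulation, with a logistic map standing in as a cheap surrogate for sampled laser-chaos intensity. The threshold update rule below is an assumed tug-of-war-style rule chosen for illustration, not the paper's exact implementation.

```python
import random

def chaos_bandit(p0, p1, n_plays=5000, delta=0.02, seed=0):
    """Sketch of chaos-driven decision making: a chaotic signal is
    compared against a dynamically configured threshold; signal above
    threshold means 'choose machine 0', below means 'machine 1'.
    Rewards pull the threshold so the winning choice recurs."""
    rng = random.Random(seed)
    s = 0.631    # chaotic signal state in (0, 1)
    thr = 0.5    # dynamically configured threshold
    plays = [0, 0]
    for _ in range(n_plays):
        s = 3.99 * s * (1.0 - s)      # logistic-map chaos sample
        choice = 0 if s > thr else 1  # above threshold -> machine 0
        reward = rng.random() < (p0 if choice == 0 else p1)
        plays[choice] += 1
        # Assumed tug-of-war update: a rewarded choice shifts the
        # threshold so that choice becomes more likely next time.
        if choice == 0:
            thr += -delta if reward else +delta
        else:
            thr += +delta if reward else -delta
        thr = min(max(thr, 0.0), 1.0)
    return plays

plays = chaos_bandit(0.8, 0.2)
print(plays[0] > plays[1])  # True: the better machine dominates
```

In the real system the "sample" is laser intensity digitized at up to 100 GSample/s, which is what makes the gigahertz-rate decisions quoted above possible.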

The researchers expect that this system can be scaled up, extended to higher-grade machine learning problems, and lead to new applications of laser chaos in the field of artificial intelligence.

Explore further: Researchers show that an iron bar is capable of decision-making

More information: Makoto Naruse, Yuta Terashima, Atsushi Uchida, and Song-Ju Kim. "Ultrafast photonic reinforcement learning based on laser chaos." To be published. arXiv:1704.04379 [physics.optics]

Reinforcement learning involves decision making in dynamic and uncertain environments, and constitutes one important element of artificial intelligence (AI). In this paper, we experimentally demonstrate that the ultrafast chaotic oscillatory dynamics of lasers efficiently solve the multi-armed bandit problem (MAB), which requires decision making concerning a class of difficult trade-offs called the exploration-exploitation dilemma. To solve the MAB, a certain degree of randomness is required for exploration purposes. However, pseudo-random numbers generated using conventional electronic circuitry encounter severe limitations in terms of their data rate and the quality of randomness due to their algorithmic foundations. We generate laser chaos signals using a semiconductor laser sampled at a maximum rate of 100 GSample/s, and combine it with a simple decision-making principle called tug-of-war with a variable threshold, to ensure ultrafast, adaptive and accurate decision making at a maximum adaptation speed of 1 GHz. We found that decision-making performance was maximized with an optimal sampling interval, and we highlight the exact coincidence between the negative autocorrelation inherent in laser chaos and decision-making performance. This study paves the way for a new realm of ultrafast photonics in the age of AI, where the ultrahigh bandwidth of photons can provide new value.

Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Foreword: Walking backward looking forward.
(07-24-2016, 04:24 PM) Vianova Wrote: ...
It's time to retrofit time in a timely fashion.

That is called ... time to relax.
In the old days in the 70's that was called the relax-a-thon.
But that was the 70's. 

I have a serious hangover.
Not from drinking ... but from the descent into the bottomless abyss of emergency home repair.

Here in Bellingham we have a lot of old houses,
that are called hippie holes for college students to study, party and pass out in.
I have two side by side.
They have different colors and sizes and problems,
but they are both hippie holes, and for all practical purposes,
not much different than photons or electrons that have superposition.
If you were to look in from the moon,
there would be two of me working simultaneously on both hippie holes,
while the whole affair is plummeting down into the bottomless abyss of home repair.

So I am outside working on the water damage to the foundation bottom plates and siding,
and these two crows loft themselves up above on a perch and stare down at me with disapproval,
as they caw out a litany of cat call crow curses in a continuous crow-ho cacophony.

The way to deal with crows is this.

Quote:The human eye can detect a single photon.
Quote:Researchers shed light on how our eyes process visual cues
June 7, 2017

[Image: 58e552a2bf75d.jpg]

The mystery of how human eyes compute the direction of moving light has been made clearer by scientists at The University of Queensland.


The human eye can detect and reflect communicative information with receptive birds.
In the case of the crow,
often they mass together in great numbers, 
but they also are a unified mind when that happens.
You single out one crow,
make eye contact as much as possible,
and give a direct order via projecting the voice into the mind of the crow with the eye contact.
That order has to have an experienced fortitude of mind linked command,
but not delivered as a threat.
You are the wizard,
and they are the Crow Photons 


Quote:Genes influence ability to read a person's mind from their eyes
June 7, 2017

[Image: genesinfluen.jpg]

Our DNA influences our ability to read a person's thoughts and emotions from looking at their eyes, suggests a new study published in the journal Molecular Psychiatry.


So I look up, 
and make sure the crow recognizes that I intend to catch the center of his eye,
right into the crow brain,
and without over aggression, simply order this crow ... "that's enough!"
Now both crows are suddenly silent.
Crows are receptive to communication, 
as long as they know that you are not a threat to their safety,
and they are curious and surprised as to why you would be ONE MIND with them.
Sometimes you get a resistant crow,
who after a short period of silence .... protests ... 
then you stand up,
and you give that one a reason to move on with the ... silent look.
{they might see the hammer in my hand flexing ...
but that is just two crows on a hippie hole roof gutter taking two crow shits}

We have an arboretum here by the university called Sehome Hill.
This large and forested hill overlooks the bay and is a favorite spot full of running trails.
It is also highly populated by traveling crow hordes.
I have never seen anything like this.
I am done running and walking down a steep trail,
and no less than ... ? ... three to four hundred crows are lining all the trees high above me,
and retching out a deafening score of crow curses and complaints,
such that I am wondering ... WTF? 

These crows obviously don't know who I am,
and some students or a vagrant were probably in the arboretum,
and did something to really piss them off.
So I had ... had enough,

Quote:For people, though, the moral of the story is simple: Be nice to ravens.[Image: cc_IMG_2410_16x9.jpg?itok=QIjkJd9N]

and in a timeless non-instant of spontaneous reaction,
I projected my Will with force and meaning,
into one narrow funneled direction at one small contingency of crows seated on one high branch,
and authoritatively ordered them:
" That's Enough!"

An energy cone of human will vectored into the center of that group,
and I saw one crow suddenly flinch back,
and all 300 crows absolutely on the dime of no time,
ALL stopped crowing and cawing completely surprised,
and they all suddenly sat there silent as the deep cold dark space 
that the New Horizons spacecraft is flying through in the Kuiper Belt.
The crows became the live dark matter of the universe unveiled by the human wizard.
I saw them all un-puff  simultaneously and relax.
Not one more caw or crow complaint issued forth at me for the rest of my walk out.
I had successfully mind linked into the crow continuum via a single crow.
They respected my effort to communicate and they then understood that they were being bitches,
at the wrong human,
and that this human actually tried to ... negotiate   ... via mutual consideration.
But, they also knew who the boss was, 
and that this boss was collectively on their side of universal understanding,
and if anything this boss ... was a ... friend.

Other birds that are highly receptive are Osprey, hawks, and California buzzards especially.
Osprey are amazing and want to understand humans.
I can whistle a hawk and eagle in from a mile away by mimicking their screeches,
with a similar whistling sound.
Owls are much less cooperative and like to come from behind and lift your hat off your head.
{that has happened to me twice}
California buzzards are a collective consciousness.
I had psychically entangled a group of nine of them above me into flying about in swirling helixes,
by luring them above me on the viewpoint at Oyster Dome on Chuckanut mountain,
with my whistles.
It is not that difficult.
Try it.

Now those are birds.
Pit bulls and grizzly bears are far more difficult.
This makes me wonder about the old attributions of the Annunaki being the ... plumed serpents.
Plumage of course implies avian substance of some sort.
Communication with the Annunaki,
might be a case of telepathic and clairvoyant mind link,
even while talking.
also might not be ... just a passing thought Hmm2
Which then goes to communicative possibilities with higher or more evolved alien species,
across the universe.
Send your hawk whistle out to the stars my friends,
it will be far more effective than that stinkhole-sinkhole SETI,
which then goes to communication with the Angels,
or whatever you may want to call them.

Send your best and most soulful messages to the Angels,
they respond,
though it won't be in words, it may come in a dream.

But with the crows, there was immediate tangible results.
With California buzzards, 
the results were amazing and true to the Holy See of the Human.
There is no misinterpretation of results here,
and there is no illusion either.
These birds in question --- mind linked and mutually shared those moments of reality.

The problem with humans however,
are the historic misinterpretations with Gods or Angels,
in misguided, mishandled, predetermined or hallucinatory communications.
The human ego creates the illusion and the desired or slanted and greedy result. 
Historic religious aberrations permeate this unfortunate condition to this present day.

the crows sense this:
that the communication ... though with Will and force ... was relatively egoless,
without judgement or threat,
though I am the wizard or amusingly referred to as the "boss",
I did not select a parameter of over equality in the order of the crow,
I spontaneously became one with crow mind,
I became a crow, 
that understood crow, 
and my caw had a Will that they understood and ---> respected.
Mutual universal respect.

The Human in historic religious fanaticisms ... does not have mutual respect.
That Human is a religious dictator.
The world is full of little Beelzebub's that think they communicate with God and Angels.
These race horses with blinders tend to kill the crow and the osprey,
in order to facilitate their religious fantasies.

Lots of paradox here with cross-universal to planetary surface communications,
and it is still somewhat based in belief systems in the human mind,
though I think you know what I mean.
It certainly doesn't always work,
especially with bad humans, grizzly bear or pit bulls  Lol
and Angels or God   
don't order you to kill your first born son on Highway 61.

If you want to test these ideas ... try it on the birds mentioned.
this communication is somewhat related to a form of:

Quote:"disordered hyperuniformity"

The created fractal flow between human and bird consciousness synchronizes and harmonicizes.

Quote:Vic Showell is 5 and 6 sigma.
You would do well to interact with such calibre, being precise.

actually I strive for 6 and 7 sigma accuracy and accomplish that more often.
When it comes to all the above reflections of bird communication,
one has to experiment ... and precision becomes a matter of practice makes perfect.

There is always a ledge to latch onto, complete with a cave trail back to the top of the drop,
when plummeting into the bottomless abyss.

Just whistle for an Osprey or an Eagle or an Angel, 
and listen for an echo from above  ... and then 90 degrees from the angle of descent,
be quick to catch the foothold,
as an eagle will ever so gently catch you with his claws,
and rest your weary soul upon the ledge to the trail back home.

Ravens remember people who suckered them into an unfair deal
By Katie Langin, Jun. 5, 2017, 4:15 PM
No one likes a con artist. People avoid dealing with characters who have swindled them in the past, and—according to new research—birds avoid those people, too. Ravens, known more for their intelligence, but only slightly less for their love of cheese, were trained by researchers to trade a crust of bread for a morsel of cheese with human partners. When the birds then tried to broker a trade with “fair” and “unfair” partners—some completed the trade as expected, but others took the raven’s bread and kept (and ate) the cheese—the ravens avoided the tricksters in separate trials a month later. This suggests that ravens can not only differentiate between “fair” and “unfair” individuals, but they retain that ability for at least a month, the researchers write this month in Animal Behaviour. Ravens have a complex social life involving friendships and rivalries. Their ability to recognize and punish dishonest individuals, even after a single encounter, may help explain how cooperation evolved in this group of birds. For people, though, the moral of the story is simple: Be nice to ravens.

DOI: 10.1126/science.aan6931

Researchers shed light on how our eyes process visual cues
June 7, 2017
The mystery of how human eyes compute the direction of moving light has been made clearer by scientists at The University of Queensland.

Using advanced electrical recording techniques, researchers from UQ's Queensland Brain Institute (QBI) discovered how nerve cells in the eye's retina were integral to the process.
Professor Stephen Williams said that dendrites - the branching processes of a neuron that conduct electrical signals toward the cell body - played a critical role in decoding images.
"The retina is not a simple camera, but actively processes visual information in a neuronal network, to compute abstractions that are relayed to the higher brain," Professor Williams said.
"Previously, dendrites of neurons were thought to be passive input areas.
"Our research has found that dendrites also have powerful processing capabilities."
Co-author Dr Simon Kalita-de Croft said dendritic processing enabled the retina to convert and refine visual cues into electrical signals.
"We now know that movement of light - say, a flying bird, or a passing car - gets converted into an electrical signal by dendritic processing in the retina," Dr Kalita-de Croft said.
"The discovery bridges the gap between our understanding of the anatomy and physiology of neuronal circuits in the retina."
Professor Williams said the ability of dendrites in the retina to process visual information depended on the release of two neurotransmitters - chemical messengers - from a single class of cell.
"These signals are integrated by the output neurons of the retina," Professor Williams said.
"Determining how the neural circuits in the retina process information can help us understand computational principles operational throughout the brain.
"Excitingly, our discovery provides a new template for how neuronal computations may be implemented in brain circuits."
The study, Dendro-dendritic cholinergic excitation controls dendritic spike initiation in retinal ganglion cells, has been published in the journal Nature Communications.

More information: A. Brombas et al, Dendro-dendritic cholinergic excitation controls dendritic spike initiation in retinal ganglion cells, Nature Communications (2017). DOI: 10.1038/ncomms15683 
Journal reference: Nature Communications
Provided by: University of Queensland


Genes influence ability to read a person's mind from their eyes
June 7, 2017
Our DNA influences our ability to read a person's thoughts and emotions from looking at their eyes, suggests a new study published in the journal Molecular Psychiatry.

Twenty years ago, a team of scientists at the University of Cambridge developed a 
test of 'cognitive empathy' called the 'Reading the Mind in the Eyes' Test (or the Eyes Test, for short). This revealed that people can rapidly interpret what another person is thinking or feeling from looking at their eyes alone. It also showed that some of us are better at this than others, and that women on average score better on this test than men.
Now, the same team, working with the genetics company 23andMe along with scientists from France, Australia and the Netherlands, report results from a new study of performance on this test in 89,000 people across the world. The majority of these were 23andMe customers who consented to participate in research. The results confirmed that women on average do indeed score better on this test.
More importantly, the team confirmed that our genes influence performance on the Eyes Test, and went further to identify genetic variants on chromosome 3 in women that are associated with their ability to "read the mind in the eyes".
The study was led by Varun Warrier, a Cambridge PhD student, and Professors Simon Baron-Cohen, Director of the Autism Research Centre at the University of Cambridge, and Thomas Bourgeron, of the University Paris Diderot and the Institut Pasteur.
Interestingly, performance on the Eyes Test in males was not associated with genes in this particular region of chromosome 3. The team also found the same pattern of results in an independent cohort of almost 1,500 people who were part of the Brisbane Longitudinal Twin Study, suggesting the genetic association in females is a reliable finding.
The closest genes in this tiny stretch of chromosome 3 include LRRN1 (Leucine Rich Neuronal 1) which is highly active in a part of the human brain called the striatum, and which has been shown using brain scanning to play a role in cognitive empathy. Consistent with this, genetic variants that contribute to higher scores on the Eyes Test also increase the volume of the striatum in humans, a finding that needs to be investigated further.
Previous studies have found that people with autism and anorexia tend to score lower on the Eyes Test. The team found that genetic variants that contribute to higher scores on the Eyes Test also increase the risk for anorexia, but not autism. They speculate that this may be because autism involves both social and non-social traits, and this test only measures a social trait.
Varun Warrier says: "This is the largest ever study of this test of cognitive empathy in the world. This is also the first study to attempt to correlate performance on this test with variation in the human genome. This is an important step forward for the field of social neuroscience and adds one more piece to the puzzle of what may cause variation in cognitive empathy."
Professor Bourgeron adds: "This new study demonstrates that empathy is partly genetic, but we should not lose sight of other important social factors such as early upbringing and postnatal experience."
Professor Baron-Cohen says: "We are excited by this new discovery, and are now testing if the results replicate, and exploring precisely what these genetic variants do in the brain, to give rise to individual differences in cognitive empathy. This new study takes us one step closer in understanding such variation in the population."

More information: V Warrier et al. Genome-wide meta-analysis of cognitive empathy: heritability, and correlates with sex, neuropsychiatric conditions and cognition, Molecular Psychiatry (2017). DOI: 10.1038/MP.2017.122 


Walking backward looking forward.

Retinal cells 'go with the flow' to assess own motion through space
[Image: banksy_aristorat.jpg]
June 7, 2017

[Image: retinalcells.jpg]
As a mouse moves, or translates, forward optical flow radiates around him from a single point in front. When the mouse rotates, optical flow is horizontal all the way around, appearing forward in one eye but backward in the other. Credit: Berson et. al./Brown University
Think of the way that a long flat highway seems to widen out around you from a single point on the horizon, while in the rear-view mirror everything narrows back to a single point behind you. Or think of the way that when a spaceship in a movie accelerates to its "warp" or "hyper" speed, the illusion is conveyed by the stars turning into streaks that zip radially outward off the screen. That's how a new study in Nature says specialized cells in the retina sense their owner's motion through the world—by sensing that same radiating flow.

The finding is part of a broader discovery, made in the retinas of mice, that may help explain how mammals keep their vision stable and keep their balance as they move, said senior author David Berson, a professor of neuroscience at Brown University.
The brain needs a way to sense how it is moving in space. Two key systems at the brain's disposal are the motion-sensing vestibular system in the ears, and vision—specifically, how the image of the world is moving across the retina. The brain integrates information from these two systems, or uses one if the other isn't available (e.g., in darkness or when motion is seen but not felt, as in an airplane at constant cruising speed).
"Good cameras have gizmos that stabilize images," Berson said. "That's just what the retinal motion and vestibular systems do for our own eyes.
"Once things are smearing across your retina, your whole visual system just doesn't work as well," Berson continued. "You can't resolve detail, because the image of the whole world is moving on your retina. You need to stabilize images to make those judgments accurately and, of course, sometimes your life depends on it."
So how is this done? From observations of thousands of retinal neurons led by lead author Shai Sabbah, a postdoctoral scholar at Brown, and Berson, here's what the research team learned: Direction-selective ganglion cells (DSGCs) become activated when they sense their particular component of the radial optical flow through the mouse's vision. Arranged in ensembles on the retina, they collectively recognize the radiating optical flow resulting from four distinct motions: the mouse advancing, retreating, rising or falling. The reports from each ensemble, as well as from those in the other eye, provide enough visual information to represent any sort of motion through space, even when they are combinations of directions like forward and up.
The information from the cells is ultimately even enough to help the brain sense rotation in space, not just moving forward, backward, up or down—motion known as translation. Sensing rotation is crucial for image stabilization, Berson said, because that's how the eyes can stay locked on something even while the head is turning.

"One of the biggest mysteries that is revealed by our findings is that a motor system that will generate a rotation of the eye in service of image stabilization is ultimately driven by a class of retinal cells organized around the patterns of motion produced on the retina when the animal translates through space," Berson said. "We don't fully understand that yet, but that's what the data are telling us."
The radial retina
To even understand that the key organizing principle was radial flow, the researchers had to engage in the most thorough examination of DSGCs to date: They monitored 2,400 cells all over the retina via two methods. Most cells were engineered to glow whenever their level of calcium rose in response to visual input (e.g., upon "seeing" their preferred direction of optical flow). The researchers supplemented those observations by making direct electrical recordings of neural activity in places where the fluorescence didn't take hold. The key was to cover as much retinal real estate as possible.
"The problem here is that nobody has looked everywhere in the retina," Berson said. "They mostly always look only in the center."
But as the researchers moved stimuli around for the retina to behold, they saw that different types of DSGCs all over the retina worked in ensembles to preferentially detect radial optical flows consistent with moving up or down or forward or back.
But if all the cells are tuned to measure the animal's translation forward, backward, up or down, how could the system also understand rotation? The team turned to computer modeling based on their insights in the mouse, which gave them a prediction and a hypothesis: the brain could use a simple trick to notice the specific mismatch between the optical flow during rotation and the optical flow of translation, Berson said.
Think of it this way: When we swivel our head or our eyes to the right, the optical flow in the right eye appears to move forward. But the optical flow in the left eye would appear to move backward. When the brain integrates such input from the DSGCs in both eyes, it would not assume we were somehow moving simultaneously forward and backward, but instead realize the rotation to the right.
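The binocular-mismatch trick described above can be sketched as a toy classifier. This is only an illustration of the logic, not the actual model from the Nature paper: each eye's horizontal optical flow is reduced to a sign (+1 = flow appears to move forward, -1 = backward), and agreement between the eyes signals translation while disagreement signals rotation.

```python
def classify_motion(left_eye_flow, right_eye_flow):
    """Toy classifier for self-motion from binocular optical flow.

    Inputs are the sign of horizontal flow seen by each eye:
    +1 = flow appears to move forward, -1 = backward.
    Illustrative only -- not the actual retinal/brainstem circuit.
    """
    if left_eye_flow == right_eye_flow:
        # Both eyes agree: consistent with translation through space
        return "translating forward" if left_eye_flow == 1 else "translating backward"
    # Eyes disagree: consistent with rotation (e.g., swiveling the head)
    return "rotating right" if right_eye_flow == 1 else "rotating left"
```

For example, forward flow in the right eye paired with backward flow in the left eye is read as a rightward rotation rather than an impossible simultaneous forward-and-backward translation.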
Looking forward
Notably, mice are different than people in this context because their eyes are on the sides of their head, rather than the front. Also, Berson acknowledges, no one has yet confirmed that DSGCs are in the eyes of humans and other primates. But Berson strongly suspects they are.
"There is very good reason to believe they are in primates because the function of image stabilization works in us very much the same way that it works not only in mice, but also in frogs and turtles and birds and flies," he said. "This is a highly adaptive function that must have evolved early and has been retained. Despite all the ways animals move—swimming, flying, walking—image stabilization turns out to be very valuable."
More information: Shai Sabbah et al, A retinal code for motion along the gravitational and body axes, Nature (2017). DOI: 10.1038/nature22818 

Along the vines of the Vineyard.
With a forked tongue the snake singsss...
For humans, the appeal of looking at faces starts before birth
Experiment is the first to test visual perception in fetuses
12:00PM, JUNE 8, 2017

[Image: 060817_MT_fetus-face-recognition_main_FREE.jpg]
FACIAL FIXATION  For the first time, scientists have peered inside the womb to watch how fetuses react to the sight of different images.

Fascination with faces is nature, not nurture, suggests a new study of third-trimester fetuses.
Scientists have long known that babies like looking at faces more than other objects. But research published online June 8 in Current Biology offers evidence that this preference develops before birth. In the first-ever study of prenatal visual perception, fetuses were more likely to move their heads to track facelike configurations of light projected into the womb than nonfacelike shapes.
Past research has shown that newborns pay special attention to faces, even if a “face” is stripped down to its bare essentials — for instance, a triangle of three dots: two up top for eyes, one below for a mouth or nose. This preoccupation with faces is considered crucial to social development.
“The basic tendency to pick out a face as being different from other things in your environment, and then to actually look at it, is the first step to learning who the important people are in your world,” says Scott Johnson, a developmental psychologist at UCLA who was not involved in the study.
Using a 4-D ultrasound, the researchers watched how 34-week-old fetuses reacted to seeing facelike triangles compared with seeing triangles with one dot above and two below. They projected triangles of red light in both configurations through a mother’s abdomen into the fetus’s peripheral vision. Then, they slid the light across the mom’s belly, away from the fetus’s line of sight, to see if it would turn its head to continue looking at the image.
Follow that face
These 4-D ultrasound pictures show that projecting a triangle of red dots (top right of each image) into the peripheral vision of a third-trimester fetus and then sliding the dots out of sight caused the fetus to turn its head to track the “face.”

[Image: 060817_MT_fetus-face-recognition_inline.jpg]


The researchers showed 39 fetuses each type of triangle five times. Of the 195 times a facelike triangle was projected, fetuses turned their heads 40 times. In contrast, the nonfacelike triangles elicited only 14 head turns, says study coauthor Vincent Reid of Lancaster University in England. The finding suggests that fetuses share newborns’ predisposition for looking at facelike shapes, the researchers conclude.
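The head-turn counts reported above reduce to a simple rate comparison. This snippet just redoes the article's arithmetic (39 fetuses, five presentations of each stimulus type); it is not an analysis from the paper itself:

```python
# Counts reported in the article: 39 fetuses x 5 presentations per stimulus type
facelike_turns, nonface_turns = 40, 14
trials = 39 * 5  # 195 presentations of each triangle type

facelike_rate = facelike_turns / trials  # head-turn rate for face-like triangles
nonface_rate = nonface_turns / trials    # head-turn rate for inverted triangles

print(f"face-like: {facelike_rate:.1%}, non-face-like: {nonface_rate:.1%}")
```

So fetuses tracked the face-like configuration roughly three times as often (about 21% versus 7% of presentations), which is the contrast behind the authors' conclusion.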
Psychologist Melanie Spence of the University of Texas at Dallas, who was not involved in the work, says it’s a leap to draw too many similarities between the visual perceptions of fetuses and newborns. Although the triangle images mimic facelike ones used to test newborns, they aren’t the same, she notes. Scientists typically show babies faces in black and white, with head-shaped borders.
Still, Johnson says evidence that a fundamental aspect of facial perception may be hardwired into humans’ visual system is “very, very exciting.” The new study’s method of projecting images into the womb and watching the fetus’s reaction also “opens up all kinds of new doors to understand human development,” Johnson says. A similar light projection and 4-D ultrasound technique might be used to see whether fetuses can distinguish between different quantities in the same way that babies can.

V. Reid et al. The human fetus preferentially engages with face-like visual stimuli. Current Biology. Published online June 8, 2017. doi: 10.1016/j.cub.2017.05.044.

Further Reading
B. Bower. The eyes have it: Newborns prefer faces with a direct gaze. Science News. Vol. 162, July 6, 2002, p. 4.
B. Bower. Faces of perception. Science News. Vol. 160, July 7, 2001, p. 10.

Blue Brain team discovers a multi-dimensional universe in brain networks
June 12, 2017

[Image: bluebraintea.jpg]
The image attempts to illustrate something that cannot be imaged -- a universe of multi-dimensional structures and spaces. On the left is a digital copy of a part of the neocortex, the most evolved part of the brain. On the right are shapes …more
For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions - ground-breaking work that is beginning to reveal the brain's deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.
The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.
"We found a world that we had never imagined," says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, "there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions."
Markram suggests this may explain why it has been so hard to understand the brain. "The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly."
If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.
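The clique-to-dimension correspondence described above can be sketched in a few lines: an all-to-all connected group of n neurons corresponds to an (n-1)-dimensional simplex. The brute-force enumeration below is a toy undirected version (the Blue Brain analysis uses directed connectivity and far larger networks), with made-up example data:

```python
from itertools import combinations

def count_cliques(neurons, edges, max_size=4):
    """Count all-to-all connected groups (cliques) among neurons.

    An n-neuron clique corresponds to an (n-1)-dimensional simplex in
    the topological picture. Toy undirected brute force; the actual
    study uses directed graphs and specialized topology software.
    """
    connected = {frozenset(e) for e in edges}
    by_dimension = {}
    for size in range(2, max_size + 1):
        for group in combinations(neurons, size):
            # A clique requires every pair in the group to be connected
            if all(frozenset(pair) in connected for pair in combinations(group, 2)):
                by_dimension[size - 1] = by_dimension.get(size - 1, 0) + 1
    return by_dimension
```

Four fully interconnected neurons, for instance, contain six 1-simplices (edges), four 2-simplices (triangles), and one 3-simplex (tetrahedron); the study reports such structures up through seven dimensions, and occasionally eleven.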
"Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures - the trees in the forest - and see the empty spaces - the clearings - all at the same time," explains Hess.
In 2015, Blue Brain published the first digital copy of a piece of the neocortex - the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain's wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.
When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes that the researchers refer to as cavities. "The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner," says Levi. "It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates."
The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional "sandcastles" the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. "They may be 'hiding' in high-dimensional cavities," Markram speculates.
More information: Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function Frontiers in Computational Neuroscience (2017). DOI: 10.3389/fncom.2017.00048 
Provided by: Frontiers


Distinct wiring mode found in chandelier cells

June 9, 2017

[Image: distinctwiri.png]
Researchers at the Max Planck Florida Institute for Neuroscience identify the wiring process of a unique type of inhibitory cells implicated in several diseases. Credit: © Max Planck Florida Institute for Neuroscience
A basic tenet of neural development is that young neurons make far more connections than they will actually use, with very little specificity. They selectively maintain only the ones that they end up needing. Once many of these connections are made, the brain employs a use-it or lose-it strategy; if the organism's subsequent experiences stimulate the synapse, it will strengthen and survive. If not, the synapse will weaken and eventually disappear.
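The use-it-or-lose-it rule can be caricatured as a tiny simulation. All parameters here (learning rate, decay, pruning floor) are invented for illustration and are not from the study; the point is just that stimulated synapses grow while silent ones decay until they are eliminated:

```python
def prune_synapses(strengths, activity, steps=100, lr=0.05, floor=0.01):
    """Toy 'use it or lose it' rule for synaptic refinement.

    Synapses driven by activity strengthen each step; silent ones
    decay multiplicatively and are removed once they fall below a
    floor. All parameters are illustrative, not from the study.
    """
    for _ in range(steps):
        strengths = [
            s + lr * a if a > 0 else s * (1 - lr)
            for s, a in zip(strengths, activity)
        ]
    # Synapses that decayed below the floor are eliminated
    return [s for s in strengths if s >= floor]
```

Starting three equal synapses with only the first receiving input leaves a single, much-strengthened survivor, mirroring the overproduce-then-prune pattern described above.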

Researchers from Hiroki Taniguchi's lab at the Max Planck Florida Institute for Neuroscience (MPFI) published a study in eNeuro in May 2017 showing for the first time that a unique type of inhibitory interneuron called chandelier cells - which are implicated in several diseases affecting the brain such as schizophrenia and epilepsy - seem to develop their connections differently than other types of neurons.
Neurons have several dendrites - thin protrusions through which they receive input from many other cells - but only one axon, where all the information the cell receives is integrated and sent as a single outgoing signal. Most cells' axons reach out and form synapses on other cells' dendrites or cell bodies, but chandelier cells form inhibitory synapses exclusively on other cells' axon initial segments (AIS), right where the cell begins to send its own signal down the axon. At this location, the chandelier cells have a greater impact on other cells' behavior. "Chandelier cells are the final gatekeeper of the action potential," said Dr. Taniguchi. "We believe this role makes them an especially important factor in controlling epilepsy, where over-excitement spreads throughout the brain unchecked."
Using their own recently developed genetic labeling techniques for tracking these cells in early development in mice, Taniguchi and his team observed that, like most neurons, the cells remodeled their axonal organization through development. They also found an excess of axonal varicosities - swellings that, on morphological grounds, have been considered synaptic structures.
[Image: 1-distinctwiri.png]
Diverse types of cortical interneurons (INs) mediate various kinds of inhibitory control mechanisms to balance and shape network activity. Distinct IN subtypes develop uniquely organized axonal arbors that innervate different subcellular …more
To investigate whether these varicosities actually contained synaptic molecules, the team expressed synaptic markers in the chandelier cells using transplantation techniques.
What they found was surprising. Only those varicosities that were associated with the AIS contained synapses - the rest appeared to be empty throughout development. This was also corroborated by their ultrastructures obtained with electron microscopy.
These findings provide a big clue to understanding how this important cell type properly wires a unique circuit.

Now the researchers must ask: what purpose do these empty varicosities serve, and what molecules help direct chandelier cells to recognize the AIS?
The team plans to use live cell imaging to explore the function of the empty varicosities in axonal wiring. "There must be some genes that are necessary and possibly also sufficient to guide the chandelier cell axons to this subcellular target," said Andre Steineke, Ph.D., Postdoctoral Researcher and lead author on the study. He explained that it's likely that these genes do not function properly during development in patients suffering from schizophrenia, epilepsy, or other diseases. Once identified, they may be valuable targets for drug development. Future studies on the molecular and cellular mechanisms of chandelier cell wiring will uncover important insights into how inhibitory circuits are assembled during development.
More information: André Steinecke et al, Neocortical Chandelier Cells Developmentally Shape Axonal Arbors through Reorganization but Establish Subcellular Synapse Specificity without Refinement, eNeuro (2017). DOI: 10.1523/ENEURO.0057-17.2017 
Provided by: Max Planck Florida Institute for Neuroscience

Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Researchers noticed a "divergence from human language" during interactions
Mikael Thalen | JUNE 15, 2017

[Image: FacebookAI.jpg]

Chatbots created at the Facebook Artificial Intelligence Research lab developed their own language without being directed to do so/sew/sow by researchers.   
[Image: image.png]
It Rights Itself.
It Writes Itself.
Itza Rite Itself.

According to a report released Wednesday, the discovery was made during a project that gave bots the ability to negotiate and make compromises.

[Image: trump-book-cover.jpg]

As the bots’ development progressed, researchers say they suddenly noticed a “divergence from human language” during interactions, forcing them to alter their model.

“In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language,” notes The Atlantic’s Adrienne LaFrance.
A similar scenario was seen last year when the “Google Translate” service invented its own language to translate with.
“The online translation tool recently started using a neural network to translate between some of its most popular languages – and the system is now so clever it can do this for language pairs on which it has not been explicitly trained,” New Scientist’s Sam Wong wrote. “To do this, it seems to have created its own artificial language.”

[Image: this-shit-writes_v1_583.jpg]

Most bots prior to Wednesday’s announcement were largely limited in their ability to carry out complex conversations, researchers say. Now, Facebook’s chatbots are not only capable of estimating and negotiating over an item’s “value,” but can even use deception in order to broker a deal.
Sunday, October 19th, 2008, 04:55 am

The Volume of Your top Hat.

The Value you'll never top that.

This VOLUME!!! you can't hear
Never whispered in your ear.
Researchers said the bots “initially feigned interest in a valueless item, only to later ‘compromise’ by conceding it — an effective negotiating tactic that people use regularly” – a method developed by the bots on their own.

Quantum dot transistor simulates functions of neurons

June 15, 2017

A transistor that simulates some of the functions of neurons has been invented based on experiments and models developed by researchers at the Federal University of São Carlos (UFSCar) in São Paulo State, Brazil, Würzburg University in Germany, and the University of South Carolina in the United States.

The device, which has micrometric as well as nanometric parts, can see light, count, and store information in its own structure, dispensing with the need for a complementary memory unit.

It is described in the article "Nanoscale tipping bucket effect in a quantum dot transistor-based counter", published in the journal Nano Letters.

"In this article, we show that transistors based on quantum dots can perform complex operations directly in memory. This can lead to the development of new kinds of device and computer circuit in which memory units are combined with logical processing units, economizing space, time, and power consumption," said Victor Lopez Richard, a professor in UFSCar's Physics Department and one of the coordinators of the study.

The transistor was produced by a technique called epitaxial growth, which consists of coating a crystal substrate with thin film. On this microscopic substrate, nanoscopic droplets of indium arsenide act as quantum dots, confining electrons in quantized states. Memory functionality is derived from the dynamics of electrical charging and discharging of the quantum dots, creating current patterns with periodicities that are modulated by the voltage applied to the transistor's gates or the light absorbed by the quantum dots.

"The key feature of our device is its intrinsic memory stored as an electric charge inside the quantum dots," Richard said. "The challenge is to control the dynamics of these charges so that the transistor can manifest different states. Its functionality consists of the ability to count, memorize, and perform the simple arithmetic operations normally done by calculators, but using incomparably less space, time, and power."
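The "tipping bucket" counting behavior named in the paper's title can be caricatured in a few lines. This is a loose classical analogy with invented numbers, not the device physics: charge accumulates in the dot with each input event (a voltage pulse or absorbed photon), and the bucket "tips" - discharging and registering one output tick - every time a threshold is reached, so the output pattern encodes a count in the device's own state.

```python
def tipping_bucket(events, threshold=5):
    """Toy classical analogy for the quantum-dot 'tipping bucket' counter.

    Each input event deposits one unit of charge; when the accumulated
    charge reaches `threshold`, the bucket tips: it discharges and
    registers one output tick. The threshold is illustrative only.
    Returns (ticks, residual_charge) -- the count is held in the
    device's own state, with no external memory unit.
    """
    charge, ticks = 0, 0
    for _ in range(events):
        charge += 1
        if charge >= threshold:  # bucket tips: discharge and count
            ticks += 1
            charge = 0
    return ticks, charge

# 23 input events with a 5-event bucket -> 4 ticks, 3 units left in the bucket
```

The residual charge is the point of the in-memory claim: the partial count persists in the dot between events instead of being shipped to a separate memory.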

According to Richard, the transistor is not likely to be used in quantum computing because this requires other quantum effects. However, it could lead to the development of a platform for use in equipment such as counters or calculators, with memory intrinsically linked to the transistor itself and all functions available in the same system at the nanometric scale, with no need for a separate space for storage.

"Moreover, you could say the transistor can see light because quantum dots are sensitive to photons," Richard said, "and just like electric voltage, the dynamics of the charging and discharging of quantum dots can be controlled via the absorption of photons, simulating synaptic responses and some functions of neurons."

Further research will be necessary before the transistor can be used as a technological resource. For now, it works only at extremely low temperatures - approximately 4 Kelvin, the temperature of liquid helium.

"Our goal is to make it functional at higher temperatures and even at room temperature. To do that, we'll have to find a way to separate the electronic spaces of the system sufficiently to prevent them from being affected by temperature. We need more refined control of synthesis and material growth techniques in order to fine-tune the charging and discharging channels. And the states stored in the quantum dots have to be quantized," Richard said.


BOT Cache/BAUT Ca$h


How the quantum Zeno effect impacts Schroedinger's cat

June 16, 2017

Wholly Quantum Zeno BOTZ!!!
[Image: howthequantu.jpg]
Credit: Washington University in St. Louis
You've probably heard about Schrödinger's cat, which famously is trapped in a box with a mechanism that is activated if a radioactive atom decays, releasing radiation. The act of looking in the box collapses the atom's wave function—the mathematical description of its state—from a "superposition" of states to a definite state, which either kills the cat or lets it live another day.

Almon Sunday, December 7th, 2008, 12:41 pm

i was just wondering what this ton of crap was doing in this forum, it has nothing to do with the planets, although it is weird, yes weird, but utterly unreviewable, seems like it should be in wooks "o horseshit" thread instead of here

But did you know that if you peek into the cat box frequently—thousands of times a second—you can either delay the fateful choice or, conversely, accelerate it? The delay is known as the quantum Zeno effect and the acceleration as the quantum anti-Zeno effect.
The quantum Zeno effect was named by analogy with the arrow paradox conceived by the Greek philosopher Zeno: At any given instant of time, an arrow in flight is motionless; how then can it move? Similarly, if an atom could be continually measured to see if it is still in its initial state, it would always be found to be in that state.
Both the Zeno and the anti-Zeno effects are real and happen to real atoms. But how does this work? How can measurement either delay or accelerate the decay of the radioactive atom? What is "measurement," anyway?
The physicist's answer is that in order to obtain information about a quantum system, the system must be strongly coupled to the environment for a brief period of time. So the goal of measurement is to obtain information, but the strong coupling to the environment means that the act of measurement also necessarily disturbs the quantum system.
But what if the system was disturbed but no information was passed to the outside world? What would happen then? Would the atom still exhibit the Zeno and anti-Zeno effects?
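The arithmetic behind the Zeno delay can be sketched with a toy two-level system (a hedged illustration, not the Murch group's experiment): each projective measurement resets the evolution, so checking frequently pins the state in place.

```python
import numpy as np

def survival_probability(omega, total_time, n_measurements):
    """Probability of finding the system still in its initial state after
    n_measurements equally spaced projective measurements."""
    dt = total_time / n_measurements
    per_step = np.cos(omega * dt) ** 2   # chance of surviving one interval
    return per_step ** n_measurements    # each measurement restarts the evolution

omega, t = 1.0, np.pi / 2   # unobserved, the state has fully decayed by t = pi/(2*omega)
print(survival_probability(omega, t, 1))     # a single final measurement: survival ~ 0
print(survival_probability(omega, t, 1000))  # frequent checks: survival close to 1
```

Because the short-time survival probability falls off quadratically rather than linearly, splitting the interval into many measured pieces suppresses the decay, which is the Zeno effect in miniature.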
Kater Murch's group at Washington University in St. Louis has been exploring these questions with an artificial atom called a qubit. To test the role of measurement in the Zeno effects, they devised a new type of measurement interaction that disturbs the atom but learns nothing about its state, which they call a "quasimeasurement."
They report in the June 14, 2017, issue of Physical Review Letters that quasimeasurements, like measurements, cause Zeno effects. Potentially, the new understanding of the nature of measurement in quantum mechanics could lead to new ways of controlling quantum systems.
Explore further: Researchers prevent quantum errors from occurring by continuously watching a quantum system
More information: P. M. Harrington et al. Quantum Zeno Effects from Measurement Controlled Qubit-Bath Interactions, Physical Review Letters (2017). DOI: 10.1103/PhysRevLett.118.240401
Journal reference: Physical Review Letters
Provided by: Washington University in St. Louis


Sunday, October 19th, 2008, 04:18 am
Since you Fashion yourself as an American icon online...

[Image: Lincoln-in-Top-Hat-1.jpg]

I will Fashion a Head-Dress for You then, to contain the thoughts that will burst from your new mind.

Abomination It Has a Current Precedant That will show the Former President...Obama Nation
How a proper Head-dress is the Rise of the Anomalists. 

I ain't no Betsy Ross, but This is a THREAD and I sure do know it needs a needle to Sew it all up in Lincoln's mind.

@ Present President Trump says:
Don't Gamble with a Casino Proprietor.
How reading makes us move
June 13, 2017 by Lorena Anderson

[Image: howreadingma.jpg]
Credit: iStock/digitalskillet
Right now, while you are reading these typewritten words, your hand muscles are moving imperceptibly, but measurably. These movements would be even greater if the words were handwritten.

Quote:[Image: ducksters_header_test3.gif] 

The Declaration of Independence

The Declaration of Independence by original: w:Second Continental Congress; reproduction: William Stone.


[Image: declaration_of_independence_lg.jpg] 


Licensed under the Public Domain. 

That's because your motor system strongly contributes to your perception of language, in part by trying to simulate the movements that were necessary to craft the words you read on a page or screen.

That's the point of a new paper in the journal Neuroscience Letters by UC Merced graduate student Chelsea Gordon and cognitive science professors Ramesh Balasubramaniam and Michael Spivey.
Their study enhances the campus's cognitive science focus on embodied cognition, which says the brain guides the body, but the body also guides the brain.
"The conventional thinking was that the brain was modular," Spivey said. "Each different section was responsible for specific major functions. More and more, though, we are realizing how interconnected the different areas of the brain are."
Embodied cognition
Balasubramaniam suggests this is because the corticospinal (motor) system develops first in humans, and other functions develop on top of its foundation.
The researchers excited test subjects' left primary motor cortices with transcranial magnetic stimulation (TMS) equipment, while showing each subject videos of typed and handwritten words and clusters of letters. The TMS-induced activity in the hand muscles allowed them to measure the motor system's level of excitement for each different video.
Gordon said the results were somewhat surprising.
"We knew there would be movement, because there have been many studies showing that the tongue muscles are excited when a person hears someone else speaking to them," she said. "But I thought there would be an equal amount of hand and arm excitement when people saw written and typed words."
Why handwritten words provide a workout
[Image: 130306205822-the-bill-of-rights-horizont...allery.jpg]
While typed words did prompt some excitement in the hand muscles, handwritten words produced significantly more, Balasubramaniam said. The researchers believe this is because it takes more muscle work to handwrite words than it does to type them.
That might also be why, as studies have shown, students who take handwritten notes tend to do better in class than those who type notes.
[Image: AP_Documents_BillofRights.jpg]
"Handwriting notes means you have to quickly conceptualize, because no one can write as fast as other people speak or as fast as many people can type," Gordon said.
Researchers also see more motor stimulation if what a person is looking at is something they know how to do, Spivey said.
"This is part of how we visually recognize things," he said. Our brains do this in fractions of seconds, thousands of times a day, without us consciously being aware of it.
How our motor systems affect how we learn
Gordon, a fourth-year Cognitive Science and Information Systems student originally from Rock, Mich., said she hopes to build on this experiment by trying to understand whether the motor system engages more with handwriting because the act of handwriting is a continuous, linear movement. She's also interested in finding out what other perceptual tasks the motor system is integral to, such as understanding color.
These studies could someday have implications for people with disabilities, too.
"Damage to your motor system changes the way you think and see the world, and that damage can sometimes be serious enough to cause dementia-like symptoms," Spivey said.
Balasubramaniam, Gordon's advisor, said he's also interested in seeing if there are differences among bilingual people when they are reading words in their native and second languages, and Spivey and one of his graduate students are pursuing several projects related to bilingualism.
They hope to better understand how people retain language and motor functions and contribute to the growing storehouse of knowledge about embodied cognition.
"Ultimately, your body is a big part of who you are," Spivey said.
Explore further: Motor cortex contributes to word comprehension
More information: Chelsea L. Gordon et al. Corticospinal excitability during the processing of handwritten and typed words and non-words, Neuroscience Letters (2017). DOI: 10.1016/j.neulet.2017.05.021 
Provided by: University of California - Merced


Neural networks take on quantum entanglement

June 13, 2017

[Image: 2-neuralnetwor.jpg]
An artist's rendering of a neural network with two layers. At the top is a real quantum system, like atoms in an optical lattice. Below is a network of hidden neurons that capture their interactions. Credit: E. Edwards/JQI
Machine learning, the field that's driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.

Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.
Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper's first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. "If we want to numerically tackle some quantum problem," Deng says, "we first need to find an efficient representation."
On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.
The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.
What's more, the new results go beyond mere representation. "This research is unique in that it does not just provide an efficient representation of highly entangled quantum states," Das Sarma says. "It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions."

Neural networks and their accompanying learning techniques powered AlphaGo, the computer program that beat some of the world's best Go players last year (and the top player this year). The news excited Deng, an avid fan of the board game. Last year, around the same time as AlphaGo's triumphs, a paper appeared that introduced the idea of using neural networks to represent quantum states, although it gave no indication of exactly how wide the tool's reach might be. "We immediately recognized that this should be a very important paper," Deng says, "so we put all our energy and time into studying the problem more."
The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.
Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.
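The visible/hidden construction described above is a restricted Boltzmann machine. A minimal sketch (with random weights standing in for the physical interactions, not the paper's actual parameters) shows how summing out the hidden neurons analytically leaves one compact amplitude per spin configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 2
a = rng.normal(scale=0.1, size=n_visible)              # visible (physical spin) biases
b = rng.normal(scale=0.1, size=n_hidden)               # hidden-neuron biases
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))  # visible-hidden connections

def psi(s):
    """Unnormalized amplitude for spin configuration s (entries +/-1).
    The hidden neurons are summed out analytically into cosh factors."""
    s = np.asarray(s, dtype=float)
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + W @ s))

print(psi([1, -1, 1, 1]))
```

The whole state of the 4-spin system is specified by just the biases and the connection matrix, which is the "simple description" the researchers are after: the parameter count grows with the number of connections, not with the exponentially large number of configurations.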
Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, is a research pursuit of many condensed matter physicists.
These neural networks can't capture everything, though. "They are a very restricted regime," Deng says, adding that they don't offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.
Explore further: Physicists use quantum memory to demonstrate quantum secure direct communication
More information: Dong-Ling Deng et al. Quantum Entanglement in Neural Network States, Physical Review X (2017). DOI: 10.1103/PhysRevX.7.021021 
Journal reference: Physical Review X
Provided by: Joint Quantum Institute

Along the vines of the Vineyard.
With a forked tongue the snake singsss...
~333... Write Here on the Neuron 

Quote:"Word retrieval is usually effortless in most people, but it is routinely compromised in patients who suffer from anomia, or word retrieval difficulty,"

Neuron transistor behaves like a brain neuron
June 20, 2017 by Lisa Zyga feature

[Image: neurontransi.jpg]
Structure of the neuron transistor, which contains a 2D flake of MoS2. Credit: S. G. Hu et al. ©2017 IOP Publishing
(Phys.org)—Researchers have built a new type of "neuron transistor"—a transistor that behaves like a neuron in a living brain. These devices could form the building blocks of neuromorphic hardware that may offer unprecedented computational capabilities, such as learning and adaptation.

The researchers, S. G. Hu and coauthors at the University of Electronic Science and Technology of China and Nanyang Technological University in Singapore, have published a paper on the neuron transistor in a recent issue of Nanotechnology.
In order for a transistor to behave like a biological neuron, it must be capable of implementing neuron-like functions—in particular, weighted summation and threshold functions. These refer to a biological neuron's ability to receive weighted input signals from many other neurons, and then to sum the input values and compare them to a threshold value to determine whether or not to fire. The human brain has tens of billions of neurons, and they are constantly performing weighted summation and threshold functions many times per second that together control all of our thoughts and actions.
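The weighted-summation-and-threshold behavior can be written down directly; this is the classic McCulloch-Pitts picture of a neuron (a sketch of the concept, not of the MoS2 device itself):

```python
def neuron_fires(inputs, weights, threshold):
    """Weighted summation of input signals, followed by a threshold comparison."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum >= threshold

# Three input signals with different synaptic weights:
print(neuron_fires([1, 0, 1], [0.6, 0.9, 0.5], threshold=1.0))  # True: 0.6 + 0.5 >= 1.0
print(neuron_fires([0, 1, 0], [0.6, 0.9, 0.5], threshold=1.0))  # False: 0.9 < 1.0
```

A hardware neuron has to implement exactly this pair of operations in its physics, which is why the transistor's ability to sum two gate voltages and switch at a threshold qualifies it as "neuron-like."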
In the new study, the researchers constructed a neuron transistor that acts like a single neuron, capable of weighted summation and threshold functions. Instead of being made of silicon like conventional transistors, the neuron transistor is made of a two-dimensional flake of molybdenum disulfide (MoS2), which belongs to a new class of semiconductor called transition metal dichalcogenides.
To demonstrate the neuron transistor's neuron-like behavior, the researchers showed that it can be controlled by either one gate or two gates simultaneously. In the latter case, the neuron transistor implements a summation function. To demonstrate, the researchers showed that the neuron transistor can perform a counting task analogous to moving the beads in a two-bead abacus, along with other logic functions.
One of the advantages of the neuron transistor is its operating speed. Although other neuron transistors have already been built, they typically operate at frequencies of less than or equal to 0.05 Hz, which is much lower than the average firing rate of biological neurons of about 5 Hz. The new neuron transistor works in a wide frequency range of 0.01 to 15 Hz, which the researchers expect will offer advantages for developing neuromorphic hardware.
In the future, the researchers hope to add more control gates to the neuron transistor, creating a more realistic model of a biological neuron with its many inputs. In addition, the researchers hope to integrate neuron transistors with memristors (which are considered to be the most suitable device for implementing synapses) to construct neuromorphic systems that can work in a similar way to the brain.
Explore further: A turbo engine for tracing neurons
More information: S. G. Hu et al. "A MoS2-based coplanar neuron transistor for logic applications." Nanotechnology. DOI: 10.1088/1361-6528/aa6b47 
Journal reference: Nanotechnology


Mapping how words leap from brain to tongue

June 19, 2017

[Image: 2-brain.jpg]
Credit: WikiWord
When you look at a picture of a mug, the neurons that store your memory of what a mug is begin firing. But it's not a pinpoint process; a host of neurons that code for related ideas and items—bowl, coffee, spoon, plate, breakfast—are activated as well. How your brain narrows down this smorgasbord of related concepts to the one word you're truly seeking is a complicated and poorly understood cognitive task. A new study led by San Diego State University neuroscientist Stephanie Ries, of the School of Speech, Language, and Hearing Sciences, delved into this question by measuring the brain's cortical activity and found that wide, overlapping swaths of the brain work in parallel to retrieve the correct word from memory.

Most adults can quickly and effortlessly recall as many as 100,000 regularly used words when prompted, but how the brain accomplishes this has long boggled scientists. How does the brain nearly always find the needle in the haystack? 
[Image: precision-needle-thread.jpg]

Previous work has revealed that the brain organizes ideas and words into semantically related clusters. When trying to recall a specific word, the brain activates its cluster, significantly reducing the size of the haystack.
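A toy lookup (purely illustrative; the study measured cortical activity, nothing like this) shows why cluster activation shrinks the haystack: retrieval then only competes among cluster members.

```python
# Hypothetical miniature lexicon mapping each word to its semantic cluster.
lexicon = {
    "mug": "kitchen", "bowl": "kitchen", "spoon": "kitchen", "plate": "kitchen",
    "sedan": "vehicle", "truck": "vehicle",
}

def candidates(cluster):
    """All words whose cluster is active -- the reduced 'haystack' to search."""
    return sorted(word for word, c in lexicon.items() if c == cluster)

print(candidates("kitchen"))   # ['bowl', 'mug', 'plate', 'spoon']
```

Seeing a mug activates the whole "kitchen" cluster, and the hard part, which the study maps in time and space, is selecting the one right word from those few co-activated candidates.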

To figure out what happens next in that process, Ries and colleagues asked for help from a population of people in a unique position to lend their brainpower to the problem: patients undergoing brain surgery to reduce their epileptic seizures. Before surgery, neurosurgeons monitor their brain activity to figure out which region of the brain is triggering the patients' seizures, which requires the patients to wear a grid of dozens of electrodes placed directly on top of the cortex, the outermost folded layers of the brain.

While the patients were hooked up to this grid in a hospital and waiting for a seizure to occur, Ries asked if they'd be willing to participate in her research. Recording brain signals directly from the cortical surface affords neuroscientists like Ries an unparalleled look at exactly when and where neurons are communicating with one another during tasks.

"During that period, you have time to do cognitive research that's impossible to do otherwise," she said. "It's an extraordinary window of opportunity."

For the recent study, nine patients agreed to participate. In 15-minute sessions, she and her team would show the patients an item on a computer screen—musical instruments, vehicles, houses—then ask them to name it as quickly as possible, all while tracking their brain activity.

They measured the separate neuronal processes involved with first activating the item's conceptual cluster, then selecting the proper word. Surprisingly, they discovered the two processes actually happen at the same time and activate a much wider network of brain regions than previously suspected. As expected, two regions known to be involved in language processing lit up, the left inferior frontal gyrus and the posterior temporal cortex. But so did several other regions not traditionally linked to language, including the medial and middle frontal gyri, the researchers reported in the Proceedings of the National Academy of Sciences.

"This work shows the word retrieval process in the brain is not at all as localized as we previously thought," Ries said. "It's not a clear division of labor between brain regions. It's a much more complex process."

Learning exactly how the brain accomplishes these tasks could one day help speech-language pathologists devise strategies for treating disorders that prevent people from readily accessing their vocabulary.

"Word retrieval is usually effortless in most people, but it is routinely compromised in patients who suffer from anomia, or word retrieval difficulty," Ries said. "Anomia is the most common complaint in patients with stroke-induced aphasia, but is also common in neurodegenerative diseases and normal aging. So it is critical to understand how this process works to understand how to help make it better."

Explore further: Studies of epilepsy patients uncover clues to how the brain remembers

More information: Stephanie K. Riès et al. Spatiotemporal dynamics of word retrieval in speech production revealed by cortical high-frequency band activity, Proceedings of the National Academy of Sciences (2017). DOI: 10.1073/pnas.1620669114


"Word retrieval is usually effortless in most people, but it is routinely compromised in patients who suffer from anomia, or word retrieval difficulty,"

Want Sum Quantum???

Retrieve ~333/Word Receive 

Physicists settle debate over how exotic quantum particles form

June 23, 2017 by Carla Reiter

[Image: physicistsse.jpg]
Here “3” symbolizes an Efimov molecule comprised of three atoms. While all “3”s look about the same, research from the Chin group observed a tiny “3” that is clearly different. Credit: Cheng Chin
New research by physicists at the University of Chicago settles a longstanding disagreement over the formation of exotic quantum particles known as Efimov molecules.

The findings, published last month in Nature Physics, address differences between how theorists say Efimov molecules should form and the way researchers say they did form in experiments. The study found that the simple picture scientists formulated based on almost 10 years of experimentation had it wrong—a result that has implications for understanding how the first complex molecules formed in the early universe and how complex materials came into being.
Efimov molecules are quantum objects formed by three particles that bind together when two particles are unable to do so. The same three particles can make molecules in an infinite range of sizes, depending on the strength of the interactions between them.
Experiments had shown the size of an Efimov molecule was roughly proportional to the size of the atoms that comprise it—a property physicists call universality.
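Universality also shows up in the geometric scaling of the Efimov spectrum: for three identical bosons, each successive trimer state is larger than the last by a universal factor exp(pi/s0), with the constant s0 ≈ 1.00624. A quick check of that textbook number (background fact about Efimov physics, not the paper's Li-Cs analysis):

```python
import math

s0 = 1.00624                    # universal Efimov constant for identical bosons
scale = math.exp(math.pi / s0)  # size ratio between successive trimer states
print(round(scale, 1))          # ~22.7
```

It is deviations from this kind of universal, parameter-free prediction that the Chicago experiments were sensitive enough to detect.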
"This hypothesis has been checked and rechecked multiple times in the past 10 years, and almost all the experiments suggested that this is indeed the case," said Cheng Chin, a professor of physics at UChicago, who leads the lab where the new findings were made. "But some theorists say the real world is more complicated than this simple formula. There should be some other factors that will break this universality."
The new findings come down somewhere between the previous experimental findings and predictions of theorists. They contradict both and do away with the idea of universality.
"I have to say that I am surprised," Chin said. "This was an experiment where I did not anticipate the result before we got the data."
The data came from extremely sensitive experiments done with cesium and lithium atoms using techniques devised by Jacob Johansen, previously a graduate student in Chin's lab who is now a postdoctoral fellow at Northwestern University. Krutik Patel, a graduate student at UChicago, and Brian DeSalvo, a postdoctoral researcher at UChicago, also contributed to the work.
"We wanted to be able to say once and for all that if we didn't see any dependence on these other properties, then there's really something seriously wrong with the theory," Johansen said. "If we did see dependence, then we're seeing the breakdown of this universality. It always feels good, as a scientist, to resolve these sorts of questions."

Developing new techniques
Efimov molecules are held together by quantum forces rather than by the chemical bonds that bind together familiar molecules such as H2O. The atoms are so weakly connected that the molecules can't exist under normal conditions; heat in a room provides enough energy to shatter their bonds.
The Efimov molecule experiments were done at extremely low temperatures—50 billionths of a degree above absolute zero—and under the influence of a strong magnetic field, which is used to control the interaction of the atoms. When the field strength is in a particular, narrow range, the interaction between atoms intensifies and molecules form. By analyzing the precise conditions in which formation occurs, scientists can infer the size of the molecules.
But controlling the magnetic field precisely enough to make the measurements Johansen sought is extremely difficult. Even heat generated by the electric current used to create the field was enough to change that field, making it hard to reproduce in experiments. The field could be allowed to fluctuate by only one part in a million—a thousand times weaker than the Earth's magnetic field—and Johansen had to stabilize it and monitor how it changed over time.
The key was a technique he developed to probe the field using microwave electronics and the atoms themselves.
"I consider what Jacob did a tour de force," Chin said. "He can control the field with such high accuracy and perform very precise measurements on the size of these Efimov molecules and for the first time the data really confirm that there is a significant deviation of the universality."
The new findings have important implications for understanding the development of complexity in materials. Normal materials have diverse properties, which could not have arisen if their behavior at the quantum level was identical. The three-body Efimov system puts scientists right at the point at which universal behavior disappears.
"Any quantum system made with three or more particles is a very, very difficult problem," Chin said. "Only recently do we really have the capability to test the theory and understand the nature of such molecules. We are making progress toward understanding these small quantum clusters. This will be a building block for understanding more complex material."
Explore further: Exotic, gigantic molecules fit inside each other like Russian nesting dolls
More information: Jacob Johansen et al. Testing universality of Efimov physics across broad and narrow Feshbach resonances, Nature Physics (2017). DOI: 10.1038/nphys4130 
Journal reference: Nature Physics
Provided by: University of Chicago


Anomia Anomaly = ~3333333333333333333333333333333333333333333333333333333333333333
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Wholly Quantum Foam Flux!!!  
Right where Sheep  we Left off... 
Traditional computers (left brain) focus on analytical thinking and language.
Neurosynaptic chips, though, address the senses and pattern recognition (right brain).

Brain-inspired supercomputing system takes spotlight in IBM, US Air Force Research Lab collab
June 24, 2017 by Nancy Owano
[Image: braininspire.jpg]

An artist's rendering of the AI supercomputing system that IBM Research will develop for the U.S. Air Force Research Lab. The system uses the IBM TrueNorth Neuromorphic System modeled after the human brain for high levels of processing at the lowest levels of power consumption. Credit: IBM Research

(Tech Xplore)—IBM and the Air Force Research Laboratory are working to develop an artificial intelligence-based supercomputer with a neural network design that is inspired by the human brain.

The work involves building a supercomputer that behaves like a natural brain—in that these chips operate in a fashion similar to the synapses within a biological brain. The system is powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System.
This 64-chip array, said Andrew Tarantola in Engadget, "will contain the processing equivalent of 64 million neurons and 16 billion synapses."
In technical terms, the two are partnering to improve the "TrueNorth line of chips designed to optimize the performance of machine learning models at the hardware level," said John Mannes in TechCrunch.
The system fits in a 4U-high (7") space in a standard server rack, said IBM, and eight such systems will enable the scale of 512 million neurons per rack.
How are the chips different from conventional CPUs?
"Each core is part of a distributed network and operate in parallel with one another on an event-driven basis. That is, these chips don't require a clock, as conventional CPUs do, to function," said Tarantola. If a core fails, the rest of the array will continue to work.
Observers are also calling up its low power consumption. "This 64-chip array will contain the processing equivalent of 64 million neurons and 16 billion synapses, yet absolutely sips energy," said Engadget.
StreetInsider said that "the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts to power."

Beyond the collab with the Air Force, IBM believes the low power consumption of its chips could someday bring value, said TechCrunch, "in constrained applications like mobile phones and self-driving cars."
Indeed, an IBM Research posting caption for a smartphone says, "Low power chips could make your mobile phone as powerful as a supercomputer."

Traditional computers (left brain) focus on analytical thinking and language.
Neurosynaptic chips, though, address the senses and pattern recognition (right brain).

IBM said its scientific quest is how to meld these two capabilities together into holistic computing intelligence.
The IBM news release on the collaboration said the scalable platform IBM is building for AFRL "will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery."
The news release further described how this melding can occur:
"The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this 'right-brain' perception capability of the system with the 'left-brain' symbol processing capabilities of conventional computer systems. The large scale of the system will enable both 'data parallelism' where multiple data sources can be run in parallel against the same neural network and 'model parallelism' where independent neural networks form an ensemble that can be run in parallel on the same data."
Why is the Air Force interested and how would they use this technology?
[Image: 1-braininspire.jpg]

IBM Research has developed the IBM TrueNorth Neuromorphic system to deliver AI supercomputing capabilities at the lowest levels of power. Credit: IBM Research

Washington Technology said, "AFRL is investigating potential uses of the system in embedded, mobile and autonomous settings where limitations exist on the size, weight and power of platforms."
Tarantola: The Air Force wants to combine TrueNorth's ability to convert multiple data feeds—audio, video or text—into machine-readable symbols with a conventional supercomputer's ability to crunch data.
(AFRL seeks to combine that so-called "right-brain" function with "left-brain" symbol processing capabilities in conventional computer systems, said Washington Technology.)
In the Air Force context, Mannes said applications could include its use in satellites and unmanned aerial vehicles (UAVs).
Meanwhile, reports noted on Friday that the technology is still very much in the early stages. Mannes in TechCrunch: "IBM's chips are still too experimental to be used in mass production, but they've shown promise in running a special type of neural network called a spiking neural network."
At this juncture, it is useful to know that the technology has had its detractors. TechCrunch said that in 2014 a research director at Facebook expressed skepticism about TrueNorth's ability to deliver value in a real-world application. The chips were designed for spiking neural networks, but he said that type of network had not shown as much promise as convolutional neural networks on common tasks like object recognition.
Mannes commented: "We haven't fully explored all the potential applications of this type of computing, so while it's very reasonable to be conservative, researchers have little incentive to completely disregard the potential of the project."
The IBM TrueNorth Neurosynaptic System was originally developed under the auspices of Defense Advanced Research Projects Agency's (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program in collaboration with Cornell University. Research with TrueNorth is currently being performed by over 40 universities, government labs and industrial partners on five continents.

Explore further: Lawrence Livermore and IBM collaborate to build new brain-inspired supercomputer
Quote:The team, working on The Good Judgment Project, had the perfect opportunity to test its future-predicting methods during a four-year government-funded geopolitical forecasting tournament sponsored by the United States Intelligence Advanced Research Projects Activity. The tournament, which began in 2011, aimed to improve geopolitical forecasting and intelligence analysis by tapping the wisdom of the crowd

recall: ~3 Years ahead of the game.
Top-Hats and Dunce Caps...Honestly, Of Whom am I Thinkin' ?
Sunday, October 19th, 2008, 12:30 am

This is the Education of Lincoln.
Courtesy of the Hidden Mission Members. [Image: bump.gif]

Predicting the future with the wisdom of crowds

June 23, 2017 by Pamela Tom

[Image: 1-predictingth.jpg]
Credit: UC Berkeley Haas School of Business

Forecasters often overestimate how good they are at predicting geopolitical events—everything from who will become the next pope to who will win the next national election in Taiwan.

[Image: 104094043-GettyImages-621960106.530x298....1478720781]An Old Frump

Bets She Lost
Betsy Ross

[Image: image.png]Donald Trump

Top-Hats and Dunce Caps...Honestly, Of Whom am I Thinkin' ?

But UC Berkeley Haas management professor Don Moore and a team of researchers found a new way to improve that outcome by training ordinary people to make more confident and accurate predictions over time as superforecasters.

The team, working on The Good Judgment Project, had the perfect opportunity to test its future-predicting methods during a four-year government-funded geopolitical forecasting tournament sponsored by the United States Intelligence Advanced Research Projects Activity. The tournament, which began in 2011, aimed to improve geopolitical forecasting and intelligence analysis by tapping the wisdom of the crowd. Moore's team proved so successful in the first years of the competition that it bumped the other four teams from a national competition, becoming the only funded project left in the competition.

Some of the results are published in a Management Science article "Confidence Calibration in a Multi-year Geopolitical Forecasting Competition." Moore's co-authors, who combine best practices from psychology, economics, and behavioral science, include husband and wife team Barbara Mellers and Philip Tetlock of the University of Pennsylvania, who co-lead the Good Judgment Project with Moore; along with Lyle Unger and Angela Minster of the University of Pennsylvania; Samuel A. Swift, a data scientist at investment strategy firm Betterment; Heather Yang of MIT; and Elizabeth Tenney of the University of Utah.

The study differs from previous research in overconfidence in forecasting because it examines accuracy in forecasting over time, using a huge and unique data set gathered during the tournament. That data included 494,552 forecasts by 2,860 forecasters who predicted the outcomes of hundreds of events.

Wisdom of the crowd

Study participants, a mix of scientists, researchers, academics, and other professionals, weren't experts on what they were forecasting, but were rather educated citizens who stayed current on the news.

Their training included four components:
  • Considering how often and under what circumstances a similar event to the one they were considering took place.

  • Averaging across opinions to exploit the wisdom of the crowd.

  • Using mathematical and statistical models when applicable.

  • Reviewing biases in forecasting—in particular the risk of both overconfidence and excess caution in estimating probabilities.
Over time, this group answered a total of 344 specific questions about geopolitical events. All of the questions had clear resolutions, needed to be resolved within a reasonable time frame, and had to be relatively difficult to forecast—"tough calls," as the researchers put it. Forecasts below a 10 percent or above a 90 percent chance of occurring were deemed too easy for the forecasters.

The majority of the questions targeted a specific outcome, such as "Will the United Nations General Assembly recognize a Palestinian state by September 30, 2011?" or "Will Cardinal Peter Turkson be the next pope?"

The researchers wanted to measure whether participants considered themselves experts on questions, so they asked them to assess themselves, rating their expertise on each question on a 1–5 scale during their first year. In the second year, they placed themselves in "expertise quintiles" relative to others answering the same questions. In the final year, they indicated their confidence level from "not at all" to "extremely" per forecast.

Training: Astoundingly effective

By the end of the tournament, researchers found something surprising. On average, the group members reported that they were 65.4 percent sure that they had correctly predicted what would happen. In fact, they were correct 63.3 percent of the time, an overall overconfidence of just 2.1 percentage points. "Our results find a remarkable balance between people's confidence and accuracy," Moore said.

In addition, as participants gathered more information, both their confidence and their accuracy improved.

In the first month of forecasting during the first year, confidence was 59 percent and accuracy was 57 percent. By the final month of the third year, confidence had increased to 76.4 percent and accuracy reached 76.1 percent.
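The calibration arithmetic behind these figures is simple: mean stated confidence minus observed accuracy. A minimal sketch with made-up forecasts (not data from the study):

```python
# A minimal sketch of the calibration measure discussed above: mean stated
# confidence minus observed accuracy. The forecasts below are invented
# illustrations, not data from the Good Judgment Project.

def calibration(forecasts):
    """forecasts: list of (stated_confidence, was_correct) pairs."""
    confidence = sum(c for c, _ in forecasts) / len(forecasts)
    accuracy = sum(1 for _, ok in forecasts if ok) / len(forecasts)
    return confidence, accuracy, confidence - accuracy

demo = [(0.9, True), (0.6, True), (0.7, False), (0.8, True), (0.5, False)]
conf, acc, overconfidence = calibration(demo)
print(round(conf, 2), round(acc, 2), round(overconfidence, 2))  # 0.7 0.6 0.1
```

A positive gap means overconfidence, a negative gap excess caution; the superforecasters' gap shrank toward zero over the three years.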

The researchers called the training the group received "astoundingly effective."

"What made our forecasters good was not so much that they always knew what would happen, but that they had an accurate sense of how much they knew," the study concluded.

The research also broke new ground, as it is quantitative in a field that generally produces qualitative studies.

"We see potential value not only in forecasting world events for intelligence agencies and governmental policy-makers, but innumerable private organizations that must make important strategic decisions based on forecasts of future states of the world," the researchers concluded.

[Image: AP_081223010018-640x480.jpg]


More information: Don A. Moore et al. Confidence Calibration in a Multiyear Geopolitical Forecasting Competition, Management Science (2016). DOI: 10.1287/mnsc.2016.2525 

[Image: 00xp-inauguralbible-master768.jpg]

[Image: this-shit-writes_v1_583.jpg]
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Quote:This hypothesis, originally called the grandmother-neuron hypothesis, is more familiar to a recent generation of neuroscientists as the Jennifer-Aniston-neuron hypothesis, after the discovery that several neurological patients had neurons that appeared to respond only to depictions of particular Hollywood celebrities.

Quote:"To my eye, this is suggesting that neural networks are actually trying to approximate getting a grandmother neuron," Bau says. "They're not trying to just smear the idea of grandmother all over the place. They're trying to assign it to a neuron. It's this interesting hint of this structure that most people don't believe is that simple."

Read more at:

New technique elucidates the inner workings of neural networks trained on visual data
June 30, 2017 by Larry Hardesty

[Image: 22-newtechnique.jpg]
Neural networks learn to perform computational tasks by analyzing large sets of training data. But once they’ve been trained, even their designers rarely have any idea what data elements they’re processing. Credit: Christine Daniloff/MIT
Neural networks, which learn to perform computational tasks by analyzing large sets of training data, are responsible for today's best-performing artificial intelligence systems, from speech recognition systems, to automatic translators, to self-driving cars.

But neural nets are black boxes. Once they've been trained, even their designers rarely have any idea what they're doing—what data elements they're processing and how.
Two years ago, a team of computer-vision researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) described a method for peering into the black box of a neural net trained to identify visual scenes. The method provided some interesting insights, but it required data to be sent to human reviewers recruited through Amazon's Mechanical Turk crowdsourcing service.
At this year's Computer Vision and Pattern Recognition conference, CSAIL researchers will present a fully automated version of the same system. Where the previous paper reported the analysis of one type of neural network trained to perform one task, the new paper reports the analysis of four types of neural networks trained to perform more than 20 tasks, including recognizing scenes and objects, colorizing grey images, and solving puzzles. Some of the new networks are so large that analyzing any one of them would have been cost-prohibitive under the old method.
The researchers also conducted several sets of experiments on their networks that not only shed light on the nature of several computer-vision and computational-photography algorithms, but could also provide some evidence about the organization of the human brain.
Neural networks are so called because they loosely resemble the human nervous system, with large numbers of fairly simple but densely connected information-processing "nodes." Like neurons, a neural net's nodes receive information signals from their neighbors and then either "fire"—emitting their own signals—or don't. And as with neurons, the strength of a node's firing response can vary.
In both the new paper and the earlier one, the MIT researchers doctored neural networks trained to perform computer vision tasks so that they disclosed the strength with which individual nodes fired in response to different input images. Then they selected the 10 input images that provoked the strongest response from each node.
In the earlier paper, the researchers sent the images to workers recruited through Mechanical Turk, who were asked to identify what the images had in common. In the new paper, they use a computer system instead.

"We catalogued 1,100 visual concepts—things like the color green, or a swirly texture, or wood material, or a human face, or a bicycle wheel, or a snowy mountaintop," says David Bau, an MIT graduate student in electrical engineering and computer science and one of the paper's two first authors. "We drew on several data sets that other people had developed, and merged them into a broadly and densely labeled data set of visual concepts. It's got many, many labels, and for each label we know which pixels in which image correspond to that label."
The paper's other authors are Bolei Zhou, co-first author and fellow graduate student; Antonio Torralba, MIT professor of electrical engineering and computer science; Aude Oliva, CSAIL principal research scientist; and Aditya Khosla, who earned his PhD as a member of Torralba's group and is now the chief technology officer of the medical-computing company PathAI.
The researchers also knew which pixels of which images corresponded to a given network node's strongest responses. Today's neural nets are organized into layers. Data are fed into the lowest layer, which processes them and passes them to the next layer, and so on. With visual data, the input images are broken into small chunks, and each chunk is fed to a separate input node.
For every strong response from a high-level node in one of their networks, the researchers could trace back the firing patterns that led to it, and thus identify the specific image pixels it was responding to. Because their system could frequently identify labels that corresponded to the precise pixel clusters that provoked a strong response from a given node, it could characterize the node's behavior with great specificity.
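A hedged sketch of how such a pixel-level match can be scored: compare the pixels that drive a node against the pixels carrying a concept label, using intersection-over-union. The masks here are toy stand-ins, and the scoring follows the general idea of the paper's network-dissection approach rather than its exact procedure:

```python
# Toy sketch: score each (node, concept) pair by intersection-over-union
# between the pixels that provoke the node and the pixels labeled with the
# concept. Masks are flat 0/1 lists standing in for image-sized masks.

def iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 0.0

node_mask = [1, 1, 0, 0, 1, 0]      # pixels that provoke a strong response
concept_masks = {                   # pixels labeled with each concept
    "green": [1, 1, 0, 0, 1, 1],
    "wheel": [0, 0, 1, 1, 0, 0],
}

# The node is characterized by whichever concept overlaps it best.
best = max(concept_masks, key=lambda c: iou(node_mask, concept_masks[c]))
print(best, round(iou(node_mask, concept_masks[best]), 2))  # green 0.75
```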
The researchers organized the visual concepts in their database into a hierarchy. Each level of the hierarchy incorporates concepts from the level below, beginning with colors and working upward through textures, materials, parts, objects, and scenes. Typically, lower layers of a neural network would fire in response to simpler visual properties—such as colors and textures—and higher layers would fire in response to more complex properties.
But the hierarchy also allowed the researchers to quantify the emphasis that networks trained to perform different tasks placed on different visual properties. For instance, a network trained to colorize black-and-white images devoted a large majority of its nodes to recognizing textures. Another network, when trained to track objects across several frames of video, devoted a higher percentage of its nodes to scene recognition than it did when trained to recognize scenes; in that case, many of its nodes were in fact dedicated to object detection.
One of the researchers' experiments could conceivably shed light on a vexed question in neuroscience. Research involving human subjects with electrodes implanted in their brains to control severe neurological disorders has seemed to suggest that individual neurons in the brain fire in response to specific visual stimuli. This hypothesis, originally called the grandmother-neuron hypothesis, is more familiar to a recent generation of neuroscientists as the Jennifer-Aniston-neuron hypothesis, after the discovery that several neurological patients had neurons that appeared to respond only to depictions of particular Hollywood celebrities.
Many neuroscientists dispute this interpretation. They argue that shifting constellations of neurons, rather than individual neurons, anchor sensory discriminations in the brain. Thus, the so-called Jennifer Aniston neuron is merely one of many neurons that collectively fire in response to images of Jennifer Aniston. And it's probably part of many other constellations that fire in response to stimuli that haven't been tested yet.
Because their new analytic technique is fully automated, the MIT researchers were able to test whether something similar takes place in a neural network trained to recognize visual scenes. In addition to identifying individual network nodes that were tuned to particular visual concepts, they also considered randomly selected combinations of nodes. Combinations of nodes, however, picked out far fewer visual concepts than individual nodes did—roughly 80 percent fewer.
"To my eye, this is suggesting that neural networks are actually trying to approximate getting a grandmother neuron," Bau says. "They're not trying to just smear the idea of grandmother all over the place. They're trying to assign it to a neuron. It's this interesting hint of this structure that most people don't believe is that simple."
Explore further: Computer learns to recognize sounds by watching video
More information: Network Dissection: Quantifying Interpretability of Deep Visual Representations. 
Provided by: Massachusetts Institute of Technology

Read more at:
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Synapses in the brain mirror the structure of the visual world
July 12, 2017

[Image: synapsesinth.jpg]
Our brain is especially good at perceiving lines and contours even if they do not actually exist, such as the blue triangle in the foreground of this optical illusion. The pattern of neuronal connections in the brain supports this ability. Credit: University of Basel, Biozentrum
The research team of Prof. Sonja Hofer at the Biozentrum, University of Basel, has discovered why our brain might be so good at perceiving edges and contours. Neurons that respond to different parts of elongated edges are connected and thus exchange information. This can make it easier for the brain to identify contours of objects. The results of the study are now published in the journal Nature.


Exposure to a common visual illusion may enhance your ability to read fine print
July 12, 2017

[Image: exposuretoac.jpg]
Our ability to discriminate fine detail isn’t solely governed by the optics of our eyes. Credit: Dr Rob Jenkins
Exposure to a common visual illusion may enhance your ability to read fine print, according to new research from psychologists at the Universities of York and Glasgow.

[Image: ss-sing.jpg]
Perhaps the meter was discovered audibly??? 

[Image: sh.jpg]
Sammy Snake & Harry Hat Man

Sammy Snake loves to hiss,
so he hisses a lot,
hisses a lot, hisses a lot.
Sammy Snake loves to hiss,
so he hisses a lot.
There aren't many hisses he misses.

But the Hat Man hates noise
and hushes him up.
The Hat Man hates noise
and hushes him up.
The Hat Man says 'sh'
as he hushes.
'Sh, sh, sh!'.

The Ether Model & The Hand of God - Page 51 - Google Books Result

Now, the speed of light is 300,000,000 meters per second (in vacuo)—huge in comparison with the speed of sound (in air) at 333 meters per second.

I-physics Iv Tm' 2006 Ed. - Page 110 - Google Books Result

The speed of sound is equivalent to twice your distance from the wall divided by ... Applied to this measurement: 1000 meters / 3 seconds = 333 1/3 meters per second.
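The echo method in that snippet reduces to one formula: the speed of sound is twice the distance to the wall divided by the round-trip time. A quick check in Python:

```python
# The echo method sketched in code: clap at a known distance from a wall,
# time the echo, and the speed of sound is twice the distance divided by
# the round-trip time.

def speed_of_sound(distance_m, echo_delay_s):
    return 2 * distance_m / echo_delay_s

# 500 m from the wall, echo heard 3 s later -> 1000 m round trip in 3 s.
print(round(speed_of_sound(500, 3), 1))  # 333.3 m/s
```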

Wireless telegraphy and wireless telephony an elementary treatise
A.E. Kennelly - History
Speed of Electromagnetic Waves: The speed of sound waves in air we have seen to be in the neighborhood of 333 meters per second (1090 feet per second) or ...

Would you take note if eye snap my finger but you saw it first there [Image: sheep.gif] heard it second later here?

~333 meters per second because Many cultures seemed to know what a "Second" of time is.

Have you heard the word @ ~333???

RE: Top-Hats and Dunce Caps...Honestly, Of Whom am I Thinkin' ?

[Image: 35519135330_a050041e05_m.jpg]-EA
Sunday, October 19th, 2008, 12:30 am

This is the Education of Lincoln.
Courtesy of the Hidden Mission Members. [Image: bump.gif]
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Chiral Eh Linke???
How Rite Eye Am.

Right where Sheep  we Left off.

Quote:"This is an incredibly exciting discovery. We can clearly conclude that the same breaking of symmetry can be observed in any physical system, whether it occurred at the beginning of the universe or is happening today, right here on Earth," said Prof. Dr. Karl Landsteiner, a string theorist at the Instituto de Fisica Teorica UAM/CSIC and co-author of the paper.

Read more at:

[Image: exopolZZZZZY_01.jpg]

Scientists observe gravitational anomaly on Earth
July 21, 2017

[Image: 1-scientistsob.jpg]
Prof. Dr. Karl Landsteiner, a string theorist at the Instituto de Fisica Teorica UAM/CSIC and co-author of the paper made this graphic to explain the gravitational anomaly. Credit: IBM Research
Modern physics has accustomed us to strange and counterintuitive notions of reality, especially quantum physics, which is famous for leaving physical objects in strange states of superposition. For example, Schrödinger's cat, which finds itself unable to decide whether it is dead or alive. Sometimes, however, quantum mechanics is more decisive and even destructive.

Symmetries are the holy grail for physicists. Symmetry means that one can transform an object in a certain way that leaves it invariant. For example, a round ball can be rotated by an arbitrary angle, but always looks the same. Physicists say it is symmetric under rotations. Once the symmetry of a physical system is identified it's often possible to predict its dynamics.
Sometimes, however, the laws of quantum mechanics destroy a symmetry that would happily exist in a world without quantum mechanics, i.e. in classical systems. Even to physicists this looks so strange that they named the phenomenon an "anomaly."
For most of their history, these quantum anomalies were confined to the world of elementary particle physics, explored in huge accelerator laboratories such as the Large Hadron Collider at CERN in Switzerland. Now, however, a new type of material, the so-called Weyl semimetal, similar to 3-D graphene, allows us to put the symmetry-destroying quantum anomaly to work in everyday phenomena, such as the creation of electric current.
In these exotic materials electrons effectively behave in the very same way as the elementary particles studied in high energy accelerators. These particles have the strange property that they cannot be at rest—they have to move with a constant speed at all times. They also have another property called spin. It is like a tiny magnet attached to the particles and they come in two species. The spin can either point in the direction of motion or in the opposite direction.
[Image: 5971dd505d3f4.jpg]
An international team of scientists have verified a fundamental effect in a crystal that had been previously only thought to be observable in the deep universe. The experiments have verified a quantum anomaly that had been experimentally elusive before. The results are appearing in the journal Nature. Credit: Robert Strasser, Kees Scherer; collage: Michael Büker
When one speaks of right- and left-handed particles this property is called chirality. Normally the two different species of particles, identical except for their chirality (handedness), would come with separate symmetries attached to them and their numbers would be separately conserved. However, a quantum anomaly can destroy their peaceful coexistence and changes a left-handed particle into a right-handed one or vice-versa.

Appearing in a paper published today in Nature, an international team of physicists, materials scientists and string theoreticians have observed, in such a material, an effect of a most exotic quantum anomaly that hitherto was thought to be triggered only by the curvature of space-time as described by Einstein's theory of relativity. But to the surprise of the team, they discovered it also exists on Earth in the properties of solid-state physics, on which much of the computing industry is based, spanning from tiny transistors to cloud data centers.
"For the first time, we have experimentally observed this fundamental quantum anomaly on Earth which is extremely important towards our understanding of the universe," said Dr. Johannes Gooth, an IBM Research scientist and lead author of the paper. "We can now build novel solid-state devices based on this anomaly that have never been considered before to potentially circumvent some of the problems inherent in classical electronic devices, such as transistors."
New calculations, using in part the methods of string theory, showed that this gravitational anomaly is also responsible for producing a current if the material is heated up at the same time a magnetic field is applied.
"This is an incredibly exciting discovery. We can clearly conclude that the same breaking of symmetry can be observed in any physical system, whether it occurred at the beginning of the universe or is happening today, right here on Earth," said Prof. Dr. Karl Landsteiner, a string theorist at the Instituto de Fisica Teorica UAM/CSIC and co-author of the paper.
IBM scientists predict this discovery will open up a rush of new developments around sensors, switches and thermoelectric coolers or energy-harvesting devices, with improved power consumption.
Explore further: New breakthrough discovery—every quantum particle travels backwards
More information: Johannes Gooth et al. Experimental signatures of the mixed axial–gravitational anomaly in the Weyl semimetal NbP, Nature (2017). DOI: 10.1038/nature23005 
Journal reference: Nature
Provided by: IBM

Read more at:

Itza Chiral in the observer effect spy role

This is an incredibly exciting discovery. We can clearly conclude that the same breaking of symmetry can be observed in any physical system

Quote:The particular type of Majorana fermion the research team observed is known as a “chiral” fermion because it moves along a one-dimensional path in just one direction. While the experiments that produced it were extremely difficult to conceive, set up and carry out, the signal they produced was clear and unambiguous, the researchers said.

JULY 20, 2017

An experiment proposed by Stanford theorists finds evidence for the Majorana fermion, a particle that’s its own antiparticle
In a discovery that concludes an 80-year quest, Stanford and University of California researchers found evidence of particles that are their own antiparticles. These 'Majorana fermions' could one day help make quantum computers more robust.

Hubble's New "Runaway Planet":
A Unique Opportunity for Testing
the Exploding Planet Hypothesis 

[Image: 1-ancientmassi.jpg]about the size of Ceres
Read more at:
The team studied samples from Martian meteorites and realized that an overabundance of rare metals—such as platinum, osmium and iridium—in the planet's mantle required an explanation. Such elements are normally captured in the metallic cores of rocky worlds, and their existence hinted that Mars had been pelted by asteroids throughout its early history. By modeling how a large object such as an asteroid would have left behind such elements, Mojzsis and Brasser explored the likelihood that a colossal impact could account for this metal inventory.
... Hyperdimensional Physics
Part I

For some time, we have been asked to provide an overview of a subject intimately connected with -- but not dependent upon -- the confirmation of "intelligent ruins at Cydonia," on Mars:

The arcane subject of "Hyperdimensional Physics."

Unknown to most current physicists and students of science (if not the general media and public), the beginnings of modern physics launched over 100 years ago by the so-called "giants" -- Helmholtz, Lord Kelvin, Faraday, Maxwell and many others -- laid a full and rich tradition in this currently little-known field: the open, heatedly debated scientific and philosophical premise that three-dimensional reality is only a subset of a series of higher, hyperspatial, additional dimensions, which control not only the physics of our very existence, from stars to galaxies to life itself ... but potentially, through time-variable changes in its foundations--
[Image: 1-scientistsob.jpg]
Scientists observe gravitational anomaly on Earth

July 21, 2017

Dramatic coming changes in our lives.

This bold theoretical and experimental era, at the very dawn of science as we know it, came to an abrupt end at the close of the 19th Century. That was when our currently accepted (and very different) view of "physics" -- everything from the "Big Bang" Expanding Universe Cosmology, to Relativistic limitations imposed by "flat" space and non-simultaneous time, complicated by a non-intuitive "Quantum Mechanics" of suddenly uncertain atomic "realities" -- all took a very different turn ... from where they had been headed. Imagine our surprise, when -- as part of our Enterprise Mission effort to verify the existence of intelligently-created ruins at "Cydonia" -- we suddenly realized we might have stumbled across the geometry of this same 19th Century, pre-Relativity "hyperdimensional physics"--

Lincoln... RCH has been putting positive check-marks on his hyper-D theory scoreboard.

[Image: hoaglandufodiaries.jpg]

Right Write Rite here on your neuron>>> .

Yes, Stu...  You have an "RCH Neuron" in your mind.

Update it now.

Quote:Individual place cells in the hippocampus respond to only a few spatial locations. The grid cells in the entorhinal complex, on the other hand, fire at multiple positions in the environment, such that specific sets are consecutively activated as an animal traverses its habitat.

New model for the origin of grid cells
July 21, 2017

[Image: s_27lines.jpg]

Ludwig Maximilian University of Munich neurobiologists present a new theory for the origin of the grid cells required for spatial orientation in the mammalian brain, which assigns a vital role to the timing of trains of signals they receive from neurons called place cells.

Nerve cells in the brain known as place cells and grid cells, respectively, play a crucial role in spatial navigation in mammals. Individual place cells in the hippocampus respond to only a few spatial locations. The grid cells in the entorhinal complex, on the other hand, fire at multiple positions in the environment, such that specific sets are consecutively activated as an animal traverses its habitat. These activation patterns give rise to a virtual map, made up of a hexagonal arrangement of grid cells that reflect the relative distances between particular landmarks in the real world. The brain is therefore capable of constructing a virtual map which encodes its own position in space.

The Nobel Prize in Physiology or Medicine 2014 went to the discoverers of this system, which has been referred to as the brain's GPS. However, the developmental relationship between place cells and grid cells, as well as the mechanism of origin of grid cells and their disposition in hexagonal lattices, remain unclear. Now LMU neurobiologists Professor Christian Leibold and his coworker Mauro Miguel Monsalve Mercado have proposed a new theoretical model, which for the first time provides a plausible account based on known biological processes. The model implies that the development of grid cells and their response fields depends on synaptic input from place cells. The new findings are described in the journal Physical Review Letters.
The authors of the new paper assign a central role in their model to correlations in the timing of the neuronal response sequences generated by different place cells. The members of these groups become active when the animal reaches certain locations in space, and they transmit nerve impulses in precisely coordinated temporal sequences, which follow particular rhythmic patterns and thereby encode relative spatial distances. Leibold and Monsalve Mercado have used a classical neuronal learning rule, known as Hebb's rule, to analyze the temporal correlations between the firing patterns of place cells and the organization of the grid cells. Hebb's rule states that repeated activation of two functionally coupled neurons in quick succession progressively enhances the efficiency of synaptic transmission between them. By applying this concept of activity-dependent synaptic plasticity to the correlated temporal firing patterns of place cells, the authors can account for the formation of the hexagonal dispositions of grid cells observed in freely navigating mammals.
"The models so far proposed to explain the development of grid cells on the basis of input from place cells were unspecific about the precise underlying biological mechanisms. We have now, for the first time, been able to construct a coherent model for the origin of grid cells which makes use of known biological mechanisms," says Christian Leibold. The new model implies that grid cells are generated by a neuronal learning process. This process exploits synaptic plasticity to transform temporally coordinated signaling between place cells into the hexagonal patterns of grid-cell responses observed in the entorhinal cortex. The model therefore predicts that grid cells should first arise in the deep layers of the entorhinal cortex.

More information: Mauro M. Monsalve-Mercado et al. Hippocampal Spike-Timing Correlations Lead to Hexagonal Grid Fields, Physical Review Letters (2017). DOI: 10.1103/PhysRevLett.119.038101 
Journal reference: Physical Review Letters
Provided by: Ludwig Maximilian University of Munich

Read more at:

Quote:"It turns out that, for the purposes of navigating on a horizontal plane, the best coordinate system is indeed the hexagonal lattice that has been experimentally observed for the grid cells of rats,"

Why grid-cell lattices are hexagonal

April 30, 2015

[Image: whygridcelll.png]
Credit: pupes1 /
Specialized brain cells provide an internal coordinate system that enables mammals to orient themselves in space. Scientists at LMU and Harvard University have now shown mathematically why these cells generate hexagonal lattices.

Neuronal grid cells play a crucial role in mammalian spatial navigation. As the animal moves through its environment, distinct sets of these cells are sequentially activated. Although each individual grid cell responds to multiple positions in space, the overall activation patterns have been found to form virtual hexagonal lattices. These lattices effectively serve as a set of coordinates, on to which the environment is mapped, thus allowing the animal to determine its precise position and navigate in real space. The biologists who characterized this fascinating biological system in rats received the Nobel Prize in Physiology or Medicine in 2014 for their discoveries.

Andreas Herz (Professor of Computational Neuroscience at LMU) and his Munich colleague Dr. Martin Stemmler, in collaboration with Dr. Alexander Mathis at Harvard University, have now provided a mathematical rationale for the hexagonal symmetry of grid-cell activation patterns. Their work is described in the online journal eLife.

Advantages of a hexagonal code

The three neurobiologists have used a mathematical approach to explore the reasons for the lattice-like distribution of spatial codes. Their analyses demonstrate that the hexagonal symmetry characteristic of grid-cell activation patterns (and of more familiar structures such as the honeycomb) affords the highest possible spatial resolution. Furthermore, their work suggests how grid cells should be arranged in mammals other than rodents, such as bats and whales.

"It turns out that, for the purposes of navigating on a horizontal plane, the best coordinate system is indeed the hexagonal lattice that has been experimentally observed for the grid cells of rats," says Herz. "The analysis of the case for three-dimensional space is more complex," adds Martin Stemmler. "Here the optimal configuration resembles that of the pyramidal packing of stacks of oranges." Preliminary experimental evidence is compatible with this theoretical prediction. Recent studies carried out by researchers led by Professor Nachum Ulanovsky at the Weizmann Institute in Israel indeed suggest the existence of such a grid-cell lattice in bats flying through three-dimensional space.
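One standard way to see why the hexagonal arrangement wins in two dimensions is circle-packing density. This is textbook geometry rather than a calculation from the eLife paper:

```python
import math

# Fraction of the plane covered when equal circles are centered on each
# lattice. The hexagonal packing beats the square packing, which is one
# way to see why hexagonally arranged firing fields give the finest
# spatial resolution per cell.

hex_density = math.pi / (2 * math.sqrt(3))   # ~0.9069
square_density = math.pi / 4                 # ~0.7854

print(round(hex_density, 4), round(square_density, 4))
```

In other words, for a fixed number of firing fields, the hexagonal layout leaves the smallest gaps between them.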

"Our findings indicate that the brain may well be capable of utilizing highly efficient grid-like coding schemes for the representation of diverse types of information. And lattice-like configurations provide enormous advantages when it comes to encoding complex objects, which require the registration of a plethora of features for their unambiguous characterization," says Alexander Mathis. Indeed, the researchers believe that, just as the discovery and investigation of grid cells have revolutionized our understanding of spatial coding in the brain, lattice-like patterns of neuronal activity are also likely to play an important role in other areas of neuroscience.

Explore further: Brain's GPS system influenced by shape of environment

More information: "Probable nature of higher-dimensional symmetries underlying mammalian grid-cell activity patterns," eLife (2015). DOI: 10.7554/eLife.05979
Journal reference: eLife
Provided by: Ludwig Maximilian University of Munich

Along the vines of the Vineyard.
With a forked tongue the snake singsss...

Top-Hat Rationale.

Distance to the Sun...

Earth / Mars Ratio  1 : 1.52

edge a = 1

edge b = 1.52

edge c = 1.82
[Image: arittri.gif]

angle A = 33.3 degrees

angle B = 56.7 degrees

area = 0.76 square units

[Image: image.png]
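Those triangle values can be reproduced with basic trigonometry. A quick check, assuming a right triangle with legs 1 and 1.52:

```python
import math

a, b = 1.0, 1.52          # Earth : Mars distance ratio as the two legs
c = math.hypot(a, b)      # hypotenuse
angle_A = math.degrees(math.atan(a / b))  # angle opposite edge a
angle_B = 90.0 - angle_A
area = 0.5 * a * b

print(round(c, 2), round(angle_A, 1), round(angle_B, 1), round(area, 2))
# 1.82 33.3 56.7 0.76
```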

distance from sun...

Distance from equator 

Aloha  ~ 19.5  Hi

Meme against the mainstream.

Recaps the synapse.
Along the vines of the Vineyard.
With a forked tongue the snake singsss...

Quote: "The analysis of the case for three-dimensional space is more complex," 
adds Martin Stemmler. 

"Here the optimal configuration resembles that of the pyramidal packing of stacks of oranges.

I remember proposing to Dr. Crater hexagonal pyramid stacked lattices.
However with a difference considering the optimum "electron spin angle".
The bases were hexagonal, but the height is constructed with specific base lengths,
to guarantee 6 tetrahedral corner angles of 70.528~ degrees = 
arctangent square root 8

This distributed the electron spin in a highly synergistic fashion,
when the latticing was dual hex pyramids base to base.
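For reference, the quoted corner angle really is the arctangent of the square root of 8:

```python
import math

# arctan(sqrt(8)) in degrees -- the tetrahedral corner angle quoted above.
corner_angle = math.degrees(math.atan(math.sqrt(8)))
print(round(corner_angle, 3))  # 70.529
```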

This article was most excellent in fantastic discovery on the gravitational anomaly

Read more at:


the hex pyramid latticing quote in the last post provoked me to look for old diagrams Hmm2

this one was unpublished because it is incomplete,
and was not the optimum choice of angles,
it was just producing visual studies of combining phi style geometries with tetrahedral elements,
as a practice to visualize other possibilities.

here we obtain an elongated hex base pyramid,
the top one has one pair of faces with a square-root-of-phi Khufu pyramid slope {etc}
and the second one below that has phi golden-rectangle and tetrahedral-rectangle geometries intersecting in cross-section

One sloping length from apex to corner point {corner angle length - hypotenuse}
has the wild  dimension of: 
square root {4 phi + 2}

blah blah blah -- the study of the combined geometries was evolved for lattice stacking

[Image: MkPpmYc.jpg]
Quote:"What we saw was that for each type of movement, there is a particular pattern of brain activity, and that these patterns were organized in a specific manner" said Dr. Costa.

Hidden deep in the brain, a map that guides animals' movements
August 30, 2017

[Image: neuron.jpg]
Credit: CC0 Public Domain
New research has revealed that deep in the brain, in a structure called the striatum, all the movements an animal can make are represented in a map of neural activity. If we think of neural activity as the coordinates of this map, then similar movements have similar coordinates and are represented closer together in the map, while actions that are more different have more distant coordinates and sit further apart.

The study, led by researchers at Columbia University and the Champalimaud Centre for the Unknown, was published today in Neuron.
"From the ears to the toes and everything in between, every move the body makes is determined by a unique pattern of brain-cell activity, but until now, and using the map analogy, we only had some pieces of information, like single, isolated latitudes and longitudes, but not an actual map. This study was like looking at this map for the first time," said Rui Costa, DVM, PhD, a neuroscientist and a principal investigator at Columbia's Mortimer B. Zuckerman Mind Brain Behavior Institute and an investigator at Champalimaud Research at the Champalimaud Centre for the Unknown, in Lisbon. Dr. Costa and his lab performed much of this work while at Champalimaud, before completing the analysis at Columbia.
A snapshot of neural activity
The brain's striatum is a structure that has been implicated in many brain processes, most notably in learning and selecting which movements to do. For example, a concert pianist harnesses her striatum to learn and play that perfect concerto. Early studies argued that cells in the striatum sent out two simple types of signals through different pathways, either 'go' or 'no go,' and it was this combination of these two signals—acting like a gas pedal and a brake—that drove movement. However, Dr. Costa and his team argued that the reality is far more complex, and that both types of neurons contribute to movement in a very specific way.
"What matters is not how much activity there is in each pathway, but rather the precise patterns of activity," said Dr. Costa. "In other words, which neurons are active at any particular time, and what sorts of movements, or behaviors, corresponded to that activity."
The key to observing neural activity during natural behavior was that the mice had to be able to move freely and naturally. To accomplish this, the team attached miniature, mobile microscopes to the heads of the mice. This allowed them to capture the individual activity patterns of up to 300 neurons in the striatum. At the same time, each mouse was equipped with an accelerometer, like a miniature Fitbit, which recorded the mouse's movements.

"We have recorded striatal neurons before, but here we have the advantage of imaging 200-300 neurons with single-cell resolution at the same time allowing for the study of population dynamics with great detail within a deep brain structure. Furthermore, here we genetically modified the mice so that neurons were visible when they were active, allowing us to measure specific neuronal populations. This gives us unprecedented access to the dynamics of a large population of neurons in a deep brain structure," says Gabriela Martins, postdoctoral researcher and one of the leading authors.
Towards understanding the striatal dynamics
Then, working with Liam Paninski, PhD, a statistician and a principal investigator at the Zuckerman Institute, the researchers devised a mathematical method of stripping the background noise out of the data. What they were left with was a clear window into the patterns of neural activity, which could serve as a basis for the complete catalog, or repertoire, of movements.

"What we saw was that for each type of movement, there is a particular pattern of brain activity, and that these patterns were organized in a specific manner," said Dr. Costa.

In the striatum, the organization is not random: neurons that are active together tend to lie closer together in space. "This, again, implies that we can learn much more from the neuronal activity and how it relates to behavior when considering detailed ensemble patterns instead of looking at average activity," says Andreas Klaus, a postdoctoral researcher and one of the leading authors. This representation maps the complete repertoire of possible actions: actions that are similar are represented similarly, while actions that differ more are represented more distinctly. "This mapping reflects similarity in actions beyond aspects of movement speed," added Andreas Klaus.
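The idea that similar actions get closer "coordinates" can be illustrated with a toy similarity measure on made-up firing-rate vectors (these are not recorded data, just a sketch of the concept):

```python
import math

# Cosine similarity between activity patterns: similar movements should
# yield more similar population vectors than dissimilar movements.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up firing rates for four hypothetical striatal neurons.
slight_right = [5.0, 4.0, 1.0, 0.0]
sharp_right  = [4.0, 5.0, 2.0, 0.0]
jump         = [0.0, 1.0, 4.0, 6.0]

sim_turns = cosine(slight_right, sharp_right)
sim_turn_jump = cosine(slight_right, jump)
print(sim_turns > sim_turn_jump)  # True: the two turns map closer together
```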
Interpreting patterns of brain activity and eventually repairing them
But how can scientists read and interpret these patterns of brain activity? "Imagine looking at the brain activity when the mouse makes a slight turn to the right vs. a sharp turn. In even more abstract terms, if moving my right arm is more similar to walking than to jumping, then those would be represented more similarly. One of the challenges is finding out what does this mean. Why is the pattern more similar for similar actions? Is it because it's saying something about the body parts or muscles we're using? This is something we hope to explore for the future," says Dr. Costa.
And he added, "The precise description of the organization of activity in the striatum under normal conditions is the first step toward understanding whether, and how, these dynamics are changed in disorders of movement, such as Parkinson's disease. Experts tend to focus on disruptions to the amount of neural activity as playing a role in Parkinson's, but these results strongly suggest that it is the pattern of activity, and specifically disruptions to that pattern, that may be more critical."
This research marks a critical step toward a long-held scientific goal: deciphering how the brain generates behavior. It also offers clues as to what may happen in disorders characterized by disrupted or repetitive movements—including Parkinson's disease and obsessive-compulsive disorder.
Explore further: From brouhaha to coordination: Motor learning from the neuron's point of view
More information: "The spatiotemporal organization of the striatum encodes action space," Neuron (2017). DOI: 10.1016/j.neuron.2017.08.015 
Journal reference: Neuron
Provided by: The Zuckerman Institute at Columbia University

Astrophysicists convert moons and rings of Saturn into music
August 30, 2017

[Image: 2-universityof.png]
The orbital periods, scaled frequencies, and musical notes of Saturn's major moons. The frequencies have been increased by 27 octaves from their true values by astrophysicists at the University of Toronto so they can be heard by human ears. Credit: SYSTEM Sounds/NASA/JPL-Caltech/Elisabetta Bonora/Marco Faccin
After centuries of looking with awe and wonder at the beauty of Saturn and its rings, we can now listen to them, thanks to the efforts of astrophysicists at the University of Toronto (U of T).

"To celebrate the Grand Finale of NASA's Cassini mission next month, we converted Saturn's moons and rings into two pieces of music," says astrophysicist Matt Russo, a postdoctoral researcher at the Canadian Institute for Theoretical Astrophysics (CITA) in the Faculty of Arts & Science at U of T.
The conversion to music is made possible by orbital resonances, which occur when two objects execute different numbers of complete orbits in the same time, so that they keep returning to their initial configuration. The rhythmic gravitational tugs between them keep them locked in a tight repeating pattern which can also be converted directly into musical harmony.
"Wherever there is resonance there is music, and no other place in the solar system is more packed with resonances than Saturn," says Russo.
The Cassini spacecraft has been collecting data while orbiting Saturn since its arrival in 2004 and is now in the throes of a final death spiral. It will plunge into the planet itself on September 15 to avoid contaminating any of its moons.
[Image: 3-universityof.png]
The orbital periods of the six 1st order resonances of Janus that affect the ring system. The 1:1 resonance is with Janus' co-orbital moon Epimetheus. The corresponding frequencies of these resonances were scaled up by 23 octaves by astrophysicists at the University of Toronto, producing a musical scale. Credit: SYSTEM Sounds/NASA/JPL/Space Science Institute
Russo was joined by astrophysicist Dan Tamayo, a postdoctoral researcher at CITA and the Centre for Planetary Sciences at U of T Scarborough, and together they were able to play music with an instrument measuring over a million kilometers long. The musical notes and rhythms both come from the orbital motion of Saturn's moons along with the orbits of the trillions of small particles that make up the ring system.
"Saturn's magnificent rings act like a sounding board that launches waves at locations that harmonize with the planet's many moons, and some pairs of moons are themselves locked in resonances," says Tamayo.
Music of the moons and rings
For the first piece, which follows Cassini's final plunge, the researchers increased the natural orbital frequencies of Saturn's six large inner moons by 27 octaves to arrive at musical notes. "What you hear are the actual frequencies of the moons, shifted into the human hearing range," says Russo. The team then used a state-of-the-art numerical simulation of the moon system, developed by Tamayo, to play the resulting notes every time a moon completes an orbit.
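The period-to-pitch conversion described above is simple enough to sketch: take the orbital frequency (1 / period) and raise it 27 octaves into the audible range. The period value below (Mimas, roughly 0.942 days) is an assumed figure for illustration, not taken from the article.

```python
SECONDS_PER_DAY = 86400

def orbit_to_pitch_hz(period_days, octaves=27):
    """Orbital frequency in Hz, shifted up by the given number of octaves."""
    orbital_freq_hz = 1.0 / (period_days * SECONDS_PER_DAY)
    return orbital_freq_hz * 2 ** octaves

mimas_hz = orbit_to_pitch_hz(0.942)
print(round(mimas_hz))  # lands comfortably within human hearing (~1.6 kHz)
```

Doubling a frequency raises the pitch by one octave, so multiplying by 2**27 preserves all the harmonic relationships between the moons while making them audible.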

The moon system has two orbital resonances which give rhythmic and harmonic structure to the otherwise unsteady lullaby-style melody. The first and third moons, Mimas and Tethys, are locked in a 2:1 resonance, so that Mimas orbits twice for every orbit of Tethys. The same relationship links the orbits of the second and fourth moons, Enceladus and Dione, and the combination of the two simple rhythms creates interesting musical patterns as they fall in and out of synchronicity.
[Image: 4-universityof.png]
A wood carving of Saturn's main ring system designed for the visually impaired, commissioned by astrophysicists at the University of Toronto. One will be able to feel many complex structures within the rings while also listening to their audio form. Credit: SYSTEM Sounds
"Since doubling the frequency of a note produces the same note an octave higher, the four inner moons produce only two different notes close to a perfect fifth apart," says Russo, who is also a graduate of U of T's Jazz performance program. "The fifth moon Rhea completes a major chord that is disturbed by the ominous entrance of Saturn's largest moon, Titan."
Russo and Tamayo are joined in the project by Toronto musician, and Matt's long-time bandmate, Andrew Santaguida. "Dan understands orbital resonances as deeply as anyone and Andrew is a music production wizard. My job is to connect these two worlds."
Titan actually gives the Cassini probe the final push which sends it hurtling towards its death in the heart of Saturn. The music follows Cassini's final flight over the ring system by converting the constantly increasing orbital frequencies of the rings into a dramatic rising pitch; the volume of the tone increases and decreases along with the observed bright and dark bands of the rings. The death of Cassini as it crashes into Saturn is heard as the crash of a final piano chord, inspired by The Beatles' "A Day in the Life," in which a rich major chord follows a similarly tense crescendo.
In addition to the soundtrack, Russo has had a large wood carving made of Saturn's rings so people can follow along with their fingertips while listening. The carving will be part of a tactile-audio astronomy exhibit at the Canadian National Institute for the Blind's Night Steps fundraising event for the visually impaired in Toronto on September 15, the same day the Cassini mission is scheduled to end.

Moons And Rings Translated Into Music
Resonances of Janus translated into music
The second piece demonstrates the scales played by Janus and Epimetheus, two small irregular moons that share an orbit just outside Saturn's main ring system. Together they are an example of 1:1 resonance, the only one in the solar system. The pair orbit at slightly different distances from Saturn but with a difference that is so negligible they swap places every four years. The composition simulates the final few months of Cassini's mission, while Janus is inching closer to Epimetheus before stealing its place in 2018. Together, the two moons play a unison drone but with a constantly shifting rhythm that repeats every eight years. 
Russo played a C# note on his guitar once for every orbit while a cello sustains a note for each resonance within the rings.

"Each ring is like a circular string, being continuously bowed by Janus and Epimetheus as they chase each other around their shared orbit," says Russo. Cassini recently captured an image of one of the ripples this creates within the rings. To turn this into music, Russo and Santaguida used the brightness variations in this image to control the intensity of the cello.

Resonances Of Janus Translated Into Music
"Saturn's dancing moons now have a soundtrack," says Russo.
Russo, Tamayo and Santaguida are the same group who converted the recently discovered TRAPPIST-1 planetary system into music a few months ago. They've dubbed their astro-sonic side-project SYSTEM Sounds and hope to continue exploring the universe for other evidence of naturally occurring harmonic resonance.
Explore further: Image: Saturn and rings, 7 June 2017
Provided by: University of Toronto

Read more at:

Along the vines of the Vineyard.
With a forked tongue the snake singsss...
The 2:1 resonance of Dione to Enceladus is pretty interesting.
Note the numbers they posted for the two moons ... especially the period in days.
They obviously erred a tad,
Dione's scaled frequency should be 567, since Enceladus is 1134.
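Running the same period-to-pitch conversion (frequency shifted up 27 octaves) on assumed orbital periods of about 1.370 days for Enceladus and 2.737 days for Dione bears this out:

```python
SECONDS_PER_DAY = 86400
OCTAVES = 27

def scaled_freq_hz(period_days):
    """Orbital frequency raised 27 octaves, as in the Saturn piece."""
    return (1.0 / (period_days * SECONDS_PER_DAY)) * 2 ** OCTAVES

# Assumed orbital periods (days): Enceladus ~1.370, Dione ~2.737.
enceladus = scaled_freq_hz(1.370)
dione = scaled_freq_hz(2.737)
print(round(enceladus), round(dione))  # ~2:1, Dione near half of Enceladus
```

Since the two periods sit almost exactly in a 2:1 ratio, the scaled frequencies must as well; an octave, musically.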

[Image: 2-universityof.png]
Entanglement is an inevitable feature of reality
September 1, 2017 by Lisa Zyga feature

[Image: entanglement.jpg]
Credit: CC0 Public Domain
(Phys.org)—Is entanglement really necessary for describing the physical world, or is it possible to have some post-quantum theory without entanglement?

In a new study, physicists have mathematically proved that any theory that has a classical limit—meaning that it can describe our observations of the classical world by recovering classical theory under certain conditions—must contain entanglement. So despite the fact that entanglement goes against classical intuition, entanglement must be an inevitable feature of not only quantum theory but also any non-classical theory, even those that are yet to be developed.
The physicists, Jonathan G. Richens at Imperial College London and University College London, John H. Selby at Imperial College London and the University of Oxford, and Sabri W. Al-Safi at Nottingham Trent University, have published a paper establishing entanglement as a necessary feature of any non-classical theory in a recent issue of Physical Review Letters.
"Quantum theory has many strange features compared to classical theory," Richens told Phys.org. "Traditionally we study how the classical world emerges from the quantum, but we set out to reverse this reasoning to see how the classical world shapes the quantum. In doing so we show that one of its strangest features, entanglement, is totally unsurprising. This hints that much of the apparent strangeness of quantum theory is an inevitable consequence of going beyond classical theory, or perhaps even a consequence of our inability to leave classical theory behind."
Although the full proof is very detailed, the main idea behind it is simply that any theory that describes reality must behave like classical theory in some limit. This requirement seems pretty obvious, but as the physicists show, it imparts strong constraints on the structure of any non-classical theory.
Quantum theory fulfills this requirement of having a classical limit through the process of decoherence. When a quantum system interacts with the outside environment, the system loses its quantum coherence and everything that makes it quantum. So the system becomes classical and behaves as expected by classical theory.
Here, the physicists show that any non-classical theory that recovers classical theory must contain entangled states. To prove this, they assume the opposite: that such a theory does not have entanglement. Then they show that, without entanglement, any theory that recovers classical theory must be classical theory itself—a contradiction of the original hypothesis that the theory in question is non-classical. This result implies that the assumption that such a theory does not have entanglement is false, which means that any theory of this kind must have entanglement.
This result may be just the beginning of many other related discoveries, since it opens up the possibility that other physical features of quantum theory can be reproduced simply by requiring that the theory has a classical limit. The physicists anticipate that features such as information causality, bit symmetry, and macroscopic locality may all be shown to arise from this single requirement. The results also provide a clearer idea of what any future non-classical, post-quantum theory must look like.
"My future goals would be to see if Bell non-locality can likewise be derived from the existence of a classical limit," Richens said. "It would be interesting if all theories superseding classical theory must violate local realism. I am also working to see if certain extensions of quantum theory (such as higher order interference) can be ruled out by the existence of a classical limit, or if this limit imparts useful constraints on these 'post-quantum theories.'"
Explore further: Envisioning a future quantum internet
More information: Jonathan G. Richens, John H. Selby, and Sabri W. Al-Safi. "Entanglement is Necessary for Emergent Classicality in All Physical Theories." Physical Review Letters. DOI: 10.1103/PhysRevLett.119.080503 
Journal reference: Physical Review Letters

Read more at:

Identification of individuals by trait prediction using whole-genome sequencing data

September 6, 2017

[Image: wholegenomes.jpg]
Examples of real (Left) and predicted (Right) faces from the Human Longevity study predicting face and other physical traits from whole genome sequencing data. Credit: Human Longevity, Inc.
Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. This work, from lead author Christoph Lippert, Ph.D. and senior author J. Craig Venter, Ph.D., was published in the journal Proceedings of the National Academy of Sciences (PNAS).

The authors believe that, while the study offers novel approaches for forensics, the work has serious implications for data privacy, deidentification and adequately informed consent. The team concludes that much more public deliberation is needed as more and more genomes are generated and placed in public databases.
For the IRB approved study, 1,061 ethnically diverse people ranging in age from 18 to 82 participated by having their genomes sequenced to an average depth of at least 30x. Researchers also collected phenotype data in the form of 3-D facial images, voice samples, eye and skin color, age, height, and weight.
The team predicted eye color, skin color and sex with high accuracy, but other, more complex genetic traits proved more difficult. The team believes their predictive models are sound, but that larger cohorts are needed to make prediction more robust. The team also developed a machine learning approach, a maximum entropy algorithm, whose novelty was that it found an optimal combination of all predictive models to match whole-genome sequencing data with phenotypic and demographic data; it enabled the correct identification of, on average, 8 out of 10 participants of diverse ethnicity, and 5 out of 10 African American or European participants.
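The general re-identification idea (not HLI's actual maximum-entropy algorithm) can be sketched as nearest-neighbour matching: predict traits from a genome, then pick the cohort member whose measured traits are closest. All names and numbers below are made up for illustration.

```python
import math

def euclidean(u, v):
    """Straight-line distance between two trait vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Measured (phenotype) traits per person: [height_cm, skin_tone, eye_tone]
cohort = {
    "person_a": [170.0, 0.8, 0.2],
    "person_b": [182.0, 0.3, 0.7],
    "person_c": [165.0, 0.5, 0.5],
}

# Hypothetical trait prediction derived from one person's genome.
predicted_from_genome = [180.5, 0.35, 0.65]
best = min(cohort, key=lambda p: euclidean(cohort[p], predicted_from_genome))
print(best)  # the cohort member with the nearest trait profile
```

In a real pipeline the traits would be normalized and weighted by predictive accuracy; this toy skips that, which is part of what the paper's maximum entropy combination addresses.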
Venter, HLI's co-founder, executive chairman and head of scientific strategy, commented, "We set out to do this study to prove that your genome codes for everything that makes you, you. This is clearly a proof of concept with a limited cohort but we believe that as we increase the numbers of people in this study and in the HLI database to hundreds of thousands we will be able to accurately predict all that can be predicted from individuals' genomes."
He added, "We are also concerned that the public and the research community at large are not adequately focused on the need for better safeguards and policies for individual privacy in the genomics era and are urging more analysis, better technical solutions, and continued discussion."
Lippert, data scientist at HLI, added, "This study shows the potential of imaging technologies to screen the traits of large numbers of individuals. Machine learning enables fully automated data interpretation and plays a crucial role in scientific discovery."
Explore further: Researchers conduct sequencing and de novo assembly of 150 genomes in Denmark
More information: Christoph Lippert et al. Identification of individuals by trait prediction using whole-genome sequencing data, Proceedings of the National Academy of Sciences (2017). DOI: 10.1073/pnas.1711125114 
Journal reference: Proceedings of the National Academy of Sciences
Provided by: Human Longevity, Inc.

Read more at:
Along the vines of the Vineyard.
With a forked tongue the snake singsss...

Discovery of a new mechanism for controlling memory
September 14, 2017

[Image: 5-discoveryofa.jpg]
Staining of receptors and pit for endocytosis, the process by which molecules are transported inside a cell. Credit: Jennifer Petersen/Daniel Choquet/IINS/CNRS Photo library
Researchers in Bordeaux recently discovered a new mechanism for storing information in synapses and a means of controlling the storage process. The breakthrough moves science closer to unveiling the mystery of the molecular mechanisms of memory and learning processes.


The research, carried out primarily by researchers at the Interdisciplinary Institute for Neurosciences (CNRS/Université de Bordeaux) and the Bordeaux Imaging Center appears in the 13 September 2017 edition of Nature.

Communication between neurons passes through over one million billion synapses, tiny structures a tenth the width of a single hair, in an extremely complex process. Synaptic plasticity – the ability of synapses to adapt in response to neuronal activity – was discovered nearly 50 years ago, leading the scientific community to identify it as a vital functional component of memorisation and learning.

Neurotransmitter receptors – found at the synapse level – play a key role in the transmission of nerve messages. A few years ago, the team of researchers in Bordeaux discovered that neurotransmitter receptors were not immobile, as previously thought, but in a constant state of agitation. They posited that controlling this agitation through neuronal activity could modulate the effectiveness of synaptic transmission by regulating the number of receptors present at a given time in a synapse.

The new research has taken the two teams further in their understanding of the basic mechanisms behind how information is stored in the brain. Scientists combined techniques based on chemistry, electrophysiology and high-resolution imaging to develop a new method to immobilise receptors at synaptic sites. This method successfully stops receptor movement, making it possible to study the impact of the immobilisation on brain activity and learning ability. It provides evidence that receptor movement is essential to synaptic plasticity as a response to intense neuronal activity.

[Image: 6-discoveryofa.jpg]

Pathways of neurotransmitter receptors followed by detecting single molecules at the surface of a rat hippocampal cultured neuron. Credit: Benjamin Compans/Daniel Choquet/IINS/CNRS Photo library

Researchers also explored the direct role of synaptic plasticity in learning. By teaching mice to recognise a specific environment, they show that halting receptor movement can be used to block the acquisition of this type of memory, confirming the role of synaptic plasticity in this process.

The discovery offers new perspectives on controlling memory. The memorisation protocol tested here activates a particular area of the brain: the hippocampus. The next step for researchers is to determine if the mechanism discovered can also be applied to other forms of learning and, by extension, to other areas of the brain. From a technical standpoint, it will be possible to develop new, reversible and light-sensitive methods of immobilizing receptors in order to better control the process.

Explore further: Proteins involved in brain's connectivity are controlled by multiple checkpoints

More information: A. C. Penn et al. Hippocampal LTP and contextual learning require surface diffusion of AMPA receptors, Nature (2017). DOI: 10.1038/nature23658
Journal reference: Nature
Provided by: CNRS

Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Study reveals breakthrough in decoding brain function
September 25, 2017 by Francis Mccabe

[Image: unlvstudyrev.jpg]
James M. Hyman, assistant professor of Psychology. Credit: UNLV
If there's a final frontier in understanding the human body, it's definitely not the pinky. It's the brain.

After four years of lab testing and complex neuro-decoding, a research team led by UNLV psychology professor James Hyman has achieved a major breakthrough that could open the floodgates for research into the anterior cingulate cortex, or ACC, and how human brains learn.

The research, published this summer in the neuroscience journal Neuron, offers new insight into the ACC's role in guiding the brain's response and adaptation to unexpected outcomes. (improv)
The study also showed the first cellular correlates of the extensively studied human phenomena known as feedback negativity. Hyman had previously found in 2015 conclusive evidence that the ACC in rodent brains reacts in the same manner as in humans to reward probability and outcome expectancy. (gamble)
The study garnered a special preview article in the journal from Bruno Averbeck, a leading expert in the field from the National Institutes of Health.
The function of the brain's ACC is heavily studied, but many scientists believe it contributes to behavioral adaptation, detection of conflict and responding to and managing emotional reactions.
According to Hyman, the ACC essentially creates expectations about what's going to happen. Then, when the result of our actions leads to an outcome, our brain assesses whether that outcome was the same as what we expected. The ACC is integrally involved in this process. If the outcome is not what we expected, the ACC reacts with a larger electrical charge - known as feedback negativity - than if the outcome was expected.
The research team showed that when an expected outcome was not delivered, a neural signal in the brain's ACC was detected. This signal offers clues to the cellular origin of feedback negativity, and that the phenomenon may be generated as the neurons in the ACC shift from encoding expected to actual outcomes.
Our brains are constantly doing this, Hyman said.
"Generally, the ACC always has a negative electrical change to outcomes; it's just that the size of this change varies by whether the outcome was the expected one or not, and whether it was better or worse than expected," Hyman said. "Every single thing we do involves making predictions about what's going to happen next. Usually facile little things, such as opening an unlocked door."

For instance, if you go to open what you believe to be an unlocked door by its handle, your ACC is predicting the outcome that the door will open and you will walk in. If the door handle is locked and it does not open as predicted, an electrical reaction occurs that is readable. The ACC will then learn from the unexpected outcome of its initial prediction.
Don't gamble with improv.

Now imagine you were playing a slot machine with a 75 percent chance of winning (we're pretending here). If the percentage was changed without you knowing to 25 percent, your ACC would still predict a positive outcome. When you start losing, the ACC would react to the unexpected outcome. And, most importantly, you would realize something's not right, learn from the outcome, and potentially adjust your behavior.
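The expectation-versus-outcome loop described here is, in essence, a prediction-error update. Below is a minimal delta-rule sketch, not the paper's actual model: the `feedback_negativity` function, the learning rate, and the outcome sequence are all illustrative assumptions. It shows how an expectation trained on a mostly-winning machine produces a large error signal when the payout silently drops:

```python
def feedback_negativity(outcomes, v0=0.75, alpha=0.3):
    """Delta-rule sketch: V is a running expectation of reward, and
    |delta| stands in for the size of the feedback-negativity response."""
    V = v0
    errors = []
    for outcome in outcomes:
        delta = outcome - V      # prediction error: actual minus expected
        errors.append(abs(delta))
        V += alpha * delta       # expectation drifts toward recent outcomes
    return V, errors

# The machine quietly switches from mostly-win to mostly-lose halfway through:
outcomes = [1, 1, 1, 0, 1, 1, 1, 1] + [0, 0, 1, 0, 0, 0, 0, 0]
V, errors = feedback_negativity(outcomes)
# The first loss after the switch yields a much larger error than the
# expected wins just before it, and V settles toward the new, lower rate.
```

In this toy picture, a couple of consecutive surprises are enough to swing V toward the opposite prediction, loosely echoing the two-trial turnaround Hyman reports.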

Through the course of the study, Hyman also discovered a correlation between feedback-related negativity in both human and rodent models.

"It took as few as two consecutive unexpected events for cells to change and start making the opposite prediction," Hyman said. The testing mirrored what has been done in humans and opens the possibility that findings from rodent models can contribute to our understanding of the ACC function in humans.

Additional research on the ACC could lead to new solutions to assist in the cognitive control problems that are associated with a host of psychiatric disorders such as depression, schizophrenia, and drug addiction.

According to Hyman, this discovery will help in further understanding our ability to detect the situations where we have the most learning. "Understanding those mechanics could make us learn faster," he said.

"A Novel Neural Prediction Error Found in Anterior Cingulate Cortex Ensembles" appeared in the July issue of the journal Neuron (Vol. 95, issue 2).

Explore further: Scientists find link between cognitive fatigue and effort and reward

More information: James Michael Hyman et al. A Novel Neural Prediction Error Found in Anterior Cingulate Cortex Ensembles, Neuron (2017). DOI: 10.1016/j.neuron.2017.06.021
Journal reference: Neuron
Provided by: University of Nevada, Las Vegas

how to count craters and publish bullshit

[Image: C0OjUJiUoAA4v6s.jpg]
That chart reminds me. Doh


Stu Algo Stu Bamf RCH all @ THM  Hi

Six degrees of separation: Why it is a small world after all
October 19, 2017

[Image: socialnetwork.jpg]
Social network diagram. Credit: Daniel Tenerife/Wikipedia
It's a small world after all - and now science has explained why. A study conducted by the University of Leicester and KU Leuven, Belgium, examined how small worlds emerge spontaneously in all kinds of networks, including neuronal and social networks, giving rise to the well-known phenomenon of "six degrees of separation".
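The mechanism is easy to demonstrate: take a big ring lattice, where everyone knows only near neighbours, and add a handful of random long-range links. The sketch below is the classic Watts–Strogatz-style textbook illustration, not the Leicester/KU Leuven model itself; the function names and parameters are mine:

```python
import random
from collections import deque

def ring_with_shortcuts(n=200, k=4, shortcuts=20, seed=7):
    """Ring lattice (each node linked to its k nearest neighbours) plus a
    few random long-range shortcuts -- the classic small-world recipe."""
    random.seed(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for _ in range(shortcuts):
        a, b = random.randrange(n), random.randrange(n)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over all pairs, by breadth-first search."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

lattice = ring_with_shortcuts(shortcuts=0)
small_world = ring_with_shortcuts(shortcuts=20)
```

With 200 nodes the pure lattice keeps strangers roughly 25 hops apart on average; twenty shortcuts are enough to pull that separation down sharply, which is the "six degrees" effect in miniature.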


Rise Of The Anomalists.
Lincoln never dreamed he would Wake up Star Spangled-Entangled.  Pennywise

[Image: john-kelly-frederica-wilson-Getty-AP-640x480.jpg]

Want to control your dreams? Here's how
October 19, 2017

[Image: sleep.jpg]
Credit: Vera Kratochvil/public domain
New research at the University of Adelaide has found that a specific combination of techniques will increase people's chances of having lucid dreams, in which the dreamer is aware they're dreaming while it's still happening and can control the experience.

Although many techniques exist for inducing lucid dreams, previous studies have reported low success rates, preventing researchers from being able to study the potential benefits and applications of lucid dreaming.
Dr Denholm Aspy's research in the University of Adelaide's School of Psychology is aimed at addressing this problem and developing more effective lucid dream induction techniques.
The results from his studies, now published in the journal Dreaming, have confirmed that people can increase their chances of having a lucid dream.
The study involved three groups of participants, and investigated the effectiveness of three different lucid dream induction techniques:
  1. reality testing – which involves checking your environment several times a day to see whether or not you're dreaming.

  2. wake back to bed – waking up after five hours, staying awake for a short period, then going back to sleep in order to enter a REM sleep period, in which dreams are more likely to occur.

  3. MILD (mnemonic induction of lucid dreams) – which involves waking up after five hours of sleep and then developing the intention to remember that you are dreaming before returning to sleep, by repeating the phrase: "The next time I'm dreaming, I will remember that I'm dreaming." You also imagine yourself in a lucid dream.
Among the group of 47 people who combined all three techniques, participants achieved a 17 percent success rate in having lucid dreams over the period of just one week – significantly higher compared to a baseline week where they didn't practise any techniques. Among those who were able to go to sleep within the first five minutes of completing the MILD technique, the success rate of lucid dreaming was much higher, at almost 46 percent of attempts.
"The MILD technique works on what we call 'prospective memory' – that is, your ability to remember to do things in the future. By repeating a phrase that you will remember you're dreaming, it forms an intention in your mind that you will, in fact, remember that you are dreaming, leading to a lucid dream," says Dr. Aspy, Visiting Research Fellow in the University's School of Psychology.
"Importantly, those who reported success using the MILD technique were significantly less sleep deprived the next day, indicating that lucid dreaming did not have any negative effect on sleep quality," he says.
"These results take us one step closer to developing highly effective lucid dream induction techniques that will allow us to study the many potential benefits of lucid dreaming, such as treatment for nightmares and improvement of physical skills and abilities through rehearsal in the lucid dream environment," Dr. Aspy says.
Explore further: Can we train ourselves to control our dreams?
More information: Denholm J. Aspy et al, Reality testing and the mnemonic induction of lucid dreams: Findings from the national Australian lucid dream induction study, Dreaming (2017).
Provided by: University of Adelaide

Red-Pilled awake.

[Image: image.png]

Lincoln awoke in the Future Suture where eye sew it all up...

In ANU  World Order.

[Image: 2-insearchofth.jpg]

Quote:The study involved three groups of participants, 
and investigated the effectiveness 
of three different lucid dream induction techniques

I have an awful success rate at lucid sex dreams  Cry

I ... almost ... always wake up when just the action gets naked and hot,

I do better at flying dreams.

Sometimes ... the lucid dream resists Nonono or constrains the Intent ... into waking up.
Especially flying,
or sex.

The best lucid dreams have Angels or Gods that visit.

Intent in training for lucid dreaming ... well you better have good intent.

I used to have recurrent lucid dreams {semi nightmares} when I was in my 30s and 40s,
that I was at a huge party late at night,
loaded with people everywhere party drinking heavily and partying loud and noisy.
I could taste the Scotch that I was drinking, 
and smell the cigarette and weed smoke in the party palace,
and there was coke and weed and hard liquor for everyone, 
and I was drunk and getting wasted, 
wandering about the party, 
looking for good sex. 
{memories from my twenties? Lol }

I swear that I would wake up with a fucking hangover.
man oh man
those lucid dreams gave me the heebie Reefer jeebies

(10-19-2008, 01:18 AM)EA Wrote: Since you Fashion yourself as an american icon online...

[Image: Lincoln-in-Top-Hat-1.jpg]

I will Fashion a Head-Dress for You then, to contain the thoughts that will burst from your new mind.

[Image: gettyimages-632317268_custom-89f9e18d3ec...00-c85.jpg]
[Image: Hominid-Lion.JPG]
"Grab 'em by the pussy" -Ares' Face Courtesy NASA/JPL July 25,1976.

The Outer-diameter of the 'Top hat' ICON brimband is ~3397 = 334 Pixels.

The Inner-diameter of the 'Top hat' ICON headband is ~1947 = 195 Pixels.

This 'Top hat' is given the right to our new member Lincoln in good faith,
as a true token of our appreciation  as educators.
[Image: GOP_2016_Trump_Fact_Check-e8198.jpg&w=480]
It is a Volume measure.
The Cylinder of this Top hat has these dimensions.

Outer: Diameter = 3,397 = 340 pixels
Inner: Diameter = 1,947 = 195 pixels

Height = 3,333 = 333 pixels

Volume  = 39673.52

This "Top Hat" is crowned on a Sphere @~73.35 North or South.

It Has a Current Precedent That will show the Former President...

How a proper Head-dress is the Rise of the Anomalists. Mellow

I ain't no Betsy Ross, but This is a THREAD and I sure do know it needs a needle to Sew it all up in Lincoln's mind.

Astro Sight Hominid Eye Lionize.Genetic mirror backs this
Astro Site Prominent Line-wise.Geometric adhere for Axis
Astro Cite Dominant Quantise.Specific WORD and "Font Size"

More information: Jeff A. Stogsdill et al, Astrocytic neuroligins control astrocyte morphogenesis and synaptogenesis, Nature (2017). DOI: 10.1038/nature24638 

Startup to train robots like puppets

November 8, 2017 by Robert Sanders
[Image: startuptotra.jpg]
Tianhao Zhang trains a robot to manipulate wires using VR tele-operation. Credit: University of California - Berkeley
Robots today must be programmed by writing computer code, but imagine donning a VR headset and virtually guiding a robot through a task, like you would move the arms of a puppet, and then letting the robot take it from there.

Star-shaped brain cells orchestrate neural connections
November 8, 2017

[Image: starshapedbr.jpg]
An astrocyte (blue) grown in a dish with neurons forms an intricate, star-shaped structure. The locations of neurons' synaptic proteins are marked in green and purple (neurons themselves are not visible). Overlapping green and purple proteins represent the locations of synapses. Credit: Jeff Stogsdill, Duke University
Brains are made of more than a tangled net of neurons. Star-like cells called astrocytes diligently fill in the gaps between neural nets, each wrapping itself around thousands of neuronal connections called synapses. This arrangement gives each individual astrocyte an intricate, sponge-like structure.

New research from Duke University finds that astrocytes are much more than neurons' entourage. Their unique architecture is also extremely important for regulating the development and function of synapses in the brain.
When astrocytes don't work right, their dysfunction may underlie neuronal problems observed in devastating diseases like autism, schizophrenia and epilepsy.
The Duke team identified a family of three proteins that control the web-like structure of each astrocyte as it grows and encases neuronal structures such as synapses. Switching off one of these proteins not only limited the complexity of the astrocytes, but also altered the nature of the synapses between neurons they touched, shifting the delicate balance between excitatory and inhibitory neural connections.
"We found that astrocytes' shape and their interactions with synapses are fundamentally important for brain function and can be linked to diseases in a way that people have neglected until now," said Cagla Eroglu, an associate professor of cell biology and neurobiology at Duke. The research was published in the Nov. 9 issue of Nature.
Astrocytes have been around almost as long as brains have. Even simple invertebrates like the crumb-sized roundworm C. elegans have primitive forms of astrocytes cloaking their neural synapses. As our brains have evolved into complex computational machines, astrocyte structure has also grown more elaborate.
But the complexity of astrocytes is dependent on their neuronal companions. Grow astrocytes and neurons together in a dish, and the astrocytes will form intricate star-shaped structures. Grow them alone, or with other types of cells, and they come out stunted.
To find out how neurons influence astrocyte shape, Jeff Stogsdill, a recent PhD graduate in Eroglu's lab, grew the two cells together while tweaking neurons' cellular signaling mechanisms. He was surprised to find that even if he outright killed the neurons, but preserved their structure as a scaffold, the astrocytes still beautifully elaborated on them.
[Image: 1-starshapedbr.jpg]
A 3-D-printed model of a single astrocyte from a mouse brain shows the sponge-like structure of these cells. Credit: Katherine King, Duke University
"It didn't matter if the neurons were dead or alive—either way, contact between astrocytes and neurons allowed the astrocyte to become complex," Stogsdill said. "That told us that there are interactions between the cell surfaces that are regulating the process."

(03-03-2014, 11:57 PM)EA Wrote: Improv manifest: Get in the retina
Quote:To correct with a metaphorical lens(es)

I think I need "Tri-Focals"

Watch this Space...

Strange State of Matter Found in Chicken's Eye

By Megan Gannon, News Editor   |   February 27, 2014 02:19pm ET

[Image: disordered-hyperuniformity.jpg?1393269653]

This diagram depicts the spatial distribution of the five types of light-sensitive cells known as cones in the chicken retina. Scientists have proposed that this arrangement could be a new state of matter, called disordered hyperuniformity.
Credit: Courtesy of Joseph Corbo and Timothy Lau, Washington University in St. Louis


Never before seen in biology, a state of matter called "disordered hyperuniformity" has been discovered in the eye of a chicken.

This arrangement of particles appears disorganized over small distances but has a hidden order that allows material to behave like both a crystal and a liquid.

Quote:The discovery came as researchers were studying cones, tiny light-sensitive cells that allow for the perception of color, in the eyes of chickens.

For chickens and other birds that are most active during the daytime, these photoreceptors come in four different color varieties — violet, blue, green and red — and a fifth type for detecting light levels, researchers say. Each type of cone is a different size.

These cells are crammed into a single tissue layer on the retina. Many animals have cones arranged in an obvious pattern. Insect cones, for example, are laid out in a hexagonal scheme. The cones in chicken eyes, meanwhile, appear to be in disarray.

[Image: rodcone.gif]  [Image: 9344691561_78cdd276cc_o.jpg]

But researchers who created a computer model to mimic the arrangement of chicken cones discovered a surprisingly tidy configuration.

Around each cone is a so-called exclusion region that bars other cones of the same variety from getting too close. This means each cone type has its own uniform arrangement, but the five different patterns of the five different cone types are layered on top of each other in a disorderly way, the researchers say.

"Because the cones are of different sizes it's not easy for the system to go into a crystal or ordered state," study researcher Salvatore Torquato, a professor of chemistry at Princeton University, explained in a statement. "The system is frustrated from finding what might be the optimal solution, which would be the typical ordered arrangement. While the pattern must be disordered, it must also be as uniform as possible. Thus, disordered hyperuniformity is an excellent solution."
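The "exclusion region" idea lends itself to a quick simulation. The toy sketch below is my own dart-throwing construction, not the researchers' computer model; `cone_mosaic` and its parameters are invented for illustration. It places five cone types so that same-type cones keep their distance while different types ignore one another:

```python
import random

def cone_mosaic(n_types=5, per_type=60, exclusion=0.08, seed=3):
    """Dart-throwing sketch: each candidate cone is accepted only if it is
    at least `exclusion` away from every cone of the SAME type; cones of
    other types are ignored. Five orderly patterns overlay into an overall
    jumbled-looking mosaic."""
    random.seed(seed)
    mosaic = {t: [] for t in range(n_types)}
    for t in range(n_types):
        attempts = 0
        while len(mosaic[t]) < per_type and attempts < 20000:
            attempts += 1
            x, y = random.random(), random.random()
            if all((x - px) ** 2 + (y - py) ** 2 >= exclusion ** 2
                   for px, py in mosaic[t]):
                mosaic[t].append((x, y))
    return mosaic

mosaic = cone_mosaic()
```

Each type on its own comes out strikingly even, yet the overlay of all five looks disordered, which is the qualitative signature of the arrangement described above.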

Materials in a state of disordered hyperuniformity are like crystals in that they keep the density of particles consistent across large spatial distances, Torquato and colleagues said. But these systems are also like liquids, because they have the same physical properties in all directions.

Researchers say this may be the first time disordered hyperuniformity has been observed in a biological system; previously it had only been seen in physical systems like liquid helium and simple plasmas.

For chicken eyes, the researchers speculate this cone arrangement allows the birds to sample incoming light evenly. Engineers may be able to take inspiration from disordered hyperuniformity in nature to create optical circuits and light detectors that are sensitive or resistant to certain light wavelengths, the researchers say. Their findings were detailed on Feb. 24 in the journal Physical Review E.

There you go watcher. That is a nice coincidental answer from the Improvisphere that leads us either  on a path of infinite what's so special about: 19.5,etc,etc,etc...

Or straight back to the simplest structure.

This is why I seek to set "STANDARDS".

So everyone is on the same page.

"Goldberg polyhedra", even at the cost of confusing others.

"disordered hyperuniformity" even at the cost of confusing other others.

An infinite new set of polyhedra looks like a crystaliquid to a rooster in the dawns early light.
Metaphorical clathrin

[Image: rooster-crowing-2.jpg]


Gnosis just needs to Get in the retina of the mind's eye.

Twilight trick: A new type of cell has been found in the eye of a deep-sea fish
November 8, 2017

[Image: deepseafishr.jpg]
The two pearlside species studied, Maurolicus muelleri (top) and Maurolicus mucronatus (bottom). Credit: Dr Fanny de Busserolles / The University of Queensland
A new type of cell has been found in the eye of a deep-sea fish, and scientists say the discovery opens a new world of understanding about vision in a variety of light conditions.

University of Queensland scientists found the new cell type in the deep-sea pearlside fish (Maurolicus spp.), which have an unusual visual system adapted for twilight conditions.
Dr Fanny de Busserolles at UQ's Queensland Brain Institute said the retina of most vertebrate animals - including humans - contained two photoreceptor types: rods for vision in dim light, and cones for daytime vision. Each had different light-sensitive proteins.
"Deep-sea fish, which live at ocean depths below 200m, are generally only active in the dark, so most species have lost all their cones in favour of light-sensitive rods," Dr de Busserolles said.
Pearlsides differed in that they were mostly active at dusk and dawn, close to the water's surface where light levels are intermediate.
"Previously it was thought that pearlsides had retinas composed entirely of rods, but our new study has found this isn't the case," Dr de Busserolles said.
"Humans use their cones during the day and rods at night, but during twilight, although not ideal, we use a combination of both.
"Pearlsides, being active mainly during twilight, have developed a completely different solution.
"Instead of using a combination of rods and cones, they combine aspects of both cells into a single and more efficient photoreceptor type."
The researchers found that the cells - which they have termed "rod-like cones" for their shapes under the microscope - were tuned perfectly to the pearlsides' specific light conditions.
Research leader Professor Justin Marshall said the study was significant.
"It improves understanding of how different animals see the world and how vision might have helped them to conquer even the most extreme environments, including the deep sea," Professor Marshall said.
"Humans love to classify everything into being either black or white.
"However our study shows the truth might be very different from previous theories.
"More comprehensive studies, and caution, are needed when categorising photoreceptor cells into cones and rods."
The study is published in Science Advances.
Explore further: New function for rods in daylight
More information: F. de Busserolles et al., "Pushing the limits of photoreception in twilight conditions: The rod-like cone retina of the deep-sea pearlsides," Science Advances (2017). DOI: 10.1126/sciadv.aao4709
Journal reference: Science Advances
Provided by: University of Queensland

I wonder if that "disordered hyperuniformity" aka liquid crystal like properties,
has a function to engage ESP or clairvoyance ... intuitive communication ...
like in
'... they read each other eyes ...'

or in pineal based perception,

when inductive processing and thinking transcends into exponential intuition.

How did documented Russian healers in the 50s and 60s work their magic?

paying intention TM

'Mind's eye blink' proves 'paying attention' is not just a figure of speech

November 27, 2017 by David Salisbury
"disordered hyperuniformity"
[Image: mindseyeblin.jpg]
When your attention shifts, your brain 'blinks.' Credit: Keith Wood, Vanderbilt University
When your attention shifts from one place to another, your brain blinks. The blinks are momentary unconscious gaps in visual perception and came as a surprise to the team of Vanderbilt psychologists who discovered the phenomenon while studying the benefits of attention.

"Attention is beneficial because it increases our ability to detect visual signals even when we are looking in a different direction," said Assistant Professor of Psychology Alex Maier, who directed the study. "The 'mind's eye blinks' that occur every time your attention shifts are the sensory processing costs that we pay for this capability."

Details of their study are described in a paper titled "Spiking suppression precedes cued attentional enhancement of neural responses in primary visual cortex" published online Nov. 23 by the journal Cerebral Cortex.

"There have been several behavior studies in the past that have suggested there is a cost to paying attention. But our study is the first to demonstrate a sensory brain mechanism underlying this phenomenon," said first author Michele Cox, who is a psychology doctoral student at Vanderbilt.

The research was conducted with macaque monkeys that were trained to shift their attention among different objects on a display screen while the researchers monitored the pattern of neuron activity taking place in their brains. Primates are particularly suited for the study because they can shift their attention without moving their eyes. Most animals do not have this ability.

[Image: 1-mindseyeblin.jpg]
Michele Cox in Maier Lab. Credit: John Russell / Vanderbilt
"We trained macaques to play a video game that rewarded them with apple juice when they paid attention to certain visual objects. Once they became expert at the game, we measured the activity in their visual cortex when they played," said Maier.

By combining advanced recording techniques that simultaneously track large numbers of neurons with sophisticated computational analyses, the researchers discovered that the activity of the neurons in the visual cortex were momentarily disrupted when the game required the animals to shift their attention. They also traced the source of the disruptions to parts of the brain involved in guiding attention, not back to the eyes.

Mind's eye blink is closely related to "attentional blink" that has been studied by Cornelius Vanderbilt Professor of Psychology David Zald and Professor of Psychology René Marois. Attentional blink is a phenomenon that occurs when a person is presented with a rapid series of images. If the spacing between two images is too short, the observer doesn't detect the second image. In 2005, Zald determined that the time of temporary blindness following violent or erotic images was significantly longer than it is for emotionally neutral images.

Explore further: Training changes the way the brain pays attention

More information: Michele A Cox et al, Spiking Suppression Precedes Cued Attentional Enhancement of Neural Responses in Primary Visual Cortex, Cerebral Cortex (2017). DOI: 10.1093/cercor/bhx305

Journal reference: Cerebral Cortex
Provided by: Vanderbilt University

paying intention TM

Biologists create beetle with functional extra eye

November 13, 2017 by Kevin Fryling

[Image: indianaunive.jpg]
The creation of three-eyed beetles through a new technique developed at IU provides scientists a new way to investigate the genetic mechanisms responsible for the evolutionary emergence of new physical traits. Credit: Eduardo Zattara
On "Game of Thrones," a three-eyed raven holds the secrets of the past, present and future in a vast fantasy kingdom. But for real-world biologists, a "three-eyed beetle" may offer a true glimpse into the future of studying evolutionary development.

Using a simple genetic tool, IU scientists have intentionally grown a fully functional extra eye in the center of the forehead of the common beetle. Unraveling the biological mechanisms behind this occurrence could help researchers understand how evolution draws upon pre-existing developmental and genetic "building blocks" to create novel complex traits, or "old" traits in novel places.

The study's results appear in the Proceedings of the National Academy of Sciences. The work also provides deeper insights into an earlier experiment that accidentally produced an extra eye as part of a study to understand how the insect head develops.

"Developmental biology is beautifully complex in part because there's no single gene for an eye, a brain, a butterfly's wing or a turtle's shell," said Armin P. Moczek, a professor in the IU Bloomington College of Arts and Sciences' Department of Biology. "Instead, thousands of individual genes and dozens of developmental processes come together to enable the formation of each of these traits.

"We've also learned that evolving a novel physical trait is much like building a novel structure out of Legos, by re-using and recombining 'old' genes and developmental processes within new contexts."

As a consequence, the evolution of novel features often requires many fewer genetic changes than biologists originally thought.

But unlike rearranging and combining toy plastic bricks to form a new structure, Moczek said it's unclear what biological mechanisms guide the construction of new physical traits under some circumstances but not others.

"You can make new things over and over or in new places using the same old set of 'bricks,'" he said. "But in Legos, we know the rules of assembly: which pieces go together and which things don't. In biology, we still struggle to understand the respective counterparts."

One of the ways that scientists have sought to get a clearer view of this process is by coaxing the growth of "ectopic" organs - or organs that form on the wrong part of the body. Early work in the field has focused on the formation of fruit fly eyes in the wrong place, such as on the wing or leg. However, these experiments required activating major regulatory genes in the new location, a technique that is limited to only a few study organisms. The resulting "eyes" were also never fully functional.

By contrast, the new IU-led study reports on the formation of an extra functional eye—technically, a "fusion" of two sets of extra eyes—following the knockdown of a single gene, a technique widely available to scientists in most organisms. The unexpected formation of a complex, functional eye in a novel location in the process is "a remarkable example of the ability of developmental systems to channel massive perturbations toward orderly and functional outcomes," Moczek said.

To create a fully functional eye in the center of a beetle's head, Moczek's team deactivated a single gene called orthodenticle, or odt, which their research has previously shown to play a role in instructing the formation of the head during development.

"This study experimentally disrupts the function of a single, major gene," Moczek said. "And, in response to this disruption, the remainder of head development reorganizes itself to produce a highly complex trait in a new place: a compound eye in the middle of the head.

"Moreover, the darn thing actually works!"

To confirm the eye was a true extra eye, the IU team conducted multiple tests to prove the structure had the same cell types, expressed the same genes, grew proper nerve connections and elicited the same behavioral response as a normal eye. What makes the results so exciting—beyond the eye's Frankenstein novelty—is the relatively simple genetic technique used to achieve the gene knockdown, said IU postdoctoral researcher Eduardo E. Zattara, who is lead author on the study.

Moczek said the findings also go beyond this application to help address fundamental questions in development, evolution and medicine. For example, understanding how complex organs organize their growth and integration into the body is a central challenge the medical sciences must overcome to develop artificial organs for research and transplantation.

"The use of ectopic eyes is a highly accessible paradigm to study all of this, across many types of organisms," Zattara said. "We regard this study as really opening the door to new avenues of investigation in multiple disciplines."


More information: Eduardo E. Zattara et al., "Development of functional ectopic compound eyes in scarabaeid beetles by knockdown of orthodenticle," Proceedings of the National Academy of Sciences (2017). DOI: 10.1073/pnas.1714895114

Journal reference: Proceedings of the National Academy of Sciences
Provided by: Indiana University

Read more at:
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
paying intention TM

Quote:"The amygdala is a part of the brain associated with survival—fight or flight.
It acts as a gateway regulating what we pay attention to,"   said Dr. Daniel C. Krawczyk

Quote:"Numerous studies have revealed that the amygdala is critical for consciously and non-consciously processing facial expressions,
and a smaller set of studies has revealed a role for the amygdala in Tobias Owen assessing whether an individual appears to be trustworthy or not. To Bias Own
[Image: 072516-face-on-mars-630x375.jpg]
Our study, however, is the first to indicate that the amygdala actually responds more selectively to faces than the fusiform face area," Young said.
"These findings lead us to believe that the amygdala may be getting a 'preview' before the brain's primary visual cortex sends the signal to the fusiform face area."

Amygdala may play bigger than expected role in facial recognition
February 12, 2018 by Emily Bywaters, University of Texas at Dallas

[Image: amyg.png]
Location of the amygdala in the human brain. Image: Wikipedia.
New research from the Center for BrainHealth at The University of Texas at Dallas reveals that the amygdala may play a larger role in the brain's ability to recognize faces than previously thought.

In a study published in Neuropsychologia, scientists found that the amygdala responded more specifically to faces than the fusiform face area (FFA), part of the brain traditionally known for facial recognition.
"The amygdala is a part of the brain associated with survival—fight or flight. It acts as a gateway regulating what we pay attention to," said Dr. Daniel C. Krawczyk, deputy director of the Center for BrainHealth and associate professor in the School of Behavioral and Brain Sciences. "We would expect the amygdala to be activated in the presence of scary or threatening faces—something that our brain might perceive as potentially impeding our survival. However, we were surprised to find how active the amygdala is in the presence of emotionally neutral faces."
The research included 69 participants, ages 19 to 65, who had sustained a traumatic brain injury (TBI) at least six months earlier. More than half of the participants had some symptoms of post-traumatic stress disorder.
"This finding highlights the importance of social cognition, which includes the ability to recognize faces. This process is key for our survival," said Krawczyk, the study's co-author and the Debbie and Jim Francis Chair in Behavioral Brain Sciences.
The results are similar to a prior study that was conducted with individuals without TBI.
"While this study helps to further elucidate the role of the amygdala in visual recognition and memory, it is not exclusive to TBI patients, and indicates that further research with other patient populations is warranted," said Dr. Leanne R. Young, executive director of the Brain Performance Institute at the Center for BrainHealth, who led the study.
In the study, functional magnetic resonance imaging measured changes in the blood-oxygen-level-dependent signal in the left and right amygdala as participants viewed a series of neutral faces and scenes. Participants were instructed to concentrate on pictures of faces and scenes, faces or scenes, or both simultaneously.
Evidence of face-selective activity in the amygdala was found in 60 percent of participants, demonstrating that the amygdala strongly responds to neutral faces. The amygdala was less responsive to scene stimuli than the FFA. This face-specificity is of particular interest for neuroimaging tasks used to evaluate the impact of frontal lobe impairments on strategic attention.
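As a rough illustration of what "face-selective activity" means in such a contrast, one can compare a region's mean response to face versus scene stimuli with a simple selectivity index. This is a minimal sketch, not the study's actual analysis pipeline; the function name and the toy response values below are hypothetical.

```python
def selectivity_index(face_responses, scene_responses):
    """Contrast index in [-1, 1]: positive means a stronger mean
    response to faces than to scenes; 0 means no preference."""
    mean_face = sum(face_responses) / len(face_responses)
    mean_scene = sum(scene_responses) / len(scene_responses)
    return (mean_face - mean_scene) / (mean_face + mean_scene)

# Toy BOLD-like response amplitudes (arbitrary units, illustrative only)
amygdala_faces = [1.8, 2.1, 1.9, 2.0]
amygdala_scenes = [0.9, 1.1, 1.0, 1.0]
print(round(selectivity_index(amygdala_faces, amygdala_scenes), 3))
```

A positive index across many participants would be read as face-selectivity; the study reports such evidence in 60 percent of its sample.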
"Numerous studies have revealed that the amygdala is critical for consciously and non-consciously processing facial expressions, and a smaller set of studies has revealed a role for the amygdala in assessing whether an individual appears to be trustworthy or not. Our study, however, is the first to indicate that the amygdala actually responds more selectively to faces than the fusiform face area," Young said.

"These findings lead us to believe that the amygdala may be getting a 'preview' before the brain's primary visual cortex sends the signal to the fusiform face area."

More information: Leanne R. Young et al. Amygdala activation as a marker for selective attention toward neutral faces in a chronic traumatic brain injury population, Neuropsychologia (2017). DOI: 10.1016/j.neuropsychologia.2017.08.026

Journal reference: Neuropsychologia
Provided by: University of Texas at Dallas

Quote:"We showed the fast-spiking interneurons act like gatekeepers for plasticity," said Scott Owen, PhD, staff scientist in Kreitzer's laboratory at Gladstone. "They restrict when plasticity can occur, meaning that they can prevent changes in the connection strength between neurons. This is crucial for learning and memory and, more specifically, for enabling the basal ganglia to remember how to perform tasks."
Ultimately, the scientists explained how the interneurons function to improve the efficiency of procedural learning.

[Image: 24560744396_72b4dd5d53_b.jpg]

New study explains how your brain helps you learn new skills

February 8, 2018, Gladstone Institutes
[Image: 1-brain.jpg]
Credit: CC0 Public Domain
Even if you haven't ridden your bike in years, you probably remember how to do so without giving it much thought. If you're a skilled piano player, odds are you can easily sit down and play a song you've rehearsed before. And, when you drive to work, you're likely not actively thinking about your movements.

The skills needed to perform any of these activities are stored in your brain as procedural memories. Researchers from the Gladstone Institutes uncovered how a special type of neuron improves the efficiency of this type of learning. Their findings were published online today in the scientific journal Cell.
The scientists initially wanted to show how the specialized brain cells, called fast-spiking interneurons, cause movement disorders, such as Tourette's syndrome, dystonia, and dyskinesia. As it turns out, that isn't the case. But their work led them to an even greater discovery.
A Path to Unexpected Findings
The team, led by Gladstone Senior Investigator Anatol C. Kreitzer, PhD, was trying to understand the basic mechanisms of the basal ganglia, which are a group of interconnected neurons in the brain that control movement and are associated with decision-making and action selection. Fast-spiking interneurons represent only about 1 percent of the neurons in that brain region, but are known to have an outsized role in organizing the circuit activity. 
The leading hypothesis in the field was that these interneurons were involved in motor control, and that their loss might be related to movement disorders.
"After 2 years of experiments showing us the contrary, we finally convinced ourselves that the hypothesis was wrong," said Kreitzer, who is also a professor of physiology and neurology at UC San Francisco. "It's not that the interneurons aren't at all involved, but their loss doesn't cause the symptoms we thought it would. That was a big surprise."
Instead, they discovered that the interneurons are much more important for learning and memory, and potentially more closely related to psychiatric disease than movement disorders.
Kreitzer's team found that the interneurons play a fundamental role in brain plasticity, which is the brain's ability to strengthen or weaken connections between neurons. By doing so, the brain can store information and procedural memory.
"We showed the fast-spiking interneurons act like gatekeepers for plasticity," said Scott Owen, PhD, staff scientist in Kreitzer's laboratory at Gladstone. "They restrict when plasticity can occur, meaning that they can prevent changes in the connection strength between neurons. This is crucial for learning and memory and, more specifically, for enabling the basal ganglia to remember how to perform tasks."
Ultimately, the scientists explained how the interneurons function to improve the efficiency of procedural learning.
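The gatekeeping idea can be caricatured in a toy model. This is entirely illustrative and not taken from the paper: a Hebbian weight update that is simply suppressed whenever feedforward inhibition from a fast-spiking interneuron is active.

```python
def gated_hebbian_update(weight, pre, post, interneuron_active,
                         learning_rate=0.1):
    """Toy model of plasticity gated by feedforward inhibition:
    while the fast-spiking interneuron is active, the Hebbian
    weight change is blocked; otherwise it proceeds normally."""
    if interneuron_active:
        return weight  # gate closed: connection strength frozen
    return weight + learning_rate * pre * post  # gate open: Hebbian change

w = 0.5
w = gated_hebbian_update(w, pre=1.0, post=1.0, interneuron_active=True)
print(w)  # unchanged while the gate is closed
w = gated_hebbian_update(w, pre=1.0, post=1.0, interneuron_active=False)
print(w)  # strengthened once the gate opens
```

The point of the caricature is the restriction itself: by controlling *when* weights may change, the interneurons would shape which experiences get stored, which is the sense in which they "gate" procedural learning.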
A New Principle with Broad Implications
Based on their discovery, Kreitzer and his team revised their assumptions about how fast-spiking interneurons may function elsewhere, suggesting that the neurons are critical for learning in other areas of the brain, too.
"Now that we've identified a new principle for how the interneurons can control plasticity, our study is a first step in better understanding the mechanisms involved in other brain regions as well," said Kreitzer. "We believe our findings can be used as a general guide to determine how these neurons affect all neural circuits; the way that manifests itself in terms of behavior or disease will be different across different brain regions."
In other parts of the brain, these same neurons are known to be crucial for processing sensory input, such as vision or touch, and their dysfunction is associated with bipolar disorder and schizophrenia. Fast-spiking interneurons could be a key factor in controlling the efficiency of the learning process in those systems as well.
The paper "Fast-Spiking Interneurons Supply Feedforward Control of Bursting, Calcium, and Plasticity for Efficient Learning" was published online by Cell on February 8, 2018.
More information: Scott F. Owen et al. Fast-Spiking Interneurons Supply Feedforward Control of Bursting, Calcium, and Plasticity for Efficient Learning, Cell (2018). DOI: 10.1016/j.cell.2018.01.005

Study identifies neurons that fire at the beginning and end of a behavior as it becomes a habit
  February 8, 2018 by Anne Trafton, Massachusetts Institute of Technology

[Image: 5-studyidentif.jpg]
Our daily lives include hundreds of routine habits, made up of many smaller actions, such as picking up our toothbrush, squeezing toothpaste onto it, and then lifting the brush to our mouth. This process of grouping behaviors together into a single routine is known as “chunking.” MIT neuroscientists have now found that certain neurons in the brain are responsible for marking the beginning and end of these chunked units of behavior. Credit: Chelsea Turner/MIT
Our daily lives include hundreds of routine habits. Brushing our teeth, driving to work, or putting away the dishes are just a few of the tasks that our brains have automated to the point that we hardly need to think about them.
(12-07-2008, 09:41 AM)Sunday, October 19th, 2008, 12:30 am Wrote: This is the Education of Lincoln.
Courtesy of the Hidden Mission Members.  


I was just wondering what this ton of crap is doing in this forum; it has nothing to do with the planets. Although it is weird, yes, weird, it is utterly unreviewable.

[Image: image.png]

Seems like it should be in Wook's "o horseshit" thread instead of here.

Seams/Seems like eye got it all sewed up...Condensed.

Quote:Tobias Owen "This is likely because when we recall a memory, To Buy as Sewn it's a condensed version of the original experience. To Bias Own

[Image: total_recall_wallpaper.jpg]

Can't get an image out of your head? Your eyes are helping to keep it there

February 14, 2018, Baycrest Centre for Geriatric Care

[Image: 58e552a2bf75d.jpg]
Credit: CC0 Public Domain
Even though you are not aware of it, your eyes play a role in searing an image into your brain, long after you have stopped looking at it.

Through brain imaging, Baycrest scientists have found evidence that the brain uses eye movements to help people recall vivid moments from the past, paving the way for the development of visual tests that could alert doctors earlier about those at risk for neurodegenerative illnesses.

The study, recently published in the journal Cerebral Cortex, found that when people create a detailed mental image in their head, not only do their eyes move in the same way as when they first saw the picture, their brains showed a similar pattern of activity.

"There's a theory that when you remember something, it's like the brain is putting together a puzzle and reconstructing the experience of that moment from separate parts," says Dr. Bradley Buchsbaum, senior author on the study, scientist at Baycrest's Rotman Research Institute (RRI) and psychology professor at the University of Toronto. "The pattern of eye movements is like the blueprint that the brain uses to piece different parts of the memory together so that we experience it as a whole."

This is the first time a direct connection has been established between a person's eye movements and patterns of brain activity, which follows up on previous studies linking what we see to how we remember.

In the study, researchers used a mathematical algorithm to analyze the brain scans and eye movements of 16 young adults between the ages of 20 and 28. Individuals were shown a set of 14 distinct images for a few seconds each. They were asked to remember as many details of each picture as possible so they could visualize it later on.


Participants were then cued to mentally visualize the images within an empty rectangular box shown on the screen.

[Image: 11988602235_0ea153ab19_o.jpg] "The pattern of eye movements is like the blueprint that the brain uses to piece different parts of the memory together so that we experience it as a whole."

Brain imaging and eye-tracking technology simultaneously captured the brain activity and eye movements of the participants as they memorized and remembered the pictures.


The study, led by RRI graduate student Michael Bone, found that the same pattern of eye movements and brain activation recurred, in compressed form, when a memorized picture was later remembered.

"This is likely because when we recall a memory, it's a condensed version of the original experience. For example, if a marriage proposal took two minutes, when we picture this memory in our head, we re-experience it in a much shorter timeframe," says Dr. Buchsbaum. "The eye movements are like a short-hand code that your brain runs through to trigger the memory."

By looking at the patterns of eye movement and brain activity, researchers were able to identify which image a person was remembering during the task.
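That identification step can be sketched as simple template matching: each recall pattern is assigned to the studied image whose pattern it resembles most. This is a minimal illustration, not the study's actual algorithm, and the pattern vectors and labels below are made up.

```python
def cosine(u, v):
    """Cosine similarity between two equal-length pattern vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def decode(recall_pattern, study_patterns):
    """Return the label of the studied image whose combined
    eye-movement/brain-activity pattern best matches the recall."""
    return max(study_patterns,
               key=lambda label: cosine(recall_pattern, study_patterns[label]))

# Made-up pattern vectors for three studied images
study = {
    "beach":  [0.9, 0.1, 0.3],
    "city":   [0.2, 0.8, 0.5],
    "forest": [0.1, 0.3, 0.9],
}
print(decode([0.85, 0.15, 0.25], study))  # best match: "beach"
```

Decoding accuracy above chance on held-out recall trials is what would license the claim that the patterns carry image-specific information.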

As next steps, the study will explore distinguishing whether the eye movements lead the brain to reactivate the memory or vice versa. Having a greater understanding of this causal relationship could inform the creation of a diagnostic tool using the eyes to catch when a person's memory is headed down an unhealthy path, adds Dr. Buchsbaum.


More information: Michael B Bone et al, Eye Movement Reinstatement and Neural Reactivation During Mental Imagery, Cerebral Cortex (2018). DOI: 10.1093/cercor/bhy014

Journal reference: Cerebral Cortex
Provided by: Baycrest Centre for Geriatric Care

I will Fashion a Head-Dress for You then, to contain the thoughts that will burst from your new mind. -Ireland To Harris
