Top-Hats and Dunce Caps...Honestly, Of Whom am I Thinkin' ?
Our soul's activity comes from two main organs..imho..the Brain and Heart.

The CREATOR made the ENTIRE Universe subject to "waves" of energy.

It is how we can create the "Consensus Reality" between humans.

Now, how one is brought up, life experiences, and personal activities are PART of that...but I also believe that the Creation of the Universe is still at work. Angel 

Bob... Ninja Assimilated
"The Light" - Jefferson Starship-Windows of Heaven Album
I'm an Earthling with a Martian Soul wanting to go Home.   
You have to turn your own lightbulb on. ©stevo25 & rhw007
Reply
(04-04-2018, 08:20 PM)EA Wrote: Some researchers thought that these ultra-slow waves were no more than an artifact of the MRI technique itself. MRI gauges brain activity indirectly by measuring the flow of oxygen-rich blood over a period of seconds, a very long timescale for an organ that sends messages at one-tenth to one-hundredth of a second. Rather than a genuinely slow process, the reasoning went, the waves could be the sum of many rapid electrical signals over a relatively long time.
First author Anish Mitra, PhD, and Andrew Kraft, PhD - both MD/PhD students at Washington University - and colleagues decided to approach the mystery of the ultra-slow waves using two techniques that directly measure electrical activity in mice brains. In one, they measured such activity on the cellular level. In the other, they measured electrical activity layer by layer along the outer surface of the brain.
They found that the waves were no artifact: Ultra-slow waves were seen regardless of the technique, and they were not the sum of all the faster electrical activity in the brain.
Instead, the researchers found that the ultra-slow waves spontaneously started in a deep layer of mice's brains and spread in a predictable trajectory. As the waves passed through each area of the brain, they enhanced the electrical activity there. Neurons fired more enthusiastically when a wave was in the vicinity.
Moreover, the ultra-slow waves persisted when the mice were put under general anesthesia, but with the direction of the waves reversed.

"There is a very slow process that moves through the brain to create temporary windows of opportunity for long-distance signaling," Mitra said. "The way these ultra-slow waves move through the cortex is correlated with enormous changes in behavior, such as the difference between conscious and unconscious states."

[Image: Sketch-of-the-different-types-of-wakes-a...ship-3.png]
The fact that the waves' trajectory changed so dramatically with state of consciousness suggests that ultra-slow waves could be fundamental to how the brain functions.

[Image: jfkkub.jpg]
 If brain areas are thought of as boats bobbing about on a slow-wave sea, the choppiness and direction of the sea surely influences how easily a message can be passed from one boat to another, and how hard it is for two boats to coordinate their activity.


The researchers now are studying whether abnormalities in the trajectory of such ultra-slow waves could explain some of the differences seen on MRI scans between healthy people and people with neuropsychiatric conditions such as dementia and depression.

"If you look at the brain of someone with schizophrenia, you don't see a big lesion, but something is not right in how the whole beautiful machinery of the brain is organized," said Raichle, who is also a professor of biomedical engineering, of neurology, of neuroscience and of psychological and brain sciences. "What we've found here could help us figure out what is going wrong. These very slow waves are unique, often overlooked and utterly central to how the brain is organized. That's the bottom line."
Explore further: 'Waves' of neural activity give new clues about Alzheimer's
More information: Neuron (2018). DOI: 10.1016/j.neuron.2018.03.015

Journal reference: Neuron
Provided by: Washington University School of Medicine


right where we left off Arrow


A heavy working memory load may sink brainwave 'synch'

April 5, 2018, Massachusetts Institute of Technology

[Image: aheavyworkin.jpg]
Three regions are involved in producing visual working memory, but the coupling among them breaks down when the load exceeds maximum capacity. Credit: Picower Institute for Learning and Memory
Everyday experience makes it obvious - sometimes frustratingly so - that our working memory capacity is limited. We can only keep so many things consciously in mind at once. The results of a new study may explain why: They suggest that the "coupling," or synchrony, of brain waves among three key regions breaks down in specific ways when visual working memory load becomes too much to handle.

"When you reach capacity there is a loss of feedback coupling," said senior author Earl Miller, Picower Professor of Neuroscience at MIT's Picower Institute for Learning and Memory. That loss of synchrony means the regions can no longer communicate with each other to sustain working memory.
Maximum working memory capacity - for instance the total number of images a person can hold in working memory at the same time - varies by individual but averages about four, Miller said. Researchers have correlated working memory capacity with intelligence.
Understanding what causes working memory to have an intrinsic limit is therefore important because it could help explain the limited nature of conscious thought and optimal cognitive performance, Miller said.
And because certain psychiatric disorders can lower capacity, said Miller and lead author Dimitris Pinotsis, a research affiliate in Miller's lab, the findings could also explain more about how such disorders interfere with thinking.
"Studies show that peak load is lower in schizophrenics and other patients with neurological or psychiatric diseases and disorders compared to healthy people," Pinotsis said. "Thus, understanding brain signals at peak load can also help us understand the origins of cognitive impairments."
The study's other author is Timothy Buschman, assistant professor at the Princeton University Neuroscience Institute and a former member of the Miller lab.
How working memory stops working
The new study published in the journal Cerebral Cortex is a detailed statistical analysis of data the Miller lab recorded when animal subjects played a simple game: They had to spot the difference when they were shown a set of squares on a screen and then, after a brief blank screen, a nearly identical set in which one square had changed color. The number of squares involved, hence the working memory load of each round, varied so that sometimes the task exceeded the animals' capacity.

As the animals played, the researchers measured the frequency and timing of brain waves produced by ensembles of neurons in three regions presumed to have an important - though as yet unknown - relationship in producing visual working memory: the prefrontal cortex (PFC), the frontal eye fields (FEF), and the lateral intraparietal area (LIP).
The researchers' goal was to characterize the crosstalk among these three areas, as reflected by patterns in the brain waves, and to understand specifically how that might change as load increased to the point where it exceeded capacity.
Though the researchers focused on these three areas, they didn't know how they might work with each other. Using sophisticated mathematical techniques, they tested scores of varieties of how the regions "couple," or synchronize, at high- and low-frequencies. The "winning" structure was whichever one best fit the experimental evidence.
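A rough way to picture that exhaustive comparison (a sketch only, not the authors' actual statistical machinery): enumerate every combination of directed couplings among PFC, FEF and LIP and keep whichever candidate structure scores best against the data. In the Python sketch below, fit_score and the toy evidence numbers are hypothetical stand-ins for the real goodness-of-fit measure and the real recordings.

Code:
# Sketch: brute-force comparison of candidate coupling structures among three regions.
# fit_score() and toy_data are hypothetical placeholders, not the study's actual method.
from itertools import product

REGIONS = ("PFC", "FEF", "LIP")
POSSIBLE_EDGES = [(a, b) for a in REGIONS for b in REGIONS if a != b]  # 6 directed links

def fit_score(edges, evidence):
    """Stand-in for a goodness-of-fit measure of one candidate coupling structure."""
    return sum(evidence.get(edge, 0.0) for edge in edges) - 0.1 * len(edges)

def best_structure(evidence):
    best, best_score = (), float("-inf")
    for mask in product([False, True], repeat=len(POSSIBLE_EDGES)):  # 64 candidates
        edges = tuple(e for e, keep in zip(POSSIBLE_EDGES, mask) if keep)
        score = fit_score(edges, evidence)
        if score > best_score:
            best, best_score = edges, score
    return best, best_score

# Toy "evidence" favouring PFC feedback to FEF and LIP plus feedforward returns.
toy_data = {("PFC", "FEF"): 1.0, ("PFC", "LIP"): 0.9,
            ("FEF", "PFC"): 0.6, ("LIP", "PFC"): 0.5}
print(best_structure(toy_data))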
[Image: 1-aheavyworkin.jpg]
Three brain regions, the prefrontal cortex (PFC), the frontal eye fields (FEF), and the lateral intraparietal area (LIP), share feedback and feedforward signals in visual working memory. Credit: Pinotsis et al.
"It was very open ended," Miller said. "We modeled all different combinations of feedback and feedforward signals among the areas and waited to see where the data would lead."
They found that the regions essentially work as a committee, without much hierarchy, to keep working memory going. They also found changes as load approached and then exceeded capacity.
"At peak memory load, the brain signals that maintain memories and guide actions based on these memories, reach their maximum," Pinotsis said. "Above this peak, the same signals break down."
In particular, above capacity the PFC's coupling to other regions at low frequency stopped, Miller said.
Other research suggests that the PFC's role might be to employ low-frequency waves to provide the feedback that keeps the working memory system in synch. When that signal breaks down, Miller said, the whole enterprise may as well. That may explain why memory capacity has a finite limit. In prior studies, he said, his lab has observed that the information in neurons degrades as load increases, but there wasn't an obvious cut-off where working memory would just stop functioning.
"We knew that stimulus load degrades processing in these areas, but we hadn't seen any distinct change that correlated with reaching capacity," he said. "But we did see this with feedback coupling. It drops off when the subjects exceeded their capacity. The PFC stops providing feedback coupling to the FEF and LIP."
Two sides to the story

Because the study game purposely varied where the squares appeared on the left or right side of the visual field, the data also added more evidence for a discovery Miller and colleagues first reported back in 2009: Visual working memory is distinct for each side of the visual field. People have independent capacities on their left and their right, research has confirmed.
The Miller Lab is now working on a new study that tracks how the three regions interact when working memory information must be shared across the visual field.

The insights Miller's lab has produced into visual working memory led him to start the company SplitSage, which last month earned a patent for technology to measure people's positional differences in visual working memory capacity. The company hopes to use insights from Miller's research to optimize heads-up displays in cars and to develop diagnostic tests for disorders like dementia among other applications. Miller is the company's chief scientist and Buschman is chair of the advisory board.
The more scientists learn about how working memory works, and more generally about how brain waves synchronize higher level cognitive functions, the more ways they may be able to apply that knowledge to help people, Miller said.
"If we can figure out what things rhythms are doing and how they are doing them and when they are doing them, we may be able to find a way to strengthen the rhythms when they need to be strengthened," he said.
Explore further: Neuroscientists suggest a model for how we gain volitional control of what we hold in our minds
More information: Dimitris A Pinotsis et al, Working Memory Load Modulates Neuronal Coupling, Cerebral Cortex (2018). DOI: 10.1093/cercor/bhy065

Journal reference: Cerebral Cortex
Provided by: Massachusetts Institute of Technology



Recall:   three key regions
Dimitris A Pinotsis et al, Working Memory Load Modulates Neuronal Coupling, Cerebral Cortex (2018). DOI: 10.1093/cercor/bhy065

[Image: 40390095375_862d483cd2_b.jpg] (https://flic.kr/p/24x8NbK)
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Reply
....
you can overcome
"peak load memory" and the breakdown of brain capacity they refer to {last post},
by
unique inspiration -- and the inspiration can be maintained almost indefinitely,
as long as there is a prime motivation factor,
similar to a spiritual experience accelerating the inspiration.  

unique inspiration -- 
which catalyzes -- exponential insight applied to memory load focus -- perceptual multi tasking ...

sometimes can border on obsessive compulsive when younger

but no matter what .. at some point of exponential over accomplished brain function,
there has to be some ... rest ... sleep on it ... 
balanced inspiration,
but at some point ... there has to be some ... crash and burnout ... fizzzzzzle ... and ... poof 

been there done that
too many times
but I always enjoy the inspiration

A physics professor challenged me to do my harmonic code once ... 
on the Cabibbo angle -- 13.04 degrees.
I was quite inspired.
No more than a few hours later ...
I found a fabulous angle tangent good to 10 decimals -- 13.04000000 degrees

Reefer

and I didn't crash and burn.

...
Reply
Quote: Recall:
Because the study game purposely varied where the squares appeared on the left or right side of the visual field, the data also added more evidence for a discovery Miller and colleagues first reported back in 2009: Visual working memory is distinct for each side of the visual field. People have independent capacities on their left and their right, research has confirmed.

The Miller Lab is now working on a new study that tracks how the three regions interact when working memory information must be shared across the visual field.



A cosmic gorilla effect could blind the detection of aliens


Date: April 10, 2018
Source: FECYT - Spanish Foundation for Science and Technology

[Image: 180410132835_1_540x360.jpg]
Inside the Occator crater of the dwarf planet Ceres appears a strange structure, looking like a square inside a triangle.
[Image: 40390095375_862d483cd2_b.jpg]

Credit: NASA / JPL-Caltech

A well-known experiment in which young people passed a ball around showed that when observers focus on counting the passes, they fail to notice someone crossing the scene disguised as a gorilla. According to researchers at the University of Cádiz (Spain), something similar could be happening to us when we try to discover intelligent non-earthly signals, which perhaps manifest themselves in dimensions that escape our perception, such as the unknown dark matter and energy.
One of the problems that have long intrigued experts in cosmology is how to detect possible extraterrestrial signals. Are we really looking in the right direction? Maybe not, according to the study that the neuropsychologists Gabriel de la Torre and Manuel García, from the University of Cádiz, publish in the journal Acta Astronautica.
"When we think of other intelligent beings, we tend to see them from our perceptive and conscience sieve; however we are limited by our sui generis vision of the world, and it's hard for us to admit it," says De la Torre, who prefers to avoid the terms 'extraterrestrial' or aliens by its Hollywood connotations and use another more generic, as 'non-terrestrial'.
"What we are trying to do with this differentiation is to contemplate other possibilities -- he says-, for example, beings of dimensions that our mind cannot grasp; or intelligences based on dark matter or energy forms, which make up almost 95% of the universe and which we are only beginning to glimpse. There is even the possibility that other universes exist, as the texts of Stephen Hawking and other scientists indicate."
The authors state that our own neurophysiology, psychology and consciousness can play an important role in the search for non-terrestrial civilizations; an aspect that they consider has been neglected until now.
In relation to this, they conducted an experiment with 137 people, who had to distinguish aerial photographs with artificial structures (buildings, roads ...) from others with natural elements (mountains, rivers ...). In one of the images, a tiny character disguised as a gorilla was inserted to see if the participants noticed.
This test was inspired by the one carried out by the researchers Christopher Chabris and Daniel Simons in the 1990s to demonstrate the inattentional blindness of human beings. A person in a gorilla costume could walk across a scene, gesticulating, while the observers were busy with something else (counting the ball passes of players in white shirts), and more than half did not notice.
"It is very striking, but very significant and representative at the same time, how our brain works," says De la Torre, who explains how the results were similar in the case of his experiment with the images. "In addition, our surprise was greater," he adds, "since before doing the test to see the inattentional blindness we assessed the participants with a series of questions to determine their cognitive style (if they were more intuitive or rational), and it turned out that the intuitive individuals identified the gorilla of our photo more times than those more rational and methodical."
"If we transfer this to the problem of searching for other non-terrestrial intelligences, the question arises about whether our current strategy may result in us not perceiving the gorilla," stresses the researcher, who insists: "Our traditional conception of space is limited by our brain, and we may have the signs above and be unable to see them. Maybe we're not looking in the right direction."
Another example presented in the article is an apparently geometric structure that can be seen in the images of Occator, a crater of the dwarf planet Ceres famous for its bright spots. "Our structured mind tells us that this structure looks like a triangle with a square inside, something that theoretically is not possible in Ceres," says De la Torre, "but maybe we are seeing things where there are none, what in psychology is called pareidolia."
However, the neuropsychologist points out another possibility: "The opposite could also be true. We can have the signal in front of us and not perceive it or be unable to identify it. If this happened, it would be an example of the cosmic gorilla effect. In fact, it could have happened in the past or it could be happening right now."
Three types of intelligent civilizations
In their study, the authors also pose how different classes of intelligent civilizations could be. They present a classification with three types based on five factors: biology, longevity, psychosocial aspects, technological progress and distribution in space.
An example of Type 1 civilizations is ours, which could be ephemeral if it mishandles technology or planetary resources, or if it does not survive a cataclysm. But it could also evolve into a Type 2 civilization, characterized by the great longevity of its members, who control quantum and gravitational energy, manage space-time and are able to explore galaxies.
"We were well aware that the existing classifications are too simplistic and are generally only based on the energy aspect. The fact that we use radio signals does not necessarily mean that other civilizations also use them, or that the use of energy resources and their dependence are the same as we have," the researchers point out, recalling the theoretical nature of their proposals.
The third type of intelligent civilization, the most advanced, would be constituted by exotic beings, with an eternal life, capable of creating in multidimensional and multiverse spaces, and with an absolute dominion of dark energy and matter.

Journal Reference:
  1. Gabriel G. De la Torre, Manuel A. Garcia. The cosmic gorilla effect or the problem of undetected non terrestrial intelligent signals. Acta Astronautica, 2018; 146: 83 DOI: 10.1016/j.actaastro.2018.02.036
FECYT - Spanish Foundation for Science and Technology. "A cosmic gorilla effect could blind the detection of aliens." ScienceDaily. ScienceDaily, 10 April 2018. <www.sciencedaily.com/releases/2018/04/180410132835.htm>.


Tobias Owen Sheep  To Bias Own

Quote:Recall:
Because the study game purposely varied where the squares appeared on the left or right side of the visual field, the data also added more evidence for a discovery Miller and colleagues first reported back in 2009: Visual working memory is distinct for each side of the visual field. People have independent capacities on their left and their right, research has confirmed.


The Miller Lab is now working on a new study that tracks how the three regions interact when working memory information must be shared across the visual field.


The emotions we feel may shape what we see

April 11, 2018, Association for Psychological Science

[Image: 5863ab20c0cdf.jpg]
Credit: CC0 Public Domain
Our emotional state in a given moment may influence what we see, according to findings published in Psychological Science, a journal of the Association for Psychological Science. In two experiments, researchers found that participants saw a neutral face as smiling more when it was paired with an unseen positive image.

The research shows that humans are active perceivers, say psychological scientist Erika Siegel of the University of California, San Francisco and her coauthors.
"We do not passively detect information in the world and then react to it - we construct perceptions of the world as the architects of our own experience. Our affective feelings are a critical determinant of the experience we create," the researchers explain. "That is, we do not come to know the world through only our external senses - we see the world differently when we feel pleasant or unpleasant."
In previous studies, Siegel and colleagues found that influencing people's emotional states outside of conscious awareness shifted their first impressions of neutral faces, making faces seem more or less likeable, trustworthy, and reliable. In this research, they wanted to see if changing people's emotional states outside awareness might actually change how they see the neutral faces.
Using a technique called continuous flash suppression, the researchers were able to present stimuli to participants without them knowing it. In one experiment, 43 participants had a series of flashing images, which alternated between a pixelated image and a neutral face, presented to their dominant eye. At the same time, a low-contrast image of a smiling, scowling, or neutral face was presented to their nondominant eye - typically, this image will be suppressed by the stimulus presented to the dominant eye and participants will not consciously experience it.
At the end of each trial, a set of five faces appeared and participants picked the one that best matched the face they saw during the trial.
The face that was presented to participants' dominant eye was always neutral. But they tended to select faces that were smiling more as the best match if the image that was presented outside of their awareness showed a person who was smiling as opposed to neutral or scowling.

In a second experiment, the researchers included an objective measure of awareness, asking participants to guess the orientation of the suppressed face.
[Image: image049.jpg]
Those who correctly guessed the orientation at better than chance levels were not included in subsequent analyses. Again, the results indicated that unseen positive faces changed participants' perception of the visible neutral face.


Given that studies often show negative stimuli as having greater influence on behavior and decision making, the robust effect of positive faces in this research is intriguing and an interesting area for future exploration, the researchers note.
Siegel and colleagues add that their findings could have broad, real-world implications that extend from everyday social interactions to situations with more severe consequences, such as when judges or jury members have to evaluate whether a defendant is remorseful.
Ultimately, these experiments provide further evidence that what we see is not a direct reflection of the world but a mental representation of the world that is infused by our emotional experiences.
More information: Erika H. Siegel et al, Seeing What You Feel: Affect Drives Visual Perception of Structurally Neutral Faces, Psychological Science (2018). DOI: 10.1177/0956797617741718


Journal reference: Psychological Science
Provided by: Association for Psychological Science
https://medicalxpress.com/news/2018-04-emotions.html
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Reply
walking backwards Sheep looking forwards.

Scientists discover hidden structure of enigmatic 'backwards' neural connections
April 16, 2018, Champalimaud Centre for the Unknown

[Image: 34-scientistsdi.jpg]
The existence of 'backwards' neural connections linking distant areas of the neocortex -- the part of the brain responsible for higher cognitive functions -- have baffled scientists for decades. Credit: Marques et al.
For decades, the neuroscience community has been baffled by the existence of dense connections in the brain that seem to be going "backwards." These connections, which span extensively across distant areas of the neocortex, are clearly conveying important information. But until now, the organization of the connections, and therefore their possible role, was largely unknown.

In a study published today in the scientific journal Nature Neuroscience, scientists at the Champalimaud Centre for the Unknown in Lisbon report for the first time that these connections form an exquisitely organised map of the visual space and provide important insights into how they may be involved in visual perception.
"Our current understanding of the visual system suggests a hierarchical model," explains Leopoldo Petreanu, the leading researcher of the study. "According to this model, lower structures receive an image from the eyes, which is then processed and relayed forward to higher structures of the neocortex for the extraction of key features, such as contours, objects, and so on."
Petreanu adds, "This could have been a great model, if it weren't for the elephant in the room—that there are as many, if not more, connections that go backward, from higher to lower areas. The function of these so-called feedback connections has been a mystery for neuroscientists for decades."
Previous attempts to elucidate the nature of these connections made things even more confusing. "Feedback connections are very messy," says Petreanu. "Under the microscope, they look like an extensive mesh of wires intertwined like a spaghetti bowl. And to make matters even worse, intermingled wires encode a variety of signals. It really wasn't clear whether there was any order in this mess."
Many theories have been proposed for the role of these feedback connections in cognition, including attention, expectation and awareness. However, it was impossible to tell which of the theories were true since the connectivity map was unknown.
To solve the mystery, Petreanu, together with Tiago Marques and Julia Nguyen, the first co-authors of the study, used a unique method that was developed by Petreanu a few years ago. With this method, the researchers measured the activity in the actual connection points made between higher and lower structures.
"This method has provided us with groundbreaking insight into how feedback connections are organised and how this organisation might shape visual perception," says Marques. "Hidden in the tangle of wires we found that there is a beautiful organisation, where feedback connections target specific neurons in lower structures depending on the signals they carry."

But what, exactly, is this organisation, and what might be its role in visual perception? Petreanu and Marques report several insights that shed light onto this long-standing mystery.
Feedback connections tell the big picture
The first insight occurred when the researchers asked whether the connections follow any particular pattern. Their guess was that they do.
"In many separate structures of the visual system, beginning with the eye itself, neighboring neurons encode neighboring areas of the visual space. This way, the individual structures contain an almost one-to-one map of the image," Marques explains.
This map exists in the primary visual cortex (also called V1), which is the entry point of visual information to the neocortex. This was the researchers' starting point. They asked whether feedback connections matched the visual map encoded in V1.
"The answer we found was yes and no," Marques says. "The majority of feedback inputs formed the same spatial map as the areas they connected to in V1. In other words, the V1 and feedback maps were superimposed on each other. This observation had already been reported in other species, such as primates, so we weren't surprised. However, in the mouse, we also observed something new. The feedback connections also encoded information from further locations in the visual space. Since the technique we used is novel and has only been applied here, it is likely that this might be found in the future in other species as well."
This finding suggests that feedback signals sent from higher cortical areas are used to provide lower structures with context. "According to the hierarchical structure of the visual system, lower structures would only have access to local, low-level information," Marques explains. "What the feedback connections give them is the whole picture. This way, the activity of neurons in lower structures can be altered according to the current context. This type of contextual information is very important for visual perception. For instance, a round, green shape seen at a distance would be readily identified as a tennis ball when seen in the context of a tennis court, or as an apple if seen in the context of a fruit bowl."
Telling the brain where not to look
This first discovery motivated the researchers to look even further into what other types of information the feedback connections might be sending to V1. This time, they asked whether these connections might help V1 neurons find objects. "The world is made up of objects," Petreanu explains. "The phone in your hand, the cars on the road, these are all objects that are defined by continuous lines. Therefore, it's not surprising that neurons in the visual system care a lot about these lines."
How could feedback connections help accentuate the lines that make up objects? There are two possibilities—they can either amplify the activity in V1 where the lines are, or they can dampen activity where they are not supposed to be.
"We found that the second option is the most likely to be true," says Petreanu. "The feedback connections were abundant in V1 in areas outside the lines. We therefore hypothesise that this organisation is probably silencing neurons in the areas that lie outside the line, and thereby enhancing the contrast between objects and their surroundings."
Next, the researchers asked whether feedback connections might participate in motion detection. To their surprise, they found not only that they do, but that they use the same strategy to do it. "This time, the visual feature was different, but the feedback connections played the same role," says Marques. "We observed that feedback connections that respond to moving objects were enriched in V1 in regions opposite to the direction of movement."
Together, these results suggest something of a clairvoyant role for these feedback connections. How do they know which neurons should be active at any given moment in time?
"We believe that these results imply that this set of feedback connections learn through experience what to expect from the world and then use this knowledge to shape incoming visual information," says Petreanu. "In the world, objects are defined by continuous lines, not scattered dots, and moving objects tend to maintain their trajectory, not move around randomly. So feedback connections try to accentuate these particular features that they have learned to anticipate. Surprisingly, they do so by pointing to locations that are opposite to the expected ones."
From biological vision to machine vision
The results of Petreanu and Marques provide an important piece of the puzzle of how the neocortex is organized and suggest how visual perception could be generated in the brain. According to Petreanu, these findings not only contribute to our understanding of biology, but might also carry implications for the field of machine vision.
"The relationship between machine vision and neuroscience has always been a close one," says Petreanu. "Our knowledge of how the circuitry of the brain, and in particular the neocortex, is organized, has helped inspire doink-head that have been increasingly more successful in enabling machines to 'see.""
According to Petreanu, while current machine vision doink-head are pretty good they can not yet match the performance of humans. "Paralleling the neuroscientists' understanding, modern machine vision doink-head usually don't make use of feedback connections. Our findings might inspire new doink-head that will take advantage of these connections, which might make the future arrive a bit sooner," he concludes.
Explore further: Neural connections mapped with unprecedented detail
More information: Tiago Marques et al, The functional organization of cortical feedback inputs to primary visual cortex, Nature Neuroscience (2018). DOI: 10.1038/s41593-018-0135-z

Journal reference: Nature Neuroscience
Provided by: Champalimaud Centre for the Unknown






We think we're the first advanced earthlings—but how do we really know?

April 16, 2018, University of Rochester


[Image: wethinkweret.jpg]
How do we really know there weren't previous industrial civilizations on Earth that rose and fell long before human beings appeared? That's the question posed in a scientific thought experiment by University of Rochester astrophysicist Adam Frank.
Imagine if, many millions of years ago, dinosaurs drove cars through cities of mile-high buildings. A preposterous idea, right? Over the course of tens of millions of years, however, all of the direct evidence of a civilization—its artifacts and remains—gets ground to dust. How do we really know, then, that there weren't previous industrial civilizations on Earth that rose and fell long before human beings appeared?



It's a compelling thought experiment, and one that Adam Frank, a professor of physics and astronomy at the University of Rochester, and Gavin Schmidt, the director of the NASA Goddard Institute for Space Studies, take up in a paper published in the International Journal of Astrobiology.

"Gavin and I have not seen any evidence of another industrial civilization," Frank explains. But by looking at the deep past in the right way, a new set of questions about civilizations and the planet appear: What geological footprints do civilizations leave? Is it possible to detect an industrial civilization in the geological record once it disappears from the face of its host planet? "These questions make us think about the future and the past in a much different way, including how any planetary-scale civilization might rise and fall."

In what they deem the "Silurian Hypothesis," Frank and Schmidt define a civilization by its energy use. Human beings are just entering a new geological era that many researchers refer to as the Anthropocene, the period in which human activity strongly influences the climate and environment. In the Anthropocene, fossil fuels have become central to the geological footprint humans will leave behind on Earth. By looking at the Anthropocene's imprint, Schmidt and Frank examine what kinds of clues future scientists might detect to determine that human beings existed. In doing so, they also lay out evidence of what might be left behind if industrial civilizations like ours existed millions of years in the past.

Human beings began burning fossil fuels more than 300 years ago, marking the beginnings of industrialization. The researchers note that the emission of fossil fuels into the atmosphere has already changed the carbon cycle in a way that is recorded in carbon isotope records. Other ways human beings might leave behind a geological footprint include:
  • Global warming, from the release of carbon dioxide and perturbations to the nitrogen cycle from fertilizers
  • Agriculture, through greatly increased erosion and sedimentation rates
  • Plastics, synthetic pollutants, and even things such as steroids, which will be geochemically detectable for millions, and perhaps even billions, of years
  • Nuclear war, if it happened, which would leave behind unusual radioactive isotopes
"As an industrial civilization, we're driving changes in the isotopic abundances because we're burning carbon," Frank says. "But burning fossil fuels may actually shut us down as a civilization. What imprints would this or other kinds of industrial activity from a long dead civilization leave over tens of millions of years?"

 

The questions raised by Frank and Schmidt are part of a broader effort to address climate change from an astrobiological perspective, and a new way of thinking about life and civilizations across the universe. Looking at the rise and fall of civilizations in terms of their planetary impacts can also affect how researchers approach future explorations of other planets.

"We know early Mars and, perhaps, early Venus were more habitable than they are now, and conceivably we will one day drill through the geological sediments there, too," Schmidt says. "This helps us think about what we should be looking for."

Schmidt points to an irony, however: if a civilization is able to find a more sustainable way to produce energy without harming its host planet, it will leave behind less evidence that it was there.

"You want to have a nice, large-scale civilization that does wonderful things but that doesn't push the planet into domains that are dangerous for itself, the civilization," Frank says. "We need to figure out a way of producing and using energy that doesn't put us at risk."

That said, the earth will be just fine, Frank says. It's more a question of whether humans will be.

Can we create a version of civilization that doesn't push the earth into a domain that's dangerous for us as a species?

"The point is not to 'save the earth,'" says Frank. "No matter what we do to the planet, we're just creating niches for the next cycle of evolution. But, if we continue on this trajectory of using fossil fuels and ignoring the climate change it drives, we human beings may not be part of Earth's ongoing evolution."

Explore further: Earth as hybrid planet: New classification places Anthropocene era in astrobiological context

More information: Gavin A. Schmidt et al. The Silurian hypothesis: would it be possible to detect an industrial civilization in the geological record?, International Journal of Astrobiology (2018). DOI: 10.1017/S1473550418000095


Journal reference: International Journal of Astrobiology
Provided by: University of Rochester


Read more at: https://phys.org/news/2018-04-advanced-e...t.html#jCp
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Reply
A tip o' the hat to ya! Wook!  
Thank you for your instruction.



Trump's divisive pick to run NASA wins narrow confirmation
April 19, 2018 by Seth Borenstein


[Image: nasalogo.gif]
NASA's latest nail-biting drama was far from orbit as the Senate narrowly confirmed President Donald Trump's choice of a tea party congressman to run the space agency in an unprecedented party-line vote.



In a 50-49 vote Thursday, Oklahoma Rep. James Bridenstine, a Navy Reserve pilot, was confirmed as the 13th administrator of NASA, an agency that usually is kept away from partisanship. His three predecessors—two nominated by Republicans—were all approved unanimously. Before that, one NASA chief served under three presidents, two Republicans and a Democrat.
[Image: gerald-rannels-last-pic-shared-xmas-2017.jpg]

The two days of voting were as tense as a launch countdown.

A procedural vote Wednesday initially ended in a 49-49 tie—Vice President Mike Pence, who normally breaks a tie, was at Trump's Mar-a-Lago estate in Florida—before Arizona Republican Jeff Flake switched from opposition to support, using his vote as leverage to address an unrelated issue.

Thursday's vote included the drama of another delayed but approving vote by Flake, a last-minute no vote by Illinois Democrat Tammy Duckworth—who wheeled onto the floor with her 10-day-old baby in tow—and the possibility of a tie-breaker by Pence, who was back in town.

NASA is a couple years away from launching a new giant rocket and crew capsule to replace the space shuttle fleet that was retired in 2011.

"I look forward to working with the outstanding team at NASA to achieve the president's vision for American leadership in space," Bridenstine said in a NASA release after the vote.

Democrats opposing Bridenstine said his outspoken divisiveness, earlier rejection of mainstream climate change science and lack of space experience made him unqualified. Republicans praised him as a qualified war hero.

"His record of behavior in the Congress is as divisive as any in Washington, including his attacks on members of this body from his own party," Florida Democrat Bill Nelson said. "It's hard to see how that record will endear, and by extension NASA, him to Congress, and most importantly, endear him to the American people. "

Sen. Edward Markey, a Massachusetts Democrat, cited past Bridenstine comments that rejected mainstream climate science, invoking the movie "Apollo 13."

"Houston, we have a problem," Markey said. "NASA's science, NASA's mission and American leadership will be in jeopardy under Congressman Bridenstine's leadership."

During his confirmation hearing, Bridenstine acknowledged that global warming is real and that humans contribute to it, but he wouldn't say that it is mostly human-caused, as the overwhelming majority of scientists and scientific literature do. And Bridenstine told Nelson, "I want to make sure that NASA remains, as you said, apolitical."

Texas Republican Ted Cruz praised the NASA nominee as "a war hero."

"NASA needs a strong leader and it will have that strong leader in Jim Bridenstine," Cruz said.

Sean O'Keefe, who was NASA chief under President George W. Bush and was confirmed unanimously, said the close vote "is a consequence of an erosion of comity in the Congress, particularly in the Senate. Political fights will always break out, but now most policy choices are more likely to emerge based on the party with the majority than the power of the idea."

Alan Ladwig, a top NASA political appointee under Democrats, said this was a case of both party politics and a divisive nominee who doesn't accept science.

Explore further: Senate committee narrowly backs Trump pick for NASA chief


Read more at: https://phys.org/news/2018-04-trump-divi...w.html#jCp




Neuroscientists train a deep neural network to process sounds like humans do

April 19, 2018, Massachusetts Institute of Technology

[Image: 800px-brain_surface_gyri.svg.jpg]
The Primary Auditory Cortex is highlighted in magenta, and has been known to interact with all areas highlighted on this neural map. Credit: Wikipedia.
Using a machine-learning system known as a deep neural network, MIT researchers have created the first model that can replicate human performance on auditory tasks such as identifying a musical genre.

This model, which consists of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks, was used by the researchers to shed light on how the human brain may be performing the same tasks.
"What these models give us, for the first time, is machine systems that can perform sensory tasks that matter to humans and that do so at human levels," says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT and the senior author of the study. "Historically, this type of sensory processing has been difficult to understand, in part because we haven't really had a very clear theoretical foundation and a good way to develop models of what might be going on."
The study, which appears in the April 19 issue of Neuron, also offers evidence that the human auditory cortex is arranged in a hierarchical organization, much like the visual cortex. In this type of arrangement, sensory information passes through successive stages of processing, with basic information processed earlier and more advanced features such as word meaning extracted in later stages.
MIT graduate student Alexander Kell and Stanford University Assistant Professor Daniel Yamins are the paper's lead authors. Other authors are former MIT visiting student Erica Shook and former MIT postdoc Sam Norman-Haignere.
Modeling the brain
When deep neural networks were first developed in the 1980s, neuroscientists hoped that such systems could be used to model the human brain. However, computers from that era were not powerful enough to build models large enough to perform real-world tasks such as object recognition or speech recognition.
Over the past five years, advances in computing power and neural network technology have made it possible to use neural networks to perform difficult real-world tasks, and they have become the standard approach in many engineering applications. In parallel, some neuroscientists have revisited the possibility that these systems might be used to model the human brain.

"That's been an exciting opportunity for neuroscience, in that we can actually create systems that can do some of the things people can do, and we can then interrogate the models and compare them to the brain," Kell says.
The MIT researchers trained their neural network to perform two auditory tasks, one involving speech and the other involving music. For the speech task, the researchers gave the model thousands of two-second recordings of a person talking. The task was to identify the word in the middle of the clip. For the music task, the model was asked to identify the genre of a two-second clip of music. Each clip also included background noise to make the task more realistic (and more difficult).
After many thousands of examples, the model learned to perform the task just as accurately as a human listener.
"The idea is over time the model gets better and better at the task," Kell says. "The hope is that it's learning something general, so if you present a new sound that the model has never heard before, it will do well, and in practice that is often the case."
The model also tended to make mistakes on the same clips that humans made the most mistakes on.
The processing units that make up a neural network can be combined in a variety of ways, forming different architectures that affect the performance of the model.
The MIT team discovered that the best model for these two tasks was one that divided the processing into two sets of stages. The first set of stages was shared between tasks, but after that, it split into two branches for further analysis—one branch for the speech task, and one for the musical genre task.
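As a loose sketch of that shared-then-branched layout (not the published model; the layer sizes and class counts below are invented placeholders), a PyTorch module can run a common stack of early stages and then split into a word head and a genre head:

Code:
# Sketch of a shared-trunk, two-branch network (illustrative sizes, not the published model).
import torch
import torch.nn as nn

class SharedThenBranched(nn.Module):
    def __init__(self, n_words=500, n_genres=40):
        super().__init__()
        # Early stages shared between the word task and the genre task.
        self.shared = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Later stages split into two task-specific branches.
        self.word_head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 256), nn.ReLU(), nn.Linear(256, n_words))
        self.genre_head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 256), nn.ReLU(), nn.Linear(256, n_genres))

    def forward(self, x):
        h = self.shared(x)                  # processing common to both tasks
        return self.word_head(h), self.genre_head(h)

model = SharedThenBranched()
clip = torch.randn(2, 1, 128, 200)          # two fake spectrogram-like "2-second clips"
word_logits, genre_logits = model(clip)
print(word_logits.shape, genre_logits.shape)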
Evidence for hierarchy
The researchers then used their model to explore a longstanding question about the structure of the auditory cortex: whether it is organized hierarchically.
In a hierarchical system, a series of brain regions performs different types of computation on sensory information as it flows through the system. It has been well documented that the visual cortex has this type of organization. Earlier regions, known as the primary visual cortex, respond to simple features such as color or orientation. Later stages enable more complex tasks such as object recognition.
However, it has been difficult to test whether this type of organization also exists in the auditory cortex, in part because there haven't been good models that can replicate human auditory behavior.
"We thought that if we could construct a model that could do some of the same things that people do, we might then be able to compare different stages of the model to different parts of the brain and get some evidence for whether those parts of the brain might be hierarchically organized," McDermott says.
The researchers found that in their model, basic features of sound such as frequency are easier to extract in the early stages. As information is processed and moves farther along the network, it becomes harder to extract frequency but easier to extract higher-level information such as words.
To see if the model stages might replicate how the human auditory cortex processes sound information, the researchers used functional magnetic resonance imaging (fMRI) to measure different regions of auditory cortex as the brain processes real-world sounds. They then compared the brain responses to the responses in the model when it processed the same sounds.
They found that the middle stages of the model corresponded best to activity in the primary auditory cortex, and later stages corresponded best to activity outside of the primary cortex. This provides evidence that the auditory cortex might be arranged in a hierarchical fashion, similar to the visual cortex, the researchers say.
"What we see very clearly is a distinction between primary auditory cortex and everything else," McDermott says.
The authors now plan to develop models that can perform other types of auditory tasks, such as determining the location from which a particular sound came, to explore whether these tasks can be done by the pathways identified in this model or if they require separate pathways, which could then be investigated in the brain.

Explore further: Visual cues amplify sound (https://medicalxpress.com/news/2018-02-visual-cues-amplify.html)
Provided by Massachusetts Institute of Technology



Neuroscientists train a deep neural network to process sounds like humans do
  looks like / same hear  
Researchers find the brain processes sight and sound in same manner

April 18, 2018, Georgetown University Medical Center

[Image: 2-brain.jpg]
Credit: Wikimedia Commons
Although sight is a much different sense than sound, Georgetown University Medical Center neuroscientists have found that the human brain learns to make sense of these stimuli in the same way.

The researchers say in a two-step process, neurons in one area of the brain learn the representation of the stimuli, and another area categorizes that input so as to ascribe meaning to it—like first seeing just a car without a roof and then analyzing that stimulus in order to place it in the category of "convertible." Similarly, when a child learns a new word, it first has to learn the new sound and then, in a second step, learn to understand that different versions (accents, pronunciations, etc.) of the word, spoken by different members of the family or by their friends, all mean the same thing and need to be categorized together.
"A computational advantage of this scheme is that it allows the brain to easily build on previous content to learn novel information," says the study's senior investigator, Maximilian Riesenhuber, PhD, a professor in Georgetown University School of Medicine's Department of Neuroscience. Study co-authors include first author, Xiong Jiang, PhD; graduate student Mark A. Chevillet; and Josef P. Rauschecker, PhD, all Georgetown neuroscientists.
Their study, published in Neuron, is the first to provide strong evidence that learning in vision and audition follows similar principles. "We have long tried to make sense of senses, studying how the brain represents our multisensory world," says Riesenhuber.
In 2007, the investigators were first to describe the two-step model in human learning of visual categories, and the new study now shows that the brain appears to use the same kind of learning mechanisms across sensory modalities.
The findings could also help scientists devise new approaches to restore sensory deficits, Rauschecker, one of the co-authors, says.
"Knowing how senses learn the world may help us devise workarounds in our very plastic brains," he says. "If a person can't process one sensory modality, say vision, because of blindness, there could be substitution devices that allow visual input to be transformed into sounds. So one disabled sense would be processed by other sensory brain centers."
[Image: 1-howdoesthebr.jpg]
Functional MRI response from a representative subject during a listening task. Credit: Xiong Jiang, Georgetown University
The 16 participants in this study were trained to categorize monkey communication calls— real sounds that mean something to monkeys, but are alien in meaning to humans. The investigators divided the sounds into two categories labeled with nonsense names, based on prototypes from two categories: so-called "coos" and "harmonic arches." Using an auditory morphing system, the investigators were able to create thousands of monkey call combinations from the prototypes, including some very similar calls that required the participants to make fine distinctions between the calls. Learning to correctly categorize the novel sounds took about six hours.

Before and after training, fMRI data were obtained from the volunteers to investigate changes in neuronal tuning in the brain that were induced by categorization training. Advanced fMRI techniques, functional magnetic resonance imaging rapid adaptation (fMRI-RA) and multi-voxel pattern analysis, were used along with conventional fMRI and functional connectivity analyses. In this way, researchers were able to see two distinct sets of changes: a representation of the monkey calls in the left auditory cortex, and a tuning change that leads to category selectivity for different types of calls in the lateral prefrontal cortex.
"In our study, we used four different techniques, in particular fMRI-RA and MVPA, to independently and synergistically provide converging results. This allowed us to obtain strong results even from a small sample," says co-author Jiang.
Processing sound requires discrimination in acoustics and tuning changes at the level of the auditory cortex, a process that the researchers say is the same between humans and animal communication systems. Using monkey calls instead of human speech forced the participants to categorize the sounds purely on the basis of acoustics rather than meaning.
"At an evolutionary level, humans and animals need to understand who is friend and who is foe, and sight and sound are integral to these judgments," Riesenhuber says.
Explore further: After learning new words, brain sees them as pictures
More information: Neuron (2018). DOI: 10.1016/j.neuron.2018.03.014

Journal reference: Neuron
Provided by: Georgetown University Medical Center

Right where we Left off

R.I.P.
Wook
There is a Wook neuron write here on my mind.
Processing.
Cry
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Reply
...

Not recommended for Stu while he is counting craters.
Psychedelic Saturn Audio.

shut your eyes and listen to this ... a few times ... :

https://saturn.jpl.nasa.gov/resources/78...enceladus/

video / audio link unavailable for copy and paste, you have to listen at the link,
somebody should make a copy.

Sound of Saturn: Radio Emissions of the Planet and Enceladus

New research from the up-close Grand Finale orbits of NASA’s Cassini mission 
shows a surprisingly powerful interaction of plasma waves  Hi
moving from Saturn to its moon Enceladus. 

Researchers converted the recording of plasma waves into a “whooshing” audio file Jawdrop
that we can hear -- 
in the same way a radio translates electromagnetic waves into music. 

Much like air or water, plasma (the fourth state of matter) generates waves to carry energy. 
The recording was captured by the Radio Plasma Wave Science (RPWS) instrument Sept. 2, 2017, 
two weeks before Cassini was deliberately plunged into the atmosphere of Saturn.

...
Reply
Quote:somebody should make a copy.

Actually you can download the mp4 file at that link:



https://saturn.jpl.nasa.gov/system/downl...4-1230.mp4

I downloaded the link I just put above and it works fine.  Now to Bong7bp and put it on loop Split_spawn

Bob... Ninja Assimilated
"The Light" - Jefferson Starship-Windows of Heaven Album
I'm an Earthling with a Martian Soul wanting to go Home.   
You have to turn your own lightbulb on. ©stevo25 & rhw007
Reply
Not recommended for  the "Other" Stu while he is quantum creators


Lol. To coin a phrase:

***Whenever you Meant-Shun Dawn Old Krapps name or screen name Arrow  a l g o  it appears as 'doink head!
doink-head- Wikipedia
https://en.wikipedia.org/wiki/Algorithm
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. doink-head can perform calculation, ...
Recall: Linke is Left eh Stu 2?

Parrondo's paradox with a three-sided coin
July 11, 2018 by Lisa Zyga, Phys.org feature

[Image: parrondosgame.jpg]
In a quantum version of a Parrondo's game played with a three-state coin (a qutrit), the two losing strategies (a) and (b) are combined into a winning strategy (c). Credit: Rajendran et al. ©2018 EPL

Physicists have demonstrated that Parrondo's paradox—an apparent paradox in which two losing strategies combine to make a winning strategy—can emerge in a game played with a single coin in the quantum realm, but only when the coin has three states (heads, tails, and a side) rather than the conventional two.



In general, Parrondo's paradox, also called a Parrondo's game, only works when the two losing strategies are somehow dependent on each other and are combined in such a way as to change the conditions that lead to them losing. Ever since it was discovered by physicist Juan Parrondo in 1996, Parrondo's paradox has found applications in engineering, finance, and evolutionary biology, among other areas.

One of the simplest ways to implement a Parrondo's game is described in this Wikipedia entry. Suppose you have $100, and you can choose to play any combination of two games. In the first game, you lose $1 every time you play. In the second game, you win $3 if you have an even number of dollars left, and you lose $5 if you have an odd number of dollars left. If you only play the first game or only play the second game, you will eventually lose all your money, so playing each game by itself is a losing strategy. However, if you alternate between the two games, starting with the second game, then you will win $2 for every two games you play, so the two losing strategies can combine into a winning strategy.
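Here is a minimal sketch of that Wikipedia-style example, just to check the arithmetic; the function names are mine, not from the paper.

```python
# Game 1 always loses $1; Game 2 wins $3 on an even balance and loses $5 on
# an odd one. Alone, each game drains the bankroll; alternated, starting with
# Game 2, the pair gains $2 every two rounds.
def game1(money):
    return money - 1

def game2(money):
    return money + 3 if money % 2 == 0 else money - 5

def play(strategy, money=100, rounds=20):
    for i in range(rounds):
        money = strategy[i % len(strategy)](money)
    return money

print(play([game1]))         # 80  -- losing on its own
print(play([game2]))         # 80  -- also losing (every win makes the balance odd)
print(play([game2, game1]))  # 120 -- the two losing games combine into a win
```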

In the new study, physicists Jishnu Rajendran and Colin Benjamin at the National Institute of Science Education and Research, HBNI, in India, have demonstrated a Parrondo's game using a three-state coin, which they represent with a qutrit, a quantum system with three states.

"Parrondo's games have been seen in a classical context," Benjamin told Phys.org. "Our aim in this work was to show how to implement it in a quantum context, in particular in a quantum walk. Unfortunately, the quantum version of this game when implemented with a single coin (qubit) in a quantum walk failed in the asymptotic limits. What we show in this work is that a qutrit can implement this Parrondo's game in a quantum walk."
[Image: banksy_aristorat.jpg] Right where  Sheep  Eye Left off...

In the quantum walk, a player starts at the origin and moves either right (positive direction) or left (negative direction) according to the result of a coin toss. If heads, the player moves right; if tails, left; and if the result is "side," then the player interprets that as a "wait state" and stays in the same place. As the qutrit is a quantum system, it can also be in a superposition of these states, in which case the player moves to a corresponding position, somewhere in between a full step left or right. At the end of the game, if the probability that the player is found to the right of the origin is greater than the probability of being found to the left of the origin, the player wins. Otherwise, they lose.
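Here is a minimal NumPy sketch of such a qutrit-coin walk, assuming the 3-point Fourier matrix as the coin purely for illustration; the paper builds its Parrondo games from specific SU(3) coin operators, so this only shows the mechanics of the walk and the win condition P(right) > P(left).

```python
# Minimal sketch of a discrete-time quantum walk driven by a qutrit coin:
# coin state 0 ("heads") shifts the walker right, 1 ("tails") shifts left,
# and 2 ("side") is the wait state.
import numpy as np

steps = 50
n_pos = 2 * steps + 1                     # positions -steps ... +steps
origin = steps

omega = np.exp(2j * np.pi / 3)
coin = np.array([[1, 1,        1],
                 [1, omega,    omega**2],
                 [1, omega**2, omega   ]]) / np.sqrt(3)   # unitary 3x3 Fourier coin

# State: amplitude[position, coin_state]; the walker starts at the origin.
state = np.zeros((n_pos, 3), dtype=complex)
state[origin, 0] = 1.0

for _ in range(steps):
    state = state @ coin.T                # toss the coin at every position
    shifted = np.zeros_like(state)
    shifted[1:, 0] = state[:-1, 0]        # heads: move one step right
    shifted[:-1, 1] = state[1:, 1]        # tails: move one step left
    shifted[:, 2] = state[:, 2]           # side: wait in place
    state = shifted

prob = (np.abs(state) ** 2).sum(axis=1)   # probability of finding the walker at each position
p_right, p_left = prob[origin + 1:].sum(), prob[:origin].sum()
print(f"P(right) = {p_right:.3f}, P(left) = {p_left:.3f}",
      "-> win" if p_right > p_left else "-> lose")
```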

 

Using some of the standard methods in particle physics to define the concepts of a coin toss and game rules with a superposition of states, the physicists demonstrated several examples of games that result in losing when played individually, but when combined in an alternating sequence result in a winning outcome. They also demonstrated examples of the reverse. For example, two games that result in a win and a draw when played individually can result in a losing outcome when combined.

The physicists also showed that, although it's not possible to implement a Parrondo's game using a single two-sided coin (qubit), it is possible to implement a Parrondo's game using two two-sided coins (two qubits). The additional states essentially provide additional flexibility with which to combine strategies that can overcome the conditions of losing.

Given the broad applications of classical Parrondo's games, the physicists expect that the quantum version may lead to new insight into designing quantum algorithms.

"Parrondo's game is a recipe for proving one need not always search for a winning strategy (or algorithm) in a game," Benjamin said. "Classically, there are many applications of Parrondo's games, ranging from explaining physiological processes in the cell to increasing our understanding of Brownian motors and even in diversified portfolio investing. Classically, Parrondo's paradox has been shown to work using classical random walks.

"Implementing a Parrondo's game in a quantum walk would have implications for devising better or faster quantum doink-head. An algorithm which uses quantum principles like superposition and/or entanglement is a quantum algorithm. An algorithm, if it can be implemented on a quantum walk, would be more lucrative than one which can only be implemented on a classical random walk. As quantum walks spread quadratically faster than classical random walks, an algorithm implemented on a quantum walk will take much shorter time to complete than one on a classical random walk. Further, the successful implementation of Parrondo's game on a quantum walk provides an algorithmic explanation for quantum ratchets [systems that have motion in one direction only]."

Explore further: Quantum strategy offers game-winning advantages, even without entanglement

More information: Jishnu Rajendran and Colin Benjamin. "Playing a true Parrondo's game with a three-state coin on a quantum walk." EPL. DOI: 10.1209/0295-5075/122/40004
Also at arXiv:1710.04033 [quant-ph]


Journal reference: Europhysics Letters (EPL)


Read more at: https://phys.org/news/2018-07-parrondo-p...n.html#jCp
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Reply
"Our aim in this work was to show how to implement it in a quantum context, in particular in a quantum walk. Unfortunately, the quantum version of this game when implemented with a single coin (qubit) in a quantum walk failed in the asymptotic limits. What we show in this work is that a qutrit can implement this Parrondo's game in a quantum walk."


This post is from over 6 months ago...

http://thehiddenmission.com/forum/showth...#pid241146

I can't remember why I used "asymptotic".
Reply
Wednesday, December 20th, 2017, 10:19 pm
"Asymptotic Involution"

[Image: JHlpgfM.png]


[Image: g40889.gif]
Recall: = ~19.5

Quote:Duck!


Issue No. 04 - July/August (2002 vol. 22)
ISSN: 0272-1716
pp: 88-97
DOI Bookmark: http://doi.ieeecomputersociety.org/10.11...02.1016702
Andrew Glassner's
[Image: g40888.gif]
ABSTRACT
When ducks swim on a smooth deep lake, they create a V-shaped ripple of waves behind them. Boats and ships do the same thing, as do human swimmers. I saw a duck swimming across a glass-smooth pond a few weeks ago, and I wondered what it might be like if I could choreograph a flock of trained ducks to swim as I wanted. Could the ducks be induced to make interesting patterns out of their overlapping wakes?
CITATION:
A. Glassner, "Duck!," in IEEE Computer Graphics and Applications, vol. 22, no. 4, pp. 88-97, July/Aug. 2002.
doi:10.1109/MCG.2002.1016702





Asymptote
In analytic geometry, an asymptote of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tend to infinity.

[Image: g40883.gif]

asymptotic

or as·ymp·tot·i·cal
[as-im-tot-ik, as-im-tot-i-kuhl]
adjective, Mathematics.
  1. of or relating to an asymptote.
  2. (of a function) approaching a given value as an expression containing a variable tends to infinity.
  3. (of two functions) so defined that their ratio approaches unity as the independent variable approaches a limit or infinity.

in·vo·lu·tion
/ˌinvəˈlo͞oSHən/
noun
noun: involution; plural noun: involutions
  1. Physiology: the shrinkage of an organ in old age or when inactive, e.g., of the uterus after childbirth.
  2. Mathematics: a function, transformation, or operator that is equal to its inverse, i.e., which gives the identity when applied to itself.
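Purely as an illustration of the two dictionary entries above (my example, not from the thread's sources), one function happens to satisfy both at once:

```latex
% f(x) = 1/x (for x != 0) is asymptotic: it approaches the line y = 0 as x
% grows without bound. It is also an involution: applying it twice gives
% back the input.
\[
  f(x) = \frac{1}{x}, \qquad
  \lim_{x \to \infty} f(x) = 0, \qquad
  f\bigl(f(x)\bigr) = \frac{1}{1/x} = x .
\]
```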
Quote: I can't remember why I used "asymptotic".


Recall: a function, transformation, or operator that is equal to its inverse, i.e.,
...Right where  Sheep  we Left off...
[Image: images?q=tbn:ANd9GcQTHfVe5iXR5oZs2ZFmLg0...QqcbjaBrSw]
Recall: which gives the identity when applied to itself.

It writes itself... Arrow
[Image: images?q=tbn:ANd9GcT9F0rqUiaJTwFvCjynCwx...I1quYLFtxw]
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Reply
How could you find an ACE in a five card face down drawing?

Bob... Ninja Assimilated
"The Light" - Jefferson Starship-Windows of Heaven Album
I'm an Earthling with a Martian Soul wanting to go Home.   
You have to turn your own lightbulb on. ©stevo25 & rhw007
Reply
"I can't remember why I used "asymptotic". "

Not only that...but I partially knew what Involution meant,
and only connected the two words for surreal effect.

If I read you correctly, 
"asymptotic involution"
is real...
yet seems to imply that chaos increases as unity approaches...?
Reply
Kinda looks like a webbed duck foot interfacing with the sub-surface of water.
 
[Image: JHlpgfM.png]

Quote:How could you find an ACE in a five card face down drawing?

Bob... [Image: ninja.gif] [Image: assimilated.gif]

How can you duck a canard as the wild-card?





face down drowning is Cry  as face down drawing was.
[Image: images?q=tbn:ANd9GcQTHfVe5iXR5oZs2ZFmLg0...QqcbjaBrSw] Aye Eye  Sheep   ain't A.I. [Image: images?q=tbn:ANd9GcT9F0rqUiaJTwFvCjynCwx...I1quYLFtxw]
 
Don't fear the improvi-sphere.

youareaduck
[Image: g40888.gif]
This post particularly collapsed  wave.

Full Circle.
Quote:If I read you correctly, 
"asymptotic involution"
is real...
yet seems to imply that chaos increases as unity approaches...?
[Image: duck-ht-2-er-180720_hpMain_16x9_992.jpg]
Don't gamble with improv.
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Reply

