Daggnabbit! Am I a hologram? Are you too? Virtually @ Mt. Gerizim?
#34
New, Similar Experiments Dramatically Achieve Rainer Plaga Suggestion To Prove Parallel Universes

In 1997, physicist Rainer Plaga wrote a paper suggesting an experiment to prove parallel universes exist. It was never tried. Marshall Barnes has now executed his own tests in Grandview Heights, OH, that achieve the same incredible goal.

[b]YELLOW SPRINGS, Ohio - March 1, 2017 - PRLog -- Research and development engineer Marshall Barnes's laser experiments, which test a different version of so-called retrocausality, are simultaneously proving parallel universes can be contacted. The results are consistent across two different but related experiments. Not only that, but Marshall extends the many-worlds interpretation into a direct relationship with John A. Wheeler's participatory universe model, and shrugs off all interpretations of quantum-level particle behavior that result in retrocausality as the particles "knowing" what they're doing. Instead, Marshall argues, it's the universe or "omniverse" behind the results observed, as part of the participatory function Wheeler had posited.[/b]


[Image: 12613870-rachel-retroworldality-test-pho...t-side.png]
[b]Rachel RetroWorldality test photo of "laser spot from nowhere" (far right side)[/b]

[b]In Marshall's experiment, laser pulses are fired toward a two-way mirror acting as a beam splitter, which sends the reflected beam toward a detection area on a side wall while the beam that continues on, called the transmission beam, encounters a fan. The fan acts as a high-speed shutter, blocking some of the pulses and allowing others to continue and hit the back wall. A laptop video camera captures all the action, making it possible to review the results frame by frame. However, there are times when no laser is on and yet a laser spot is clearly seen on the side wall, as if it had been reflected there.[/b]

[b]This is a first-time-ever, historic achievement, originally reported in his paper, Retrocausality, Wheeler, Delayed Choice, and Simulation Theory Reinterpreted (see https://www.academia.edu/30445299/Retrocausality_Wheeler_... ), which upon further review relates to a number of experiments that were in the news not long ago.[/b]



According to a 2006 MSNBC article by Alan Boyle, Time-travel physics seems stranger than science fiction, Columbia physicist Brian Greene described exactly the time travel scenario Marshall's experiment exhibits: "Causality can be changed, sending the universe down different forks in the road. You could go back and shoot your father, creating a universe where you were never born. But it wouldn't be the same universe you came from. You'd just be an alien visitor from a different reality, living out a scenario that's called the 'many-worlds interpretation.'"



A 1995 article for New Scientist magazine, Talking to the World Next Door, by physicist John Gribbin discusses a plan proposed by German astrophysicist Rainer Plaga to detect a parallel universe, a goal directly related to Marshall's successful experiments.



"Plaga suggests an experiment in which...if the photon is detected, a laser pulse would automatically be fired into an ion stored in a magnetic trap. This will excite the ion into an energetic state. So an experimenter who finds the ion to be excited even though no photon has been detected will know that this is because the laser beam was triggered in the world next door."



Marshall (see http://lanyrd.com/profile/paranovation/bio ) explains, "In a similar fashion, a laser pulse that isn't reflected by the two-way mirror and just barely goes through the fan suffers the classic 'which-way path' change, striking the detection wall area for the reflected beams. That act signifies the universe has split, with the laser point on the wall in the new universe when there is no source laser for it. With the whole affair on film, that settles it."



Plaga's test is said to show that energy transfer between parallel worlds is possible without violating the conservation of energy, since energy need only be conserved for the whole universe, not for each single parallel branch. Marshall's filmed experiment shows a clear indication of energy transferring from a parallel world, as in Plaga's proposal, which would pass just enough information between the two worlds to confirm their parallel existence and nothing more.



According to a 2016 MSNBC article by Boyle, an attempt by physicist John G. Cramer to "flip a switch that would have an effect not only on photons going through a complicated set-up of lasers and mirrors, but also on entangled photons that had gone through the set-up about 50 microseconds earlier" failed to prove that sending messages to the past through quantum non-locality is possible.



But according to Boyle, "conceptually, the effect would be a little like sending Marty McFly back in time to make sure his mom married his dad in 'Back to the Future'." Marshall immediately sees similarities.



"That is so similar to the portion of my paper stating, 'In my test, the laser point appears from nowhere, having tunneled from the past but into the future where it strikes the wall as it would had it been part of the initial reflection beam, but the film proves that it was not. That past is clear, however...the present that follows has a new past because that laser did show up, but we know the original is still preserved in the original universe where the photon didn't show up out of nowhere'."



"In doing the research for Paradox Lost, (see http://www.blurb.com/b/5622324-paradox-lost-the-public-edition) my 2013 special report to select members of Congress, on time travel, I reconciled the one last aspect no one else had dealt with - how time travel to the past in a parallel universe is possible when you weren't there to begin with. Marrying Wheeler's participatory universe model to the many-worlds theory, made it all work seamlessly and my Rachel and Emily RetroWorldality tests prove it. This is how reality works, for real".


Along the vines of the Vineyard.
With a forked tongue the snake singsss...
#35
...

Well, the experiments have to be replicated by another scientist first.

Marshall Barnes has a long-standing issue with Hawking:



Quote:First, a clarification - I never said Stephen Hawking lacked intelligence. 
I have said that he’s not the greatest mind on the planet Nonono 
just its most intelligent comedian. Rofl

I have the right to say that, as I have caught more errors that Hawking has made than anyone else.


https://www.quora.com/Is-there-any-merit...en-Hawking

...
#36
Well daggnabbit! Speak of the devil... LilD


Stephen Hawking appears as hologram in Hong Kong
March 25, 2017

[Image: renownedphys.jpg]
Renowned physicist Stephen Hawking, 75, appears via hologram to address an audience in Hong Kong
Renowned physicist Stephen Hawking has spoken to a Hong Kong audience by hologram, showcasing the growing reach of a technology which is making inroads into politics, entertainment and business.



The British scientist appeared Friday before an audience of hundreds who cheered and snapped pictures with their phones as he discussed his career and answered questions about the possibility of life on other planets, the use of technology in education and the impact of Brexit on Britain.
The 75-year-old said the election of US President Donald Trump was one in a string of "right-wing successes" that would have grave implications for the future of scientific innovation and discovery.
"With Brexit and Trump... we are witnessing a global revolt against experts," he said, making his first appearance in Hong Kong since 2006.
The swing to the right has come at a time when the world is facing multiple environmental crises, from global warming to deforestation, he added.
"The answers to these problems will come from science and technology," he said.
Hawking suffers from amyotrophic lateral sclerosis (ALS), a form of motor neurone disease that attacks the nerves controlling voluntary movement, leaving him paralysed and able to communicate only via a computer speech synthesiser.
The event was organised by Chinese gaming company NetDragon Websoft, in partnership with ARHT Media, which creates digital human holograms of celebrities including spiritual guru Deepak Chopra, motivational speaker Tony Robbins and slain rapper The Notorious B.I.G.
The technology which allows a human being to appear and interact with audiences in multiple locations simultaneously is gradually expanding its presence.
French far-left candidate Jean-Luc Melenchon appeared to supporters by hologram last month in a technological first for a presidential campaign in France.


Read more at: https://phys.org/news/2017-03-stephen-hawking-hologram-hong-kong.html#jCp
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
#37
"With Brexit and Trump... we are witnessing a global revolt against experts," he said, making his first appearance in Hong Kong since 2006. The swing to the right has come at a time when the world is facing multiple environmental crises, from global warming to deforestation, he added. "The answers to these problems will come from science and technology," he said.

Gee, that might come as comforting, except for that global revolt we're having against experts... What if experts have already done more to get us INTO a mess than they will ever do to get us OUT of a mess by blaring more empty expert promises out of their expert pie holes?

Congratulations, Mr. Hawking! You've won 
The Nobel Pie Prize! 

[Image: pie-in-the-face-literally-hAtA15-clipart.jpg]
Stephen Hawking gets his first look close-up at the event horizon
of lemon custard.
"Work and pray, live on hay, you'll get Pie In The Sky when you die." - Joe Hill, "The Preacher and the Slave" 1911
#38
Quote:PW:
Gee that might come as comforting except for that global revolt we're having against experts... What if experts have already done more to get us INTO a mess than they will ever do to get us OUT of a mess by blaring more empty expert promises out of their expert pie holes? 

Pi Whole. Arrow
how Write/Right/Rite you are. Holycowsmile


Experts/Excerpts
Quote:"Predatory publishing is becoming an organised industry," said Pisanski...

Their rise "threatens the quality of scholarship," she added.


Read more at: https://phys.org/news/2017-03-publish-schemes-rampant-science-journals.html#jCp


'Pay to publish' schemes rampant in science journals
March 22, 2017

[Image: paper.jpg]
Credit: Charles Rondeau/public domain
Dozens of scientific journals appointed a fictive scholar to their editorial boards on the strength of a bogus resume, researchers determined to expose "pay to publish" schemes reported Wednesday.



One journal snared in the sting operation offered the imaginary applicant a 60/40 split—60 percent for the journal—of fees collected from scientists seeking to publish their research.
Universities have famously become "publish or perish" ecosystems, making many academics desperate to get their work into print.
Several publications assigned the phantom editor to an unpaid, top-level position.
"It is our pleasure to add your name as our editor-in-chief for this journal, with no responsibilities," responded one within days.
"Many predatory journals hoping to cash in seem to aggressively and indiscriminately recruit academics to build legitimate-looking editorial boards," Katarzyna Pisanski, a social scientist at the University of Wroclaw, Poland, wrote in Nature.
In this case, the publishers padding their mastheads failed to notice that their new recruit's name—Anna O. Szust—translates as "Anna, a fraud" in Polish.
Despite this inside joke, the probe of academic integrity at hundreds of science journals—some reputed, others already on a blacklist—was dead serious.
"Although pranksters have successfully placed fictional characters on editorial boards, no one has examined the issue systematically," Pisanski noted.
"We did."
Pisanski and three colleagues concocted the fake application—supported by a cover letter, a CV boasting phony degrees, and a list of non-existent book chapters—and sent it to 360 peer-reviewed social science publications.
In the peer-review process, journals ask outside experts to assess the methodology and importance of submissions before accepting them.
Predatory journals
The journals were drawn equally from three directories: one listing reputable titles available through subscriptions, a second devoted to "open access" publications.
The third was a blacklist—compiled by University of Colorado librarian Jeffrey Beall—of known or suspected "predatory journals" that make money by extracting fees from authors.
The number of these highly dubious publications has exploded in recent years, now numbering at least 10,000.
Indeed, 40 of the 48 journals that took the bait and offered a position to the fictitious Anna O. figured on Beall's list, which has since been taken offline.
The other eight were from the open-access registry.
No one made any attempt to contact the university listed on the fake CV, and few probed her obviously spotty experience.
One journal suggested "Ms. Fraud" organise a conference after which presenters would be charged for a special issue.
"Predatory publishing is becoming an organised industry," said Pisanski, who decided not to name-and-shame the journals caught out by the sting.
Their rise "threatens the quality of scholarship," she added.
Even after the researchers contacted all the journals to inform them that Anna O. Szust did not really exist, her name continued to appear on the editorial board of 11—including one to which she had not even applied.
None of the journals from the most select directory fell in the trap, and a few sent back tartly worded answers.
"One does not become an editor by sending in a CV," came one reply.
"These positions are filled because a person has a high research profile and a solid research record."
Journal reference: Nature


Read more at: https://phys.org/news/2017-03-publish-schemes-rampant-science-journals.html#jCp
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
#39
Technique makes more efficient, independent holograms
April 4, 2017

[Image: techniquemak.jpg]
A single metasurface encodes two separate holograms. When illuminated with one direction of polarized light, the metasurface projects an image of a cartoon dog. When illuminated with the perpendicular direction of light, the metasurface …more
Not far from where Edwin Land—the inventor of the Polaroid camera—made his pioneering discoveries about polarized light, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) are continuing to unlock the power of polarization.


Recently, a team of researchers led by Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, encoded multiple holographic images in a metasurface that can be unlocked separately with differently polarized light.
This advancement could improve holograms for anti-fraud protection and entertainment, as well as offer more control over the manipulation and measurement of polarization. The research was published in Physical Review Letters.
"The novelty of this type of metasurface is that for the first time we have been able to embed vastly different images that don't look at all like each other—like a cat and a dog—and access and project them independently using arbitrary states of polarization," said Capasso, the senior author of the paper.
Polarization is the path along which light vibrates. Previous research from the Capasso lab used nanostructures sensitive to polarization to produce two different images encoded in the metasurface. However, those images were dependent on one another, meaning both were created but only one appeared in the field of vision.
[Image: 1-techniquemak.jpg]
This hologram is one of two different holographic images encoded in a metasurface that can be unlocked separately with differently polarized light. Credit: The Capasso Lab/Harvard SEAS
The metasurface made of titanium dioxide, a widely available material, consists of an array of polarization-sensitive pillars—also called nanofins—that redirect the incident light. Unlike previous arrays, which were uniform in size, these nanofins vary in orientation, height and width, depending on the encoded images.
"Each nanofin has different, precisely controllable polarization properties," said Noah Rubin, co-first author of the paper and graduate student in the Capasso Lab. "You use this library of elements to design the encoded image."
[Image: 2-techniquemak.jpg]
This hologram is one of two different holographic images encoded in a metasurface that can be unlocked separately with differently polarized light. Credit: The Capasso Lab/Harvard SEAS
Different polarizations read different elements.
"This metasurface can be encoded with any two images, and unlocked by any two polarizations, so long as they are perpendicular to each other," said Rubin. "You can also embed different functionalities. It can be a lens for one polarization and if you go to a different polarization, it can be a hologram. So, this work is general statement about what can be done with metasurfaces and enables new optics for polarization."
"This is another powerful example of metasurfaces," said Capasso. "It allows you to compress a number of functionalities, which would normally spread over several components, and put them all in a single optical element."
Journal reference: Physical Review Letters
Provided by: Harvard John A. Paulson School of Engineering and Applied Sciences



Read more at: https://phys.org/news/2017-04-technique-efficient-independent-holograms.html#jCp
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
#40
World's thinnest hologram paves path to new 3-D world
May 18, 2017

[Image: 591d33e9318fd.jpg]
An Australian-Chinese research team has created the world's thinnest hologram, paving the way towards the integration of 3D holography into everyday electronics like smart phones, computers and TVs.



Interactive 3D holograms are a staple of science fiction - from Star Wars to Avatar - but the challenge for scientists trying to turn them into reality is developing holograms that are thin enough to work with modern electronics.
Now a pioneering team led by RMIT University's Distinguished Professor Min Gu has designed a nano-hologram that is simple to make, can be seen without 3D goggles and is 1000 times thinner than a human hair.
"Conventional computer-generated holograms are too big for electronic devices but our ultrathin hologram overcomes those size barriers," Gu said.
"Our nano-hologram is also fabricated using a simple and fast direct laser writing system, which makes our design suitable for large-scale uses and mass manufacture.
"Integrating holography into everyday electronics would make screen size irrelevant - a pop-up 3D hologram can display a wealth of data that doesn't neatly fit on a phone or watch.
"From medical diagnostics to education, data storage, defence and cyber security, 3D holography has the potential to transform a range of industries and this research brings that revolution one critical step closer."



Conventional holograms modulate the phase of light to give the illusion of three-dimensional depth. But to generate enough phase shift, those holograms need to be about as thick as an optical wavelength.
The RMIT research team, working with the Beijing Institute of Technology (BIT), has broken this thickness limit with a 25 nanometre hologram based on a topological insulator material - a novel quantum material with a low refractive index in its surface layer but an ultrahigh refractive index in the bulk.
The topological insulator thin film acts as an intrinsic optical resonant cavity, which can enhance the phase shifts for holographic imaging.
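A back-of-the-envelope check shows why thickness is the bottleneck. The extra phase a film of refractive index n and thickness d imposes over free space is 2*pi*(n - 1)*d/lambda; the index and wavelength below are generic assumptions, not numbers from the RMIT paper:
[code]
import math

def single_pass_phase(n_index, thickness_nm, wavelength_nm):
    """Extra phase (radians) light gains crossing a film, vs. vacuum:
    delta_phi = 2 * pi * (n - 1) * d / lambda."""
    return 2 * math.pi * (n_index - 1) * thickness_nm / wavelength_nm

# Assumed values: glass-like index 1.5, 632 nm red laser.
print(single_pass_phase(1.5, 632, 632))  # ~3.1 rad: a wavelength-thick film suffices
print(single_pass_phase(1.5, 25, 632))   # ~0.12 rad: a bare 25 nm film falls far short
[/code]
A bare 25-nanometre film therefore supplies only a tiny fraction of the full 2-pi phase range a hologram needs, which is why the resonant recirculation of light inside the topological insulator layer is the key to the result.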
Dr Zengyi Yue, who co-authored the paper with BIT's Gaolei Xue, said: "The next stage for this research will be developing a rigid thin film that could be laid onto an LCD screen to enable 3D holographic display.
"This involves shrinking our nano-hologram's pixel size, making it at least 10 times smaller.
"But beyond that, we are looking to create flexible and elastic thin films that could be used on a whole range of surfaces, opening up the horizons of holographic applications."
The research is published in the journal Nature Communications on 18 May.
More information: Nature Communications (2017). DOI: 10.1038/NCOMMS15354 
Journal reference: Nature Communications
Provided by: RMIT University



Read more at: https://phys.org/news/2017-05-world-thinnest-hologram-paves-path.html#jCp


RHW007 posted this in another thread:  LilD

Here is a post that will take a while to get through; EVERYTHING is here.

Stretchy Holograms Could Power 3D, Morphing Projections

By Edd Gent, Live Science Contributor | May 17, 2017 11:21am ET


[Image: aHR0cDovL3d3dy5saXZlc2NpZW5jZS5jb20vaW1h...1zLmpwZWc=]

Researchers have developed holograms made of stretchy materials that could enable holographic animation.
Credit: American Chemical Society 

Holograms are a staple of science fiction, but the kinds of 3D, multicolored moving images floating in midair from movies like "Star Wars" are still a long way from reality. Now, though, researchers have developed the world’s first stretchable hologram, which could one day enable holographic animation, according to a new study.
In real life, [url=http://www.livescience.com/34652-hologram.html]holograms[/url] are more like paintings or photographs. They are effectively recordings of a 3D light field. When lit properly, they project a reproduction of the original object. Confusingly, the term refers to both the physical structure the image is recorded on as well as the resulting projection.
Almost all holograms contain a recording of just a single image, but now scientists at the University of Pennsylvania, in Philadelphia, have built a hologram on flexible polymer material that can hold several images. As the material is stretched, the different images are displayed one after the other, the researchers said. [Science Fact or Fiction? The Plausibility of 10 Sci-Fi Concepts]
"The question we asked is, Can we encode multiple bits of information in a hologram?" Ritesh Agarwal, research leader and professor of materials science and engineering, told Live Science. "It's an important piece of work, because it's the first time someone's shown you can record multiple holographic images, and by just stretching the polymer, you can basically change the image."
The members of the group relied on so-called metasurfaces to build their hologram. These are materials with a structure that has been carefully engineered at the nanoscale level to bend, reflect or distort electromagnetic radiation, with the aim of achieving specific goals like magnification or cloaking.
In this case, the researchers created an array of gold nanorods and embedded them in a flexible polymer called PDMA. The orientation of the rods is carefully calculated on a computer to determine how they reflect light, and therefore what holographic image they project, the scientists said.
The rods were also carefully designed so that stretching the PDMA base material changes the distance between the rods in a predictable way, morphing the resulting holographic image from one shape into another.
Metasurfaces have already been used to create 3D and multicolored holograms, and even ones that can switch between pairs of holographic images by changing the polarization of the light they are illuminated with.
But this requires bulky optical equipment to be readjusted, and the hologram can only accommodate two images, the researchers said. The new hologram that Agarwal and his colleagues developed measured on the scale of a few micrometers and could hold only three images, but the only limit is its size, they said. (One micrometer is equivalent to one-thousandth of a millimeter.)
Building larger holograms would allow many more holographic images to be recorded onto them, meaning they could store much more information than a standard hologram of the same size, the researchers said. This could even open up the possibility of creating a kind of holographic flip-book animation, they added.
"The information-carrying capacity increases tremendously," Agarwal said. "And as you make the hologram bigger and bigger, the interference between the images decreases dramatically, and even a very small amount of stretch would flip the image, so animation is possible."
Agarwal said these capabilities, described in a study published online May 10 in the journal Nano Letters, could have applications in virtual-reality products, flat-screen displays and optical communication devices. It could also lead to more secure holograms on credit cards that morph into a different image when they are bent, he said, which would be much harder to counterfeit.
The research team is not only working on holograms, though. Last year, the scientists combined metasurfaces with flexible materials to create a lens that can zoom 1.7 times when it is stretched.
This approach could produce much more compact instruments than traditional zoom lenses, which could be useful in small devices like mobile phones. The U.S. military has expressed interest in the stretchy lens, because it could replace the bulky telescoping lenses that snipers use, Agarwal said.
His group has now received funding to look at using so-called phase-change materials to build a hologram that can change shape in real time in response to electrical signals, which could finally usher in the kind of holographic display seen in "Star Wars."

Source: http://www.livescience.com/59141-stretch...tions.html
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
#41
Business Insider

Scientists Have Found a Way to Photograph People Through Walls Using Wi-Fi
They're watching.
DAVE MOSHER, BUSINESS INSIDER 23 MAY 2017

[Image: binoculars-red-watching_1024.jpg]
Wi-Fi can pass through walls. This fact is easy to take for granted, yet it's the reason we can surf the web using a wireless router located in another room.

However, not all of that microwave radiation makes it to or from our phones, tablets, and laptops. Routers scatter and bounce their signal off objects, illuminating our homes and offices like invisible light bulbs.


Now, German scientists have found a way to exploit this property to take holograms, or 3D photographs, of objects inside of a room - from outside of the room.

"It can basically scan a room with someone's Wi-Fi transmission," Philipp Holl, a 23-year-old undergraduate physics student at the Technical University of Munich, told Business Insider.

Holl initially built the device as part of his bachelor thesis with the help of his academic supervisor, Friedemann Reinhard. Later on the two submitted a study about their technique to the journal Physical Review Letters, which published their paper in early May.

Holl says the technology is only at the prototype stage at this point and has limited resolution, but he is excited about its promise.
"If there's a cup of coffee on a table, you may see something is there, but you couldn't see the shape," Holl says. "But you could make out the shape of a person, or a dog on a couch. Really any object that's more than 4 centimetres in size."

How to see through walls with Wi-Fi

The ability to see through walls using Wi-Fi has been around for years.

Some setups can detect home intruders or track moving objects with one or two Wi-Fi antennas. Others use an array of antennas to build two-dimensional images. But Holl says no one has used Wi-Fi to make a 3D hologram of an entire room and the stuff inside of it.

"Our method gives you much better images, since we record much more signal. We scan the whole plane of a room," he says.

Holl's method differs from the others in a few significant ways.

[Image: wifi-imaging-1.jpg]

Philipp Holl and Friedemann Reinhard/Physical Review Letters


First, it uses two antennas: one fixed in place, and another that moves. The fixed antenna records a Wi-Fi field's background, or reference, for the spot it's placed in. Meanwhile, the other antenna is moved by hand to record the same Wi-Fi field from many different points.

"These antennas don't need to be big. They can be very small, like the ones in a smartphone," Holl says.

Second, both antennas not only record the intensity (or brightness) of a Wi-Fi signal, but also its phase: a property of light that comes from the fact it's a wave. Laser light is all one phase, for example, while an incandescent bulb puts out a mix of different phases of light.

Similar to lasers, Wi-Fi routers emit microwave radiation in one phase.

Finally, the signals from both antennas are simultaneously fed into a computer, and software teases out the differences of intensity and phase "more or less in real-time," says Holl.

This is where the magic happens: The software builds many two-dimensional images as one antenna is waved around, then stacks them together in a 3D hologram. And because Wi-Fi travels through most walls, those holograms are of objects inside a room.
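The reconstruction step being described is standard scalar holography: once the scan yields amplitude and phase across a plane, the field can be numerically back-propagated to any depth to see where it focuses. A minimal angular-spectrum sketch in Python; the grid, wavelength and synthetic point scatterer below are illustrative assumptions, not parameters from Holl and Reinhard's setup:
[code]
import numpy as np

wavelength = 0.06     # ~5 GHz Wi-Fi, in metres (assumed)
n, pitch = 128, 0.02  # scan grid: 128 x 128 points, 2 cm apart (assumed)

# Complex field on the scan plane: amplitude AND phase, as obtained by
# comparing the moving antenna against the fixed reference antenna. Here
# we synthesise a toy field radiated by a point scatterer 1 m behind it.
x = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2 + 1.0**2)
field = np.exp(2j * np.pi * r / wavelength) / r

def propagate(field, dz):
    """Angular-spectrum propagation of a scalar field by a distance dz."""
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    kz2 = (1.0 / wavelength)**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(kz2, 0.0))  # evanescent part dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

# Back-propagate to candidate depths; the intensity sharpens (peaks) at the
# depth where the scatterer really sits - these are the 2-D slices that get
# stacked into the 3-D hologram.
for dz in (0.5, 1.0, 1.5):
    print(dz, float(np.abs(propagate(field, -dz)).max()))  # sharpest at 1.0 m
[/code]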

Holl and Reinhard's first holograms are of a shiny metal cross placed in front of a Wi-Fi router:

[Image: wifi-imaging-2.jpg]

Philipp Holl and Friedemann Reinhard/Physical Review Letters



The resulting images may not look like much, but they prove the concept works: the moving antenna can capture Wi-Fi shadows and reflections of objects in 3D, right through a wall.
[Image: wifi-imaging-3.jpg]


Philipp Holl and Friedemann Reinhard/Physical Review Letters



Above is a Wi-Fi hologram of a cross. Holl's technique can capture the Wi-Fi shadow cast by the object (left) through a wall.

The applications for Holl's Wi-Fi holography, he says, are pretty expansive. Adding an array of reference antennas, say, inside of a truck might help rescue workers detect people in rubble left by an earthquake - or let spy agencies see if anyone is home.

"You could probably use a drone to map out the inside of an entire building in 20 to 30 seconds," he said.

Holl created the video below to show how his team's technology works:




http://www.sciencealert.com/scientists-h...using-wifi
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
#42
Quote:"It's not a hologram, it's really three-dimensionally structured light."  Holycowsmile


Better than Star Wars: Chemistry discovery yields 3-D table-top objects crafted from light
July 11, 2017 by Margaret Allen

[Image: betterthanst.jpg]
The set-up for the SMU 3-D light pad includes this ultraviolet projector as well as a visible projector. The two project patterns of light into a chamber of photoactivatable dye. Wherever the UV light intersects with the green light it generates a 3-dimensional image inside the chamber. Credit: SMU
A scientist's dream of 3-D projections like those he saw years ago in a Star Wars movie has led to new technology for making animated 3-D table-top objects by structuring light.



The new technology uses photoswitch molecules to bring to life 3-D light structures that are viewable from 360 degrees, says chemist Alexander Lippert, Southern Methodist University, Dallas, who led the research.
The economical method for shaping light into an infinite number of volumetric objects would be useful in a variety of fields, from biomedical imaging, education and engineering, to TV, movies, video games and more.
"Our idea was to use chemistry and special photoswitch molecules to make a 3-D display that delivers a 360-degree view," Lippert said. "It's not a hologram, it's really three-dimensionally structured light."
Key to the technology is a molecule that switches between non-fluorescent and fluorescent in reaction to the presence or absence of ultraviolet light.
The new technology is not a hologram, and differs from 3-D movies or 3-D computer design. Those are flat displays that use binocular disparity or linear perspective to make objects appear three-dimensional when in fact they only have height and width and lack a true volume profile.
"When you see a 3-D movie, for example, it's tricking your brain to see 3-D by presenting two different images to each eye," Lippert said. "Our display is not tricking your brain—we've used chemistry to structure light in three actual dimensions, so no tricks, just a real three-dimensional light structure. We call it a 3-D digital light photoactivatable dye display, or 3-D Light Pad for short, and it's much more like what we see in real life."
At the heart of the SMU 3-D Light Pad technology is a "photoswitch" molecule, which can switch from colorless to fluorescent when shined with a beam of ultraviolet light.
The researchers discovered a chemical innovation for tuning the photoswitch molecule's rate of thermal fading—its on-off switch—by adding to it the chemical amine base triethylamine.
Now the sky is the limit for the new SMU 3-D Light Pad technology, given the many possible uses, said Lippert, an expert in fluorescence and chemiluminescence—using chemistry to explore the interaction between light and matter.


For example, conference calls could feel more like face-to-face meetings with volumetric 3-D images projected onto chairs. Construction and manufacturing projects could benefit from rendering them first in 3-D to observe and discuss real-time spatial information. For the military, uses could include tactical 3-D replications of battlefields on land, in the air, under water or even in space.
Volumetric 3-D could also benefit the medical field.
"With real 3-D results of an MRI, radiologists could more readily recognize abnormalities such as cancer," Lippert said. "I think it would have a significant impact on human health because an actual 3-D image can deliver more information."
Unlike 3-D printing, volumetric 3-D structured light is easily animated and altered to accommodate a change in design. Also, multiple people can simultaneously view various sides of volumetric display, conceivably making amusement parks, advertising, 3-D movies and 3-D games more lifelike, visually compelling and entertaining.
Lippert and his team report on the new technology and the discovery that made it possible in the article "A volumetric three-dimensional digital light photoactivatable dye display," published in the journal Nature Communications.
Co-authors are Shreya K. Patel, lead author, and Jian Cao, both students in the SMU Department of Chemistry.
Genesis of an idea—cinematic inspiration
The idea to shape light into volumetric animated 3-D objects came from Lippert's childhood fascination with the movie "Star Wars." Specifically he was inspired when R2-D2 projects a hologram of Princess Leia. Lippert's interest continued with the holodeck in "Star Trek: The Next Generation."





From watching Star Wars as a child, SMU chemist Dr. Alex Lippert brought to life his dream of crafting animated 3-D shapes from light. Using photoswitch chemistry, his lab constructed light shapes into structures that have volume and are viewable from 360 degrees, making them useful for biomedical imaging, teaching, engineering, TV, movies, video games and more. Credit: SMU
"As a kid I kept trying to think of a way to invent this," Lippert said. "Then once I got a background in chemistry molecules that interact with light, and an understanding of photoswitches, it finally dawned on me that I could take two beams of light and use chemistry to manipulate the emission of light."
Key to the new technology was discovering how to turn the chemical photoswitch off and on instantly, and generating light emissions from the intersection of two different light beams in a solution of the photoactivatable dye, he said.
SMU graduate student in chemistry Jian Cao hypothesized the activated photoswitch would turn off quickly by adding the base. He was right.
"The chemical innovation was our discovery that by adding one drop of triethylamine, we could tune the rate of thermal fading so that it instantly goes from a pink solution to a clear solution," Lippert said. "Without a base, the activation with UV light takes minutes to hours to fade back and turn off, which is a problem if you're trying to make an image. We wanted the rate of reaction with UV light to be very fast, making it switch on. We also wanted the off-rate to be very fast so the image doesn't bleed."
SMU 3-D Light Pad
In choosing among various photoswitch dyes, the researchers settled on N-phenyl spirolactam rhodamines. That particular class of rhodamine dyes was first described in the late 1970s and made use of by Stanford University's Nobel prize-winning W.E. Moerner.
The dye absorbs light within the visible region, making it suitable for fluorescence. Shining UV radiation on it triggers a photochemical reaction that forces it to open up and become fluorescent.
Turning off the UV light beam shuts down fluorescence, diminishes light scattering, and makes the reaction reversible—ideal for creating an animated 3-D image that turns on and off.
"Adding triethylamine to switch it off and on quickly was a key chemical discovery that we made," Lippert said.
To produce a viewable image they still needed a setup to structure the light.
Structuring light in a table-top display
The researchers started with a custom-built, table-top, quartz glass imaging chamber 50 millimeters by 50 millimeters by 50 millimeters to house the photoswitch and to capture light.
Inside they deployed a liquid solvent, dichloromethane, as the matrix in which to dissolve the N-phenyl spirolactam rhodamine, the solid, white crystalline photoswitch dye.
Next they projected patterns into the chamber to structure light in two dimensions. They used an off-the-shelf Digital Light Processing (DLP) projector purchased at Best Buy for beaming visible light.
The DLP projector, which reflects visible light via an array of microscopically tiny mirrors on a semiconductor chip, projected a beam of green light in the shape of a square. For UV light, the researchers shined a series of UV light bars from a specially made 385-nanometer Light-Emitting Diode projector from the opposite side.
Where the light intersected and mixed in the chamber, a pattern of two-dimensional squares appeared, stacked across the chamber. Optimized filter sets eliminated blue background light and allowed only red light to pass.
To get a static 3-D image, they patterned the light in both directions, with a triangle from the UV and a green triangle from the visible, yielding a pyramid at the intersection, Lippert said.
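The addressing scheme is easy to mimic with booleans: extrude the UV pattern through the chamber along one axis and the visible pattern along a perpendicular axis, and a voxel glows only where both beams pass through it. A toy sketch of the triangle-plus-triangle-makes-a-pyramid example (the grid size is arbitrary):
[code]
import numpy as np

n = 32  # voxels per side of the toy imaging chamber

# A filled triangle mask: each row y spans a width that shrinks with height.
tri = np.zeros((n, n), dtype=bool)
for y in range(n):
    half = (n - y) // 2
    tri[y, n // 2 - half : n // 2 + half] = True

# Extrude the UV triangle along x and the green triangle along z.
uv_volume = np.broadcast_to(tri[:, None, :], (n, n, n))     # varies in (y, z)
green_volume = np.broadcast_to(tri[:, :, None], (n, n, n))  # varies in (y, x)

# Fluorescence switches on only where the two beams intersect.
glow = uv_volume & green_volume
print(glow.sum(), "lit voxels")  # a square-based pyramid of light
[/code]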
From there, one of the first animated 3-D images the researchers created was the SMU mascot, Peruna, a racing mustang.
[Image: 1-betterthanst.jpg]
SMU chemist Dr. Alex Lippert and his lab developed the SMU 3-D light pad (shown here). It includes an ultraviolet projector and a visible projector, which project patterns of light into a chamber of photoactivatable dye. Wherever the UV light intersects with the green light it generates a 3-dimensional image inside the chamber. Credit: SMU
"For Peruna—real-time 3-D animation—SMU undergraduate student Shreya Patel found a way to beam a UV light bar and keep it steady, then project with the green light a movie of the mustang running," Lippert said.
So long, Renaissance
Today's 3-D images date to the Italian Renaissance and its leading architect and engineer.
"Brunelleschi during his work on the Baptistery of St. John was the first to use the mathematical representation of linear perspective that we now call 3-D. This is how artists used visual tricks to make a 2-D picture look 3-D," Lippert said. "Parallel lines converge at a vanishing point and give a strong sense of 3-D. It's a useful trick but it's striking we're still using a 500-year-old technique to display 3-D information."
The SMU 3-D Light Pad technology, patented in 2016, has a number of advantages over contemporary attempts by others to create volumetric displays, none of which have emerged as commercially viable.
Some of those have been bulky or difficult to align, while others use expensive rare earth metals, or rely on high-powered lasers that are both expensive and somewhat dangerous.
The SMU 3-D Light Pad uses lower light powers, which are not only cheaper but safer. The matrix for the display is also economical, and there are no moving parts to fabricate, maintain or break down.
Lippert and his team fabricated the SMU 3-D Light Pad for under $5,000 through a grant from the SMU University Research Council.
"For a really modest investment we've done something that can compete with more expensive $100,000 systems," Lippert said. "We think we can optimize this and get it down to a couple thousand dollars or even lower."
Next Gen: SMU 3-D Light Pad 2.0
The resolution quality of a 2-D digital photograph is stated in pixels. The more pixels, the sharper and higher-quality the image. Similarly, 3-D objects are measured in voxels—a pixel but with volume. The current 3-D Light Pad can generate more than 183,000 voxels, and simply scaling the volume size should increase the number of voxels into the millions—equal to the number of mirrors in the DLP micromirror arrays.
For their display, the SMU researchers wanted the highest resolution possible, measured in terms of the minimum spacing between any two of the bars. They achieved 200 microns, which compares favorably to 100 microns for a standard TV display or 200 microns for a projector.
The goal now is to move away from a liquid vat of solvent for the display to a solid cube table display. Optical polymer, for example, would weigh about the same as a TV set. Lippert also toys with the idea of an aerosol display.
The researchers hope to expand from a monochrome red image to true color, based on mixing red, green and blue light. They are working to optimize the optics, graphics engine, lenses, projector technology and photoswitch molecules.
"I think it's a very fascinating area. Everything we see—all the color we see—arises from the interaction of light with matter," Lippert said. "The molecules in an object are absorbing a wavelength of light and we see all the rest that's reflected. So when we see blue, it's because the object is absorbing all the red light. What's more, it is actually photoswitch molecules in our eyes that start the process of translating different wavelengths of light into the conscious experience of color. That's the fundamental chemistry and it builds our entire visual world. Being immersed in chemistry every day—that's the filter I'm seeing everything through."
The SMU discovery and new technology, Lippert said, speak to the power of encouraging young children.
"They're not going to solve all the world's problems when they're seven years old," he said. "But ideas get seeded and if they get nurtured as children grow up they can achieve things we never thought possible."
More information: Shreya K. Patel et al, A volumetric three-dimensional digital light photoactivatable dye display, Nature Communications (2017). DOI: 10.1038/ncomms15239 
Journal reference: Nature Communications
Provided by: Southern Methodist University



Read more at: https://phys.org/news/2017-07-star-wars-chemistry-discovery-yields.html#jCp
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
#43
Engineers create brighter, full-color holograms that can be viewed with low light
July 19, 2017

[Image: 1-hologramstak.jpg]
University of Utah electrical and computer engineering associate professor Rajesh Menon shows off a new 2D hologram that can be displayed with just a flashlight. His team has discovered a way to create inexpensive full-color 2-D and 3-D holograms that are far more realistic, brighter and can be viewed at wider angles than current holograms. Credit: Dan Hixson/University of Utah College of Engineering
Technology developed by a team of University of Utah electrical and computer engineers could make the holographic chess game R2-D2 and Chewbacca played in "Star Wars" a reality.



Read more at: https://phys.org/news/2017-07-brighter-full-color-holograms-viewed.html#jCp
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
#44
Holographic imaging could be used to detect signs of life in space
July 21, 2017 by Robert Perkins

[Image: 1-holographici.jpg]
Plumes of water ice and vapor spray from many locations near the south pole of Saturn's moon Enceladus, as documented by the Cassini-Huygens mission. Credit: NASA/JPL/Space Science Institute
We may be capable of finding microbes in space—but if we did, could we tell what they were, and that they were alive?


This month the journal Astrobiology is publishing a special issue dedicated to the search for signs of life on Saturn's icy moon Enceladus. Included is a paper from Caltech's Jay Nadeau and colleagues offering evidence that a technique called digital holographic microscopy, which uses lasers to record 3-D images, may be our best bet for spotting extraterrestrial microbes.
No probe since NASA's Viking program in the late 1970s has explicitly searched for extraterrestrial life—that is, for actual living organisms. Rather, the focus has been on finding water. Enceladus has a lot of water—an ocean's worth, hidden beneath an icy shell that coats the entire surface. But even if life does exist there in some microbial fashion, the difficulty for scientists on Earth is identifying those microbes from 790 million miles away.
"It's harder to distinguish between a microbe and a speck of dust than you'd think," says Nadeau, research professor of medical engineering and aerospace in the Division of Engineering and Applied Science. "You have to differentiate between Brownian motion, which is the random motion of matter, and the intentional, self-directed motion of a living organism."
Enceladus is the sixth-largest moon of Saturn, and is 100,000 times less massive than Earth. As such, Enceladus has an escape velocity—the minimum speed needed for an object on the moon to escape its surface—of just 239 meters per second. That is a fraction of Earth's, which is a little over 11,000 meters per second.
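The comparison is easy to verify from the escape-velocity formula v = sqrt(2GM/r). A quick Python check, using standard reference values for mass and radius (these constants are textbook figures, not taken from the article):
[code]
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """Minimum surface launch speed to escape a body: v = sqrt(2*G*M/r)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# Standard reference values (not from the article).
print(f"Earth:     {escape_velocity(5.972e24, 6.371e6):,.0f} m/s")  # ~11,200
print(f"Enceladus: {escape_velocity(1.08e20, 2.52e5):,.0f} m/s")    # ~239
[/code]
Both results land where the article puts them: a little over 11,000 m/s for Earth and about 239 m/s for Enceladus.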
Enceladus's minuscule escape velocity allows for an unusual phenomenon: enormous geysers, venting water vapor through cracks in the moon's icy shell, regularly jet out into space. When the Saturn probe Cassini flew by Enceladus in 2005, it spotted water vapor plumes in the south polar region blasting icy particles at nearly 2,000 kilometers per hour to an altitude of nearly 500 kilometers above the surface. Scientists calculated that as much as 250 kilograms of water vapor were released every second in each plume. Since those first observations, more than a hundred geysers have been spotted. This water is thought to replenish Saturn's diaphanous E ring, which would otherwise dissipate quickly, and was the subject of a recent announcement by NASA describing Enceladus as an "ocean world" that is the closest NASA has come to finding a place with the necessary ingredients for habitability.


Water blasting out into space offers a rare opportunity, says Nadeau. While landing on a foreign body is difficult and costly, a cheaper and easier option might be to send a probe to Enceladus and pass it through the jets, where it would collect water samples that could possibly contain microbes.
Assuming a probe were to do so, it would open up a few questions for engineers like Nadeau, who studies microbes in extreme environments. Could microbes survive a journey in one of those jets? If so, how could a probe collect samples without destroying those microbes? And if samples are collected, how could they be identified as living cells?



Professor Jay Nadeau describes her lab's work and its proposal to put new microscopes on spacecraft that could visit the icy moons Enceladus (Saturn) and Europa (Jupiter) to collect water samples and search them for life. Credit: California Institute of Technology
The problem with searching for microbes in a sample of water is that they can be difficult to identify. "The hardest thing about bacteria is that they just don't have a lot of cellular features," Nadeau says. Bacteria are usually blob-shaped and always tiny—smaller in diameter than a strand of hair. "Sometimes telling the difference between them and sand grains is very difficult," Nadeau says.
Some strategies for demonstrating that a microscopic speck is actually a living microbe involve searching for patterns in its structure or studying its specific chemical composition. While these methods are useful, they should be used in conjunction with direct observations of potential microbes, Nadeau says.
"Looking at patterns and chemistry is useful, but I think we need to take a step back and look for more general characteristics of living things, like the presence of motion. That is, if you see an E. coli, you know that it is alive—and not, say, a grain of sand—because of the way it is moving," she says. In earlier work, Nadeau suggested that the movement exhibited by many living organisms could potentially be used as a robust, chemistry-independent biosignature for extraterrestrial life. The motion of living organisms can also be triggered or enhanced by "feeding" the microbes electrons and watching them grow more active.
To study the motion of potential microbes from Enceladus's plumes, Nadeau proposes using an instrument called a digital holographic microscope that has been modified specifically for astrobiology.
In digital holographic microscopy, an object is illuminated with a laser and the light that bounces off the object and back to a detector is measured. This scattered light contains information about the amplitude (the intensity) of the scattered light, and about its phase (a separate property that can be used to tell how far the light traveled after it scattered). With the two types of information, a computer can reconstruct a 3-D image of the object—one that can show motion through all three dimensions.
"Digital holographic microscopy allows you to see and track even the tiniest of motions," Nadeau says. Furthermore, by tagging potential microbes with fluorescent dyes that bind to broad classes of molecules that are likely to be indicators of life—proteins, sugars, lipids, and nucleic acids—"you can tell what the microbes are made of," she says.
To study the technology's potential utility for analyzing extraterrestrial samples, Nadeau and her colleagues obtained samples of frigid water from the Arctic, which is sparsely populated with bacteria; those that are present are rendered sluggish by the cold temperatures.
With holographic microscopy, Nadeau was able to identify organisms with population densities of just 1,000 cells per milliliter of volume, similar to what exists in some of the most extreme environments on Earth, such as subglacial lakes. For comparison, the open ocean contains about 10,000 cells per milliliter and a typical pond might have 1-10 million cells per milliliter. That low threshold for detection, coupled with the system's ability to test a lot of samples quickly (at a rate of about one milliliter per hour) and its few moving parts, makes it ideal for astrobiology, Nadeau says.
Next, the team will attempt to replicate their results using samples from other microbe-poor regions on Earth, such as Antarctica.
More information: Manuel Bedrossian et al, Digital Holographic Microscopy, a Method for Detection of Microorganisms in Plume Samples from Enceladus and Other Icy Worlds, Astrobiology (2017). DOI: 10.1089/ast.2016.1616 
Journal reference: Astrobiology
Provided by: California Institute of Technology



Read more at: https://phys.org/news/2017-07-holographic-imaging-life-space.html#jCp
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
#45
Quote:a surface that when illuminated with a laser straight on (thus, at 0 degrees) projects a hologram of the Caltech logo but when illuminated from an angle of 30 degrees projects a hologram of the logo of the Department of Energy-funded Light-Material Interactions in Energy Conversion Energy Frontier Research Center

Illumined.

Two holograms in one surface
December 12, 2017



[Image: twoholograms.jpg]
Nanoposts of varying shapes can act as pixels in two different holograms. Credit: Andrei Faraon/Caltech

A team at Caltech has figured out a way to encode more than one holographic image in a single surface without any loss of resolution. The engineering feat overturns a long-held assumption that a single surface could only project a single image regardless of the angle of illumination.

The technology hinges on the ability of a carefully engineered surface to reflect light differently depending on the angle at which incoming light strikes that surface.
Holograms are three-dimensional images encoded in two-dimensional surfaces. When the surface is illuminated with a laser, the image seems to pop off the surface and becomes visible. Traditionally, the angle at which laser light strikes the surface has been irrelevant—the same image will be visible regardless. That means that no matter how you illuminate the surface, you will only create one hologram.
Led by Andrei Faraon, assistant professor of applied physics and materials science in the Division of Engineering and Applied Science, the team developed silicon oxide and aluminum surfaces studded with tens of millions of tiny silicon posts, each just hundreds of nanometers tall. (For scale, a strand of human hair is 100,000 nanometers wide.) Each nanopost reflects light differently due to variations in its shape and size, and based on the angle of incoming light.
That last property allows each post to act as a pixel in more than one image: for example, acting as a black pixel if incoming light strikes the surface at 0 degrees and a white pixel if incoming light strikes the surface at 30 degrees.
"Each post can do double duty. This is how we're able to have more than one image encoded in the same surface with no loss of resolution," says Faraon (BS '04), senior author of a paper on the new material published by Physical Review X on December 7.
"Previous attempts to encode two images on a single surface meant arranging pixels for one image side by side with pixels for another image. This is the first time that we're aware of that all of the pixels on a surface have been available for each image," he says.
As a proof of concept, Faraon and Caltech graduate student Seyedeh Mahsa Kamali (MS '17) designed and built a surface that when illuminated with a laser straight on (thus, at 0 degrees) projects a hologram of the Caltech logo but when illuminated from an angle of 30 degrees projects a hologram of the logo of the Department of Energy-funded Light-Material Interactions in Energy Conversion Energy Frontier Research Center, of which Faraon is a principal investigator.
The process was labor intensive. "We created a library of nanoposts with information about how each shape reflects light at different angles. Based on that, we assembled the two images simultaneously, pixel by pixel," says Kamali, the first author of the Physical Review X paper.
Theoretically, it would even be possible to encode three or more images on a single surface—though there will be fundamental and practical limits at a certain point. For example, Kamali says that a single degree of difference in the angle of incident light probably would not be enough to create a new high-quality image. "We are still exploring just how far this technology can go," she says.
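To make the pixel-assembly idea concrete, here is a minimal Python sketch of laying out a two-image surface from a precomputed library of nanopost responses. Everything here is a placeholder assumption for illustration: the library values would really come from electromagnetic simulation, and the actual device controls the phase of reflected light rather than simple brightness. The lookup logic, though, mirrors the "pixel by pixel" assembly Kamali describes.

[code]
import numpy as np

# Hypothetical stand-in for the team's nanopost library: each row is one
# candidate post geometry, with its (simulated) reflectance when lit at
# 0 degrees and at 30 degrees. Random placeholders, not real data.
rng = np.random.default_rng(0)
library = rng.uniform(0.0, 1.0, size=(2000, 2))  # columns: [r_0deg, r_30deg]

def assemble_surface(image_0deg, image_30deg):
    """For every pixel location, pick the post whose angle-dependent
    response best matches the desired brightness in BOTH images at once.
    This is why neither image 'uses up' any pixels."""
    targets = np.stack([image_0deg.ravel(), image_30deg.ravel()], axis=1)
    # Nearest neighbour in (r_0deg, r_30deg) space, pixel by pixel:
    dists = np.linalg.norm(targets[:, None, :] - library[None, :, :], axis=2)
    best = dists.argmin(axis=1)            # best-matching post per pixel
    return best.reshape(image_0deg.shape)  # surface layout of post IDs

# Two independent 32x32 target images (e.g. two different logos).
logo_a = rng.random((32, 32))
logo_b = rng.random((32, 32))
surface = assemble_surface(logo_a, logo_b)  # one surface, two holograms
[/code]

The design choice the sketch captures is that every post does "double duty": the two target brightnesses per pixel are matched jointly, not by interleaving separate pixel sets for each image.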
Practical applications for the technology include improvements to virtual-reality and augmented-reality headsets. "We're still a long way from seeing this on the market, but it is an important demonstration of what is possible," Faraon says.
Explore further: Computer chip technology repurposed for making reflective nanostructures
More information: Seyedeh Mahsa Kamali et al, Angle-Multiplexed Metasurfaces: Encoding Independent Wavefronts in a Single Metasurface under Different Illumination Angles, Physical Review X (2017). DOI: 10.1103/PhysRevX.7.041056

Journal reference: Physical Review X
Provided by: California Institute of Technology


Read more at: https://phys.org/news/2017-12-holograms-surface.html#jCp
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Reply
#46
It’s time to get ready for augmented reality

January 11, 2018 2.36pm EST

[Image: adk-holographic-displays-everywhere.jpg]

The world’s largest annual consumer technology show — CES 2018 in Las Vegas — ends today and some of the most exciting gadgets this year were on display in the augmented reality (AR) marketplace.

This follows the news, announced in December, that 2018 will be the year the previously secretive company Magic Leap joins the likes of Microsoft, Meta, ODG, Mira and DAQRI to launch an AR headset.




At the same time we are seeing Apple, Google, Facebook, Snap and others rushing to release platforms for smartphone-based AR.

But this is only the beginning of the AR computing future. New AR technologies are set to change industries – from construction to retail – and transform the way we interact with the digital world in everyday life.
What is augmented reality?

Augmented reality (sometimes also referred to as “mixed reality”) is the technique of adding computer graphics to a user’s view of the physical world.

You might have experienced this on your smartphone if you played the game Pokémon GO. Or perhaps you have tried placing furniture in your house using the IKEA Place app or the AR View feature on Amazon’s smartphone app.


[Image: file-20180111-46718-dv275g.jpg?ixlib=rb-...4&fit=clip]
IKEA Place is an augmented reality application that lets people experiment with how furniture would look in their home before they buy it. IKEA Place
But placing objects on the floor near you – whether furniture or monsters – is only a taste of what mainstream AR technologies could offer in the future.

The real potential for this new computing platform comes when computer graphics merge with, and behave in ways consistent with, their physical surroundings.

This is not just a challenge of matching the same lighting, or ensuring physical objects occlude synthetic ones.
Computer-generated objects will increasingly become more interactive (responding to voice, gesture and even touch), more persistent over time (enabling users to leave a virtual object next to a physical one for someone else to find), and develop a greater understanding of the objects in their physical surroundings (such that they immediately react to changes in the environment).

A simple example of the trend of graphics merging with the physical environment is the difference between playing a game like Minecraft as an isolated and self-contained digital board game sitting only on your dining table (https://giphy.com/gifs/e3-GnuXQlrLpS2vS) and playing such a game on any surface in your home.

[Image: file-20180111-60744-1pllat6.png?ixlib=rb...4&fit=clip]
Playing Minecraft using the HoloLens augmented reality headset enhances the possibilities of play. The potential for AR systems is greater when they understand and interact with the whole environment. Microsoft

Two systems that show how tightly computer graphics can align with the real world were announced at CES: Nvidia’s new Drive platform and WayRay’s holographic car navigation system.

Both aim to augment the road, buildings and other objects ahead of a vehicle, using sensors designed for autonomous cars.

NVIDIA (@nvidia), 8 Jan 2018: "NVIDIA DRIVE AR will enable next-generation #AugmentedReality interfaces that deliver information points of interest along a drive, create alerts, and navigate safely and easily. http://nvda.ws/2ACkN32 #CES2018"
[Image: DS_sW9FVwAA-CWl.jpg]

Another example is Disney Research’s new interactive AR characters that can understand and react to different physical objects.

Record investment in 2017 expected to grow

The combination of AR capable consumer hardware and intelligent software systems is getting investors excited.
Investment in augmented reality and virtual reality (VR) companies set a new record of more than US$3 billion in 2017. One estimate suggests that total spending on AR/VR products and services will increase from US$11.4 billion in 2017 to nearly US$215 billion in 2021, some US$30 billion of which will be due to sales of AR headsets alone.

Read more: Star Trek’s Holodeck: from science fiction to a new reality

The forecasts for growth in AR have typically been much higher than for VR. This is partly due to the perception that VR will have success in some relatively specific vertical markets (gaming, 360 degree cinema, training, data visualisation, and so on), which mainly benefit from the solo, immersive user experience, while AR has the potential to change many aspects of the way we interact with digital systems in our work and at home.

How will AR go mainstream?

For a glimpse of some of the ways we can expect to soon be interacting with computers using AR, we can look at the innovations coming out of research organisations, industrial innovation labs and startup companies.

Retail

In the retail space, we are now starting to see AR used for more than just a view of a 3D product model. Nissan recently launched an AR experience in the United States that lets customers view cars in dealerships through a smartphone and receive an annotated tour from Star Wars droids.


Read more: Shopping is hellish for disabled people – augmented reality could be the fix


Researchers at MIT Media Lab have demonstrated how results of a product search can be displayed directly on the supermarket shelf. And in Australia, CHOICE has seen great success with its CluckAR app that augments egg cartons with an indication of how happy the hens are back at the respective egg farm.

[Image: file-20180111-60724-19r12nz.png?ixlib=rb...4&fit=clip] Choice created an AR app that enables people to see whether eggs are free range.
Manufacturing

In CSIRO’s Advanced Manufacturing Roadmap, AR is identified as a way for manufacturers to increase productivity and customisation.

Elevator manufacturer Thyssenkrupp claims that AR has enabled it to achieve a four times faster workflow for the custom design of in-home chair lifts. Ford Motor Company’s design team is using HoloLens to make rapid decisions about complex geometrical problems such as rear-view mirror blindspots. And systems like ASTOR use AR directly in the manufacturing process to give a machine operator real-time information such as the force on the tip of a milling tool.

Construction

In the construction industry, buildings are usually designed using 3D modelling software but built using 2D plans. Bentley Systems has been figuring out ways to use AR to help make the mental connection on site between the 2D plans and the intended 3D design.

Maintenance and training

For maintenance workers, emerging products such as SCOPE AR and CSIRO’s own Guardian Remote allow remote experts to provide instructions directly within the task space. Just think about how much better this is than a phone call for help that consists mainly of “look up and left. No, no, the other left…”

For worker training, the HoloCrane is an example of how AR will enable a novice to practise a skill in situ, without the risk of damage to expensive equipment.

Internet of Things

Augmented Reality will allow us to have greater awareness and control of Internet of Things (IoT) devices in smart homes, factories, farms and offices. At CSIRO’s new “Synergy” building in Canberra, we have developed a smart glasses system that displays historical and real-time energy usage data overlaid directly on the appliances consuming the energy.

Meanwhile the groundbreaking Reality Editor system from MIT shows how AR can provide intuitive interfaces with which to instruct the smart devices in our everyday life.

The challenges ahead

While some form of AR in the future is a near certainty, there are a range of socio-technical challenges to address before AR technologies see mainstream adoption.

User interaction with wearable computers is still tricky, especially when users prefer not to have to hold an input device. And if developers of AR services are not careful to respect the privacy and security desires of their users, they can expect user backlash.

Visual clutter is also an issue. When we make use of virtual augmentations on specific parts of the physical world, there is usually limited real estate. We need solutions that help us manage what we see.

Too much visual clutter is a problem AR systems will need to avoid.

Whoever manages to solve these sorts of challenges first may well own the de facto standard of AR computing, and therefore the interface between people and their digital life. It is no surprise that all the major tech companies, and many startups, are rushing to get AR technology to users before anyone else.


Read more: With iPhone X, Apple is hoping to augment reality for the everyman


One alternative to central “winner takes all” ownership is to deliver cross-platform AR services via the web. Mozilla has been particularly active in this area and recently launched an experimental WebAR browser that works on today’s iPhones. At last year’s Web3D conference in Brisbane we demonstrated some of CSIRO’s work towards enabling WebAR services on HoloLens.

One way or another, AR computing is coming – it’s time to get ready.


Sources: https://theconversation.com/its-time-to-...%20reality


Note there's a Vimeo Video in there too - time to HOOK UP & ASSIMILATE

Bob... Ninja Assimilated
"The Light" - Jefferson Starship-Windows of Heaven Album
I'm an Earthling with a Martian Soul wanting to go Home.   
You have to turn your own lightbulb on. ©stevo25 & rhw007
Reply
#47
If eye gambled: I'd see your augment and raise your argument
Call this: Project Bluebeam is real!!!

Better than holograms:
A new 3-D projection into thin air
January 24, 2018 by Seth Borenstein


[Image: betterthanho.jpg]
This photo provided by the Dan Smalley Lab at Brigham Young University in January 2018 shows a projected image of researcher Erich Nygaard in Provo, Utah. Scientists have figured out how to manipulate tiny nearly unseen specks in the air and use them to produce images more realistic than most holograms, according to a study published on Wednesday, Jan. 23, 2018, in the journal Nature. (Dan Smalley Lab, Brigham Young University via AP)
One of the enduring sci-fi moments of the big screen—R2-D2 beaming a 3-D image of Princess Leia into thin air in "Star Wars"—is closer to reality thanks to the smallest of screens: dust-like particles.

Scientists have figured out how to manipulate nearly unseen specks in the air and use them to create 3-D images that are more realistic and clearer than holograms, according to a study in Wednesday's journal Nature. The study's lead author, Daniel Smalley, said the new technology is "printing something in space, just erasing it very quickly."
In this case, scientists created a small butterfly appearing to dance above a finger and an image of a graduate student imitating Leia in the Star Wars scene.
Even with all sorts of holograms already in use, this new technique is the closest to replicating that Star Wars scene.
"The way they do it is really cool," said Curtis Broadbent, of the University of Rochester, who wasn't part of the study but works on a competing technology. "You can have a circle of people stand around it and each person would be able to see it from their own perspective. And that's not possible with a hologram."
The tiny specks are controlled with laser light, like the fictional tractor beam from "Star Trek," said Smalley, an electrical engineering professor at Brigham Young University. Yet it was a different science fiction movie that gave him the idea: The scene in the movie "Iron Man" when the Tony Stark character dons a holographic glove. That couldn't happen in real life because Stark's arm would disrupt the image.


Going from holograms to this type of technology—technically called volumetric display—is like shifting from a two-dimensional printer to a three-dimensional printer, Smalley said. Holograms appear to the eye to be three-dimensional, but "all of the magic is happening on a 2-D surface," Smalley said.
The key is trapping and moving the particles around potential disruptions—like Tony Stark's arm—so the "arm is no longer in the way," Smalley said.
Initially, Smalley thought gravity would make the particles fall and make it impossible to sustain an image, but the laser light energy changes air pressure in a way to keep them aloft, he said.
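Why does whipping one glowing speck around read as a solid picture? Persistence of vision: if the particle retraces the whole figure before the eye "forgets" the first point, the image looks continuous. Below is a minimal Python sketch of that timing budget; the particle speed, the 0.1-second window, and the greedy path planner are all assumptions for illustration, not figures or methods from the Nature paper.

[code]
import numpy as np

# Assumed numbers for illustration only (not from the paper):
MAX_SPEED_M_S = 1.0   # how fast the trap can drag the particle
POV_WINDOW_S = 0.1    # rough persistence-of-vision budget (~1/10 s)

def greedy_path(points):
    """Order the drawing points with a greedy nearest-neighbour walk so
    the particle wastes as little travel distance as possible."""
    order, remaining = [0], list(range(1, len(points)))
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(points[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return order

def redraw_time(points):
    """Time for one particle to visit every point in the figure once."""
    path = points[greedy_path(points)]
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    return length / MAX_SPEED_M_S

# A small figure: 100 points scattered inside a 1 cm cube.
pts = np.random.default_rng(1).random((100, 3)) * 0.01
t = redraw_time(pts)
print(f"redraw time {t*1000:.1f} ms,",
      "appears solid" if t <= POV_WINDOW_S else "would flicker")
[/code]

The same budget arithmetic suggests why Smalley needs multiple beams for bigger projections: more points or a larger volume means a longer path, which a single particle cannot retrace within the persistence window.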
[Image: 2-betterthanho.jpg]
This photo provided by the Dan Smalley Lab at Brigham Young University in January 2018 shows a projected three-dimensional triangular prism in Provo, Utah. A study on this volumetric display was published in the journal Nature on Wednesday, Jan. 23, 2018. By shining light on specks in the air and then having the particles beam light back out, study lead author Smalley said the new technology is like "you really are printing something in space, just erasing it very quickly." (Dan Smalley Lab, Brigham Young University via AP)
Other versions of volumetric display use larger "screens" and "you can't poke your finger into it because your fingers would get chopped off," said Massachusetts Institute of Technology professor V. Michael Bove, who wasn't part of the study team but was Smalley's mentor.
The device Smalley uses is about one-and-a-half times the size of a children's lunchbox, he said.
[Image: betterthanah.jpg]
Professor Daniel Smalley and students. Credit: Nate Edwards, BYU Photo
So far the projections have been tiny, but with more work and multiple beams, Smalley hopes to have bigger projections.
This method could one day be used to help guide medical procedures—as well as for entertainment, Smalley said. It's still years away from daily use.
[Image: 1-betterthanho.jpg]
This photo provided by the Dan Smalley Lab at Brigham Young University in January 2018 shows a projected image of the earth above a finger tip in Provo, Utah. Scientists have figured out how to manipulate tiny nearly unseen specks in the air and use them to produce images more realistic than most holograms, according to a study published on Wednesday, Jan. 23, 2018, in the journal Nature. (Dan Smalley Lab, Brigham Young University via AP)
Explore further: The future of holographic video
More information: A photophoretic-trap volumetric display, Nature (2018). nature.com/articles/doi:10.1038/nature25176

Journal reference: Nature


Read more at: https://phys.org/news/2018-01-holograms-d-thin-air.html#jCp
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Reply

