We Be Bots
Thought you all might like this... -b

Robo Fecundus

By Bill Gallagher
1950 Words

     I walked with the robot out to the compost pile in the back yard and told it to catch flies until I said stop.  That's one of the exercises in the very lengthy instruction book which comes with the robot, a reference to help familiarize you with the robot's somewhat unbelievable abilities and strengths.  One thing the book makes clear: familiarization with this aspect of the New Technology, familiarization with this machine, is an ongoing process, and never really stops.  It's evolution.
     I watched the machine as it silently plucked flies out of the air, and I felt a chill run up my spine.  Its movements were a blur to my eyes, and it never missed.  It looked like it caught the flies by their wings.  Incredible.  The robot was releasing the flies alive, but could easily be instructed to exterminate the flies as it caught them, and it would do so with the utmost precision and efficiency. 
     With the New Technology it would be easy to create fly-exterminating mechanisms on a mass scale, poisonless and for the home, and that could be good, unless it eventually wiped out flies completely.
    I pondered that as I watched the machine. 
     A world without flies would be a far worse stinking mess than this one already is; a world without flies would not be good.
     I then wondered, as old men sometimes do, what if I just up and croaked right here, right now, without telling the robot to stop?  Would it stay up all night, long after the flies had gone to roost, searching for fly movements in a futile attempt to satisfy its primary command, or would it revert after a time to secondaries?  I will look that up in the dumb thing's instruction book.  I guess my main concern, if that scenario were to play out, is how long I would have to lie there dead, collecting flies myself, before someone took notice and addressed the tawdry little situation.
     I shouldn't call the robot a dumb thing; it is only dumb now, governed by a very tightly reined Asimovian logic, and with only a rudimentary reasoning capability during this learning phase.  Soon it will fulfill its real purpose, and that machine will become Super Human.
     Soon, that robot will be me.


     It was the year 2028 when the New Technology really kicked in.  Almost right away, experiments began to create and use New Tech Robots as vessels in which to transplant human brains, modern man's first success at immortality.  There is a microscopic symbiosis involved which is pure genetic engineering, another vector of the New Tech, and though the process is not 100% successful it is respectably close.
     The symbiotic buggy is a miracle drug along the lines of SIGA Pharmaceuticals' novel anti-infective for mucous membranes.  It extends the life of the brain radically and makes it electronically compatible with the machine.  Meanwhile, the machine's brain receptacle is engineered so it is almost biological itself.
     New Tech.
     The earlier machines, the first 1000, were kind of clunky, but my machine, number 31,367, is sleek and functional, weighing in at just over 300 kilograms.
     It's a Toyota.
     I personally find the humanoid look in bad taste.  Those days are done; I am a machine now (or soon will be), so make it easy to clean and repair, then let me loose.  Some people want their robots as close to human looking as possible, and even dress them.  All that is beside the point, imho, but to each his own.
     I once read an excellent and very thought-provoking book by Greg Bear called "Queen of Angels", and in that future people could pick things like skin color and have other really weird modifications done to their flesh.  The New Technology has put us on a somewhat different track than that, yet I see some similarities.  I chose gunmetal blue for the finish on my robot, and even though most of the body is metallicized or ceramicized plastic, a good deal of the works is still metal, especially some of the pumps and motors, and there is nothing like metal tubing to carry fluid under pressure.
     The largest problems with robot bodies have been, as you might guess, psychological.  The problems are deeply rooted in the sexual urge, and there have even been a few brain deaths caused by a real inability to put aside the procreative instinct.  Those early deaths were extreme cases; candidates are screened much more thoroughly now, and the education prior to having one's brain transplanted into a robotic body encompasses what is known to date.
     These psychological problems stemming from sexuality are fairly common among both sexes, but men seem more burdened with troublesome baggage.  Men hate to give anything up, to concede anything, and to give up what they have known all their lives concerning themselves and the opposite sex, well, one must want immortality pretty bad, that's all I can say, because robots don't have peckers.  You get over it or die, big boy.  So far all the brain transplants into robots have been from old people who were very close to death already.  I myself am getting there quickly, and that factor more than anything lessens the psychological problems caused by basic sexuality.
     One early robot had a major problem every time he spoke to an attractive woman; his brain emitted some weird chemical that his robot body misinterpreted wickedly, causing him to do perfect backward somersaults.  This was extremely dangerous if the robot was in a room full of people and things.  One time his back flip caused him to fall through the ceiling of the apartment below, and it's only because he hit the unoccupied kitchen table that he did not keep going through several floors.  It took a while, but that little snag was finally ironed out for the robot, and hopefully for future models who might experience the same misinterpretation.
     It is always good to give robots lots of room,  don't get too close.
     The instruction book says to remember that the whole robot, the entirety other than the human brain, is really just a capsule environment FOR the brain, and the brain will be kept alive at all costs, in the event of catastrophic shutdown or for any other reason.
     Some people/robots find sleep periods useful, even though there is no body which needs replenishment, or any other real need for sleep.  Others complain of a persistent chill which no alteration of the mechanism can dispel.  The will to live is everything to a robot, along with the gathering of new experiences and information.  There are not any real comforts, or pleasures, except the intellectual type, and yes, a lot is very hard to get used to.  Pleasure centers in the brain can be stimulated, but if you are after that kind of thing it is a lot easier to obtain wirelessly, versus having your brain plopped into a metal behemoth whose expected life span is ten thousand years.
     A brain transplant into a mechanical body has to be considered the ultimate trauma, so some missed associations and other confusion are to be expected.  It is only because of the New Technology that any of this is possible anyway.  People have come very far very fast.  Is it too far too fast?  Probably not, in fact, from the looks of things, we are just playing catch up.


     It was the year 2020 when DNA started being used extensively to back up large holdings of computer memory, because of its stability and its small size.  Four or five Google data centers' worth of very stable DNA micro-memory could be stored in capsules the size of large vitamins.  This was more than a boon.  This was evolution.
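That capsule-sized claim can be sanity-checked against figures commonly cited in real DNA-storage research. This back-of-envelope sketch assumes roughly 2 bits per nucleotide and ~330 g/mol per single-stranded nucleotide; the constants are illustrative approximations, not values from the story.

```python
# Rough density estimate for DNA as digital storage.
AVOGADRO = 6.022e23          # molecules per mole
GRAMS_PER_MOL = 330.0        # approximate mass of one DNA nucleotide
BITS_PER_NUCLEOTIDE = 2      # A/C/G/T encodes 2 bits per base

nucleotides_per_gram = AVOGADRO / GRAMS_PER_MOL
bits_per_gram = nucleotides_per_gram * BITS_PER_NUCLEOTIDE
exabytes_per_gram = bits_per_gram / 8 / 1e18
print(round(exabytes_per_gram))  # on the order of hundreds of exabytes per gram
```

Hundreds of exabytes per gram is in line with published theoretical estimates, so a vitamin-sized capsule holding several data centers' worth is not far-fetched.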
     Along with many other DNA related enlightenments it was also discovered that living DNA could easily be "Encumbered" with information DNA, that is, huge amounts of data could be stored/replicated/manipulated within living things themselves, in the background, one might say. 
     The first known discovery of ancient DNA encoding was made by an obscure student of biology, one Bernard Doucette, who had detected what seemed to be vestigial order in the DNA of some wood he was studying, and by a fluke he cracked the code (It was binary) and found himself in sole possession of some very very High Tech information.  Several of his fellow students were present in the lab that day and Bernard announced his discovery to them with the immortal words:
    "Holy fucking shit!"
     Thus began the treasure hunt of the century.  Any and all DNA was scanned for order, and huge volumes of extensive and detailed information on how to build the robots and many other things came to light almost overnight.  The languages of these encodings differed greatly from ours, and were in many forms, but the order was easy to identify and then decode.  They were made to be decoded; DNA was just storage, and most importantly it was stable long term storage.  In all reality it was the new treasure, this New Technology.
     The more ubiquitous a DNA sample, the greater the chance of finding ancient technological data encoded in it.  We ourselves are virtual libraries; we are self assembling machines of biology, short term tools of evolution.  We create the next step, the immortal step, we make ourselves into better tools, and all the instructions are included in every package!
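The "DNA is just storage" idea from the story can be illustrated with a toy scheme: a hypothetical two-bits-per-base mapping, invented here purely for demonstration (real DNA-storage codecs add error correction and avoid long repeats).

```python
# Toy codec: pack bytes into a DNA base string at 2 bits per base, and back.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a base sequence, 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    """Recover the original bytes from a base sequence."""
    bits = "".join(BASE_TO_BITS[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

msg = b"Hi"
dna = encode(msg)          # "CAGACGGC"
assert decode(dna) == msg  # round-trips cleanly
```

Cracking Bernard's "vestigial order" amounts to spotting a regular mapping like this one and inverting it.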


     Many people were ecstatic about the New Technology.  Many people were not.  The religious butt heads, with their inbred harangue over god money were not happy.  Thankfully they went away quickly, like fungus under strong light.  Changes are happening so fast that it is still difficult to say which way it will go -- more and more information is being discovered daily, and that can only be good.  Some positive trends include a greatly reduced birth rate and much less alpha behavior among the more intelligent males, as if they are already trying to come to grips with a future very different from the one they inhabit now.
     So who did it and where did they go?  Who put all that high tech information in our DNA and the DNA of almost everything else on this planet?  Some of that has been discovered, but not all of it, not nearly all of it.  It seems we are just the latest bunch to give it a try here on Planet Earth, and there have been many before.  In order to guard against what is called "Periodic Cataclysm", any and all who discovered DNA memory added to it, as we are also doing now.  Evidence of this mind set can also be seen in the fact that most of our best drugs from times before, if not all of them, have been, sometime during the past, incorporated into plants, with tons of redundancy, as a guard against catastrophic loss.  Engineering is easy to see if you look for it.
     The last four or five worldwide civilizations that crashed and burned here were us, or a form of us.  We also had extensive holdings on all the planets we can plainly see, anywhere we could maintain an atmosphere.  Those ruins are often still visible, but hard to see if you do not know what two or three hundred thousand years of space decay looks like.  Maybe we will find things in those ruins which will better explain this ancient junkyard we all live in.  Once I get used to my robot body, that's where I am headed.  I am going into space, at least for a while.  Plenty of time, a new way of seeing.
     As to where the early people went, no one knows yet.
     All we can say is they went away.


Meet the 21-Year-Old Tech Genius Betelhem Dessie Coding at Ethiopia’s First AI Lab

In addition to coordinating programs to inspire a new generation of girl coders, she also has worked on the development of Sophia the robot.

By  Derya Özdemir

April 13, 2020


"Sheba Valley", the Ethiopian equivalent of "Silicon Valley", has been taken by storm by Betelhem Dessie, an Ethiopian web and mobile technologies developer and coding genius.

She turned 21 this year; however, her young age hasn't stopped her from accomplishing amazing feats. As of now, Dessie is coordinating a number of programs run by robotics lab iCog, the Addis Ababa-based artificial intelligence lab that was behind the development of the world-famous Sophia the robot.

Quote:I have a really deep connection with Ethiopia because part of my AI was developed there by iCog Labs in Addis Ababa. They are amazing! You should check them out! @icoglabs #AskSophia
— Sophia the Robot (@RealSophiaRobot) November 1, 2018

She has four software programs copyrighted solely in her name, one of which she coded when she was only 10 years old.

Quote:Meet Betelhem Dessie, Project Manager for Anyone Can Code at iCog in Addis Ababa and Sophia, Hanson Robotics’ most well-known robot. #Women in sciences, technology and artificial intelligence. @ChevrierAntoine
— Canada in Ethiopia (@CanadaEthiopia) January 29, 2019

Her dance with science started when she was just 9, when she asked her father for money to celebrate her birthday; he had no money to give her, so she decided to take matters into her own hands.


She started editing videos and sending music to customers' cell phones in her father's electronics shop. From there, she moved on to computer maintenance and installing software, which fed a big appetite for anything tech and coding related.

She is currently studying for a Bachelor’s Degree in Software Engineering at Addis Ababa Institute of Technology; however, she has already undertaken projects such as Anyone Can Code, The Remus, Girls Can Code, and numerous others.

Quote:At 19-years-old, #BetelhemDessie is perhaps the youngest pioneer in #Ethiopia 's fast emerging tech scene, sometimes referred to as ' #ShebaValley '.#Africa #MNA
— M N A (@mnaEN) October 13, 2018

Moreover, she was present during the development of Sophia the robot at iCog Labs. Sophia the robot is the product of a collaborative effort between a team of developers from Ethiopia and a Hong Kong-based robotics company, Hanson Robotics.

It should also be noted that Sophia the robot was partly assembled in Ethiopia.

Currently, Dessie leads the Solve-IT project, where she works with young people to uncover innovative, technological solutions to some of the problems faced by their respective communities.


Bob... Ninja Assimilated
"The Morning Light, No sensation to compare to this, suspended animation, state of bliss, I keep my eyes on the circling sky, tongue tied and twisted just an Earth Bound Martian I" - "Learning to Fly", Pink Floyd
Facebook constructs bot-based universe to test out scenarios for manipulating humans, but don’t worry, it’s safe

15 Apr, 2020 23:43

The bot network, which will simulate “negative” behaviors on Facebook’s platform for humans to observe, has immediately drawn comparisons to HBO’s dystopian TV series “Westworld.” ©  Global Look Press via ZUMA Press / Jaap Arriens;  GLP via PLANET PHOTOS

By Helen Buyniski, RT

Facebook has debuted a “web-enabled simulation” in which a population of bots it has created based on real users can duke it out - supposedly to help the platform deal with bad actors. But it’s not totally isolated from reality.

The social media behemoth’s new playpen for malevolent bots and their simulated victims is described in a company paper released on Wednesday with the ultra-bland, ’please, don’t read this, ordinary humans’ title of “WES: Agent-based User Interaction Simulation on Real Infrastructure.”

While the writers have cloaked their and their bots’ activities in several layers of academic language, the report reveals their creations are interacting through the real-life Facebook platform, not a simulation. The bots are set up to model different “negative” behaviors – scamming, phishing, posting wrongthink – that Facebook wants to curtail, and the simulation allows Facebook to tweak its control mechanisms for suppressing these behaviors.
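The paper's setup can be caricatured as a tiny agent-based simulation. This sketch is purely illustrative: the `Bot` class, `scam_rate`, and `filter_strength` knobs are invented here, not anything from Facebook's actual WES code.

```python
# Illustrative agent-based sketch: bot agents emit actions, and a tunable
# filter suppresses the "negative" ones before they are delivered.
import random

class Bot:
    def __init__(self, name, scam_rate):
        self.name = name
        self.scam_rate = scam_rate  # probability this bot attempts a scam

    def act(self, rng):
        return "scam" if rng.random() < self.scam_rate else "post"

def simulate(bots, steps, filter_strength, seed=0):
    """Run the bot population and count blocked vs delivered actions."""
    rng = random.Random(seed)
    blocked = delivered = 0
    for _ in range(steps):
        for bot in bots:
            action = bot.act(rng)
            if action == "scam" and rng.random() < filter_strength:
                blocked += 1    # moderation catches the bad action
            else:
                delivered += 1  # the action reaches other bots, never real users
    return blocked, delivered

bots = [Bot(f"bot{i}", scam_rate=0.3) for i in range(10)]
weak = simulate(bots, steps=100, filter_strength=0.1)
strong = simulate(bots, steps=100, filter_strength=0.9)
```

Re-running with the same seed while turning the `filter_strength` dial up mirrors the paper's idea of tweaking control mechanisms against a fixed bot population.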

Even though the bots are technically operating on real-life Facebook, with only the thinnest veil of programming separating them from interacting with real-world users, the researchers seem convinced enough of their ability to keep fantasy and reality separate that they feel comfortable hinting in the paper at new and different ways of invading Facebook users’ privacy.

Quote:Because the WW bots are isolated from affecting real users, they can be trained to perform potentially privacy-violating actions on each other.

But these nosy bots aren’t wholly walled off from reality by any means. “A smaller group of read-only bots will need to read real user actions and react to them in the simulation,” Input Mag noted in its coverage of the unusual experiment, adding that the bots’ masters “haven’t decided whether to completely sandbox the bots or simply use a restricted version of the platform.” The outlet described Facebook’s goal in creating this simulation as building “its own virtual Westworld" – the western-themed, robot-populated, quasi-dystopic Disneyland of HBO’s popular sci-fi series.

And just as Westworld’s robots come into contact with real humans, Facebook’s researchers acknowledge in their paper that the possibility of their supposedly self-contained bots interacting with real users exists. “Bots must be suitably isolated from real users to ensure that the simulation, although executed on real platform code, does not lead to unexpected interactions between bots and real users.”

However, given the sheer volume of ‘oops’ moments Facebook has experienced in recent years – from leaving hundreds of millions of users’ phone numbers on an unprotected server to letting apps like Cambridge Analytica data-mine tens of millions of unsuspecting users to feeding user data to phone companies and other third parties without their consent – the social media behemoth’s ‘word’ is unlikely to count for much to users concerned about being unwittingly enrolled in a bot-filled simulation.


Thatz why We Be NOTZ (not BOTS)

That way they cannot see.

I AM a notz.

My name is Clay and eye stand for everything I say.

when facebook googles itz own tweets it made on snapchat in the other 2.0 version of its windows for android os on apple's ios apps that were obsolete yesterdecade... then they know we no kowtow.

When eye am in the clouds expect a shadow cast upon return of input/output.

An automaton of any sort by any measure of complexity will notz supplant us.

Trust me. your resident snake oil y'all salesman that asks a free fair price of zero sum.
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
Didn't Facebook create 2 computers with artificial intelligence that quickly invented their own language to converse in? From what I remember it was shut down rather quickly; seems scary to think what could happen if those bots got out.
When eye am in the clouds expect a shadow cast upon return of input/output.

Yes I agree, and while I am cut off from email right now (I still have internet access), I want to say I too shall be like a bat, leaving a MARKED PLACE for everyone to stare into, that moments of "STATE OF BLISS" shall spread across the planet... coming from Mooers NY

Bob... Ninja Assimilated 

See you folks on Monday... look to the skies.
Quote:input/output.  Yes-and...

An automaton of any sort by any measure of complexity will notz supplant us.


Move over, Siri! Researchers develop improv-based Chatbot
July 15, 2020
University of Southern California
Computer scientists have incorporated improv dialogues into chatbots to produce more grounded and engaging interactions.



What would conversations with Alexa be like if she were a regular at The Second City?
Jonathan May, research lead at the USC Information Sciences Institute (ISI) and research assistant professor of computer science at USC's Viterbi School of Engineering, is exploring this question with Justin Cho, an ISI programmer analyst and prospective USC Viterbi Ph.D. student, through their Selected Pairs Of Learnable ImprovisatioN (SPOLIN) project. Their research incorporates improv dialogues into chatbots to produce more engaging interactions.
The SPOLIN research collection is made up of over 68,000 English dialogue pairs, or conversational dialogues of a prompt and subsequent response. These pairs model yes-and dialogues, a foundational principle in improvisation that encourages more grounded and relatable conversations. After gathering the data, Cho and May built SpolinBot, an improv agent programmed with the first yes-and research collection large enough to train a chatbot.
The project research paper, "Grounding Conversations with Improvised Dialogues," was presented on July 6 at the Association of Computational Linguistics conference, held July 5-10.
[b]Finding Common Ground[/b]
May was looking for new research ideas in his work. His love for language analysis had led him to work on Natural Language Processing (NLP) projects, and he began searching for more interesting forms of data he could work with.
"I'd done some improv in college and pined for those days," he said. "Then a friend who was in my college improv troupe suggested that it would be handy to have a 'yes-and' bot to practice with, and that gave me the inspiration -- it wouldn't just be fun to make a bot that can improvise, it would be practical!"
The deeper May explored this idea, the more valid he found it to be. Yes-and is a pillar of improvisation that prompts a participant to accept the reality that another participant says ("yes") and then build on that reality by providing additional information ("and"). This technique is key in establishing a common ground in interaction. As May put it, "Yes-and is the improv community's way of saying 'grounding.'"
Yes-ands are important because they help participants build a reality together. In movie scripts, for example, maybe 10-11% of the lines can be considered yes-ands, whereas in improv, at least 25% of the lines are yes-ands. This is because, unlike movies, which have settings and characters that are already established for audiences, improvisers act without scene, props, or any objective reality.
"Because improv scenes are built from almost no established reality, dialogue taking place in improv actively tries to reach mutual assumptions and understanding," said Cho. "This makes dialogue in improv more interesting than most ordinary dialogue, which usually takes place with many assumptions already in place (from common sense, visual signals, etc.)."
But finding a source to extract improv dialogue from was a challenge. Initially, May and Cho examined typical dialogue sets such as movie scripts and subtitle collections, but those sources didn't contain enough yes-ands to mine. Moreover, it can be difficult to find recorded, let alone transcribed, improv.
[b]The Friendly Neighborhood Improv Bot[/b]
Before visiting USC as an exchange student in Fall 2018, Cho reached out to May, inquiring about NLP research projects that he could participate in. Once Cho came to USC, he learned about the improv project that May had in mind.
"I was interested in how it touched on a niche that I wasn't familiar with, and I was especially intrigued that there was little to no prior work in this area," Cho said. "I was hooked when Jon said that our project will be answering a question that hasn't even been asked yet: the question of how modeling grounding in improv through the yes-and act can contribute to improving dialogue systems."
Cho investigated multiple approaches to gathering improv data. He finally came across Spontaneanation, an improv podcast hosted by prolific actor and comedian Paul F. Tompkins that ran from 2015 to 2019.
With its open-topic episodes, each a good 30 minutes of continuous improvisation, its high-quality recordings, and its substantial size, Spontaneanation was the perfect source to mine yes-ands from for the project. The duo fed their Spontaneanation data into a program, and SpolinBot was born.
"One of the cool parts of the project is that we figured out a way to just use improv," May explained. "Spontaneanation was a great resource for us, but is fairly small as data sets go; we only got about 10,000 yes-ands from it. But we used those yes-ands to build a classifier (program) that can look at new lines of dialogue and determine whether they're yes-ands."
Working with improv dialogues first helped the researchers find yes-ands from other sources as well, as most of the SPOLIN data comes from movie scripts and subtitles. "Ultimately, the SPOLIN corpus contains more than five times as many yes-ands from non-improv sources than from improv, but we only were able to get those yes-ands by starting with improv," May said.
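The bootstrapping step May describes (train on a small improv-derived set, then flag candidate yes-ands in larger corpora) can be sketched with a toy Naive Bayes scorer. The seed examples, tokenization, and threshold here are all invented for illustration and bear no relation to the actual SPOLIN classifier.

```python
# Toy yes-and classifier: Naive Bayes log-likelihood ratio over word counts.
import math
from collections import Counter

def train(examples):
    """examples: list of (response_text, is_yes_and) pairs."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def score(model, text):
    """Log-likelihood ratio with add-one smoothing; > 0 leans yes-and."""
    counts, totals = model
    vocab = set(counts[True]) | set(counts[False])
    llr = 0.0
    for word in text.lower().split():
        p_yes = (counts[True][word] + 1) / (totals[True] + len(vocab))
        p_no = (counts[False][word] + 1) / (totals[False] + len(vocab))
        llr += math.log(p_yes / p_no)
    return llr

seed_data = [  # stand-ins for the ~10,000 improv-mined yes-ands
    ("yes and we can take my spaceship there", True),
    ("yes and the dragon owes me money", True),
    ("no that never happened", False),
    ("I don't know what you mean", False),
]
model = train(seed_data)
candidates = ["yes and my spaceship is fueled up", "no way, stop"]
mined = [c for c in candidates if score(model, c) > 0]
```

Running the scorer over movie scripts and subtitles and keeping the high-scoring pairs is, in spirit, how a small improv seed set can yield a much larger yes-and corpus.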
SpolinBot has a few controls that can refine its responses, taking them from safe and boring to funny and wacky, and also generates five response options that users can choose from to continue the conversation.
[b]SpolinBot #Goals[/b]
The duo has a lot of plans for SpolinBot, along with extending its conversational abilities beyond yes-ands. "We want to explore other factors that make improv interesting, such as character-building, scene-building, 'if this (usually an interesting anomaly) is true, what else is also true?,' and call-backs (referring to objects/events mentioned in previous dialogue turns)," Cho said. "We have a long way to go, and that makes me more excited for what I can explore throughout my PhD and beyond."
May echoed Cho's sentiments. "Ultimately, we want to build a good conversational partner and a good creative partner," he said, noting that even in improv, yes-ands only mark the beginning of a conversation. "Today's bots, SpolinBot included, aren't great at keeping the thread of the conversation going. There should be a sense that both participants aren't just establishing a reality, but are also experiencing that reality together."
That latter point is key, because, as May explained, a good partner should be an equal, not subservient in the way that Alexa and Siri are. "I'd like my partner to be making decisions and brainstorming along with me," he said. "We should ultimately be able to reap the benefits of teamwork and cooperation that humans have long benefited from by working together. And the virtual partner has the added benefit of being much better and faster at math than me, and not actually needing to eat!"

University of Southern California. "Move over, Siri! Researchers develop improv-based Chatbot." ScienceDaily. ScienceDaily, 15 July 2020.
Look up "eliza program".
I tried out the C-64 version in the '80s.
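For anyone who never met it: ELIZA worked by keyword pattern matching plus pronoun "reflection." A minimal sketch of that classic technique (these rules are made up for illustration, not Weizenbaum's DOCTOR script):

```python
# Tiny ELIZA-style responder: match a keyword rule, reflect pronouns back.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmother\b|\bfather\b", re.I), "Tell me more about your family."),
]

def reflect(fragment):
    """Swap first/second-person words so the reply points back at the user."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(line):
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # the classic fallback when nothing matches

print(respond("I need my robot body"))  # Why do you need your robot body?
```

No model, no learning, just string surgery, which is why it fit on a C-64 and why it still fooled people.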

The Rise of Collaborative Industrial Robots in Advanced Manufacturing

Collaborative Industrial Robot (Cobot) automation is set to take center stage in human-robot interaction in the 2020s and beyond.
Susan Fourtané
July 23, 2020


Mechanical creatures and concepts similar to robots can be found in history from about 400 BCE. The first real industrial robot was used in 1937; it was a crane-like device with five movement axes and a grab hand that could turn around its own axis, powered by a single electric motor.
The first patented robot was produced by the American company Unimation in 1956. Back then, robots were also called programmable transfer machines since their only task was to move objects from one point to another. 
In Europe, ABB Robotics, a Swiss-Swedish leading supplier of industrial robots and robot software, and Kuka Robotics, a German manufacturer of industrial robots and solutions for factory automation, introduced industrial robots on the market in 1973.
It was in 1996 when the idea of collaborative robots came into play by the hand of J. Edward Colgate and Michael Peshkin, who invented the first collaborative robot (cobot), and called it "a device and method for direct physical interaction between a person and a computer-controlled manipulator."
Robotics and industrial manufacturing
Fast forward into the present century, and we find that the usage and development of Collaborative Industrial Robots (Cobots) are accelerating faster than ever. Many believe human and machine collaboration plays a paramount role in the development of Industry 4.0 and the Industrial Internet of Things (IIoT).
Collaborative Industrial Robots are of great help in assisting humans in the manufacturing industry. Cobots are equipped with advanced sensors for fine-tuned work. They are quick to learn from the people who use them, becoming great coworkers and collaborators. Kuka Robotics, who also launched one of the first industrial robots on the market, launched its first Cobot in 2004, called the LBR3. 
The LBR3 was followed in 2008 by the UR5, the first Cobot released by Universal Robots, one of the world's largest robot suppliers. In 2012, the UR10 was launched, followed by the UR3 in 2015, a Cobot designed specifically for tabletop use.
Increased productivity in manufacturing: Cobots take over boring and monotonous jobs while human employees perform more complex tasks
Automation increases productivity in manufacturing. Source: MicroStockHub/iStock
Robotics can reduce labor costs and increase productivity in industrial manufacturing, and can help prevent future plant shutdowns. In case of a pandemic or any other disturbance, robotics allows industries to continue production while human controllers monitor safely from a distance, or even remotely.
Collaborative robots with Machine Vision can be used for detailed work. Cobots are already being implemented in global factories as part of the shift into the factory of tomorrow.
Factory automation, or industrial automation, is the connecting up of all factory equipment to improve the efficiency and reliability of process control systems. In turn, this leads to the achievement of lower costs, improved quality, increased flexibility, and overall reduced environmental impact.
In the research paper Collaborative Robots: Frontiers of Current Literature, published in the Journal of Intelligent Systems: Theory and Applications, Project Researcher Mikkel Knudsen and Research Director Dr. Jari Kaivo-oja from Finland Futures Research Centre, University of Turku, Finland explain that collaborative robots "play an increasing role in the advanced manufacturing landscape."
According to the researchers, "the Cobot market is rapidly expanding, and the academic literature is similarly growing." The paper presents current robotics trends and future frontiers of the Cobots development. The paper illustrates potential developments of future human-robot interactions and makes the following comparison between traditional industrial robots and collaborative industrial robots.
Traditional industrial robots vs. collaborative industrial robots (Cobots)
Industrial welding robots at an automated car manufacturing factory assembly line. Source: imagima/iStock
The characteristics of collaborative industrial robots suit the demands of Industry 4.0 and the global megatrends better than those of traditional industrial robots. In other words, Cobots are better equipped to join humans in Industry 4.0 --also called the Fourth Industrial Revolution-- than traditional industrial robots. The comparison is as follows:
Traditional Industrial Robots
  • Fixed installation, repeatable tasks, rarely changed
  • Interaction with worker only during programming 
  • Profitable only with medium to large lot size 
  • Small or big, and very fast
Collaborative Industrial Robots
  • Flexible relocation and frequent task changes 
  • Safe and frequent interaction with worker 
  • Profitable even at single lot production 
  • Small, slow, easy to move, easy to use
Cobots and human collaboration: Human+Machine collaboration in the advanced manufacturing landscape

  • Independent: A human operator and a Cobot work on separate workpieces, independently, each for its own manufacturing process. The collaborative element is the shared workspace, with no cages or fences.
  • Simultaneous: A human operator and a Cobot carry out separate manufacturing processes on the same workpiece at the same time. Operating concurrently on the same workpiece minimizes transit time and improves productivity and space utilization, but there is no time or task dependency between the human and the Cobot.
  • Sequential: A human operator and a Cobot perform sequential manufacturing processes on the same workpiece. Here, there are time dependencies between the processes of the operator and the Cobot; often the Cobot is assigned the more tedious processes, which may also improve the operator's working conditions.
  • Supportive: A human operator and a Cobot work on the same process on the same piece interactively. Here, there may be full dependencies between the human and the Cobot, as one cannot perform the task without the other.
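The four scenarios above differ along a few simple axes: whether the workpiece is shared, whether there are time dependencies, and whether human and Cobot depend on each other fully. As a minimal sketch of the taxonomy, the following could model it in code; the `WorkCell` fields and the `classify` helper are illustrative assumptions for this post, not something defined in the paper:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Collaboration(Enum):
    """The four human+Cobot collaboration scenarios from the paper."""
    INDEPENDENT = auto()   # separate workpieces, shared (uncaged) workspace
    SIMULTANEOUS = auto()  # same workpiece, separate processes, no dependency
    SEQUENTIAL = auto()    # same workpiece, time-dependent process handoffs
    SUPPORTIVE = auto()    # same process, full mutual dependency

@dataclass
class WorkCell:
    """Hypothetical descriptor of a human+Cobot work cell."""
    shared_workpiece: bool
    time_dependent: bool
    mutually_dependent: bool

def classify(cell: WorkCell) -> Collaboration:
    """Map a work cell's properties onto the paper's taxonomy."""
    if not cell.shared_workpiece:
        return Collaboration.INDEPENDENT
    if cell.mutually_dependent:
        return Collaboration.SUPPORTIVE
    if cell.time_dependent:
        return Collaboration.SEQUENTIAL
    return Collaboration.SIMULTANEOUS
```

Framed this way, the researchers' observation that most deployments today are independent or simultaneous amounts to saying that shared-workpiece dependency is still rare on real factory floors.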
According to the researchers, most examples of Cobots deployed in industrial smart manufacturing settings today belong to the independent and simultaneous collaboration scenarios. Yet, the most advanced research projects aim to break new ground toward the deployment of sequential and supportive collaboration scenarios between humans and collaborative robots. 
In order to reach this point, more sophisticated systems and solutions need to be put in place. According to the paper, as the degree of interdependency and collaboration increases, "Cobots need to have improved semantic understanding of the task goal and the actions and intents of their human co-workers. Similarly, the human workforce needs to be able to communicate with the Cobot in intuitive ways." 
Private 5G wireless technology and Cobots in advanced manufacturing
[Image: industrial-arobots-2_resize_md.jpg] The changing role of the engineer: Using a digital tablet supervising industrial robotic arms doing their work in an automated home appliance factory, Source: vm/iStock
One of the most important developments directly linked to the improvement in collaborative robots in manufacturing is 5G technology. The need for a dedicated private and powerful 5G network able to cope with the demands of advanced manufacturing is something OEMs need to look into. 
Nokia's recently released industrial-grade private 5G wireless standalone solution was developed to power Industry 4.0 needs and demands. With low-latency connectivity, a private wireless solution helps OEMs increase robotic automation, ensure safety and security, and achieve new levels of quality, efficiency, and productivity in manufacturing operations.
Cobots: Answers to megatrends
Cobots have started to make an impact on the largest market sector for industrial robots: the automotive industry. Forecasts for the annual revenues of collaborative robots suggest global revenues of $7.6 billion by 2027; the paper also reports a more optimistic forecast of $9.2 billion by 2025.
Climate change, environmental pressures, shrinking workforces and aging populations, and trade patterns shifting with geopolitics are all expected to accelerate the already rapid pace of technological development in manufacturing by 2030. Within this context, Cobots are set to play a paramount role in shaping the response to these global trends.
We BE BOTS Split_spawn Split_spawn Split_spawn 

Bob... Ninja Assimilated

Engineers Will Take 60-Foot Gundam for Walk Despite COVID-19 Crisis

A massive 60-foot robot "Gundam" continues construction in Japan, despite the COVID-19 crisis.
[Image: WeGRW50tmyzf_thumb.jpg]
By Brad Bergan
July 22, 2020

[Image: engineers-will-take-60-foot-gundam-for-w...ize_md.jpg]
Michael Overstreet/YouTube
While the COVID-19 pandemic has slowed construction projects worldwide, the construction of a Gundam robot — based on a 1970s Japanese animation show of the same name (an anime called "Mobile Suit Gundam") — has continued unabated, according to a YouTube video posted earlier this month.
The 60-foot (18.2-meter) humanoid robot will debut at Gundam Factory Yokohama on October 1, 2020.

Gundam construction continues despite COVID-19
Construction of the 60-foot (18.2-meter) Gundam robot has continued since this January. Based on a popular fictional robot that has featured in roughly 50 TV series and films since 1979, as well as several video games and manga, the Gundam will become an unmissable feature of the Port of Yokohama (south of Tokyo), where it will remain for a full year, reports Popular Mechanics.
In the video above, workers visibly perform touch-ups on the robot via a crane while the giant Gundam hoists its legs up and down, and later rotates its massive torso. As of writing, the Gundam still does not have a head.
However, when the Gundam is complete, it will have an incredible 24 degrees of freedom; in other words, it will walk. The entire machine will weigh roughly 25 tons (roughly 22,679 kilograms), which is fairly lightweight for its size.
[Image: gundam-project-michael-overstreet_resize_md.jpg]A worker continues tending to Gundam despite the COVID-19 crisis. Source: Michael Overstreet / YouTube
Building a real-life Gundam robot
These weight efficiencies are the result of meticulous engineering and design, as outlined in a collection of YouTube videos produced by Gundam Factory Yokohama. For example, one installment gives a tour of the facility where workers designed, constructed, and ultimately assembled the 60-foot (18.2-meter) Gundam. From metal fingertips to the wrist-arm connection, the hand alone is roughly 6.5 feet (nearly 2 meters) long.
The Head of Design for this Gundam project — Jun Narita — said special considerations regarding which types of material and motors to use were necessary because, with the wrong material or motors, the Gundam hand might weigh 1,300 pounds (589.6 kilograms).
"This weight restriction is like a curse," he said, reports Popular Mechanics.
[Image: gundam-project-michael-overstreet-1_resize_md.jpg]Weight efficiencies in material and motors are paramount. Source: Michael Overstreet / YouTube
Gundam robot delayed due to COVID-19 coronavirus
Gundam Factory Yokohama's website was updated to note an upcoming special preview event. It was initially planned for this month, but was subsequently canceled due to the COVID-19 coronavirus crisis.
Luckily, we live in a post-internet age when nearly everyone's experience of the outside world is flattened — equalized into little snippets of video on 2D screens. This is why, now more so than ever, we should expect the Gundam robot to make a digital debut — hailed by the world, but loved by aspirational newtypes.

THE SPICE MUST FLOW !!! Worship Worship Worship

Bob... Ninja Assimilated
"The Morning Light, No sensation to compare with this, suspended animation, a state of bliss, I keep my eyes on the circling sky, tongue-tied and twisted, just an Earth-bound Martian I" Learning to Fly, Pink Floyd
Quote:"In a sense, it's nothing short of miraculous," Manning says. "All we're doing is having these very large neural networks run these Mad Libs tasks, but that's sufficient to cause them to start learning grammatical structures."

JULY 24, 2020
How AI systems use Mad Libs to teach themselves grammar
by Edmund L. Andrews, Stanford University
[Image: grammar.jpg]Credit: Pixabay/CC0 Public Domain
Imagine you're training a computer that has a solid vocabulary and a basic knowledge of parts of speech. How would it understand this sentence: "The chef who ran to the store was out of food."

Did the chef run out of food? Did the store? Did the chef run the store that ran out of food?
Most human English speakers will instantly come up with the right answer, but even advanced artificial intelligence systems can get confused. After all, part of the sentence literally says that "the store was out of food."
Advanced new machine learning models have made enormous progress on these problems, mainly by training on huge datasets or "treebanks" of sentences that humans have hand-labeled to teach grammar, syntax and other linguistic principles.
The problem is that treebanks are expensive and labor intensive, and computers still struggle with many ambiguities. The same collection of words can have widely different meanings, depending on the sentence structure and context.
But a pair of new studies by artificial intelligence researchers at Stanford find that advanced AI systems can figure out linguistic principles on their own, without first practicing on sentences that humans have labeled for them. It's much closer to how human children learn languages long before adults teach them grammar or syntax.
Even more surprising, however, the researchers found that the AI model appears to infer "universal" grammatical relationships that apply to many different languages.
That has big implications for natural language processing, which is increasingly central to AI systems that answer questions, translate languages, help customers and even review resumes. It could also facilitate systems that learn languages spoken by very small numbers of people.
The key to success? It appears that machines learn a lot about language just by playing billions of fill-in-the-blank games that are reminiscent of "Mad Libs." In order to get better at predicting the missing words, the systems gradually create their own models about how words relate to each other.
"As these models get bigger and more flexible, it turns out that they actually self-organize to discover and learn the structure of human language," says Christopher Manning, the Thomas M. Siebel Professor in Machine Learning and professor of linguistics and of computer science at Stanford, and an associate director of Stanford's Institute for Human-Centered Artificial Intelligence (HAI). "It's similar to what a human child does."
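As a toy illustration of the fill-in-the-blank idea, here is a minimal "Mad Libs" word predictor that learns which word fits between two context words purely from co-occurrence counts. Everything in it (the tiny corpus, the bigram-context trick) is a deliberately crude stand-in for this post; real models like BERT learn far richer representations with deep neural networks over billions of sentences:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the billions of sentences real models see.
corpus = [
    "the chef ran to the store",
    "the chef ran out of food",
    "the store was out of food",
    "the chef was out of flour",
]

# Count which words appear between each (left, right) context pair --
# a crude stand-in for what a masked language model learns to predict.
context_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        context_counts[(words[i - 1], words[i + 1])][words[i]] += 1

def fill_blank(left: str, right: str) -> str:
    """Predict the most likely word for 'left ____ right'."""
    candidates = context_counts[(left, right)]
    return candidates.most_common(1)[0][0] if candidates else "<unk>"

print(fill_blank("out", "food"))  # most frequent filler for "out ____ food"
```

The point of the articles above is that when this guessing game is scaled up enormously, predicting the blank well eventually forces the model to learn grammar, not just word adjacency.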

[b]Learning Sentence Structure[/b]
The first study reports on experiments by three Stanford Ph.D. students in computer science—Kevin Clark, John Hewitt and Urvashi Khandelwal—who worked with Manning and with Omer Levy, a researcher at Facebook Artificial Intelligence Research.
The researchers began by using a state-of-the-art language processing model developed by Google that's nicknamed BERT (short for "Bidirectional Encoder Representations from Transformers"). BERT uses a Mad Libs approach to train itself, but researchers had assumed that the model was simply making associations between nearby words. A sentence that mentions "hoops" and "jump shot," for example, would prompt the model to search for words tied to basketball.
However, the Stanford team found that the system was doing something more profound: It was learning sentence structure in order to identify nouns and verbs as well as subjects, objects and predicates. That in turn improved its ability to untangle the true meaning of sentences that might otherwise be confusing.
"If it can work out the subject or object of a blanked-out verb, that will help it to predict the verb better than simply knowing the words that appear nearby," Manning says. "If it knows that 'she' refers to Lady Gaga, for example, it will have more of an idea of what 'she' is likely doing."
That's very useful. Take this sentence about promotional literature for mutual funds: "It goes on to plug a few diversified Fidelity funds by name."
The system recognized that "plug" was a verb, even though that word is usually a noun, and that "funds" was a noun and the object of the verb—even though "funds" might look like a verb. Not only that, the system didn't get distracted by the string of descriptive words—"a few diversified Fidelity"—between "plug" and "funds."
The system also became good at identifying words that referred to each other. In a passage about meetings between Israelis and Palestinians, the system recognized that the "talks" mentioned in one sentence were the same as "negotiations" in the next sentence. Here, too, the system didn't mistakenly decide that "talks" was a verb.
"In a sense, it's nothing short of miraculous," Manning says. "All we're doing is having these very large neural networks run these Mad Libs tasks, but that's sufficient to cause them to start learning grammatical structures."
[b]Discovering Universal Language Principles[/b]
In a separate paper based largely on work by Stanford student Ethan Chi, Manning and his colleagues found evidence that BERT teaches itself universal principles that apply in languages as different as English, French and Chinese. At the same time, the system learned differences: In English, an adjective usually goes in front of the noun it's modifying, but in French and many other languages it goes after the noun.
The bottom line is that identifying cross-language patterns should make it easier for a system that learns one language to learn more of them—even if they seem to have little in common.
"This common grammatical representation across languages suggests that multilingual models trained on 10 languages should be able to learn an eleventh or a twelfth language much more easily," Manning says. "Indeed, this is exactly what we are starting to find."

[b]More information:[/b] Christopher D. Manning et al. Emergent linguistic structure in artificial neural networks trained by self-supervision, Proceedings of the National Academy of Sciences (2020). DOI: 10.1073/pnas.1907367117
[b]Journal information:[/b] Proceedings of the National Academy of Sciences

Provided by Stanford University
Along the vines of the Vineyard.
With a forked tongue the snake singsss...
They could also incorporate word strings along with pictures synthesized like in Deep Dream programs.
These would then be judged against Asimov's Laws of Robotics, etc.
