One million robots work in car industry worldwide – new record
https://robohub.org/one-million-robots-work-in-car-industry-worldwide-new-record/ | Sun, 26 Mar 2023

The automotive industry has the largest number of robots working in factories around the world: operational stock hit a new record of about one million units. This represents about one third of the total number installed across all industries.

“The automotive industry effectively invented automated manufacturing,” says Marina Bill, President of the International Federation of Robotics. “Today, robots are playing a vital role in enabling this industry’s transition from combustion engines to electric power. Robotic automation helps car manufacturers manage the wholesale changes to long-established manufacturing methods and technologies.”

Robot density in automotive

Robot density is a key indicator that illustrates the current level of automation in the top car-producing economies: in the Republic of Korea, 2,867 industrial robots per 10,000 employees were in operation in 2021. Germany ranks second with 1,500 units, followed by the United States with 1,457 units and Japan with 1,422 units per 10,000 workers.

China, the world’s biggest car-producing country, has a robot density of 772 units but is catching up fast: within a year, new robot installations in the Chinese automotive industry almost doubled to 61,598 units in 2021, accounting for 52% of the total 119,405 units installed in car factories around the world.
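For readers unfamiliar with the metric, robot density is simply the operational stock normalized per 10,000 employees. The minimal Python sketch below illustrates the calculation; the stock and employee figures in the first call are invented for illustration, while the installation share uses the numbers quoted above.

```python
# Robot density: operational robots per 10,000 manufacturing employees.
def robot_density(robot_stock: int, employees: int) -> float:
    """Return robots per 10,000 employees."""
    return robot_stock / employees * 10_000

# Illustrative inputs only (not reported statistics):
print(robot_density(300_000, 2_000_000))  # -> 1500.0 units per 10,000 workers

# China's share of 2021 automotive installations, from the figures above:
print(61_598 / 119_405)  # -> ~0.516, i.e. about 52%
```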

Electric vehicles drive automation

Ambitious political targets for electric vehicles are forcing the car industry to invest: the European Union has announced plans to end the sale of air-polluting vehicles by 2035, the US government aims for a voluntary goal of 50% market share for electric vehicle sales by 2030, and all new vehicles sold in China must be powered by “new energy” by 2035 – half of them electric, fuel-cell, or plug-in hybrid, and the remaining 50% hybrid vehicles.

Most automotive manufacturers that have already invested in traditional “caged” industrial robots for basic assembly are now also investing in collaborative applications for final assembly and finishing tasks. Tier-two automotive parts suppliers, many of which are SMEs, are slower to automate fully. Yet as robots become smaller, more adaptable, easier to program, and less capital-intensive, this is expected to change.

Robot Talk Episode 42 – Thom Kirwan-Evans
https://robohub.org/robot-talk-episode-42-thom-kirwan-evans/ | Sat, 25 Mar 2023

Claire chatted to Thom Kirwan-Evans from Origami Labs all about computer vision, machine learning, and robots in industry.

Thom Kirwan-Evans is a co-founder at Origami Labs, where he applies the latest AI research to solve complex real-world problems. Thom started as a physicist at Dstl working with camera systems before moving to an engineering consultancy and then setting up his own company last year. A keen runner and father of two, he made a good work-life balance a key aim in starting his business.

Resilient bug-sized robots keep flying even after wing damage
https://robohub.org/resilient-bug-sized-robots-keep-flying-even-after-wing-damage/ | Thu, 23 Mar 2023
Source: https://news.mit.edu/2023/resilient-bug-sized-robots-wing-damage-0315

MIT researchers have developed resilient artificial muscles that can enable insect-scale aerial robots to effectively recover flight performance after suffering severe damage. Photo: Courtesy of the researchers

By Adam Zewe | MIT News Office

Bumblebees are clumsy fliers. It is estimated that a foraging bee bumps into a flower about once per second, which damages its wings over time. Yet despite having many tiny rips or holes in their wings, bumblebees can still fly.

Aerial robots, on the other hand, are not so resilient. Poke holes in the robot’s wing motors or chop off part of its propeller, and odds are pretty good it will be grounded.

Inspired by the hardiness of bumblebees, MIT researchers have developed repair techniques that enable a bug-sized aerial robot to sustain severe damage to the actuators, or artificial muscles, that power its wings, and still fly effectively.

They optimized these artificial muscles so the robot can better isolate defects and overcome minor damage, like tiny holes in the actuator. In addition, they demonstrated a novel laser repair method that can help the robot recover from severe damage, such as a fire that scorches the device.

Using their techniques, a damaged robot could maintain flight-level performance after one of its artificial muscles was jabbed by 10 needles, and the actuator was still able to operate after a large hole was burnt into it. Their repair methods enabled a robot to keep flying even after the researchers cut off 20 percent of its wing tip.

This could make swarms of tiny robots better able to perform tasks in tough environments, like conducting a search mission through a collapsing building or dense forest.

“We spent a lot of time understanding the dynamics of soft, artificial muscles and, through both a new fabrication method and a new understanding, we can show a level of resilience to damage that is comparable to insects,” says Kevin Chen, the D. Reid Weedon, Jr. Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS), the head of the Soft and Micro Robotics Laboratory in the Research Laboratory of Electronics (RLE), and the senior author of the paper on these latest advances. “We’re very excited about this. But the insects are still superior to us, in the sense that they can lose up to 40 percent of their wing and still fly. We still have some catch-up work to do.”

Chen wrote the paper with co-lead authors Suhan Kim and Yi-Hsuan Hsiao, who are EECS graduate students; Younghoon Lee, a postdoc; Weikun “Spencer” Zhu, a graduate student in the Department of Chemical Engineering; Zhijian Ren, an EECS graduate student; and Farnaz Niroui, the EE Landsman Career Development Assistant Professor of EECS at MIT and a member of the RLE. The article appeared in Science Robotics.

Robot repair techniques

Using the repair techniques developed by MIT researchers, this microrobot can still maintain flight-level performance even after the artificial muscles that power its wings were jabbed by 10 needles and 20 percent of one wing tip was cut off. Credit: Courtesy of the researchers.

The tiny, rectangular robots being developed in Chen’s lab are about the same size and shape as a microcassette tape, though one robot weighs barely more than a paper clip. Wings on each corner are powered by dielectric elastomer actuators (DEAs), which are soft artificial muscles that use mechanical forces to rapidly flap the wings. These artificial muscles are made from layers of elastomer that are sandwiched between two razor-thin electrodes and then rolled into a squishy tube. When voltage is applied to the DEA, the electrodes squeeze the elastomer, which flaps the wing.
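The squeezing force in actuators like these is commonly modeled as an electrostatic (Maxwell) pressure that grows with the square of the electric field across the elastomer. The sketch below evaluates that textbook relation for assumed, representative values; none of the numbers are taken from the MIT paper.

```python
# Electrostatic (Maxwell) pressure on a dielectric elastomer actuator:
#   p = eps0 * eps_r * (V / t)^2
# where V is the applied voltage and t the elastomer layer thickness.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(eps_r: float, voltage_v: float, thickness_m: float) -> float:
    """Return the actuation pressure in pascals."""
    e_field = voltage_v / thickness_m  # electric field, V/m
    return EPS0 * eps_r * e_field ** 2

# Assumed values: relative permittivity ~3, 1.5 kV across a 20-micrometer layer.
print(f"{maxwell_pressure(3.0, 1500.0, 20e-6) / 1e3:.0f} kPa")  # -> ~149 kPa
```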

But microscopic imperfections can cause sparks that burn the elastomer and cause the device to fail. About 15 years ago, researchers found they could prevent DEA failures from one tiny defect using a physical phenomenon known as self-clearing. In this process, applying high voltage to the DEA disconnects the local electrode around a small defect, isolating that failure from the rest of the electrode so the artificial muscle still works.

Chen and his collaborators employed this self-clearing process in their robot repair techniques.

First, they optimized the concentration of carbon nanotubes that make up the electrodes in the DEA. Carbon nanotubes are super-strong but extremely tiny rolls of carbon. Having fewer carbon nanotubes in the electrode improves self-clearing, since the electrode reaches higher temperatures and burns away more easily. But this also reduces the actuator’s power density.

“At a certain point, you will not be able to get enough energy out of the system, but we need a lot of energy and power to fly the robot. We had to find the optimal point between these two constraints — optimize the self-clearing property under the constraint that we still want the robot to fly,” Chen says.
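Chen’s “optimal point between these two constraints” can be pictured as a tiny constrained search: choose the lowest nanotube concentration that still meets the power requirement. The functional forms and numbers in this sketch are invented purely for illustration.

```python
# Toy model of the trade-off (all functional forms and numbers are invented):
# fewer nanotubes -> easier self-clearing, but lower power density.
def self_clearing_score(pct: int) -> float:
    return 1.0 - pct / 100  # clears best at low concentration

def power_density(pct: int) -> float:
    return 1.0 * pct        # W/kg, illustrative linear model

POWER_NEEDED = 60.0         # assumed requirement to keep the robot flying, W/kg

feasible = [pct for pct in range(1, 101) if power_density(pct) >= POWER_NEEDED]
best = min(feasible)        # lowest feasible concentration clears most easily
print(best, "% ->", self_clearing_score(best))  # -> 60 % -> 0.4
```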

However, even an optimized DEA will fail if it suffers from severe damage, like a large hole that lets too much air into the device.

Chen and his team used a laser to overcome major defects. They carefully cut along the outer contours of a large defect with a laser, which causes minor damage around the perimeter. Then, they can use self-clearing to burn off the slightly damaged electrode, isolating the larger defect.

“In a way, we are trying to do surgery on muscles. But if we don’t use enough power, then we can’t do enough damage to isolate the defect. On the other hand, if we use too much power, the laser will cause severe damage to the actuator that won’t be clearable,” Chen says.

The team soon realized that, when “operating” on such tiny devices, it is very difficult to observe the electrode to see if they had successfully isolated a defect. Drawing on previous work, they incorporated electroluminescent particles into the actuator. Now, if they see light shining, they know that part of the actuator is operational, but dark patches mean they successfully isolated those areas.

The new research could make swarms of tiny robots better able to perform tasks in tough environments, like conducting a search mission through a collapsing building or dense forest. Photo: Courtesy of the researchers

Flight test success

Once they had perfected their techniques, the researchers conducted tests with damaged actuators: some had been jabbed by many needles, while others had holes burned into them. They measured how well the robot performed in flapping-wing, take-off, and hovering experiments.

Even with damaged DEAs, the repair techniques enabled the robot to maintain its flight performance, with altitude, position, and attitude errors that deviated only very slightly from those of an undamaged robot. With laser surgery, a DEA that would have been broken beyond repair was able to recover 87 percent of its performance.

“I have to hand it to my two students, who did a lot of hard work when they were flying the robot. Flying the robot by itself is very hard, not to mention now that we are intentionally damaging it,” Chen says.

These repair techniques make the tiny robots much more robust, so Chen and his team are now working on teaching them new functions, like landing on flowers or flying in a swarm. They are also developing new control algorithms so the robots can fly better, teaching the robots to control their yaw angle so they can keep a constant heading, and enabling the robots to carry a tiny circuit, with the longer-term goal of carrying its own power source.

“This work is important because small flying robots — and flying insects! — are constantly colliding with their environment. Small gusts of wind can be huge problems for small insects and robots. Thus, we need methods to increase their resilience if we ever hope to be able to use robots like this in natural environments,” says Nick Gravish, an associate professor in the Department of Mechanical and Aerospace Engineering at the University of California at San Diego, who was not involved with this research. “This paper demonstrates how soft actuation and body mechanics can adapt to damage and I think is an impressive step forward.”

This work is funded, in part, by the National Science Foundation (NSF) and a MathWorks Fellowship.


How drones for organ transportation are changing the healthcare industry
https://robohub.org/how-drones-for-organ-transportation-are-changing-the-healthcare-industry/ | Tue, 21 Mar 2023

Source: Unsplash

The healthcare drone industry has witnessed a dramatic surge in the last couple of years: the market grew 30% in 2020 and is expected to expand from $254 million in 2021 to $1.5 billion in 2028. The most common use case for healthcare drones is the delivery of medical supplies and laboratory samples.

In 2022, however, new uses for drones became a reality: research groups in the USA successfully completed test flights delivering organs by drone. How will the proliferation of organ transportation by drone influence the healthcare industry?

What is an organ transportation drone?

Before we talk about how medical delivery drones may influence the healthcare industry, it’s worth investigating what they are and how they work.

Drones are unmanned aerial vehicles (UAVs) that can be operated remotely or fly autonomously using on-board sensors and GPS. The smallest drones measure about 30 cm in length and weigh about 500 grams. The largest can reach the size of a truck and carry payloads of up to 4.5 tons.

Drones for organ transportation sit somewhere in the middle. Organs are usually delivered in batches and, together with all the equipment needed to keep them in the desired condition, can be quite heavy. Organ delivery drones are able to carry freight of up to 180 kg. These drones are designed to transport vital organs such as hearts, kidneys, and livers from one location to another in a safe and efficient manner.

Drones can transport objects only over relatively short distances. While inner-city and inter-city delivery is possible, it is probably too early to talk about international transportation. This limitation is explained by the difficulty of piloting drones over long distances as well as by the nature of organ transplantation itself, which is extremely time-sensitive.

Why are organ transportation drones so important?

Right now, organ delivery drones are still at the development and testing stage. However, a survey conducted among surgeons in the USA suggests that this innovation could be highly important for the field.

A survey by the University of Maryland Medical Center in Baltimore found that 76.4% of organ transplantation surgeons believe that reducing cold ischaemia time to 8 hours, achieved via the use of organ delivery drones, would increase organ acceptance rates. Indeed, shorter delivery time is one of the most significant benefits of using drones for the delivery of organs: once organs have been extracted from the body, they have only 4 to 72 hours to be transplanted, and the longer the wait, the higher the chances of the organ failing upon transplantation. Only 16% of surgeons believed the current transportation system is adequate for organ delivery needs. Clara Guerrero, director of communications for the Texas Organ Sharing Alliance, says in an article for the San Antonio Report: ‘You’re saving hours. What that also means is the organ is more viable. That person, they don’t have to wait so long for the organ to arrive. We’re saving lives faster and sooner’.

Another study investigated the potential drawbacks and benefits of using organ transportation drones as opposed to delivery on commercial aircraft and charter flights. The researchers used a modified six-rotor UAS to model organ delivery, measuring temperature and vibration levels during transport. This is what they write:

“Temperatures remained stable and low (2.5 °C). Pressure changes (0.37–0.86 kPa) correlated with increased altitude. Drone travel was associated with less vibration (<0.5 G) than was observed with fixed-wing flight (>2.0 G). Peak velocity was 67.6 km/h (42 m/h). Biopsies of the kidney taken prior to and after organ shipment revealed no damage resulting from drone travel. The longest flight was 3.0 miles, modeling an organ flight between two inner city hospitals.”

Joseph R. Scalea et al, University of Maryland
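One practical implication of such studies is that every organ shipment needs automated telemetry checks. Here is a hedged Python sketch of one, with acceptance limits loosely inspired by the figures quoted above; the field names and thresholds are illustrative assumptions, not a clinical standard.

```python
# Check organ-transport telemetry against illustrative acceptance limits
# (limits loosely inspired by the study quoted above; not a clinical spec).
LIMITS = {"temp_c": 4.0, "vibration_g": 0.5, "pressure_delta_kpa": 1.0}

def shipment_ok(samples: list[dict]) -> bool:
    """Return True if every telemetry sample stays within the limits."""
    return all(
        s["temp_c"] <= LIMITS["temp_c"]
        and s["vibration_g"] < LIMITS["vibration_g"]
        and abs(s["pressure_delta_kpa"]) <= LIMITS["pressure_delta_kpa"]
        for s in samples
    )

flight_log = [
    {"temp_c": 2.5, "vibration_g": 0.3, "pressure_delta_kpa": 0.37},
    {"temp_c": 2.6, "vibration_g": 0.4, "pressure_delta_kpa": 0.86},
]
print(shipment_ok(flight_log))  # -> True
```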

In the future, the use of drones for organ transportation could greatly increase as the technology improves. For example, advances in autonomous flight systems and improved battery technology could make it possible for drones to fly longer distances and reach more remote locations. Additionally, the development of drone delivery networks could make it possible to deliver organs to hospitals and other healthcare facilities in a matter of minutes, reducing the time that vital organs are outside of a human body.

Who makes drones for organ transportation?

Currently, several companies around the world are working to make organ transportation by drone a reality.

One such company is Zipline, based in California, USA. The company has developed a drone specifically for the transportation of medical supplies, including blood and organs. The drone is able to fly at high speeds and cover long distances, making it ideal for transporting organs between hospitals and other medical facilities.

Another company, Matternet, also based in California, has developed a similar drone for medical deliveries. Its drones already deliver diagnostic samples in Switzerland and could be used to carry small organs as well.

The Canadian company Unither Bioélectronique specializes in quick and efficient delivery methods for organ transportation, such as drones. The Indian government is also developing an organ delivery drone system.

In China, a company called EHang has developed a drone that can transport organs and other medical supplies, likewise flying at high speeds and covering the long distances between hospitals and other medical facilities.

In Europe, the German company Volocopter has developed a drone specifically for the transportation of organs. It is equipped with advanced navigation systems and can fly at high speeds between hospitals and other medical facilities.

In India, the first human organ delivery drone was developed by MGM Healthcare. It can be used to transport organs over distances of up to 20 kilometers.

Conclusion

The use of organ drone delivery represents a significant breakthrough in the field of organ transplantation. This innovative technology has the potential to revolutionize the way organs are transported, making the process faster, more efficient, and more reliable than ever before. By reducing the time it takes to deliver organs to transplant centers, drones could help save countless lives by ensuring that patients receive the organs they need in a timely manner. Moreover, by reducing the risk of organ damage during transport, drones could improve the success rates of organ transplants, leading to better outcomes for patients. With the ongoing development of organ drone delivery technology, we can look forward to a future where organ transplantation is more accessible, reliable, and effective than ever before.

Robotic bees and roots offer hope of healthier environment and sufficient food
https://robohub.org/robotic-bees-and-roots-offer-hope-of-healthier-environment-and-sufficient-food/ | Sat, 18 Mar 2023

Robotics and AI can help build healthier bee colonies, benefitting biodiversity and food supply. © Lorenzo Bernini, Shutterstock.com

The robotic bee replicants home in on the unsuspecting queen of a hive. But unlike the rebellious replicants in the 1982 sci-fi thriller Blade Runner, these ones are here to work.

Combining miniature robotics, artificial intelligence (AI) and machine learning, the plan is for the robotic bees to stimulate egg laying in the queen by, for example, feeding her the right foods at the right time.

Survive and thrive

‘We plan to affect a whole ecosystem by interacting with only one single animal, the queen,’ said Dr Farshad Arvin, a roboticist and computer scientist at the University of Durham in the UK. ‘If we can keep activities like egg laying happening at the right time, we are expecting to have healthier broods and more active and healthy colonies. This will then improve pollination.’

While that goes on above the surface, shape-morphing robot roots that can adapt and interact with real plants and fungi are hard at work underground. There, plants and their fungal partners form vast networks.

These robotic bees and roots are being developed by two EU-funded projects. Both initiatives are looking into how artificial versions of living things central to maintaining ecosystems can help real-life organisms and their environment survive and thrive – while ensuring food for people remains plentiful.

“If we can keep activities like egg laying happening at the right time, we are expecting to have healthier broods.”

– Dr Farshad Arvin, RoboRoyale

That could be crucial to the planet’s long-term future, particularly with many species currently facing steep population declines as a result of threats that include habitat loss, pollution and climate change.

One of those at risk is the honeybee, a keystone species in the insect pollination required for 75% of crops grown for human food globally.

Fit for a queen

The RoboRoyale project that Arvin leads combines microrobotic, biological and machine-learning technologies to nurture the queen honeybee’s well-being. The project is funded by the European Innovation Council’s Pathfinder programme.

A unique aspect of RoboRoyale is its sole focus on the queen rather than the entire colony, according to Arvin. He said the idea is to demonstrate how supporting a single key organism can stimulate production in the whole environment, potentially affecting hundreds of millions of organisms.

The multi-robot system, which the team hopes to start testing in the coming months, will learn over time how to groom the queen to optimise her egg laying and production of pheromones – chemical scents that influence the behaviour of the hive.

The system is being deployed in artificial glass observation hives in Austria and Turkey, with the bee replicants designed to replace the so-called court bees that normally interact with the queen.

Foods for broods

One aim is that the robot bees can potentially stimulate egg laying by providing the queen with specific protein-rich foods at just the right time to boost this activity. In turn, an expected benefit is that a resulting increase in bees and foraging flights would mean stronger pollination of the surrounding ecosystem to support plant growth and animals.

The system enables six to eight robotic court bees, some equipped with microcameras, to be steered inside an observation hive by a controller attached to them from outside. The end goal is to make the robot bees fully autonomous.

The concept design of the RoboRoyale robotic controller. © Farshad Arvin, 2023

Prior to this, the RoboRoyale team observed queen bees in several hives using high-resolution cameras and image-analysis software to get more insight into their behaviour.

The team captured more than 150 million samples of the queens’ trajectories inside the hive and detailed footage of their social interactions with other bees. It is now analysing the data.

Once the full robotic system is sufficiently tested, the RoboRoyale researchers hope it will foster understanding of the potential for bio-hybrid technology not only in bees but also in other organisms.

‘It might lead to a novel type of sustainable technology that positively impacts surrounding ecosystems,’ said Arvin.

Wood Wide Web

The other project, I-Wood, is exploring a very different type of social network – one that’s underground.

Scientists at the Italian Institute of Technology (IIT) in Genoa are studying what they call the Wood Wide Web. It consists of plant roots connected to each other through a symbiotic network of fungi that provide them with nutrients and help them to share resources and communicate.

“Biomimicry in robotics and technology will have a fundamental role in saving our planet.”

– Dr Barbara Mazzolai, I-Wood

To understand these networks better and find ways to stimulate their growth, I-Wood is developing soft, shape-changing robotic roots that can adapt and interact with real plants and fungi. The idea is for a robotic plant root to use a miniaturised 3D printer in its tip to enable it to grow and branch out, layer by layer, in response to environmental factors such as temperature, humidity and available nutrients.
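The article does not describe I-Wood’s control software, but a tip that grows in response to temperature, humidity and nutrients suggests a simple gradient-following rule. The sketch below illustrates the idea; the sensor channels, weights and values are invented for illustration.

```python
# Toy growth rule: a root tip picks the direction whose local conditions
# best favor growth (all channels, weights and readings are illustrative).
def growth_direction(readings: dict[str, dict[str, float]]) -> str:
    """Return the direction with the highest growth-favorability score."""
    def score(env: dict[str, float]) -> float:
        return env["moisture"] + env["nutrients"] - env["soil_compaction"]
    return max(readings, key=lambda d: score(readings[d]))

tip_sensors = {
    "down":  {"moisture": 0.8, "nutrients": 0.4, "soil_compaction": 0.6},
    "left":  {"moisture": 0.5, "nutrients": 0.7, "soil_compaction": 0.2},
    "right": {"moisture": 0.3, "nutrients": 0.2, "soil_compaction": 0.1},
}
print(growth_direction(tip_sensors))  # -> "left"
```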

‘These technologies will help to increase knowledge about the relationship between symbionts and hosts,’ said Dr Barbara Mazzolai, an IIT roboticist who leads the project.

Mazzolai’s team has a greenhouse where it grows rice plants inoculated with fungi. So far, the researchers have separately examined the growth of roots and fungi.

Soon, they plan to merge their findings to see how, when and where the interaction between the two occurs and what molecules it involves.

The findings can later be used by I-Wood’s robots to help the natural symbiosis between fungi and roots work as effectively as possible. The team hopes to start experimenting with robots in the greenhouse by the end of this year.

The robotic roots can be programmed to move autonomously, helped by sensors in their tips, according to Mazzolai. Like the way real roots or earthworms move underground, they will also seek passages that are easier to move through due to softer or less compact soil.

Tweaks of the trade

But there are challenges in combining robotics with nature.

For example, bees are sensitive to alien objects in their hive and may remove them or coat them in wax. This makes it tricky to use items like tracking tags.

The bees have, however, become more accepting after the team tweaked elements of the tags such as their coating, materials and smell, according to Arvin of RoboRoyale.

Despite these challenges, Arvin and Mazzolai believe robotics and artificial intelligence could play a key part in sustaining ecosystems and the environment in the long term. For Mazzolai, the appeal lies in the technologies’ potential to offer deeper analysis of little-understood interactions among plants, animals and the environment.

For instance, with the underground web of plant roots and fungi believed to be crucial to maintaining healthy ecosystems and limiting global warming by locking up carbon, the project’s robotic roots can help shed light on how we can protect and support these natural processes.

‘Biomimicry in robotics and technology will have a fundamental role in saving our planet,’ Mazzolai said.


This article was originally published in Horizon, the EU Research and Innovation magazine.

Robot Talk Episode 41 – Alessandra Rossi
https://robohub.org/robot-talk-episode-41-alessandra-rossi/ | Fri, 17 Mar 2023

Claire chatted to Alessandra Rossi from the University of Naples all about social robotics, theory of mind, and robots playing football.

Alessandra Rossi is Assistant Professor at the University of Naples Federico II in Italy. Her PhD thesis was part of the Marie Sklodowska-Curie ETN SECURE project at the University of Hertfordshire in the UK, and she is now a Visiting Lecturer and Researcher there. Her research interests include human-robot interaction, social robotics, explainable AI, multi-agent systems and user profiling. She is the team leader of RoboCup team Bold Hearts at the University of Hertfordshire, and Executive Committee member of the RoboCup Humanoid League.

Mix-and-match kit could enable astronauts to build a menagerie of lunar exploration bots
https://robohub.org/mix-and-match-kit-could-enable-astronauts-to-build-a-menagerie-of-lunar-exploration-bots/ | Tue, 14 Mar 2023
Source: https://news.mit.edu/2023/mixed-robot-kit-lunar-exploration-0314

A team of MIT engineers is designing a kit of universal robotic parts that an astronaut could easily mix and match to build different robot “species” to fit various missions on the moon. Credit: hexapod image courtesy of the researchers, edited by MIT News

By Jennifer Chu | MIT News Office

When astronauts begin to build a permanent base on the moon, as NASA plans to do in the coming years, they’ll need help. Robots could potentially do the heavy lifting by laying cables, deploying solar panels, erecting communications towers, and building habitats. But if each robot is designed for a specific action or task, a moon base could become overrun by a zoo of machines, each with its own unique parts and protocols.

To avoid a bottleneck of bots, a team of MIT engineers is designing a kit of universal robotic parts that an astronaut could easily mix and match to rapidly configure different robot “species” to fit various missions on the moon. Once a mission is completed, a robot can be disassembled and its parts used to configure a new robot to meet a different task.

The team calls the system WORMS, for the Walking Oligomeric Robotic Mobility System. The system’s parts include worm-inspired robotic limbs that an astronaut can easily snap onto a base, and that work together as a walking robot. Depending on the mission, parts can be configured to build, for instance, large “pack” bots capable of carrying heavy solar panels up a hill. The same parts could be reconfigured into six-legged spider bots that can be lowered into a lava tube to drill for frozen water.

“You could imagine a shed on the moon with shelves of worms,” says team leader George Lordos, a PhD candidate and graduate instructor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), in reference to the independent, articulated robots that carry their own motors, sensors, computer, and battery. “Astronauts could go into the shed, pick the worms they need, along with the right shoes, body, sensors and tools, and they could snap everything together, then disassemble it to make a new one. The design is flexible, sustainable, and cost-effective.”

Lordos’ team has built and demonstrated a six-legged WORMS robot. Last week, they presented their results at IEEE’s Aerospace Conference, where they also received the conference’s Best Paper Award.

MIT team members include Michael J. Brown, Kir Latyshev, Aileen Liao, Sharmi Shah, Cesar Meza, Brooke Bensche, Cynthia Cao, Yang Chen, Alex S. Miller, Aditya Mehrotra, Jacob Rodriguez, Anna Mokkapati, Tomas Cantu, Katherina Sapozhnikov, Jessica Rutledge, David Trumper, Sangbae Kim, Olivier de Weck, Jeffrey Hoffman, along with Aleks Siemenn, Cormac O’Neill, Diego Rivero, Fiona Lin, Hanfei Cui, Isabella Golemme, John Zhang, Jolie Bercow, Prajwal Mahesh, Stephanie Howe, and Zeyad Al Awwad, as well as Chiara Rissola of Carnegie Mellon University and Wendell Chun of the University of Denver.

Animal instincts

WORMS was conceived in 2022 as an answer to NASA’s Breakthrough, Innovative and Game-changing (BIG) Idea Challenge — an annual competition for university students to design, develop, and demonstrate a game-changing idea. In 2022, NASA challenged students to develop robotic systems that can move across extreme terrain, without the use of wheels.

A team from MIT’s Space Resources Workshop took up the challenge, aiming specifically for a lunar robot design that could navigate the extreme terrain of the moon’s South Pole — a landscape that is marked by thick, fluffy dust; steep, rocky slopes; and deep lava tubes. The environment also hosts “permanently shadowed” regions that could contain frozen water, which, if accessible, would be essential for sustaining astronauts.

As they mulled over ways to navigate the moon’s polar terrain, the students took inspiration from animals. In their initial brainstorming, they noted certain animals could conceptually be suited to certain missions: A spider could drop down and explore a lava tube, a line of elephants could carry heavy equipment while supporting each other down a steep slope, and a goat, tethered to an ox, could help lead the larger animal up the side of a hill as it transports an array of solar panels.

“As we were thinking of these animal inspirations, we realized that one of the simplest animals, the worm, makes similar movements as an arm, or a leg, or a backbone, or a tail,” says deputy team leader and AeroAstro graduate student Michael Brown. “And then the lightbulb went off: ‘We could build all these animal-inspired robots using worm-like appendages.’”

The research team in Killian Court at MIT. Credit: Courtesy of the researchers

Snap on, snap off

Lordos, who is of Greek descent, helped coin WORMS, and chose the letter “O” to stand for “oligomeric,” which in Greek signifies “a few parts.”

“Our idea was that, with just a few parts, combined in different ways, you could mix and match and get all these different robots,” says AeroAstro undergraduate Brooke Bensche.

The system’s main parts include the appendage, or worm, which can be attached to a body, or chassis, via a “universal interface block” that snaps the two parts together through a twist-and-lock mechanism. The parts can be disconnected with a small tool that releases the block’s spring-loaded pins.

Appendages and bodies can also snap into accessories such as a “shoe,” which the team engineered in the shape of a wok, and a LiDAR system that can map the surroundings to help a robot navigate.

“In future iterations we hope to add more snap-on sensors and tools, such as winches, balance sensors, and drills,” says AeroAstro undergraduate Jacob Rodriguez.

The team developed software that can be tailored to coordinate multiple appendages. As a proof of concept, the team built a six-legged robot about the size of a go-cart. In the lab, they showed that once assembled, the robot’s independent limbs worked to walk over level ground. The team also showed that they could quickly assemble and disassemble the robot in the field, on a desert site in California.
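The team’s coordination software is not published in this article, but a common baseline for six-legged walking is an alternating-tripod gait, in which each limb follows the same periodic command at a fixed phase offset. The sketch below illustrates that idea; the frequency, amplitude and phasing are assumed values, not WORMS parameters.

```python
import math

# Alternating-tripod gait: limbs in one tripod run half a cycle ahead of
# the other tripod, a common statically stable six-legged pattern.
NUM_LIMBS = 6
PHASE = [0.0 if i % 2 == 0 else math.pi for i in range(NUM_LIMBS)]

def limb_angles(t: float, freq_hz: float = 0.5, amplitude_deg: float = 20.0):
    """Return the commanded swing angle of each limb at time t (seconds)."""
    w = 2 * math.pi * freq_hz
    return [amplitude_deg * math.sin(w * t + PHASE[i]) for i in range(NUM_LIMBS)]

for t in (0.0, 0.5, 1.0):
    print([f"{a:+.1f}" for a in limb_angles(t)])
```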

In its first generation, each WORMS appendage measures about 1 meter long and weighs about 20 pounds. In the moon’s gravity, which is about one-sixth that of Earth’s, each limb would weigh about 3 pounds, which an astronaut could easily handle to build or disassemble a robot in the field. The team has planned out the specs for a larger generation with longer and slightly heavier appendages. These bigger parts could be snapped together to build “pack” bots, capable of transporting heavy payloads.

“There are many buzz words that are used to describe effective systems for future space exploration: modular, reconfigurable, adaptable, flexible, cross-cutting, et cetera,” says Kevin Kempton, an engineer at NASA’s Langley Research Center, who served as a judge for the 2022 BIG Idea Challenge. “The MIT WORMS concept incorporates all these qualities and more.”

This research was supported, in part, by NASA, MIT, the Massachusetts Space Grant, the National Science Foundation, and the Fannie and John Hertz Foundation.

Learning to compute through art
https://robohub.org/learning-to-compute-through-art/ | Sun, 12 Mar 2023
Source: https://news.mit.edu/2023/learning-compute-through-art-0306

Shua Cho works on her artwork in “Introduction to Physical Computing for Artists” at the MIT Student Art Association. Photo: Sarah Bastille

By Ken Shulman | Arts at MIT

One student confesses that motors have always freaked them out. Amy Huynh, a first-year student in the MIT Technology and Policy Program, says “I just didn’t respond to the way electrical engineering and coding is usually taught.”

Huynh and her fellow students found a different way to master coding and circuits during the Independent Activities Period course Introduction to Physical Computing for Artists — a class created by Student Art Association (SAA) instructor Timothy Lee and offered for the first time last January. During the four-week course, students learned to use circuits, wiring, motors, sensors, and displays by developing their own kinetic artworks. 

“It’s a different approach to learning about art, and about circuits,” says Lee, who joined the SAA instructional staff last June after completing his MFA at Goldsmiths, University of London. “Some classes can push the technology too quickly. Here we try to take away the obstacles to learning, to create a collaborative environment, and to frame the technology in the broader concept of making an artwork. For many students, it’s a very effective way to learn.”

Lee graduated from Wesleyan University with three concurrent majors in neuroscience, biology, and studio art. “I didn’t have a lot of free time,” says Lee, who originally intended to attend medical school before deciding to follow his passion for making art. “But I benefited from studying both science and art. Just as I almost always benefited from learning from my peers. I draw on both of those experiences in designing and teaching this class.”

On this January evening, the third of four scheduled classes, Lee leads his students through an exercise to create an MVP — a minimum viable product of their art project. The MVP, he explains, serves as an artist’s proof of concept. “This is the smallest single unit that can demonstrate that your project is doable,” he says. “That you have the bare-minimum functioning hardware and software that shows your project can be scalable to your vision. Our work here is different from pure robotics or pure electronics. Here, the technology and the coding don’t need to be perfect. They need to support your aesthetic and conceptual goals. And here, these things can also be fun.”

Lee distributes various electronic items to the students according to their specific needs — wires, soldering irons, resistors, servo motors, and Arduino components. The students have already acquired a working knowledge of coding and the Arduino language in the first two class sessions. Sophomore Shua Cho is designing an evening gown bedecked with flowers that will open and close continuously. Her MVP is a cluster of three blossoms, mounted on a single post that, when raised and lowered, opens and closes the sewn blossoms. She asks Lee for help in attaching a servo motor — an electronic motor that alternates between 0, 90, and 180 degrees — to the post. Two other students, working on similar problems, immediately pull their chairs beside Cho and Lee to join the discussion. 

Shua Cho is designing an evening gown bedecked with flowers that will open and close continuously. Her minimum viable product is a cluster of three blossoms, mounted on a single post that, when raised and lowered, opens and closes the sewn blossoms. Photo: Sarah Bastille

The instructor suggests they observe the dynamics of an old-fashioned train locomotive wheel. One student calls up the image on their laptop. Then, as a group, they reach a solution for Cho — an assembly of wire and glue that will attach the servo engine to the central post, opening and closing the blossoms. It’s improvised, even inelegant. But it works, and proves that the project for the blossom-covered kinetic dress is viable.  
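As background on the hardware involved: hobby servos of the kind used in the class are commanded by pulse width, with roughly 1–2 ms mapped across their travel and the pulse repeated every 20 ms. This sketch shows that mapping for the 0, 90, and 180 degree positions mentioned above; the exact pulse endpoints vary by servo model, so these numbers are typical rather than universal.

```python
# Typical hobby-servo command: a pulse of ~1.0 ms (0 deg) to ~2.0 ms (180 deg),
# repeated every 20 ms. Exact endpoints differ between servo models.
def pulse_width_us(angle_deg: float, min_us: float = 1000.0,
                   max_us: float = 2000.0) -> float:
    """Map a servo angle (0-180 degrees) to a pulse width in microseconds."""
    return min_us + (max_us - min_us) * angle_deg / 180.0

# The 0 / 90 / 180 degree positions used to raise and lower the blossom post:
for angle in (0, 90, 180):
    print(angle, "deg ->", pulse_width_us(angle), "us")
```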

“This is one of the things I love about MIT,” says aeronautical and astronautical engineering senior Hannah Munguia. Her project is a pair of hands that, when triggered by a motion sensor, will applaud when anyone walks by. “People raise their hand when they don’t understand something. And other people come to help. The students here trust each other, and are willing to collaborate.”

Student Hannah Munguia (left), instructor Timothy Lee (center), and student Bryan Medina work on artwork in “Introduction to Physical Computing for Artists” at the MIT Student Art Association. Photo: Sarah Bastille

Cho, who enjoys exploring the intersection between fashion and engineering, discovered Lee’s work on Instagram long before she decided to enroll at MIT. “And now I have the chance to study with him,” says Cho, who works at Infinite — MIT’s fashion magazine — and takes classes in both mechanical engineering and design. “I find that having a creative project like this one, with a goal in mind, is the best way for me to learn. I feel like it reinforces my neural pathways, and I know it helps me retain information. I find myself walking down the street or in my room, thinking about possible solutions for this gown. It never feels like work.”

For Lee, who studied computational art during his master’s program, his course is already a successful experiment. He’d like to offer a full-length version of “Introduction to Physical Computing for Artists” during the school year. With 10 sessions instead of four, he says, students would be able to complete their projects, instead of stopping at an MVP.   

“Prior to coming to MIT, I’d only taught at art institutions,” says Lee. “Here, I needed to revise my focus, to redefine the value of art education for students who most likely were not going to pursue art as a profession. For me, the new definition was selecting a group of skills that are necessary in making this type of art, but that can also be applied to other areas and fields. Skills like sensitivity to materials, tactile dexterity, and abstract thinking. Why not learn these skills in an atmosphere that is experimental, visually based, sometimes a little uncomfortable. And why not learn that you don’t need to be an artist to make art. You just have to be excited about it.”

Robot Talk Episode 40 – Edward Timpson
https://robohub.org/robot-talk-episode-40-edward-timpson/ | Fri, 10 Mar 2023

Claire chatted to Edward Timpson from QinetiQ all about robots in the military, uncrewed vehicles, and cyber security.

Ed Timpson joined QinetiQ in 2020 after 11 years serving in the Royal Navy as a Weapons Engineering Officer (Submarines) across a number of ranks. Joining QinetiQ Target Systems as a Project Engineer and also managing the Hardware team, he had success in developing new capabilities for the Banshee family of UAS. He then moved into future systems within QinetiQ as a Principal Systems Engineer specialising in complex trials and experimentation of uncrewed vehicles. He now heads up the Robotics and Autonomous Systems capability within QinetiQ UK.

A new bioinspired earthworm robot for future underground explorations
https://robohub.org/a-new-bioinspired-earthworm-robot-for-future-underground-explorations/ | Thu, 09 Mar 2023

Author: D. Farina. Credits: Istituto Italiano di Tecnologia – © IIT, all rights reserved

Researchers at Istituto Italiano di Tecnologia (IIT – Italian Institute of Technology) in Genova have created a new soft robot inspired by the biology of earthworms, which is able to crawl thanks to soft actuators that elongate or squeeze as air is pumped into them or drawn out. The prototype has been described in the international journal Scientific Reports of the Nature Portfolio, and it is the starting point for developing devices for underground exploration, as well as for search and rescue operations in confined spaces and the exploration of other planets.

Nature offers many examples of animals, such as snakes, earthworms, snails, and caterpillars, that use both the flexibility of their bodies and the ability to generate physical travelling waves along the length of their body to move and explore different environments. Some of their movements are also similar to those of plant roots.

Taking inspiration from nature and, at the same time, revealing new biological phenomena while developing new technologies is the main goal of the BioInspired Soft robotics lab coordinated by Barbara Mazzolai, and this earthworm-like robot is the latest invention coming from her group.

The creation of the earthworm-like robot was made possible through a thorough understanding and application of earthworm locomotion mechanics. Earthworms use alternating contractions of muscle layers to propel themselves both below and above the soil surface, generating retrograde peristaltic waves. The individual segments of their body (metameres) contain a specific quantity of fluid that controls the internal pressure used to exert forces and perform independent, localized and variable movement patterns.

IIT researchers have studied the morphology of earthworms and found a way to mimic their muscle movements, their constant-volume coelomic chambers and the function of their bristle-like hairs (setae) by creating soft robotic solutions.

The team developed a peristaltic soft actuator (PSA) that implements the antagonistic muscle movements of earthworms; from a neutral position it elongates when air is pumped into it and compresses when air is extracted from it. The entire body of the robotic earthworm is made of five PSA modules in series, connected with interlinks. The current prototype is 45 cm long and weighs 605 grams.

Each actuator has an elastomeric skin that encapsulates a known amount of fluid, thus mimicking the constant volume of internal coelomic fluid in earthworms. The earthworm segment becomes shorter longitudinally and wider circumferentially and exerts radial forces as the longitudinal muscles of an individual constant volume chamber contract. Antagonistically, the segment becomes longer along the anterior–posterior axis and thinner circumferentially with the contraction of circumferential muscles, resulting in penetration forces along the axis.

Each individual actuator demonstrates a maximum elongation of 10.97 mm at 1 bar of positive pressure and a maximum compression of 11.13 mm at 0.5 bar of negative pressure, and it is unique in its ability to generate both longitudinal and radial forces in a single actuator module.

In order to propel the robot on a planar surface, small passive friction pads inspired by earthworms’ setae were attached to the ventral surface of the robot. With these pads, the robot demonstrated improved locomotion, reaching a speed of 1.35 mm/s.
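To make the locomotion scheme concrete, here is a hedged sketch of a retrograde peristaltic schedule: all five modules run the same inflate/deflate cycle, phase-shifted along the body so the contraction travels as a wave. The period and phasing are assumptions for illustration, not the IIT controller.

```python
import math

# Retrograde peristaltic wave across the robot's five actuator modules:
# each module runs the same elongation cycle, phase-shifted along the body
# (period and phase step are assumed values, for illustration only).
NUM_MODULES = 5
PERIOD_S = 2.0
PHASE_STEP = 2 * math.pi / NUM_MODULES  # one full wave along the body

def module_commands(t: float) -> list:
    """Return 'inflate' (elongate) or 'deflate' (contract) per module at time t."""
    w = 2 * math.pi / PERIOD_S
    return [
        "inflate" if math.sin(w * t - i * PHASE_STEP) >= 0 else "deflate"
        for i in range(NUM_MODULES)
    ]

for step in range(4):
    print(module_commands(step * PERIOD_S / 4))
```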

This study not only proposes a new method for developing a peristaltic earthworm-like soft robot but also provides a deeper understanding of locomotion from a bioinspired perspective in different environments. The potential applications for this technology are vast, including underground exploration, excavation, search and rescue operations in subterranean environments and the exploration of other planets. This bioinspired burrowing soft robot is a significant step forward in the field of soft robotics and opens the door for further advancements in the future.

What is the hype cycle for robotics?
https://robohub.org/what-is-the-hype-cycle-for-robotics/ | Tue, 07 Mar 2023

We’ve all seen or heard of the Hype Cycle. It’s a visual depiction of the lifecycle stages a technology goes through, from initial development to commercial maturity, and a useful way to track which technologies are compatible with your organization’s needs. There are five stages in the Hype Cycle, taking us from the initial innovation trigger to the peak of inflated expectations, followed by the trough of disillusionment. It’s only as a product moves into more tangible market use, sometimes called ‘The Slope of Enlightenment’, that we start to reach full commercial viability.

Working with so many robotics startups, I see this stage as the transition into revenue generation in more than pilot use cases. This is the point where a startup no longer needs to nurture each customer deployment individually, but can produce reference use cases and start to reliably scale. I think this is a useful model, but Gartner’s classifications don’t do robotics justice.

For example, this recent Gartner chart puts Smart Robots at the top of the hype cycle. Robotics is a very fast-moving field at the moment; the majority of new robotics companies are only five to ten years old. From the perspective of the end user, it can be very difficult to know when a company is moving out of the hype cycle and into commercial maturity, because there aren’t many deployments or much marketing at first, particularly compared to the media coverage of companies at the peak of the hype cycle.

So, here’s where I think robotics technologies really fit on the Gartner Hype Cycle:

Innovation trigger

  • Voice interfaces for practical applications of robots
  • Foundational models applied to robotics


Peak of inflated expectations

  • Large language models – although likely to progress very quickly
  • Humanoids

Trough of disillusionment

  • Quadrupeds
  • Cobots
  • Full self-driving cars and trucks
  • Powered clothing/Exoskeletons


Slope of enlightenment

  • Teleoperation
  • Cloud fleet management
  • Drones for critical delivery to remote locations
  • Drones for civilian surveillance
  • Waste recycling
  • Warehouse robotics (pick and place)
  • Hospital logistics
  • Education robots
  • Food preparation
  • Rehabilitation
  • AMRs in other industries


Plateau of productivity

  • Robot vacuum cleaners (domestic and commercial)
  • Surgical Robots
  • Warehouse robotics (AMRs in particular)
  • Factory automation (robot arms)
  • 3D printing
  • ROS
  • Simulation
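One way to keep a placement map like this queryable is a plain dictionary keyed by stage, as in the abridged sketch below; the entries are taken from the lists above, and the rest can be added the same way.

```python
# The stage placements above, encoded as a queryable mapping (abridged).
HYPE_CYCLE = {
    "innovation_trigger": ["voice interfaces", "foundational models for robotics"],
    "peak_of_inflated_expectations": ["large language models", "humanoids"],
    "trough_of_disillusionment": ["quadrupeds", "cobots", "full self-driving"],
    "slope_of_enlightenment": ["teleoperation", "cloud fleet management",
                               "warehouse pick and place"],
    "plateau_of_productivity": ["robot vacuum cleaners", "surgical robots",
                                "factory robot arms"],
}

def stage_of(technology: str):
    """Return the hype-cycle stage a technology is filed under, if any."""
    for stage, techs in HYPE_CYCLE.items():
        if technology in techs:
            return stage
    return None

print(stage_of("humanoids"))  # -> peak_of_inflated_expectations
```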

AI, in the form of large language models like ChatGPT, GPT-3 and Bard, is at peak hype, as are humanoid robots; perhaps the peak of that hype is the idea of RoboGPT, using LLMs to interpret human commands to robots. Just in the last year, a handful of new humanoid robot companies have come out of stealth, among them Figure, Teslabot, Aeolus, Giant AI, Agility and Halodi, and so far only Halodi has a commercial deployment, doing internal security augmentation for ADT.

Cobots are still in the Trough of Disillusionment, in spite of Universal Robots selling 50,000+ arms. People buy robot arms from companies like Universal primarily for their affordability, ease of setup, freedom from safety guarding hardware, and industrial precision. The full promise of collaborative robots has had trouble landing with end users: we don’t really deploy collaborative robots engaged in frequent hand-offs with humans. Perhaps we need more dual-armed cobots with better human-robot interaction before we really explore the possibilities.

Interestingly, the Trough of Disillusionment generates a lot of media coverage, but it’s usually negative. Self-driving cars and trucks are definitely at the bottom of the trough, whereas powered clothing, exoskeletons and quadrupeds are a little harder to place.

AMRs, or Autonomous Mobile Robots, are a form of self-driving cargo that has proved much more successful than self-driving cars or trucks traveling on public roads. AMRs are primarily deployed in warehouses, hospitals, factories, farms, retail facilities and airports, and even on the sidewalk. Behind every successful robot deployment there is probably a cloud fleet management provider, a teleoperation provider, or a monitoring service.

Finally, the Plateau of Productivity is where the world’s most popular robots live. At peak popularity are the Roomba and other home robot vacuum cleaners: before its acquisition by Amazon, iRobot had sold more than 40 million Roombas and captured 20% of the domestic vacuum cleaner market. Now commercial cleaning fleets are switching to autonomy as well.

And of course Productivity (not Hype) is also where the workhorse industrial robot arms live, with ever-increasing deployments worldwide. The International Federation of Robotics (IFR) reports that more than half a million new industrial robot arms were deployed in 2021, up 31% from 2020. This figure has been rising fairly steadily since I first started tracking robotics back in 2010.


What does your robotics hype cycle look like? What technology would you like me to add to this chart? Contact andra@svrobo.org

Robot Talk Episode 39 – Maria Bauza Villalonga
https://robohub.org/robot-talk-episode-39-maria-bauza-villalonga/ | Fri, 03 Mar 2023

Claire chatted to Dr Maria Bauza Villalonga from DeepMind all about robot learning, transferable skills, and general AI.

Maria Bauza Villalonga is a research scientist at DeepMind. In 2022, she earned her PhD in Robotics at the Massachusetts Institute of Technology, working with Prof. Alberto Rodriguez. Her research focuses on achieving precise robotic generalization by learning probabilistic models of the world that allow robots to reuse their skills across multiple tasks with high success. Maria has received several fellowships including Facebook, NVIDIA, and LaCaixa.

RoboHouse Interview Trilogy, Part III: Srimannarayana Baratam and Perciv.ai
https://robohub.org/robohouse-interview-trilogy-part-iii-srimannarayana-baratam-and-perciv-ai/ | Wed, 01 Mar 2023

The final episode of our RoboHouse Interview Trilogy: ‘The Working Life of the Robotics Engineer’ interviews Srimannarayana Baratam. Sriman, as he is also called, co-founded the company Perciv.ai just two months after graduating. Rens van Poppel explores his journey so far.

Perciv.ai claims that AI-driven machine perception could become affordable to everyone. When was this vision formed, and how did it come about? Sriman points to the period right after his graduation, which he says was pivotal for building trust with partners and reaching consensus through effective communication, because starting your own company comes with a lot of challenges.

 

Srimannarayana Baratam was the first student to graduate from the MSc Robotics programme at TU Delft. The Master’s degree programme, newly launched in 2020, aims to train students who can guide the industry towards a kind of robotisation that promotes and reinforces workplace attractiveness.

“It is important to find partners you can trust,” says Sriman. “You need to understand each other’s motivation and commitment. You need to assess what real value this person adds to the team.”

Coming from an automotive background in India, Sriman investigated in his master’s thesis the use of radar and cameras to protect vulnerable people in urban environments. He co-founded the start-up with his supervisor, Dr András Pálffy, and Balazs Szekeres, another robotics student who heard about the project. In the two months after his graduation, Sriman and his co-founders came together to focus full-time on their vision for Perciv.ai.

“In July and August we sat down and discussed the vision between the three of us,” he says. This period also led to tough conversations, ranging from finance to market strategy. “When finally the main questions were sorted out, you just got to take that leap of faith together.” This leap of faith proved fruitful: the high level of trust resulted in a high level of productivity over the past five months.

Since then, Perciv.ai has gone on to win the NWO take-off phase 1 grant, get its own office and workspace in RoboHouse, and sign a contract with an unmanned aerial vehicle (UAV) company, generating its first sales revenue in the process.

“This does not mean that there are no more heavy debates,” Sriman adds. “We all share the same vision, but in order to reach our goal of a sustainable and affordable product, we sometimes have different ideas on what that final product should look like.”

Sriman’s passion for robotics and the company’s goals is palpable: “We want to make machine perception technology available to all.”

Robot Talk Episode 38 – Jonathan Aitken
https://robohub.org/robot-talk-episode-38-jonathan-aitken/ | Sun, 26 Feb 2023

Claire chatted to Dr Jonathan Aitken from the University of Sheffield all about manufacturing, sewer inspection, and robots in the real world.

Jonathan Aitken is a Senior University Teacher in Robotics at the University of Sheffield. His research is focused on building useful, useable, and expandable architectures for future robotics systems. Most recently this has involved building complex digital twins for collaborative robots in manufacturing processes and investigating localisation for robots operating in sewer pipes. His teaching focuses on providing students with the tools to bring distributed computing to complex robotic processes.

Custom, 3D-printed heart replicas look and pump just like the real thing
https://robohub.org/custom-3d-printed-heart-replicas-look-and-pump-just-like-the-real-thing/ | Thu, 23 Feb 2023
Source: https://news.mit.edu/2023/custom-3d-printed-heart-replicas-patient-specific-0222

MIT engineers are hoping to help doctors tailor treatments to patients’ specific heart form and function, with a custom robotic heart. The team has developed a procedure to 3D print a soft and flexible replica of a patient’s heart. Image: Melanie Gonick, MIT

By Jennifer Chu | MIT News Office

No two hearts beat alike. The size and shape of the heart can vary from one person to the next. These differences can be particularly pronounced for people living with heart disease, as their hearts and major vessels work harder to overcome any compromised function.

MIT engineers are hoping to help doctors tailor treatments to patients’ specific heart form and function, with a custom robotic heart. The team has developed a procedure to 3D print a soft and flexible replica of a patient’s heart. They can then control the replica’s action to mimic that patient’s blood-pumping ability.

The procedure involves first converting medical images of a patient’s heart into a three-dimensional computer model, which the researchers can then 3D print using a polymer-based ink. The result is a soft, flexible shell in the exact shape of the patient’s own heart. The team can also use this approach to print a patient’s aorta — the major artery that carries blood out of the heart to the rest of the body.
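To make the conversion step concrete, here is a minimal sketch of how a segmented scan volume can become a printable mesh. The file names, voxel spacing, and the use of scikit-image's marching cubes are illustrative assumptions: this is one common way to do the image-to-mesh step, not necessarily the team's actual pipeline.

```python
# Minimal sketch: turn a segmented medical-image volume into a printable
# surface mesh. Assumes `volume` is a 3D NumPy array in which the heart
# tissue has been segmented; NOT the MIT team's actual pipeline.
import numpy as np
from skimage import measure

volume = np.load("segmented_heart_ct.npy")   # hypothetical input file

# Extract the tissue surface as a triangle mesh (marching cubes).
verts, faces, normals, _ = measure.marching_cubes(
    volume, level=0.5, spacing=(0.5, 0.5, 0.5))  # voxel size in mm (assumed)

# Write an ASCII STL file that a slicer can prepare for 3D printing.
with open("heart_shell.stl", "w") as f:
    f.write("solid heart\n")
    for tri in faces:
        v0, v1, v2 = verts[tri]
        n = np.cross(v1 - v0, v2 - v0)
        n = n / (np.linalg.norm(n) + 1e-12)      # unit facet normal
        f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
        for v in (v0, v1, v2):
            f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
        f.write("    endloop\n  endfacet\n")
    f.write("endsolid heart\n")
```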

To mimic the heart’s pumping action, the team has fabricated sleeves similar to blood pressure cuffs that wrap around a printed heart and aorta. The underside of each sleeve resembles precisely patterned bubble wrap. When the sleeve is connected to a pneumatic system, researchers can tune the outflowing air to rhythmically inflate the sleeve’s bubbles and contract the heart, mimicking its pumping action. 

The researchers can also inflate a separate sleeve surrounding a printed aorta to constrict the vessel. This constriction, they say, can be tuned to mimic aortic stenosis — a condition in which the aortic valve narrows, causing the heart to work harder to force blood through the body.

Doctors commonly treat aortic stenosis by surgically implanting a synthetic valve designed to widen the aorta’s natural valve. In the future, the team says that doctors could potentially use their new procedure to first print a patient’s heart and aorta, then implant a variety of valves into the printed model to see which design results in the best function and fit for that particular patient. The heart replicas could also be used by research labs and the medical device industry as realistic platforms for testing therapies for various types of heart disease.

“All hearts are different,” says Luca Rosalia, a graduate student in the MIT-Harvard Program in Health Sciences and Technology. “There are massive variations, especially when patients are sick. The advantage of our system is that we can recreate not just the form of a patient’s heart, but also its function in both physiology and disease.”

Rosalia and his colleagues report their results in a study appearing in Science Robotics. MIT co-authors include Caglar Ozturk, Debkalpa Goswami, Jean Bonnemain, Sophie Wang, and Ellen Roche, along with Benjamin Bonner of Massachusetts General Hospital, James Weaver of Harvard University, and Christopher Nguyen, Rishi Puri, and Samir Kapadia at the Cleveland Clinic in Ohio.

Print and pump

In January 2020, team members, led by mechanical engineering professor Ellen Roche, developed a “biorobotic hybrid heart” — a general replica of a heart, made from synthetic muscle containing small, inflatable cylinders, which they could control to mimic the contractions of a real beating heart.

Shortly after those efforts, the Covid-19 pandemic forced Roche’s lab, along with most others on campus, to temporarily close. Undeterred, Rosalia continued tweaking the heart-pumping design at home.

“I recreated the whole system in my dorm room that March,” Rosalia recalls.

Months later, the lab reopened, and the team continued where it left off, working to improve the control of the heart-pumping sleeve, which they tested in animal and computational models. They then expanded their approach to develop sleeves and heart replicas that are specific to individual patients. For this, they turned to 3D printing.

“There is a lot of interest in the medical field in using 3D printing technology to accurately recreate patient anatomy for use in preprocedural planning and training,” notes Wang, who is a vascular surgery resident at Beth Israel Deaconess Medical Center in Boston.

An inclusive design

In the new study, the team took advantage of 3D printing to produce custom replicas of actual patients’ hearts. They used a polymer-based ink that, once printed and cured, can squeeze and stretch, similarly to a real beating heart.

As their source material, the researchers used medical scans of 15 patients diagnosed with aortic stenosis. The team converted each patient’s images into a three-dimensional computer model of the patient’s left ventricle (the main pumping chamber of the heart) and aorta. They fed this model into a 3D printer to generate a soft, anatomically accurate shell of both the ventricle and vessel.

The action of the soft, robotic models can be controlled to mimic the patient’s blood-pumping ability. Image: Melanie Gonick, MIT

The team also fabricated sleeves to wrap around the printed forms. They tailored each sleeve’s pockets such that, when wrapped around their respective forms and connected to a small air pumping system, the sleeves could be tuned separately to realistically contract and constrict the printed models.

The researchers showed that for each model heart, they could accurately recreate the same heart-pumping pressures and flows that were previously measured in each respective patient.

“Being able to match the patients’ flows and pressures was very encouraging,” Roche says. “We’re not only printing the heart’s anatomy, but also replicating its mechanics and physiology. That’s the part that we get excited about.”

Going a step further, the team aimed to replicate some of the interventions that a handful of the patients underwent, to see whether the printed heart and vessel responded in the same way. Some patients had received valve implants designed to widen the aorta. Roche and her colleagues implanted similar valves in the printed aortas modeled after each patient. When they activated the printed heart to pump, they observed that the implanted valve produced similarly improved flows as in actual patients following their surgical implants.

Finally, the team used an actuated printed heart to compare implants of different sizes, to see which would result in the best fit and flow — something they envision clinicians could potentially do for their patients in the future.

“Patients would get their imaging done, which they do anyway, and we would use that to make this system, ideally within the day,” says co-author Nguyen. “Once it’s up and running, clinicians could test different valve types and sizes and see which works best, then use that to implant.”

Ultimately, Roche says the patient-specific replicas could help develop and identify ideal treatments for individuals with unique and challenging cardiac geometries.

“Designing inclusively for a large range of anatomies, and testing interventions across this range, may increase the addressable target population for minimally invasive procedures,” Roche says.

This research was supported, in part, by the National Science Foundation, the National Institutes of Health, and the National Heart, Lung, and Blood Institute.


]]>
Fully autonomous real-world reinforcement learning with applications to mobile manipulation https://robohub.org/fully-autonomous-real-world-reinforcement-learning-with-applications-to-mobile-manipulation/ Wed, 22 Feb 2023 12:59:00 +0000 http://bair.berkeley.edu/blog/2023/01/20/relmm/

By Jędrzej Orbik, Charles Sun, Coline Devin, Glen Berseth

Reinforcement learning provides a conceptual framework for autonomous agents to learn from experience, analogously to how one might train a pet with treats. But practical applications of reinforcement learning are often far from natural: instead of using RL to learn through trial and error by actually attempting the desired task, typical RL applications use a separate (usually simulated) training phase. For example, AlphaGo did not learn to play Go by competing against thousands of humans, but rather by playing against itself in simulation. While this kind of simulated training is appealing for games where the rules are perfectly known, applying this to real world domains such as robotics can require a range of complex approaches, such as the use of simulated data, or instrumenting real-world environments in various ways to make training feasible under laboratory conditions. Can we instead devise reinforcement learning systems for robots that allow them to learn directly “on-the-job”, while performing the task that they are required to do? In this blog post, we will discuss ReLMM, a system that we developed that learns to clean up a room directly with a real robot via continual learning.





We evaluate our method on different tasks that range in difficulty. The top-left task has uniform white blobs to pick up with no obstacles, while other rooms have objects of diverse shapes and colors, obstacles that increase navigation difficulty and obscure the objects, and patterned rugs that make it difficult to see the objects against the ground.

The main obstacle to “on-the-job” training in the real world is the difficulty of collecting experience. If we can make training in the real world easier, by making the data-gathering process more autonomous without requiring human monitoring or intervention, we can further benefit from the simplicity of agents that learn from experience. In this work, we design an “on-the-job” mobile robot training system for cleaning by learning to grasp objects throughout different rooms.
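The overall loop can be sketched in a few lines. Everything below (the robot interface, the policy objects, the reward bookkeeping) is a hypothetical placeholder meant to illustrate the structure of autonomous "on-the-job" training, not the authors' actual code.

```python
# Minimal sketch of an autonomous "on-the-job" training loop in the spirit
# of ReLMM. Every object and method here is a hypothetical placeholder.
def attempt_grasp(robot, grasp_policy):
    """Run one grasp attempt and report success (e.g. via a gripper sensor)."""
    obs = robot.get_observation()
    robot.execute(grasp_policy.act(obs))
    return robot.object_in_gripper()

def on_the_job_training(robot, nav_policy, grasp_policy, num_steps=100_000):
    for _ in range(num_steps):
        obs = robot.get_observation()        # camera image, base pose, ...
        robot.execute(nav_policy.act(obs))   # drive somewhere promising
        success = attempt_grasp(robot, grasp_policy)

        # Every interaction is training data: grasp success itself is the
        # reward, so no resets or human labels are needed.
        grasp_policy.update(obs, reward=float(success))
        nav_policy.update(obs, reward=float(success))

        if success:
            robot.drop_object_in_bin()       # keeps the cleaning job going
```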

Lesson 1: The Benefits of Modular Policies for Robots.

People are not born one day and interviewing for jobs the next. People learn many levels of tasks before they apply for a job, starting with the easier ones and building on them. In ReLMM, we make use of this concept by having robots train common, reusable skills, such as grasping, first: the robot is encouraged to prioritize training these skills before learning later skills, such as navigation. Learning in this fashion has two advantages for robotics. The first is that when an agent focuses on learning a skill, it is more efficient at collecting data around the local state distribution for that skill.


This is shown in the figure above, where we evaluated the amount of prioritized grasping experience needed for efficient mobile manipulation training. The second advantage of a multi-level learning approach is that we can inspect the models trained for different tasks and ask them questions, such as “Can you grasp anything right now?”, which is helpful for the navigation training that we describe next.


Training this multi-level policy was not only more efficient than learning both skills at the same time, but it also allowed the grasping controller to inform the navigation policy. A model that estimates the uncertainty in its grasp success (Ours above) can be used to improve navigation exploration by skipping areas without graspable objects, in contrast to No Uncertainty Bonus, which does not use this information. The model can also be used to relabel data during training, so that in the unlucky case when the grasping model is unsuccessful in grasping an object within its reach, the grasping policy can still provide some signal by indicating that an object was there but the grasping policy has not yet learned how to grasp it. Moreover, learning modular models has engineering benefits. Modular training allows for reusing skills that are easier to learn and can enable building intelligent systems one piece at a time. This is beneficial for many reasons, including safety evaluation and understanding.
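As a concrete illustration, a grasp-success model with a probability output could shape navigation rewards roughly as follows. The model interface and the relabeling threshold are assumptions made for this sketch, not the ReLMM API.

```python
# Sketch of using a grasp-success model's uncertainty to shape navigation
# exploration. The `grasp_model` interface is an illustrative assumption.
def navigation_reward(obs, grasp_model, grasp_succeeded):
    p = grasp_model.success_probability(obs)  # "Can you grasp anything right now?"
    uncertainty_bonus = p * (1.0 - p)         # ~0 where nothing is graspable,
                                              # highest where outcomes are informative
    return float(grasp_succeeded) + uncertainty_bonus

def relabel_navigation_outcome(obs, grasp_model, grasp_succeeded):
    # If the grasp failed but the model was confident an object was in
    # reach, still credit navigation for finding something worth practising
    # on. The 0.9 confidence threshold is illustrative.
    return grasp_succeeded or grasp_model.success_probability(obs) > 0.9
```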

Lesson 2: Learning systems beat hand-coded systems, given time


Many robotics tasks that we see today can be solved to varying levels of success using hand-engineered controllers. For our room cleaning task, we designed a hand-engineered controller that locates objects using image clustering and turns towards the nearest detected object at each step. This expertly designed controller performs very well on the visually salient balled socks and takes reasonable paths around the obstacles, but it cannot learn an optimal path to collect the objects quickly, and it struggles with visually diverse rooms. As shown in video 3 below, the scripted policy gets distracted by the white patterned carpet while trying to locate more white objects to grasp.
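For intuition, a controller of this kind might look roughly like the sketch below. The thresholds and the action interface are assumptions, not the authors' actual script, but the sketch makes the failure mode visible: a white patterned carpet passes the same brightness test as a white sock.

```python
# Rough sketch of a hand-engineered "find the nearest white blob" policy.
# Threshold values and action names are illustrative assumptions.
import numpy as np
from scipy import ndimage

def scripted_action(image_gray):
    """image_gray: 2D float array in [0, 1] from the robot's camera."""
    mask = image_gray > 0.8                   # "white object" threshold (assumed)
    labels, n = ndimage.label(mask)           # cluster bright pixels into blobs
    if n == 0:
        return "rotate_in_place"              # keep searching
    blobs = ndimage.center_of_mass(mask, labels, range(1, n + 1))

    # The blob lowest in the image is roughly the closest to the robot.
    row, col = max(blobs, key=lambda rc: rc[0])
    offset = col - image_gray.shape[1] / 2.0
    if abs(offset) > 20:                      # pixel tolerance (assumed)
        return "turn_left" if offset < 0 else "turn_right"
    return "drive_forward_and_grasp"
```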

We show a comparison between (1) our policy at the beginning of training, (2) our policy at the end of training, and (3) the scripted policy. In (4) we can see the robot’s performance improve over time, eventually exceeding the scripted policy at quickly collecting the objects in the room.

Given we can use experts to code this hand-engineered controller, what is the purpose of learning? An important limitation of hand-engineered controllers is that they are tuned for a particular task, for example, grasping white objects. When diverse objects are introduced, which differ in color and shape, the original tuning may no longer be optimal. Rather than requiring further hand-engineering, our learning-based method is able to adapt itself to various tasks by collecting its own experience.

However, the most important lesson is that even if the hand-engineered controller is capable, the learning agent eventually surpasses it given enough time. This learning process is itself autonomous and takes place while the robot is performing its job, making it comparatively inexpensive. This shows the capability of learning agents, which can also be thought of as working out a general way to perform an “expert manual tuning” process for any kind of task. Learning systems have the ability to create the entire control algorithm for the robot, and are not limited to tuning a few parameters in a script. The key step in this work allows these real-world learning systems to autonomously collect the data needed to enable the success of learning methods.

This post is based on the paper “Fully Autonomous Real-World Reinforcement Learning with Applications to Mobile Manipulation”, presented at CoRL 2021. You can find more details in our paper, on our website, and in the video. We provide code to reproduce our experiments. We thank Sergey Levine for his valuable feedback on this blog post.

]]>
RoboHouse Interview Trilogy, part II: Wendel Postma and Project MARCH https://robohub.org/robohouse-interview-trilogy-part-ii-wendel-postma-and-project-march/ Sun, 19 Feb 2023 09:30:02 +0000 http://robohub.org/?guid=3098197678fc199f542e29252346bdd1


For the second part of our RoboHouse Interview Trilogy: The Working Life of the Robotics Engineer we speak with Wendel Postma, chief engineer at Project MARCH VIII. How does he resolve the conundrum of integration: getting a bunch of single-minded engineers to ultimately serve the needs of one single exoskeleton user? Rens van Poppel inquires.

Wendel oversees technical engineering quality, and shares responsibility for on-time delivery within budget with the other project managers. He spends his days wandering around the Dream Hall on TU Delft Campus, encouraging his team to explore new avenues for developing the exoskeleton. What is possible within the time that we have? Can conflicting design solutions work together?

Bringing bad news is part of the chief engineer’s job.

There is no shortage of hobbies and activities for chief engineer Wendel. Sitting still is something he can’t do, which is why, outside of Project MARCH, he does a lot of sports. This year, Wendel is making sure the team has one exoskeleton at the end of the year instead of many different parts. He also keeps communication flowing within the team so that all the technological advances are understood, and organises the occasional yoga class so everyone can relax again. Wendel has many different goals: he later wants to work in the health industry and complete an Ironman. Source: Project MARCH website.

In daily life, Arnhem-based Project MARCH pilot Koen van Zeeland is an executive at a fibreglass-laying company in the Utrecht area. He was diagnosed with a spinal cord injury in 2013. Koen is a hard worker and his phone is always ringing. Yet he likes to make time to have a drink with his friends in the pub. Besides the pub, you might also find him on the moors, where he likes to walk his dog Turbo. Koen is also super sporty. Besides working out three times a week, he is an avid cyclist with the goal of riding up the mountains of Austria on his handbike. Source: Project MARCH website.

Koen van Zeeland is the primary test user of the exoskeleton and has control over the movements he makes. Project MARCH therefore calls him the ‘pilot’ of the exoskeleton. As the twenty-seventh and perhaps most important team member, Koen is valued highly within Project MARCH VIII. Source: Project MARCH website.

Project MARCH is an iterative enterprise.

Most of its workplace drama comes from the urgency to deliver at least one significant improvement on the existing prototype. This year’s obsession is weight: a lighter exoskeleton would require less power from both pilot and motors, and self-balancing would become easier to realise.

To save weight without weakening the frame of the exoskeleton, there was a lot of enthusiasm for experimenting with carbon fibre, a material that is both light and strong. Something, however, got in the way: the team struggled to find a pilot.

My job is making sure that in the end we don’t have 600 separate parts, but one exoskeleton.

“Having a test pilot is crucial if we are to reach our goals,” Wendel says. “Our current exoskeleton is built to fit the particular body shape of the person controlling it. The design is not yet adjustable to a different body shape. So it is crucial to get the pilot involved as quickly as possible.”

Not having a pilot was stressful for the entire team.

Their dream of creating a self-balancing exoskeleton was in danger. Wendel had to step up: “As chief engineer you have to make tough decisions. Carbon fibre is strong, but inflexible and difficult to machine. That is why we switched to aluminium, which is easier to modify even after it is finished.”

“It was a huge disappointment,” Wendel says. “Some of us had already completed training for carbon manufacturing. Carbon parts were already ordered. The team felt let down. We had spent so much time on something that was now impossible – because of the delays caused by having no pilot.”

“I learnt that bringing bad news is part of the chief engineer’s job. The next step is to look at how to convert the engineers’ enthusiasm for carbon fibre into new solutions and to redeploy their personal qualities.”

Wendel says the job also taught him to consider a hundred things at the same time. And to make sacrifices. Project MARCH involves long workdays and maybe not seeing your friends and roommates as much as you would like.

As a naturally curious person, Wendel found out that curiosity must be complemented by grit to make it in robotics. You often need to go deeper and study in more detail to make a good decision. “It is hard work. However, that is also what makes the job so much fun. You work in such a highly motivated team.”

That is also what makes the job so much fun.

The carbon story ended well, though.

When the team did find a pilot, hard-working Koen van Zeeland, the choice for aluminium as a base material paid off. Through a process of weight analysis, parts can now be optimised for an ever lighter exoskeleton.

The Project MARCH team continues to grow through setbacks and has doubled down on its efforts to create the world’s first self-balancing exoskeleton. If the team succeeds, it will be a huge success for this unique way of running a business.

The post RoboHouse Interview Trilogy, Part II: Wendel Postma and Project MARCH appeared first on RoboHouse.

]]>
ep.364: Shaking Up The Sheetmetal Industry, with Ed Mehr https://robohub.org/shaking-up-the-sheetmetal-industry/ Fri, 17 Feb 2023 16:11:54 +0000 https://robohub.org/?p=206630

Conventional sheet metal manufacturing is highly inefficient for the low-volume production seen in the space industry. At Machina Labs, they have developed a novel method of forming sheet metal, using two robotic arms to bend the metal into different geometries. This method cuts the time to produce large sheet metal parts from several months to a few hours. Ed Mehr, Co-Founder and CEO of Machina Labs, explains this revolutionary manufacturing process.

Ed Mehr

Ed Mehr in Work Uniform

Ed Mehr is the co-founder and CEO of Machina Labs. He has an engineering background in smart manufacturing and artificial intelligence. In his previous position at Relativity Space, he led a team in charge of developing the world’s largest metal 3D printer. Relativity Space uses 3D printing to make rocket parts rapidly, and with the flexibility for multiple iterations. Ed previously was the CTO at Cloudwear (Now Averon), and has also worked at SpaceX, Google, and Microsoft.

Links

]]>
Robot Talk Episode 37 – Interview with Yang Gao https://robohub.org/robot-talk-episode-37-interview-with-yang-gao/ Fri, 17 Feb 2023 15:43:39 +0000 https://robohub.org/?p=206642

Claire chatted to Professor Yang Gao from the University of Surrey all about space robotics and planetary exploration.

Yang Gao is Professor of Space Autonomous Systems and Founding Head of the STAR LAB that specializes in robotic sensing, perception, visual guidance, navigation, and control (GNC) and biomimetic mechanisms for industrial applications in extreme environments. She brings over 20 years of research experience in developing robotics and autonomous systems, in which she has been the principal investigator of over 30 inter/nationally teamed projects and involved in real-world mission development.

]]>
Top 5 robot trends 2023 https://robohub.org/top-5-robot-trends-2023/ Thu, 16 Feb 2023 10:52:07 +0000 https://robohub.org/?p=206621

Top 5 Robot Trends 2023 © International Federation of Robotics

The stock of operational robots around the globe hit a new record of about 3.5 million units – the value of installations reached an estimated 15.7 billion USD. The International Federation of Robotics analyzes the top 5 trends shaping robotics and automation in 2023.

“Robots play a fundamental role in securing the changing demands of manufacturers around the world,” says Marina Bill, President of the International Federation of Robotics. “New trends in robotics attract users from small enterprise to global OEMs.”

1 – Energy Efficiency

Energy efficiency is key to improving companies’ competitiveness amid rising energy costs. The adoption of robotics helps lower energy consumption in manufacturing in many ways. Compared to traditional assembly lines, considerable energy savings can be achieved through reduced heating. At the same time, robots work at high speed, increasing production rates so that manufacturing becomes more time- and energy-efficient.

Today’s robots are designed to consume less energy, which leads to lower operating costs. To meet sustainability targets for their production, companies use industrial robots equipped with energy saving technology: robot controls are able to convert kinetic energy into electricity, for example, and feed it back into the power grid. This technology significantly reduces the energy required to run a robot. Another feature is the smart power saving mode that controls the robot´s energy supply on-demand throughout the workday. Since industrial facilities need to monitor their energy consumption even today, such connected power sensors are likely to become an industry standard for robotic solutions.

2 – Reshoring

Resilience has become an important driver for reshoring in various industries: car manufacturers, for example, invest heavily in short supply lines to bring processes closer to their customers. These manufacturers use robot automation to produce powerful batteries cost-effectively and in large quantities to support their electric vehicle projects. These investments make the shipment of heavy batteries redundant, which matters as more and more logistics companies refuse to ship batteries for safety reasons.

Relocating microchip production back to the US and Europe is another reshoring trend. Since most industrial products nowadays require a semiconductor chip to function, their supply close to the customer is crucial. Robots play a vital role in chip manufacturing, as they meet its extreme precision requirements. Specifically designed robots automate silicon wafer fabrication, take over cleaning tasks or test integrated circuits. Recent examples of reshoring include Intel's new chip factories in Ohio and the recently announced chip plant in the Saarland region of Germany run by chipmaker Wolfspeed and automotive supplier ZF.

3 – Robots easier to use

Robot programming has become easier and more accessible to non-experts. Providers of software-driven automation platforms support companies, letting users manage industrial robots with no prior programming experience. Original equipment manufacturers work hand-in-hand with low code or even no-code technology partners that allow users of all skill levels to program a robot.

The easy-to-use software paired with an intuitive user experience replaces extensive robotics programming and opens up new robotics automation opportunities: Software start-ups are entering this market with specialized solutions for the needs of small and medium-sized companies. For example: a traditional heavy-weight industrial robot can be equipped with sensors and a new software that allows collaborative setup operation. This makes it easy for workers to adjust heavy machinery to different tasks. Companies will thus get the best of both worlds: robust and precise industrial robot hardware and state-of-the-art cobot software.

Easy-to-use programming interfaces that allow customers to set up the robots themselves also drive the emerging segment of low-cost robotics. Many new customers reacted to the pandemic in 2020 by trying out robotic solutions. Robot suppliers have acknowledged this demand: easy setup and installation, for instance with pre-configured software to handle grippers, sensors or controllers, supports lower-cost robot deployment. Such robots are often sold through web shops, and program routines for various applications are downloadable from an app store.

4 – Artificial Intelligence (AI) and digital automation

Propelled by advances in digital technologies, robot suppliers and system integrators offer new applications and improve existing ones regarding speed and quality. Connected robots are transforming manufacturing. Robots will increasingly operate as part of a connected digital ecosystem: Cloud Computing, Big Data Analytics or 5G mobile networks provide the technological base for optimized performance. The 5G standard will enable fully digitalized production, making cables on the shopfloor obsolete.

Artificial Intelligence (AI) holds great potential for robotics, enabling a range of benefits in manufacturing. The main aim of using AI in robotics is to better manage variability and unpredictability in the external environment, either in real time or offline. As a result, AI-supported machine learning plays an increasing role in software offerings, letting running systems benefit from, for example, optimized processes, predictive maintenance or vision-based gripping.

This technology helps manufacturers, logistics providers and retailers dealing with frequently changing products, orders and stock. The greater the variability and unpredictability of the environment, the more likely it is that AI algorithms will provide a cost-effective and fast solution – for example, for manufacturers or wholesalers dealing with millions of different products that change on a regular basis. AI is also useful in environments in which mobile robots need to distinguish between the objects or people they encounter and respond to each differently.

5 – Second life for industrial robots

Since an industrial robot has a service lifetime of up to thirty years, refitting old units with new tech equipment is a great opportunity to give them a “second life”. Industrial robot manufacturers like ABB, Fanuc, KUKA or Yaskawa run specialized repair centers close to their customers to refurbish or upgrade used units in a resource-efficient way. This prepare-to-repair strategy saves costs and resources for robot manufacturers and their customers alike. Offering long-term repair to customers is an important contribution to the circular economy.

]]>
Learning challenges shape a mechanical engineer’s path https://robohub.org/learning-challenges-shape-a-mechanical-engineers-path/ Wed, 15 Feb 2023 10:26:00 +0000 https://news.mit.edu/2023/james-hermus-learning-challenges-shape-mechanical-engineers-path-0212

“I observed assistive technologies — developed by scientists and engineers my friends and I never met — which liberated us. My dream has always been to be one of those engineers,” Hermus says. Credit: Tony Pulsone

By Michaela Jarvis | Department of Mechanical Engineering

Before James Hermus started elementary school, he was a happy, curious kid who loved to learn. By the end of first grade, however, all that started to change, he says. As his schoolbooks became more advanced, Hermus could no longer memorize the words on each page, and pretend to be reading. He clearly knew the material the teacher presented in class; his teachers could not understand why he was unable to read and write his assignments. He was accused of being lazy and not trying hard enough.

Hermus was fortunate to have parents who sought out neuropsychology testing — which documented an enormous discrepancy between his native intelligence and his symbol decoding and phonemic awareness. Yet despite receiving a diagnosis of dyslexia, Hermus and his family encountered resistance at his school. According to Hermus, the school’s reading specialist did not “believe” in dyslexia, and, he says, the principal threatened his family with truancy charges when they took him out of school each day to attend tutoring.

Hermus’ school, like many across the country, was reluctant to provide accommodations for students with learning disabilities who were not two years behind in two subjects, Hermus says. For this reason, obtaining and maintaining accommodations, such as extended time and a reader, was a constant battle from first through 12th grade: Students who performed well lost their right to accommodations. Only through persistence and parental support did Hermus succeed in an educational system which he says all too often fails students with learning disabilities.

By the time Hermus was in high school, he had to become a strong self-advocate. In order to access advanced courses, he needed to be able to read more and faster, so he sought out adaptive technology — Kurzweil, a text-to-audio program. This, he says, was truly life-changing. At first, to use this program he had to disassemble textbooks, feed the pages through a scanner, and digitize them.

After working his way to the University of Wisconsin at Madison, Hermus found a research opportunity in medical physics and then later in biomechanics. Interestingly, the steep challenges that Hermus faced during his education had developed in him “the exact skill set that makes a successful researcher,” he says. “I had to be organized, advocate for myself, seek out help to solve problems that others had not seen before, and be excessively persistent.”

While working as a member of Professor Darryl Thelen’s Neuromuscular Biomechanics Lab at Madison, Hermus helped design and test a sensor for measuring tendon stress. He recognized his strengths in mechanical design. During this undergraduate research, he co-authored numerous journal and conference papers. These experiences and a desire to help people with physical disabilities propelled him to MIT.

“MIT is an incredible place. The people in MechE at MIT are extremely passionate and unassuming. I am not unusual at MIT,” Hermus says. Credit: Tony Pulsone

In September 2022, Hermus completed his PhD in mechanical engineering at MIT. He has been an author on seven papers in peer-reviewed journals, three as first author and four of them published when he was an undergraduate. He has won awards for his academics and for his mechanical engineering research and has served as a mentor and an advocate for disability awareness in several different contexts.

His work as a researcher stems directly from his personal experience, Hermus says. As a student in a special education classroom, “I observed assistive technologies — developed by scientists and engineers my friends and I never met — which liberated us. My dream has always been to be one of those engineers.”

Hermus’ work aims to investigate and model human interaction with objects where both substantial motion and force are present. His research has demonstrated that the way humans perform such everyday actions as turning a steering wheel or opening a door is very different from much of robotics. He showed specific patterns exist in the behavior that provide insight into neural control. In 2020, Hermus was the first author on a paper on this topic, which was published in the Journal of Neurophysiology and later won first place in the MIT Mechanical Engineering Research Exhibition. Using this insight, Hermus and his colleagues implemented these strategies on a Kuka LBR iiwa robot to learn about how humans regulate their many degrees of freedom. This work was published in IEEE Transactions on Robotics 2022. More recently, Hermus has collaborated with researchers at the University of Pittsburgh to see if these ideas prove useful in the development of brain computer interfaces — using electrodes implanted in the brain to control a prosthetic robotic arm.

While the hardware of prosthetics and exoskeletons is advancing, Hermus says, there are daunting limitations to the field in the descriptive modeling of human physical behavior, especially during contact with objects. Without these descriptive models, developing generalizable implementations of prosthetics, exoskeletons, and rehabilitation robotics will prove challenging.

“We need competent descriptive models of human physical interaction,” he says.

While earning his master’s and doctoral degrees at MIT, Hermus worked with Neville Hogan, the Sun Jae Professor of Mechanical Engineering, in the Eric P. and Evelyn E. Newman Laboratory for Biomechanics and Human Rehabilitation. Hogan has high praise for the research Hermus has conducted over his six years in the Newman lab.

“James has done superb work for both his master’s and doctoral theses. He tackled a challenging problem and made excellent and timely progress towards its solution. He was a key member of my research group,” Hogan says. “James’ commitment to his research is unquestionably a reflection of his own experience.”

Following postdoctoral research at MIT, where he has also been a part-time lecturer, Hermus is now beginning postdoctoral work with Professor Aude Billard at EPFL in Switzerland, where he hopes to gain experience with learning and optimization methods to further his human motor control research.

Hermus’ enthusiasm for his research is palpable, and his zest for learning and life shines through despite the hurdles his dyslexia presented. He demonstrates a similar kind of excitement for ski-touring and rock-climbing with the MIT Outing Club, working at MakerWorkshop, and being a member of the MechE community.

“MIT is an incredible place. The people in MechE at MIT are extremely passionate and unassuming. I am not unusual at MIT,” he says. “Nearly every person I know well has a unique story with an unconventional path.”

]]>
RoboHouse Interview Trilogy, part I: Christian Geckeler and the origami gripper https://robohub.org/robohouse-interview-trilogy-part-i-christian-geckeler-and-the-origami-gripper/ Sun, 12 Feb 2023 07:07:04 +0000 http://robohub.org/?guid=fe616a558bfead1b5c7303e7f3df12a7


Part one of our RoboHouse Interview Trilogy: The Working Life of Robotics Engineers seeks out Christian Geckeler. Christian is a PhD student at the Environmental Robotics Lab of ETH Zürich. He speaks with Rens van Poppel about the experience of getting high into the wild.

What if drones could help place sensors in forests more easily? What if a sensor device could automatically grab and hold a tree branch? Which flexible material is also strong and biodegradable? These leaps of imagination led Christian to a new kind of gripper, inspired by the Japanese art of folding.

His origami design wraps itself around any tree branch that comes close enough to trigger its unfolding movement. This invention may in the future improve our insight into hard-to-access forest canopies, in a way that is environmentally friendly and pleasant for human operators.

What is it like to work in the forest as a researcher with this technology?
“Robotic solutions deployed in forests are currently scarce,” says Christian. “So developing solutions for such an environment is challenging, but also rewarding. Personally I also enjoy being outdoors. Compared to a lab, the forest is wilder and more unpredictable. Which I find wonderful, except when it’s cold.”

Are there limits as to where the gripper can be deployed?
“The gripper is quite versatile. Rather than the type of trees, it is the diameter and angle of the branch that dictate whether the gripper can attach. Even so, dense foliage could hinder the drone, and there should be sufficient space for the gripper to attach.”

Christian Geckeler, PhD student at the Environmental Robotics Lab of ETH Zürich, a university for science and technology in Switzerland where some 530 professors teach around 20,500 students – including 4,100 doctoral students – from over 120 countries.

Are the used materials environmentally friendly?
“Currently not all components are biodegradable, and the gripper must be recollected after sampling is finished. However, we are currently working on a fully biodegradable gripper, which releases itself and falls to the ground after being exposed to sufficient amounts of water, making collection much easier.”

How good at outdoor living do aspiring tree-canopy researchers need to be?
“Everything is a learning process,” says Christian philosophically. “Rather than existing expertise, a willingness to learn and passion for the subject is much more important.”

What happens when the drone gets stuck in a tree?
“As a safety measure, the drone has a protective net on top which prevents leaves and branches from coming in contact with the propeller. And we avoid interaction between the drone and foliage, so this has never happened.”

What struck you when you took the gripper into the wild?
“Perhaps the most surprising thing was the great variance that is found in nature; no two trees are alike and every branch is different. The only way of finding out if your solution works is by testing outside as soon and as often as possible.”

Christian ends with a note on the importance of social and technical interplay in robotics: “You may think you have developed a robot perfectly, but you must make sure society actually wants it, and that it is easy to use for people who are not technically minded too.”

The post RoboHouse Interview Trilogy, Part I: Christian Geckeler and The Origami Gripper appeared first on RoboHouse.

]]>
Robot Talk Episode 36 – Interview with Ignazio Maria Viola https://robohub.org/robot-talk-episode-36-interview-with-ignazio-maria-viola/ Fri, 10 Feb 2023 12:00:58 +0000 https://robohub.org/?p=206570

Claire chatted to Professor Ignazio Maria Viola from the University of Edinburgh all about aerodynamics, dandelion-inspired drones, and swarm sensing.

Ignazio Maria Viola is Professor of Fluid Mechanics and Bioinspired Engineering at the School of Engineering, University of Edinburgh, and Fellow of the Royal Institution of Naval Architects. He is the recipient of the ERC Consolidator Grant Dandidrone to explore the unsteady aerodynamics of dandelion-inspired drones.

]]>
Engineers devise a modular system to produce efficient, scalable aquabots https://robohub.org/engineers-devise-a-modular-system-to-produce-efficient-scalable-aquabots/ Tue, 07 Feb 2023 09:53:00 +0000 https://news.mit.edu/2023/engineers-devise-modular-system-efficient-scalable-aquabots-0206

Researchers have come up with an innovative approach to building deformable underwater robots using simple repeating substructures. The team has demonstrated the new system in two different example configurations, one like an eel, pictured here in the MIT tow tank. Credit: Courtesy of the researchers

By David L. Chandler | MIT News Office

Underwater structures that can change their shapes dynamically, the way fish do, push through water much more efficiently than conventional rigid hulls. But constructing deformable devices that can change the curve of their body shapes while maintaining a smooth profile is a long and difficult process. MIT’s RoboTuna, for example, was composed of about 3,000 different parts and took about two years to design and build.

Now, researchers at MIT and their colleagues — including one from the original RoboTuna team — have come up with an innovative approach to building deformable underwater robots, using simple repeating substructures instead of unique components. The team has demonstrated the new system in two different example configurations, one like an eel and the other a wing-like hydrofoil. The principle itself, however, allows for virtually unlimited variations in form and scale, the researchers say.

The work is being reported in the journal Soft Robotics, in a paper by MIT research assistant Alfonso Parra Rubio, professors Michael Triantafyllou and Neil Gershenfeld, and six others.

Existing approaches to soft robotics for marine applications are generally made on small scales, while many useful real-world applications require devices on scales of meters. The new modular system the researchers propose could easily be extended to such sizes and beyond, without requiring the kind of retooling and redesign that would be needed to scale up current systems.

The deformable robots are made with lattice-like pieces, called voxels, that are low density and have high stiffness. Credit: Courtesy of the researchers

“Scalability is a strong point for us,” says Parra Rubio. Given the low density and high stiffness of the lattice-like pieces, called voxels, that make up their system, he says, “we have more room to keep scaling up,” whereas most currently used technologies “rely on high-density materials facing drastic problems” in moving to larger sizes.

The individual voxels in the team’s experimental, proof-of-concept devices are mostly hollow structures made up of cast plastic pieces with narrow struts in complex shapes. The box-like shapes are load-bearing in one direction but soft in others, an unusual combination achieved by blending stiff and flexible components in different proportions.

“Treating soft versus hard robotics is a false dichotomy,” Parra Rubio says. “This is something in between, a new way to construct things.” Gershenfeld, head of MIT’s Center for Bits and Atoms, adds that “this is a third way that marries the best elements of both.”

“Smooth flexibility of the body surface allows us to implement flow control that can reduce drag and improve propulsive efficiency, resulting in substantial fuel saving,” says Triantafyllou, who is the Henry L. and Grace Doherty Professor in Ocean Science and Engineering, and was part of the RoboTuna team.


Credit: Courtesy of the researchers.

In one of the devices produced by the team, the voxels are attached end-to-end in a long row to form a meter-long, snake-like structure. The body is made up of four segments, each consisting of five voxels, with an actuator in the center that can pull a wire attached to each of the two voxels on either side, contracting them and causing the structure to bend. The whole structure of 20 units is then covered with a rib-like supporting structure, and then a tight-fitting waterproof neoprene skin. The researchers deployed the structure in an MIT tow tank to show its efficiency in the water, and demonstrated that it was indeed capable of generating forward thrust sufficient to propel itself forward using undulating motions.
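A standard way to command such a segmented body is a travelling sine wave: each actuator receives the same sinusoid, phase-shifted along the body, the classic "serpenoid" gait. The sketch below assumes a hypothetical actuator interface and illustrative amplitude, frequency and phase values; it is not the authors' controller.

```python
# Sketch of an undulating gait for a four-segment snake: each actuator is
# driven by the same sine wave with a phase offset along the body.
# The actuator interface and all numbers are illustrative assumptions.
import math
import time

AMPLITUDE = 0.6            # wire-contraction command range (assumed units)
FREQ_HZ = 1.0              # undulation frequency (assumed)
PHASE_STEP = math.pi / 2   # phase offset between neighbouring segments

def run_undulating_gait(actuators, duration_s=10.0, dt=0.02):
    start = time.time()
    while time.time() - start < duration_s:
        t = time.time() - start
        for i, actuator in enumerate(actuators):   # one per body segment
            cmd = AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * t + i * PHASE_STEP)
            actuator.set_contraction(cmd)          # hypothetical pull-wire command
        time.sleep(dt)
```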

“There have been many snake-like robots before,” Gershenfeld says. “But they’re generally made of bespoke components, as opposed to these simple building blocks that are scalable.”

For example, Parra Rubio says, a snake-like robot built by NASA was made up of thousands of unique pieces, whereas for this group’s snake, “we show that there are some 60 pieces.” And compared to the two years spent designing and building the MIT RoboTuna, this device was assembled in about two days, he says.

The individual voxels are mostly hollow structures made up of cast plastic pieces with narrow struts in complex shapes. Credit: Courtesy of the researchers

The other device they demonstrated is a wing-like shape, or hydrofoil, made up of an array of the same voxels but able to change its profile shape and therefore control the lift-to-drag ratio and other properties of the wing. Such wing-like shapes could be used for a variety of purposes, ranging from generating power from waves to helping to improve the efficiency of ship hulls — a pressing demand, as shipping is a significant source of carbon emissions.

The wing shape, unlike the snake, is covered in an array of scale-like overlapping tiles, designed to press down on each other to maintain a waterproof seal even as the wing changes its curvature. One possible application might be in some kind of addition to a ship’s hull profile that could reduce the formation of drag-inducing eddies and thus improve its overall efficiency, a possibility that the team is exploring with collaborators in the shipping industry.

The team also created a wing-like hydrofoil. Credit: Courtesy of the researchers

Ultimately, the concept might be applied to a whale-like submersible craft, using its morphable body shape to create propulsion. Such a craft could evade bad weather by staying below the surface, without the noise and turbulence of conventional propulsion. The concept could also be applied to parts of other vessels, such as racing yachts, where having a keel or a rudder that could curve gently during a turn instead of remaining straight could provide an extra edge. “Instead of being rigid or just having a flap, if you can actually curve the way fish do, you can morph your way around the turn much more efficiently,” Gershenfeld says.


The research team included Dixia Fan of Westlake University in China; Benjamin Jenett SM ’15, PhD ’20 of Discrete Lattice Industries; Jose del Aguila Ferrandis, Amira Abdel-Rahman and David Preiss of MIT; and Filippos Tourlomousis of the Demokritos Research Center of Greece. The work was supported by the U.S. Army Research Lab, CBA Consortia funding, and the MIT Sea Grant Program.

]]>
Microelectronics give researchers a remote control for biological robots https://robohub.org/microelectronics-give-researchers-a-remote-control-for-biological-robots/ Sun, 05 Feb 2023 10:30:19 +0000 https://robohub.org/?p=206519

A photograph of an eBiobot prototype, lit with blue microLEDs. Remotely controlled miniature biological robots have many potential applications in medicine, sensing and environmental monitoring. Image courtesy of Yongdeok Kim

By Liz Ahlberg Touchstone

First, they walked. Then, they saw the light. Now, miniature biological robots have gained a new trick: remote control.

The hybrid “eBiobots” are the first to combine soft materials, living muscle and microelectronics, said researchers at the University of Illinois Urbana-Champaign, Northwestern University and collaborating institutions. They described their centimeter-scale biological machines in the journal Science Robotics.

“Integrating microelectronics allows the merger of the biological world and the electronics world, both with many advantages of their own, to now produce these electronic biobots and machines that could be useful for many medical, sensing and environmental applications in the future,” said study co-leader Rashid Bashir, an Illinois professor of bioengineering and dean of the Grainger College of Engineering.

Rashid Bashir. Photo by L. Brian Stauffer

Bashir’s group has pioneered the development of biobots, small biological robots powered by mouse muscle tissue grown on a soft 3D-printed polymer skeleton. They demonstrated walking biobots in 2012 and light-activated biobots in 2016. The light activation gave the researchers some control, but practical applications were limited by the question of how to deliver the light pulses to the biobots outside of a lab setting.

The answer to that question came from Northwestern University professor John A. Rogers, a pioneer in flexible bioelectronics, whose team helped integrate tiny wireless microelectronics and battery-free micro-LEDs. This allowed the researchers to remotely control the eBiobots.

“This unusual combination of technology and biology opens up vast opportunities in creating self-healing, learning, evolving, communicating and self-organizing engineered systems. We feel that it’s a very fertile ground for future research with specific potential applications in biomedicine and environmental monitoring,” said Rogers, a professor of materials science and engineering, biomedical engineering and neurological surgery at Northwestern University and director of the Querrey Simpson Institute for Bioelectronics.

Remote control steering allows the eBiobots to maneuver around obstacles, as shown in this composite image of a bipedal robot traversing a maze. Image courtesy of Yongdeok Kim

To give the biobots the freedom of movement required for practical applications, the researchers set out to eliminate bulky batteries and tethering wires. The eBiobots use a receiver coil to harvest power and provide a regulated output voltage to power the micro-LEDs, said co-first author Zhengwei Li, an assistant professor of biomedical engineering at the University of Houston.

The researchers can send a wireless signal to the eBiobots that prompts the LEDs to pulse. The LEDs stimulate the light-sensitive engineered muscle to contract, moving the polymer legs so that the machines “walk.” The micro-LEDs are so targeted that they can activate specific portions of muscle, making the eBiobot turn in a desired direction. See a video on YouTube.
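As a rough illustration of the control idea, pulsed commands to selected LED groups might be organized as follows. The radio and LED interface, the timing values, and the left/right steering convention are entirely hypothetical, sketched only to show how pulse patterns map to gaits.

```python
# Illustrative sketch of remote steering by selective LED pulsing: lighting
# the micro-LEDs over one side's muscle makes that side contract, turning
# the robot. Every interface and value here is a hypothetical assumption.
import time

def pulse_leds(radio, led_group, freq_hz=1.5, n_pulses=10, duty=0.3):
    period = 1.0 / freq_hz
    for _ in range(n_pulses):
        radio.leds_on(led_group)               # light stimulates the muscle
        time.sleep(duty * period)
        radio.leds_off(led_group)              # let the muscle relax
        time.sleep((1.0 - duty) * period)

def walk_straight(radio):
    pulse_leds(radio, led_group="both")        # symmetric contraction: walk

def turn(radio, direction):
    # Contracting only one side bends the gait; which side produces which
    # turn is a guess here, purely for illustration.
    side = "left" if direction == "right" else "right"
    pulse_leds(radio, led_group=side)
```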

The researchers used computational modeling to optimize the eBiobot design and component integration for robustness, speed and maneuverability. Illinois professor of mechanical sciences and engineering Mattia Gazzola led the simulation and design of the eBiobots. The iterative design and additive 3D printing of the scaffolds allowed for rapid cycles of experiments and performance improvement, said Gazzola and co-first author Xiaotian Zhang, a postdoctoral researcher in Gazzola’s lab.

The eBiobots are the first wireless bio-hybrid machines, combining biological tissue, microelectronics and 3D-printed soft polymers. Image courtesy of Yongdeok Kim

The design allows for possible future integration of additional microelectronics, such as chemical and biological sensors, or 3D-printed scaffold parts for functions like pushing or transporting things that the biobots encounter, said co-first author Yongdeok Kim, who completed the work as a graduate student at Illinois.

The integration of electronic sensors or biological neurons would allow the eBiobots to sense and respond to toxins in the environment, biomarkers for disease and more possibilities, the researchers said.

“In developing a first-ever hybrid bioelectronic robot, we are opening the door for a new paradigm of applications for health care innovation, such as in-situ biopsies and analysis, minimum invasive surgery or even cancer detection within the human body,” Li said.

The National Science Foundation and the National Institutes of Health supported this work.


]]>
Robot Talk Episode 35 – Interview with Emily S. Cross https://robohub.org/robot-talk-episode-35-interview-with-emily-s-cross/ Fri, 03 Feb 2023 15:32:46 +0000 https://robohub.org/?p=206511

Claire chatted to Professor Emily S. Cross from the University of Glasgow and Western Sydney University all about neuroscience, social learning, and human-robot interaction.

Emily S. Cross is a Professor of Social Robotics at the University of Glasgow, and a Professor of Human Neuroscience at the MARCS Institute at Western Sydney University. Using interactive learning tasks, brain scanning, and dance, acrobatics and robots, she and her Social Brain in Action Laboratory team explore how we learn by watching others throughout the lifespan, how action experts’ brains enable them to perform physical skills so exquisitely, and the social influences that shape human-robot interaction.

]]>
Sea creatures inspire marine robots which can operate in extra-terrestrial oceans https://robohub.org/sea-creatures-inspire-marine-robots-which-can-operate-in-extra-terrestrial-oceans/ Thu, 02 Feb 2023 11:07:00 +0000 https://www.bristol.ac.uk/news/2023/february/marine-robots.html

RoboSalps in action. Credits: Valentina Lo Gatto

Robotic units called RoboSalps, named after their animal namesakes, have been engineered to operate in unknown and extreme environments such as extra-terrestrial oceans.

Although salps resemble jellyfish with their semi-transparent barrel-shaped bodies, they belong to the family of Tunicata and have a complex life cycle, changing between solitary and aggregate generations where they connect to form colonies.

RoboSalps have similarly light, tubular bodies and can link to each other to form ‘colonies’ which gives them new capabilities that can only be achieved because they work together.

Researcher Valentina Lo Gatto of Bristol’s Department of Aerospace Engineering is leading the study. She is also a student at the EPSRC Centre of Doctoral Training in Future Autonomous and Robotic Systems (FARSCOPE CDT).

She said: “RoboSalp is the first modular salp-inspired robot. Each module is made of a very light-weight soft tubular structure and a drone propeller which enables them to swim. These simple modules can be combined into ‘colonies’ that are much more robust and have the potential to carry out complex tasks. Because of their low weight and their robustness, they are ideal for extra-terrestrial underwater exploration missions, for example, in the subsurface ocean on the Jupiter moon Europa.”

RoboSalps are unique as each individual module can swim on its own. This is possible because of a small motor with rotor blades – typically used for drones – inserted into the soft tubular structure.

When swimming on their own, RoboSalps modules are difficult to control, but after joining them together to form colonies, they become more stable and show sophisticated movements.

In addition, by having multiple units joined together, scientists automatically obtain a redundant system, which makes it more robust against failure. If one module breaks, the whole colony can still move.

A colony of soft robots is a relatively novel concept with a wide range of interesting applications. RoboSalps are soft, potentially quite energy efficient, and robust due to inherent redundancy. This makes them ideal for autonomous missions where direct and immediate human control might not be feasible.

Dr Helmut Hauser of Bristol’s Department of Engineering Maths, explained: “These include the exploration of remote submarine environments, sewage tunnels, and industrial cooling systems. Due to the low weight and softness of the RoboSalp modules, they are also ideal for extra-terrestrial missions. They can easily be stored in a reduced volume, ideal for reducing global space mission payloads.”

A compliant body also provides safer interaction with potentially delicate ecosystems, both terrestrial and extra-terrestrial, reducing the risk of environmental damage. The ability to detach units or segments and rearrange them gives the system adaptability: once the target environment is reached, the colony could be deployed to start its exploration.

At a certain point, it could split into multiple segments, each exploring in a different direction, and afterwards reassemble in a new configuration to achieve a different objective such as manipulation or sample collection.

Prof Jonathan Rossiter added: “We are also developing control approaches that are able to exploit the compliance of the modules with the goal of achieving energy efficient movements close to those observed in biological salps.”

]]>
Our future could be full of undying, self-repairing robots – here’s how https://robohub.org/our-future-could-be-full-of-undying-self-repairing-robots-heres-how/ Wed, 01 Feb 2023 14:28:20 +0000 http://robohub.org/?guid=e1d7fdc88964f50cba7a15f5383b62dd

Robotic head, 3D illustration (frank60/Shutterstock)

By Jonathan Roberts (Professor in Robotics, Queensland University of Technology)

With generative artificial intelligence (AI) systems such as ChatGPT and Stable Diffusion being the talk of the town right now, it might feel like we’ve taken a giant leap closer to a sci-fi reality where AIs are physical entities all around us.

Indeed, computer-based AI appears to be advancing at an unprecedented rate. But the rate of advancement in robotics – which we could think of as the potential physical embodiment of AI – is slow.

Could it be that future AI systems will need robotic “bodies” to interact with the world? If so, will nightmarish ideas like the self-repairing, shape-shifting T-1000 robot from the Terminator 2 movie come to fruition? And could a robot be created that could “live” forever?

Energy for ‘life’

Biological lifeforms like ourselves need energy to operate. We get ours via a combination of food, water, and oxygen. The majority of plants also need access to light to grow.

By the same token, an everlasting robot needs an ongoing energy supply. Currently, electrical power dominates the world of robotics: most robots are powered by batteries.

An alternative battery type has been proposed that uses nuclear waste and ultra-thin diamonds at its core. The inventors, a San Francisco startup called Nano Diamond Battery, claim a possible battery life of tens of thousands of years. Very small robots would be ideal users of such batteries.

But a more likely long-term solution for powering robots may involve different chemistry – and even biology. In 2021, scientists from the Berkeley Lab and UMass Amherst in the US demonstrated that tiny nanobots could get their energy from chemicals in the liquid they swim in.

The researchers are now working out how to scale up this idea to larger robots that can work on solid surfaces.

Repairing and copying oneself

Of course, an undying robot might still need occasional repairs.

Ideally, a robot would repair itself if possible. In 2019, a Japanese research group demonstrated a research robot called PR2 tightening its own screw using a screwdriver. This is like self-surgery! However, such a technique would only work if non-critical components needed repair.

Other research groups are exploring how soft robots can self-heal when damaged. A group in Belgium showed how a robot they developed recovered after being stabbed six times in one of its legs. It stopped for a few minutes until its skin healed itself, and then walked off.

Another unusual concept for repair is to use other things a robot might find in the environment to replace its broken part.

Last year, scientists reported how dead spiders can be used as robot grippers. This form of robotics is known as “necrobotics”. The idea is to use dead animals as ready-made mechanical devices and attach them to robots to become part of the robot.

The proof-of-concept in necrobotics involved taking a dead spider and ‘reanimating’ its hydraulic legs with air, creating a surprisingly strong gripper. Preston Innovation Laboratory/Rice University

A robot colony?

From all these recent developments, it’s quite clear that in principle, a single robot may be able to live forever. But there is a very long way to go.

Most of the proposed solutions to the energy, repair and replication problems have only been demonstrated in the lab, in very controlled conditions and generally at tiny scales.

The ultimate solution may be one of large colonies or swarms of tiny robots that share a common brain, or mind. After all, this is exactly how many species of insects have evolved.

The concept of the “mind” of an ant colony has been pondered for decades. Research published in 2019 showed ant colonies themselves have a form of memory that is not contained within any of the ants.

This idea aligns very well with one day having massive clusters of robots that could use this trick to replace individual robots when needed, but keep the cluster “alive” indefinitely.

Ant colonies can contain ‘memories’ that are distributed between many individual insects. frank60/Shutterstock

Ultimately, the scary robot scenarios outlined in countless science fiction books and movies are unlikely to suddenly develop without anyone noticing.

Engineering ultra-reliable hardware is extremely difficult, especially with complex systems. There are currently no engineered products that can last forever, or even for hundreds of years. If we do ever invent an undying robot, we’ll also have the chance to build in some safeguards.The Conversation


Jonathan Roberts is Director of the Australian Cobotics Centre, the Technical Director of the Advanced Robotics for Manufacturing (ARM) Hub, and is a Chief Investigator at the QUT Centre for Robotics. He receives funding from the Australian Research Council. He was the co-founder of the UAV Challenge – an international drone competition.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
Sensing with purpose https://robohub.org/sensing-with-purpose/ Sun, 29 Jan 2023 09:30:00 +0000 https://news.mit.edu/2023/fadel-adib-sensing-purpose-0124

Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and the Media Lab, seeks to develop wireless technology that can sense the physical world in ways that were not possible before. Image: Adam Glanzman

By Adam Zewe | MIT News Office

Fadel Adib never expected that science would get him into the White House, but in August 2015 the MIT graduate student found himself demonstrating his research to the president of the United States.

Adib, fellow grad student Zachary Kabelac, and their advisor, Dina Katabi, showcased a wireless device that uses Wi-Fi signals to track an individual’s movements.

As President Barack Obama looked on, Adib walked back and forth across the floor of the Oval Office, collapsed onto the carpet to demonstrate the device’s ability to monitor falls, and then sat still so Katabi could explain to the president how the device was measuring his breathing and heart rate.

“Zach started laughing because he could see that my heart rate was 110 as I was demoing the device to the president. I was stressed about it, but it was so exciting. I had poured a lot of blood, sweat, and tears into that project,” Adib recalls.

For Adib, the White House demo was an unexpected — and unforgettable — culmination of a research project he had launched four years earlier when he began his graduate training at MIT. Now, as a newly tenured associate professor in the Department of Electrical Engineering and Computer Science and the Media Lab, he keeps building off that work. Adib, the Doherty Chair of Ocean Utilization, seeks to develop wireless technology that can sense the physical world in ways that were not possible before.

In his Signal Kinetics group, Adib and his students apply knowledge and creativity to global problems like climate change and access to health care. They are using wireless devices for contactless physiological sensing, such as measuring someone’s stress level using Wi-Fi signals. The team is also developing battery-free underwater cameras that could explore uncharted regions of the oceans, tracking pollution and the effects of climate change. And they are combining computer vision and radio frequency identification (RFID) technology to build robots that find hidden items, to streamline factory and warehouse operations and, ultimately, alleviate supply chain bottlenecks.

While these areas may seem quite different, each time they launch a new project, the researchers uncover common threads that tie the disciplines together, Adib says.

“When we operate in a new field, we get to learn. Every time you are at a new boundary, in a sense you are also like a kid, trying to understand these different languages, bring them together, and invent something,” he says.

A science-minded child

A love of learning has driven Adib since he was a young child growing up in Tripoli on the coast of Lebanon. He had been interested in math and science for as long as he could remember, and had boundless energy and insatiable curiosity as a child.

“When my mother wanted me to slow down, she would give me a puzzle to solve,” he recalls.

By the time Adib started college at the American University of Beirut, he knew he wanted to study computer engineering and had his sights set on MIT for graduate school.

Seeking to kick-start his future studies, Adib reached out to several MIT faculty members to ask about summer internships. He received a response from the first person he contacted. Katabi, the Thuan and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science (EECS), and a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Jameel Clinic, interviewed him and accepted him for a position. He immersed himself in the lab work and, as the end of summer approached, Katabi encouraged him to apply for grad school at MIT and join her lab.

“To me, that was a shock because I felt this imposter syndrome. I thought I was moving like a turtle with my research, but I did not realize that with research itself, because you are at the boundary of human knowledge, you are expected to progress iteratively and slowly,” he says.

As an MIT grad student, he began contributing to a number of projects. But his passion for invention pushed him to embark into unexplored territory. Adib had an idea: Could he use Wi-Fi to see through walls?

“It was a crazy idea at the time, but my advisor let me work on it, even though it was not something the group had been working on at all before. We both thought it was an exciting idea,” he says.

As Wi-Fi signals travel in space, a small part of the signal passes through walls — the same way light passes through windows — and is then reflected by whatever is on the other side. Adib wanted to use these signals to “see” what people on the other side of a wall were doing.

Discovering new applications

There were a lot of ups and downs (“I’d say many more downs than ups at the beginning”), but Adib made progress. First, he and his teammates were able to detect people on the other side of a wall, then they could determine their exact location. Almost by accident, he discovered that the device could be used to monitor someone’s breathing.

“I remember we were nearing a deadline and my friend Zach and I were working on the device, using it to track people on the other side of the wall. I asked him to hold still, and then I started to see him appearing and disappearing over and over again. I thought, could this be his breathing?” Adib says.
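That observation is, at heart, a periodic-signal problem: a reflection that rises and falls with each breath. As a rough illustration (using synthetic data and assumed parameters, not the group’s actual radar pipeline), a breathing rate can be recovered from such a signal with a Fourier transform in Python:

import numpy as np

fs = 20.0                                  # assumed sample rate in Hz
t = np.arange(0, 60, 1 / fs)               # one minute of samples
breathing_hz = 0.25                        # ground truth: 15 breaths/min
reflection = 1.0 + 0.1 * np.sin(2 * np.pi * breathing_hz * t)  # synthetic signal
reflection += 0.02 * np.random.randn(t.size)                   # measurement noise

spectrum = np.abs(np.fft.rfft(reflection - reflection.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.1) & (freqs < 0.7)       # plausible human breathing band
estimate = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated breathing rate: {estimate * 60:.1f} breaths/min")

The peak of the spectrum inside the breathing band lands at 0.25 Hz, or 15 breaths per minute.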

Eventually, they enabled their Wi-Fi device to monitor heart rate and other vital signs. The technology was spun out into a startup, which presented Adib with a conundrum once he finished his PhD — whether to join the startup or pursue a career in academia.

He decided to become a professor because he wanted to dig deeper into the realm of invention. But after living through the winter of 2014-2015, when nearly 109 inches of snow fell on Boston (a record), Adib was ready for a change of scenery and a warmer climate. He applied to universities all over the United States, and while he had some tempting offers, Adib ultimately realized he didn’t want to leave MIT. He joined the MIT faculty as an assistant professor in 2016 and was named associate professor in 2020.

“When I first came here as an intern, even though I was thousands of miles from Lebanon, I felt at home. And the reason for that was the people. This geekiness — this embrace of intellect — that is something I find to be beautiful about MIT,” he says.

He’s thrilled to work with brilliant people who are also passionate about problem-solving. The members of his research group are diverse, and they each bring unique perspectives to the table, which Adib says is vital to encourage the intellectual back-and-forth that drives their work.

Diving into a new project

For Adib, research is exploration. Take his work on oceans, for instance. He wanted to make an impact on climate change, and after exploring the problem, he and his students decided to build a battery-free underwater camera.

Adib learned that the ocean, which covers 70 percent of the planet, plays the single largest role in the Earth’s climate system. Yet more than 95 percent of it remains unexplored. That seemed like a problem the Signal Kinetics group could help solve, he says.

But diving into this research area was no easy task. Adib studies Wi-Fi systems, but Wi-Fi does not work underwater. And it is difficult to recharge a battery once it is deployed in the ocean, making it hard to build an autonomous underwater robot that can do large-scale sensing.

So, the team borrowed from other disciplines, building an underwater camera that uses acoustics to power its equipment and capture and transmit images.

“We had to use piezoelectric materials, which come from materials science, to develop transducers, which come from oceanography, and then on top of that we had to marry these things with technology from RF known as backscatter,” he says. “The biggest challenge becomes getting these things to gel together. How do you decode these languages across fields?”

It’s a challenge that continues to motivate Adib as he and his students tackle problems that are too big for one discipline.

He’s excited by the possibility of using his undersea wireless imaging technology to explore distant planets. These same tools could also enhance aquaculture, which could help eradicate food insecurity, or support other emerging industries.

To Adib, the possibilities seem endless.

“With each project, we discover something new, and that opens up a whole new world to explore. The biggest driver of our work in the future will be what we think is impossible, but that we could make possible,” he says.

]]>
Robot Talk Episode 34 – Interview with Sabine Hauert https://robohub.org/robot-talk-episode-34-interview-with-sabine-hauert/ Sat, 28 Jan 2023 08:50:19 +0000 https://robohub.org/?p=206455

Claire chatted to Dr Sabine Hauert from the University of Bristol all about swarm robotics, nanorobots, and environmental monitoring.

Sabine Hauert is Associate Professor of Swarm Engineering at University of Bristol. She leads a team of 20 researchers working on making swarms for people, and across scales, from nanorobots for cancer treatment, to larger robots for environmental monitoring, or logistics. Previously she worked at MIT and EPFL. She is President and Executive Trustee of non-profits robohub.org and aihub.org, which connect the robotics and AI communities to the public.

]]>
Special drone collects environmental DNA from trees https://robohub.org/special-drone-collects-environmental-dna-from-trees/ Fri, 27 Jan 2023 10:32:48 +0000 https://robohub.org/?p=206443

By Peter Rüegg

Ecologists are increasingly using traces of genetic material left behind in the environment by living organisms, called environmental DNA (eDNA), to catalogue and monitor biodiversity. Based on these DNA traces, researchers can determine which species are present in a certain area.

Obtaining samples from water or soil is easy, but other habitats – such as the forest canopy – are difficult for researchers to access. As a result, many species remain untracked in poorly explored areas.

Researchers at ETH Zurich, the Swiss Federal Institute for Forest, Snow and Landscape Research WSL, and the company SPYGEN have partnered to develop a special drone that can autonomously collect samples from tree branches.

(Video: ETH Zürich)

How the drone collects material

The drone is equipped with adhesive strips. When the aircraft lands on a branch, material from the branch sticks to these strips. Researchers can then extract DNA in the lab, analyse it and assign it to genetic matches of the various organisms using database comparisons.

But not all branches are the same: they vary in terms of their thickness and elasticity. Branches also bend and rebound when a drone lands on them. Programming the aircraft in such a way that it can still approach a branch autonomously and remain stable on it long enough to take samples was a major challenge for the roboticists.

“Landing on branches requires complex control,” explains Stefano Mintchev, Professor of Environmental Robotics at ETH Zurich and WSL. Initially, the drone does not know how flexible a branch is, so the researchers fitted it with a force sensing cage. This allows the drone to measure this factor at the scene and incorporate it into its flight manoeuvre.
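As a rough illustration of the idea (the function names and numbers below are assumptions, not the ETH Zurich implementation), the measurement reduces to estimating a spring constant from the force cage and gating the landing decision on it:

def branch_stiffness(force_n: float, deflection_m: float) -> float:
    # Approximate spring constant k = F / x from one probing contact.
    return force_n / max(deflection_m, 1e-6)

def commit_to_landing(k: float, k_min: float = 25.0) -> bool:
    # Only land if the branch is stiff enough to stay stable under the drone;
    # the 25 N/m threshold is an assumed placeholder.
    return k >= k_min

k = branch_stiffness(force_n=0.8, deflection_m=0.02)  # 40 N/m
print(commit_to_landing(k))                           # True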

Scheme: DNA is extracted from the collected branch material, amplified, sequenced and the sequences found are compared with databases. This allows the species to be identified. (Graphic: Stefano Mintchev / ETH Zürich)

Preparing rainforest operations at Zoo Zurich

Researchers have tested their new device on seven tree species. In the samples, they found DNA from 21 distinct groups of organisms, or taxa, including birds, mammals and insects. “This is encouraging, because it shows that the collection technique works,” says Mintchev, who co-authored the study that has appeared in the journal Science Robotics.

The researchers now want to improve their drone further to get it ready for a competition in which the aim is to detect as many different species as possible across 100 hectares of rainforest in Singapore in 24 hours.

To test the drone’s efficiency under conditions similar to those it will experience at the competition, Mintchev and his team are currently working at Zoo Zurich’s Masoala Rainforest. “Here we have the advantage of knowing which species are present, which will help us to better assess how thorough we are in capturing all eDNA traces with this technique or if we’re missing something,” Mintchev says.

For this event, however, the collection device must become more efficient and faster to deploy. In the tests in Switzerland, the drone collected material from seven trees in three days; in Singapore, it must be able to fly to and collect samples from ten times as many trees in just one day.

Collecting samples in a natural rainforest, however, presents the researchers with even tougher challenges. Frequent rain washes eDNA off surfaces, while wind and clouds impede drone operation. “We are therefore very curious to see whether our sampling method will also prove itself under extreme conditions in the tropics,” Mintchev says.

]]>
The robots of CES 2023 https://robohub.org/the-robots-of-ces-2023/ Wed, 25 Jan 2023 10:44:25 +0000 https://svrobo.org/?p=24647

Robots were on the main expo floor at CES this year, and these weren’t just cool robots for marketing purposes. I’ve been tracking robots at CES for more than 10 years, watching the transition from robot toys to real robots. Increasing prominence has been given to self-driving cars, LiDARs and eVTOL drones but, in my mind, it was really the inclusion of John Deere and agricultural robots last year that confirmed that CES was incorporating more industry, more real machines, not just gadgets.
In fact, according to the organizing association CTA or the Consumer Technology Association, these days CES no longer stands for the Consumer Electronics Show. CES now just stands for CES, one of the world’s largest technology expos.

Eve from Halodi Robotics shakes hands at CES 2023 with Karinne Ramirez-Amaro, associate professor at Chalmers University of Technology and head of IEEE Robotics and Automation Society’s Women in Engineering chapter. (Image source: Andra Keay)


The very first robot I saw was Eve from Halodi Robotics, exhibiting in the ADT Commercial booth. I am a big fan of this company. Not only do they have great robotics technology, which is very safe and collaborative, but I’ve watched them go from an angel-funded startup to their first commercial deployments, providing 140 robots to ADT. One of their secrets has been spending the last year working closely with ADT to fine-tune the first production features of Eve, focusing on indoor security and working alongside people. In the future, Halodi has potential for many other applications including eldercare.

Another robot company (and robot) that I’m a big fan of is Labrador Robotics, with their mobile tray-fetching robot for eldercare. Labrador exhibited their mobile robot in the AARP Innovation Lab pavilion, and are rolling out robots both in houses and in aged care facilities. There are two units pictured; the platform can raise or lower depending on whether it needs to reach a countertop or fridge to retrieve items, like drinks and medications, or to lower itself to become a bed- or chair-side table. The units can be commanded by voice or tablet, or scheduled to travel around designated ‘bus stops’, using advanced localization and mapping. The team at Labrador have a wealth of experience from other consumer robotics companies.

Two Retrievers from Labrador Robotics in the AARP Innovation Lab Pavilion at CES 2023. (Image source: Andra Keay)


I first met Sampriti Battacharya, pictured below with her autonomous electric boat, when she was still doing her robotics PhD at MIT, dreaming about starting her own company. Five short years later, she’s now the proud founder of Navier with not one but two working prototypes of the ‘boat of the future’. The Navier 30 is a 30’ long electric intelligent hydrofoil with a range of 75 nautical miles and a top speed of 35 knots. Not only is the electric hydrofoil 90% more energy efficient than traditional boats but it eliminates sea sickness with a super smooth ride. Sampriti’s planning to revolutionize public transport for cities that span waterways, like San Francisco or Boston or New York.

Navier’s ‘boat of the future’ with founder Sampriti Battacharya, plus an extra stowaway quadruped robot from Unitree. Image source: Andra Keay


Another rapidly evolving robotics company is Yarbo. Starting out as a prototype snow-blowing robot, after five years of solid R&D, Snowbot has evolved into the Yarbo modular family of smart yard tools. Imagine a smart docking mobile base which can be turned from a lawn mower into a snow blower or a leaf blower. It can navigate autonomously, and it’s electric of course.

And these robotics companies are making waves at CES. I met French startup Acwa Robotics earlier in 2022 and was so impressed that I nominated them as an IEEE Entrepreneurship Star. Water utilities around the world are faced with aging and damaged infrastructure and inaccessible underground pipes, responsible for huge amounts of water loss and for road and building damage. Acwa’s intelligent robot travels inside the pipes without stopping water flow and provides rapid, precisely localized inspection data that can pinpoint leaks, cracks and deterioration. Acwa was nominated for honors in the Smart Cities category and also won a CES Best of Innovation Award.

Acwa Robotics and CES 2023 Best of Innovation Awards (Image Source: Acwa Robotics)


Some other robotics companies and startups worth looking at were Apex.ai, Caterpillar, Unitree, Bosch Rexroth, Waymo, Zoox, Whill, Meropy, Artemis Robotics, Fluent Pet and Orangewood. Let me know who I missed! According to the app, 278 companies tagged themselves as Robotics, 73 as Drones, 514 as Vehicle Tech, and 722 as Startups. I’d say the overall number of exhibitors and attendees was down on previous years, though there were definitely more robots.

]]>
Robotics research: How Asia, Europe and America invest – Global Report 2023 https://robohub.org/robotics-research-how-asia-europe-and-america-invest-global-report-2023/ Sun, 22 Jan 2023 09:11:44 +0000 https://robohub.org/?p=206391 Countries around the world invest in robotics to support developments in industry and society. What are the exact targets of the robotics research and development (R&D) funding programs officially driven by governments in Asia, Europe and America today? This has been researched by the International Federation of Robotics and published in the 2023 update of the paper “World Robotics R&D Programs”.

© Pixabay

“The 3rd version of World Robotics R&D Programs covers the latest funding developments including updates in 2022,” says Prof. Dr. Jong-Oh Park, Vice-Chairman IFR Research Committee and member of the Executive Board.

The overview shows that the most advanced robotics countries in terms of annual installations of industrial robots – China, Japan, USA, South Korea, Germany – and the EU drive very different R&D strategies:

Robotics R&D programs – officially driven by governments

In China, the “14th Five-Year Plan” for Robot Industry Development, released by the Ministry of Industry and Information Technology (MIIT) in Beijing on 21st December 2021, focuses on promoting innovation. The goal is to make China a global leader for robot technology and industrial advancement. Robotics is included in 8 key industries for the next 5 years. In order to implement national science and technology innovation arrangements, the key special program “Intelligent Robots” was launched under the National Key R&D Plan on 23rd April 2022 with a funding of 43.5 million USD. The recent statistical yearbook “World Robotics” by IFR shows that China reached a robot density of 322 units per 10,000 workers in the manufacturing industry: The country ranks 5th worldwide in 2021 compared to 20th (140 units) in 2018.

In Japan, the “New Robot Strategy” aims to make the country the world’s number one robot innovation hub. More than 930.5 million USD in support has been provided by the Japanese government in 2022. Key sectors are manufacturing (77.8 million USD), nursing and medical (55 million USD), infrastructure (643.2 million USD) and agriculture (66.2 million USD). The action plan for manufacturing and service includes projects such as autonomous driving, advanced air mobility or the development of integrated technologies that will be the core of next-generation artificial intelligence and robots. A budget of 440 million USD was allocated to robotics-related projects in the “Moonshot Research and Development Program” over a period of 5 years from 2020 to 2025. According to the statistical yearbook “World Robotics” by IFR, Japan is the world’s number one industrial robot manufacturer and delivered 45% of the global supply in 2021.

The 3rd Basic Plan on Intelligent Robots of South Korea is pushing to develop robotics as a core industry in the fourth industrial revolution. The Korean government allocated 172.2 million USD in funding for the “2022 Implementation Plan for the Intelligent Robot”. From 2022 to 2024 a total of 7.41 million USD is planned in funding for the “Full-Scale Test Platform Project for Special-Purpose Manned or Unmanned Aerial Vehicles”. The statistical yearbook “World Robotics” showed an all-time high of 1,000 industrial robots per 10,000 employees in 2021. This makes Korea the country with the highest robot density worldwide.

Horizon Europe is the European Union’s key research and innovation framework program with a budget of 94.30 billion USD for seven years (2021-2027). Top targets are: strengthening the EU’s scientific and technological bases, boosting Europe’s innovation capacity, competitiveness and jobs as well as delivering on citizens’ priorities and sustaining socio-economic models and values. The European Commission provides total funding of 198.5 million USD for the robotics-related work program 2021-2022.

Germany’s High-Tech Strategy 2025 (HTS) is the fourth edition of the German R&D and innovation program. The German government will provide around 69 million USD annually until 2026 – a total budget of 345 million USD for five years. As part of the HTS 2025 mission, the program “Shaping technology for the people” was launched. This program aims to harness technological change for the benefit of people, both in society as a whole and in the world of work. Research topics are: digital assistance systems such as data glasses, human-robot collaboration, exoskeletons to support employees in their physical work, but also solutions for the more flexible organization of work processes or the support of mobile work. According to the report “World Robotics” by IFR, Germany is the largest robot market in Europe – its robot density ranks 4th worldwide with 397 units per 10,000 employees.

The National Robotics Initiative (NRI) in the USA was launched for fundamental robotics R&D supported by the US government. The NRI-3.0 program, announced in February 2021, seeks research on integrated robot systems and builds upon the previous NRI programs. The US government supported the NRI-3.0 fund to the sum of 14 million USD in 2021. Collaboration among academics, industry, government, non-profit, and other organizations is encouraged. The “Moon to Mars” project by NASA for example highlights objectives to establish a long-term presence in the vicinity of and on the moon. The projects target research and technology development that will significantly increase the performance of robots to collaboratively support deep space human exploration and science missions. For the Artemis lunar program, the US government is planning to allocate a budget of 35 billion USD from 2020 to 2024. The statistical yearbook “World Robotics” by IFR shows that robot density in the United States rose from 255 units in 2020 to 274 units in 2021. The country ranks 9th in the world. Regarding annual installations of industrial robots, the USA takes 3rd position.

]]>
Robot Talk Episode 33 – Interview with Dan Stoyanov https://robohub.org/robot-talk-episode-33-interview-with-dan-stoyanov/ Fri, 20 Jan 2023 12:00:53 +0000 https://robohub.org/?p=206398

Claire chatted to Professor Dan Stoyanov from University College London all about robotic vision, surgical robotics, and artificial intelligence.

Dan Stoyanov, FREng, FIET, is a Professor at UCL Computer Science holding a Royal Academy of Engineering Chair in Emerging Technologies. He is Director of the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), a large research centre combining engineering and clinical expertise. His research interests are focused on surgical robotics, surgical data science and the development of surgical AI systems for clinical use. He is Chief Scientist at Digital Surgery, and he co-founded Odin Vision Ltd as a UCL spin-out focused on AI in gastroenterology.

]]>
When a professor meets a farmer https://robohub.org/when-a-professor-meets-a-farmer/ Thu, 19 Jan 2023 09:06:48 +0000 http://robohub.org/?guid=b527b60e578d5cffef0a62deb2f0fb30


]]>

Robot developments and the study of social processes can happen side-by-side in RoboHouse, because we feel that technology should learn to look beyond its own horizons if we aim to make the workplace more attractive. Why are people leaving the jobs they used to love? What’s going on in crucial sectors like healthcare, agriculture and manufacturing?

To explore these questions, we go into the field with scientists and innovators, under the banner of FRAIM, our new transdisciplinary research centre dedicated to the future of work. What do robot specialists notice when they travel to places where people and robots work together?

Our latest instalment of FRAIM in the Field follows Maria Luce Lupetti as she meets with Henk Verdegaal on a grey November day. Last August, Verdegaal, a flower bulb farmer in the Netherlands, finally saw what an agricultural robot could do on his lands. The Agbot by developer AgXeed was humming along, managed by a “smart and ready to use autonomy system with a full suite of vehicle peripherals.”

As the Agbot demonstration progressed, flower bulb farmer Henk Verdegaal became more and more convinced of the potential of this particular way of implementing robotics: “I feel this could be deployed for all things related to light ground work. The only issue is: can the system run for enough hours to make it worthwhile financially. That seems hard at the moment.”

Henk Verdegaal experiments with smart technology to reduce his use of pesticides. He expects that systems like Agbot can also reduce his reliance on labour and liberate him from field work, so that he can focus on more important processes. Drones, however, have so far failed to impress Verdegaal. Connectivity issues caused the drone to lose its way and communicate poorly with the camera.

Assistant professor Maria Luce Lupetti, specialised in critical design for AI systems at TU Delft, arrives at a sobering insight during FRAIM in the Field: “In a place like a farm there are clear problems like not finding people to drive the truck. So it automatically makes you think: ‘OK, you make it autonomous. You have a clear need for that, the technology is there.’ But there are reasons why people have a hard time finding workers. These problems are systemic. There are financial issues, there are sustainability issues. There is a pressing housing crisis that makes the price of the land rise. A lot of different forces are coming together to influence the work of people on a farm.”

Watch the rest of the series here, or on our YouTube channel.



]]>
New robots in Europe can be workers’ best friends https://robohub.org/new-robots-in-europe-can-be-workers-best-friends/ Tue, 17 Jan 2023 10:53:01 +0000 https://robohub.org/?p=206374

Researchers are ushering in a new way of thinking about robots in the workplace based on the idea of robots and workers as teammates rather than competitors. © BigBlueStudio, Shutterstock

For decades, the arrival of robots in the workplace has been a source of public anxiety over fears that they will replace workers and create unemployment.

Now that more sophisticated and humanoid robots are actually emerging, the picture is changing, with some seeing robots as promising teammates rather than unwelcome competitors.

‘Cobot’ colleagues

Take Italian industrial-automation company Comau. It has developed a robot that can collaborate with – and enhance the safety of – workers in strict cleanroom settings in the pharmaceutical, cosmetics, electronics, food and beverage industries. The innovation is known as a “collaborative robot”, or “cobot”.

Comau’s arm-like cobot, which is designed for handling and assembly tasks, can automatically switch from an industrial to a slower speed when a person enters the work area. This new feature allows one robot to be used instead of two, maximising productivity and protecting workers.

‘It has advanced things by allowing a dual mode of operation,’ said Dr Sotiris Makris, a roboticist at the University of Patras in Greece. ‘You can either use it as a conventional robot or, when it is in collaborative mode, the worker can grab it and move it around as an assisting device.’

Makris was coordinator of the just-completed EU-funded SHERLOCK project, which explored new methods for safely combining human and robot capabilities from what it regarded as an often overlooked research angle: psychological and social well-being.

Creative and inclusive

Robotics can help society by carrying out repetitive, tedious tasks, freeing up workers to engage in more creative activities. And robotic technologies that can collaborate effectively with workers could make workplaces more inclusive, such as by aiding people with disabilities.

“There is increasing competition around the globe, with new advances in robotics.”

– Dr Sotiris Makris, SHERLOCK

These opportunities are important to seize as the structure and the age profile of the European workforce changes. For example, the proportion of 55-to-64-year-olds increased from 12.5% of the EU’s employees in 2009 to 19% in 2021.

Alongside the social dimension, there is also economic benefit from greater industrial efficiency, showing that neither necessarily needs to come at the expense of the other.

‘There is increasing competition around the globe, with new advances in robotics,’ said Makris. ‘That is calling for actions and continuous improvement in Europe.’

Makris cites the humanoid robots being developed by Elon Musk-led car manufacturer Tesla. Wearable robotics, bionic limbs and exoskeleton suits are also being developed that promise to enhance people’s capabilities in the workplace.

Still, the rapidly advancing wave of robotics poses big challenges when it comes to ensuring they are effectively integrated into the workplace and that people’s individual needs are met when working with them. 

Case for SHERLOCK

SHERLOCK also examined the potential for smart exoskeletons to support workers in carrying and handling heavy parts at places such as workshops, warehouses or assembly sites. Wearable sensors and AI were used to monitor and track human movements.

With this feedback, the idea is that the exoskeleton can then adapt to the needs of the specific task while helping workers retain an ergonomic posture to avoid injury.
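A minimal Python sketch of that feedback loop (the threshold and names below are assumptions for illustration, not SHERLOCK’s actual models) might look like this:

def assistance_mode(trunk_flexion_deg: float, threshold_deg: float = 20.0) -> str:
    # Wearable sensors report trunk flexion; beyond an assumed ergonomic
    # threshold, the exoskeleton should increase its support.
    return "increase support" if trunk_flexion_deg > threshold_deg else "nominal"

print(assistance_mode(35.0))  # increase support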

‘Using sensors to collect data from how the exoskeleton performs allowed us to see and better understand the human condition,’ said Dr Makris. ‘This allowed us to have prototypes on how exoskeletons need to be further redesigned and developed in the future, depending on different user profiles and different countries.’

SHERLOCK, which has just ended after four years, brought together 18 European organisations in multiple countries from Greece to Italy and the UK working on different areas of robotics.

The range of participants enabled the project to harness a wide variety of perspectives, which Dr Makris said was also beneficial in the light of differing national rules on integrating robotics technology.

As a result of the interaction of these robotic systems with people, the software is advanced enough to give direction to ‘future developments on the types of features to have and how the workplace should be designed,’ said Dr Makris.

Old hands, new tools

Another EU-funded project that ended this year, CO-ADAPT, used cobots to help older people navigate the digitalised workplace.

“You find interesting differences in how much the machine and how much the person should do.”

– Prof Giulio Jacucci, CO-ADAPT

The project team developed a cobot-equipped adaptive workstation to aid people in assembly tasks, such as making a phone, car or toy – or, indeed, combining any set of individual components into a finished product during manufacturing. The station can adapt workbench height and lighting to a person’s physical characteristics and visual abilities. It also includes features like eye-tracking glasses to gather information on mental workload.
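As a purely illustrative sketch (the heuristics below are assumptions, not CO-ADAPT’s software), the adaptation amounts to mapping a worker profile onto workstation settings:

def workstation_settings(height_cm: float, visual_acuity: float) -> dict:
    bench_cm = 0.55 * height_cm                      # rough elbow-height heuristic (assumed)
    lux = min(500 / max(visual_acuity, 0.3), 1500)   # brighter lighting for weaker eyesight
    return {"bench_height_cm": round(bench_cm, 1), "lighting_lux": round(lux)}

print(workstation_settings(height_cm=172, visual_acuity=0.8))
# {'bench_height_cm': 94.6, 'lighting_lux': 625}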

That brings more insight into what all kinds of people need, said Professor Giulio Jacucci, coordinator of CO-ADAPT and a computer scientist at the University of Helsinki in Finland.

‘You find interesting differences in how much the machine and how much the person should do, as well as how much the machine should try to give guidance and how,’ Jacucci said. ‘This is important work that goes down to the nuts and bolts of making this work.’

Still, cobot-equipped workplaces that can fully tap into and respond to people’s mental states in real-life settings could still be a number of years away, he said.

‘It’s so complex because there’s the whole mechanical part, plus trying to understand people’s status from their psychophysiological states,’ said Prof Jacucci.

Meanwhile, because new technologies can be used in much simpler ways to improve the workplace, CO-ADAPT also explored digitalisation more broadly.

Smart shifts

One area was software that enables ‘smart-shift scheduling’, which arranges duty periods for workers based on their personal circumstances. The approach has been shown to reduce sick leave, stress and sleep disorders among social welfare and health care workers.

‘It’s a fantastic example of how workability improves because we use evidence-based knowledge of how to have well-being-informed schedules,’ said Prof Jacucci.

Focusing on the individual is key to the future of well-integrated digital tools and robotics, he said.

‘Let’s say you have to collaborate with some robot in an assembly task,’ he said. ‘The question is: should the robot be aware of my cognitive and other abilities? And how should we divide the task between the two?’

The basic message from the project is that plenty of room exists to improve and broaden working environments.

‘It shows how much untapped potential there is,’ said Prof Jacucci.


This article was originally published in Horizon, the EU Research and Innovation magazine.

Research in this article was funded by the EU. If you liked this article, please consider sharing it on social media.

]]>
Interview with Hae-Won Park, Seungwoo Hong and Yong Um about MARVEL, a robot that can climb on various inclined steel surfaces https://robohub.org/interview-with-hae-won-park-seungwoo-hong-and-yong-um-about-marvel-a-robot-that-can-climb-on-various-inclined-steel-surfaces/ Sun, 15 Jan 2023 09:48:13 +0000 https://robohub.org/?p=206343

Prof. Hae-Won Park (left), Ph.D. Student Yong Um (centre), Ph.D. Student Seungwoo Hong (right). Credits: KAIST

We had the chance to interview Hae-Won Park, Seungwoo Hong and Yong Um, authors of the paper “Agile and versatile climbing on ferromagnetic surfaces with a quadrupedal robot”, recently published in Science Robotics.

What is the topic of the research in your paper?
The main topic of our work is that the robot we have developed can move agilely, not only on flat ground but also on vertical walls and ceilings made of ferromagnetic materials. Also, it has the ability to perform dexterous maneuvers such as crossing gaps, overcoming obstacles, and transitioning upon corners.

Could you tell us about the implications of your research and why it is an interesting area for study?
Such agile and dexterous locomotion capabilities will be able to expand the robot’s operational workspace and approach places that are difficult or dangerous for human operators to access directly. For example, inspection and welding operations in heavy industries such as shipbuilding, steel bridges, and storage tanks.

Could you explain your methodology? What were your main findings?
Our magnet foot can switch its on/off state in a short period of time (5 ms) and in an energy-efficient way, thanks to the novel geometry design of the EPM. At the same time, the magnet foot can provide large holding forces in both shear and normal directions due to the MRE footpad. Also, our actuators can provide balanced speed/torque characteristics, high-bandwidth torque control capability, and the ability to mediate high impulsive force. To control vertical and inverted locomotion as well as various versatile motions, we have utilized a control framework (model predictive control) that can generate reliable and robust reaction forces to track desired body motions in 3D space while preventing slippage or tipping over. We found that all the elements mentioned earlier are imperative for performing dynamic maneuvers against gravity.
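To make the no-slip condition concrete, here is a simplified Python check (an illustration under assumed friction and adhesion values, not the authors’ implementation): a contact force is safe if its shear component stays inside a friction cone that the magnetic adhesion enlarges by adding to the normal load.

import numpy as np

def contact_is_safe(f: np.ndarray, normal: np.ndarray,
                    mu: float = 0.6, f_adhesion: float = 30.0) -> bool:
    n = normal / np.linalg.norm(normal)
    f_n = float(f @ n)                        # component pressing into the surface
    f_t = float(np.linalg.norm(f - f_n * n))  # tangential (shear) component
    # Unilateral contact plus Coulomb friction, with adhesion added to the
    # normal load (mu and f_adhesion are assumed values):
    return (f_n + f_adhesion) > 0 and f_t <= mu * (f_n + f_adhesion)

# A foot on a vertical wall being pulled outward (negative normal force)
# can still hold, because adhesion keeps the effective normal load positive:
print(contact_is_safe(np.array([-5.0, 0.0, 12.0]), np.array([1.0, 0.0, 0.0])))  # True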

What further work are you planning in this area?
So far, the robot is able to move on smooth surfaces with moderate curvature. To enable the robot to move on irregularly shaped surfaces, we are working on a compliant integration of multiple miniaturized EPMs with MRE footpads that can increase the effective contact area to provide robust adhesion. Also, a vision system with high-level navigation algorithms will be included to enable the robot to move autonomously in the near future.

About the authors

Hae-Won Park received the B.S. and M.S. degrees from Yonsei University, Seoul, South Korea, in 2005 and 2007, respectively, and the Ph.D. degree from the University of Michigan, Ann Arbor, MI, USA, in 2012, all in mechanical engineering. He is an Associate Professor of mechanical engineering with the Korea Advanced Institute of Science and Technology, Daejeon, South Korea. His research interests include the intersection of control, dynamics, and mechanical design of robotic systems, with special emphasis on legged locomotion robots. Dr. Park is the recipient of the 2018 National Science Foundation (NSF) CAREER Award, the NSF’s most prestigious award in support of early-career faculty.

Seungwoo Hong received the B.S. degree from Shanghai Jiao Tong University, Shanghai, China, in July 2014, and the M.S. degree from Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in August 2017, all in mechanical engineering. He is currently a Ph.D. candidate with the Department of Mechanical Engineering, KAIST, Daejeon, Korea. His current research interests include model-based optimization, motion planning and control of legged robotic systems.

Yong Um received the B.S. degree in mechanical engineering from the Korea Advanced Institute of Science and Technology, Daejeon, South Korea, in 2020. He is currently working toward the Ph.D. degree in mechanical engineering at the Korea Advanced Institute of Science and Technology. His research interests include mechanical system and magnetic device design for legged robots.


]]>
CES unveiled 2023 all access with Brian Tong (+ interviews & keynotes) https://robohub.org/ces-unveiled-2023-all-access-with-brian-tong-interviews-keynotes/ Sat, 14 Jan 2023 08:44:26 +0000 https://robohub.org/?p=206340

CES Unveiled was jam packed with the latest and greatest tech from companies from all over the world. Get a behind-the-scenes look with Brian Tong at the innovations we saw.

C Space Studio & Anchor Desk interviews

Keynotes & Insider look

]]>
Robot Talk Episode 32 – Interview with Mollie Claypool https://robohub.org/robot-talk-episode-32-interview-with-mollie-claypool/ Fri, 13 Jan 2023 12:15:55 +0000 https://robohub.org/?p=206357

Claire chatted to Mollie Claypool from Automated Architecture about robot house-building, zero-carbon architecture, and community participation.

Mollie Claypool is CEO of AUAR Ltd, a tech company revolutionising house building using automation. Mollie is a leading architecture theorist focused on issues of social justice highlighted by increasing automation in architecture and design production. She is also Associate Professor in Architecture at The Bartlett School of Architecture, UCL.

]]>
Get ready to robot! Robot drawing and story competitions for primary schoolchildren now officially open for entries https://robohub.org/get-ready-to-robot-robot-drawing-and-story-competitions-for-primary-schoolchildren-now-officially-open-for-entries/ Thu, 12 Jan 2023 08:27:22 +0000 https://robohub.org/?p=206325

The EPSRC UK Robotics and Autonomous Systems (UK-RAS) Network is pleased to announce the official launch of its 2023 competitions, inviting the UK’s primary schoolchildren to share their creative robot designs and imaginative stories with a panel of experts, for a chance to win some unique prizes. These annual competitions, which have proved hugely popular with budding authors and illustrators nationwide, are now returning for the fourth year.

The “Draw A Robot” competition challenges children in Key Stage 1 (aged 5-7 years old) to design a robot that they’d like to see in the future. Children can use whichever drawing materials they prefer — paper, pens, pencils, paints, crayons, or even natural materials — to create their ideal robot design, and the robot can be designed to perform any task or job. Competition participants will be able to explain their robot’s functions by labelling gadgets and features on the drawing and writing a short design spec.

For the “Once Upon A Robot” writing competition, Key Stage 2 children (aged 7-11 years old) are invited to write an imaginative short story featuring any kind of robot – or robots – their imagination can conjure! Children will have up to 800 words to tell their creative robot tales and they can choose any literary genre they like. It could be a spine-tingling horror, an action-packed adventure, or even a light-hearted comedy.

ZOOG by Matilde Facchini, age 7 (Draw a Robot 2022 Winner)

Plan-o-bot by Tehmina Walker, age 6 (Draw a Robot 2021 Winner)

The two competitions will be judged by robotics experts from the organising EPSRC UK-RAS Network, plus two very special invited judges. The writing competition will be judged this year by award-winning author Sharna Jackson, whose inspiring and mystifying books include High-Rise Mystery and The Good Turn. The drawing competition will be judged by internationally acclaimed Anglo-American author, illustrator and artist Ted Dewan, creator of the Emmy-Award-winning animated television series Bing.

This year’s exclusive prize packages include:

Draw A Robot Competition winner

  • Thames & Kosmos Coding and Robotics kit – contributed by competition partner the University of Sheffield Advanced Manufacturing Research Centre (AMRC)
  • A tour of the AMRC’s Factory 2050 in Sheffield, the UK’s first state-of-the-art factory dedicated to conducting collaborative research into reconfigurable digitally assisted assembly, component manufacturing and machining technologies
  • A copy of the book “The Sorcerer’s Apprentice”, signed by competition judge Ted Dewan

Draw A Robot Competition runner-up

  • 4M Green Science Solar Hybrid Power Aqua Robot – contributed by competition partner the UKRI Trustworthy Autonomous Systems (TAS) Hub
  • A copy of the book “Top Secret”, signed by competition judge Ted Dewan

Once Upon A Robot Competition winner

  • Lego Mindstorms Robot Inventor kit – contributed by competition partner Birmingham Extreme Robotics Lab
  • A tour of the Extreme Robotics Lab in Birmingham and a robotics masterclass from RobotCoders for the winner and a friend
  • Printed copy of the winning story with bespoke illustrations by illustrator and science communicator Hana Ayoob
  • A copy of the books “The Good Turn” and “Black Artists Shaping the World”, signed by competition judge Sharna Jackson

Once Upon A Robot Competition runner-up

  • Maqueen Lite – micro:bit – contributed by competition partner The National Robotarium
  • A copy of the book “High-Rise Mystery”, signed by competition judge Sharna Jackson

For more information, details of prizes, judging criteria and to submit an entry, please visit https://www.ukras.org.uk/school-robot-competition/.

Both competitions are open for entry from the 10th January and will close for submissions on the 23rd April. The winners will be announced at a special virtual award ceremony due to be held on 22nd June 2023.

EPSRC UK-RAS Network Chair Prof. Robert Richardson says: “We are absolutely delighted to be launching these two fantastic competitions for primary schoolchildren for the fourth year running, which offer the next generation a creative way to engage with the exciting world of robotics and automation. We can’t wait to see the imagination and ingenuity that the nation’s young authors and artists bring to these challenges, and we look forward to the very enjoyable task of judging this year’s entries.”

The two creative competitions for young children were first launched in 2020 for UK Robotics Week, now the UK Festival of Robotics – a 7-day celebration of robotics and intelligent systems held at the end of June. This annual celebration is hosted by the EPSRC UK Robotics and Autonomous Systems (UK-RAS) Network, which provides academic leadership in robotics and coordinates activities at 35 partner universities across the UK.

]]>
Year end summary: Top Robocar stories of 2022 https://robohub.org/year-end-summary-top-robocar-stories-of-2022/ Tue, 10 Jan 2023 08:10:51 +0000 http://robohub.org/?guid=54fbdddc3d49b10964fc859a4b649f19

Here’s my annual summary of the top stories of the prior year. This time the news was a strong mix of bad and good.

Read the text story on Forbes.com at Robocars 2022 year in review.

And see the video version here:

]]>
Futuristic fields: Europe’s farm industry on cusp of robot revolution https://robohub.org/futuristic-fields-europes-farm-industry-on-cusp-of-robot-revolution/ Sun, 08 Jan 2023 09:50:20 +0000 https://robohub.org/?p=206281

Artificial intelligence is set to revolutionise agriculture by helping farmers meet field-hand needs and identify diseased plants. © baranozdemir, iStock

In the Dutch province of Zeeland, a robot moves swiftly through a field of crops including sunflowers, shallots and onions. The machine weeds autonomously – and tirelessly – day in, day out.

“Farmdroid” has made life a lot easier for Mark Buijze, who runs an organic farm with 50 cows and 15 hectares of land. Buijze is one of the very few owners of robots in European agriculture.

Robots to the rescue

His electronic field worker uses GPS and is multifunctional, switching between weeding and seeding. With the push of a button, all Buijze has to do is enter coordinates and Farmdroid takes it from there.

‘With the robot, the weeding can be finished within one to two days – a task that would normally take weeks and roughly four to five workers if done by hand,’ he said. ‘By using GPS, the machine can identify the exact location of where it has to go in the field.’
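As a rough sketch of what entering those coordinates amounts to (illustrative Python with assumed field dimensions, not Farmdroid’s software), a GPS-guided weeder can cover a field by following back-and-forth row waypoints:

def row_waypoints(width_m: float, length_m: float, row_spacing_m: float):
    # Generate a boustrophedon (back-and-forth) coverage path over the field.
    waypoints, x, heading_up = [], 0.0, True
    while x <= width_m:
        ys = (0.0, length_m) if heading_up else (length_m, 0.0)
        waypoints += [(x, ys[0]), (x, ys[1])]
        x += row_spacing_m
        heading_up = not heading_up
    return waypoints

print(row_waypoints(width_m=2.0, length_m=100.0, row_spacing_m=0.5)[:4])
# [(0.0, 0.0), (0.0, 100.0), (0.5, 100.0), (0.5, 0.0)]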

About 12 000 years ago, the end of foraging and start of agriculture heralded big improvements in people’s quality of life. Few sectors have a history as rich as that of farming, which has evolved over the centuries in step with technological advancements.

In the current era, however, agriculture has been slower than other industries to follow one tech trend: artificial intelligence (AI). While already commonly used in forms ranging from automated chatbots and face recognition to car braking and warehouse controls, AI for agriculture is still in the early stages of development.

Now, advances in research are spurring farmers to embrace robots by showing how they can do everything from meeting field-hand needs to detecting crop diseases early.

Lean and green

For French agronomist Bertrand Pinel, farming in Europe will require far greater use of robots to be productive, competitive and green – three top EU goals for a sector whose output is worth around €190 billion a year.

“Labour is one of the biggest obstacles in agriculture.”

– Frits van Evert, ROBS4CROPS

One reason for using robots is the need to forgo the use of herbicides by eliminating weeds the old-fashioned way: mechanical weeding, a task that is not just mundane but also arduous and time consuming. Another is the frequent shortage of workers to prune grapevines.

‘In both cases, robots would help,’ said Pinel, who is research and development project manager at France-based Terrena Innovation. ‘That is our idea of the future for European agriculture.’

Pinel is part of the EU-funded ROBS4CROPS project. With some 50 experts and 16 institutional partners involved, it is pioneering a robot technology on participating farms in the Netherlands, Greece, Spain and France.

‘This initiative is quite innovative,’ said Frits van Evert, coordinator of the project. ‘It has not been done before.’

In the weeds

AI in agriculture looks promising for tasks that need to be repeated throughout the year such as weeding, according to van Evert, a senior researcher in precision agriculture at Wageningen University in the Netherlands.

‘If you grow a crop like potatoes, typically you plant the crop once per year in the spring and you harvest in the fall, but the weeding has to be done somewhere between six and 10 times per year,’ he said.

Plus, there is the question of speed. Often machines work faster than any human being can.

“With this robot everything is done in the field.”

– Francisco Javier Nieto De Santos, FLEXIGROBOTS

Francisco Javier Nieto De Santos, coordinator of the EU-funded FLEXIGROBOTS project, is particularly impressed by a model robot that takes soil samples. When done by hand, this practice requires special care to avoid contamination, delivery to a laboratory and days of analysis.

‘With this robot everything is done in the field,’ De Santos said. ‘It can take several samples per hour, providing results within a matter of minutes.’

Eventually, he said, the benefits of such technologies will extend beyond the farm industry to reach the general public by increasing the overall supply of food.

Unloved labour

Meanwhile, agricultural robots may be in demand not because they can work faster than any person but simply because no people are available for the job.

Even before inflation rates and fertiliser prices began to surge in 2021 amid an energy squeeze made worse by Russia’s invasion of Ukraine in 2022, farmers across Europe were struggling on another front: finding enough field hands, including seasonal workers.

‘Labour is one of the biggest obstacles in agriculture,’ said van Evert. ‘It’s costly and hard to get these days because fewer and fewer people are willing to work in agriculture. We think that robots, such as self-driving tractors, can take away this obstacle.’

The idea behind ROBS4CROPS is to create a robotic system where existing agricultural machinery is upgraded so it can work in tandem with farm robots.

For the system to work, raw data such as images or videos must first be labelled by researchers in ways that can later be read by the AI.

Driverless tractors

The system then uses these large amounts of information to make “smart” decisions as well as predictions – think about the autocorrect feature on laptop computers and mobile phones, for example.

A farming controller comparable to the “brain” of the whole operation decides what needs to happen next or how much work remains to be done and where – based on information from maps or instructions provided by the farmer.

The machinery – self-driving tractors and smart implements like weeders equipped with sensors and cameras – gathers and stores more information as it works, becoming “smarter”.
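ROBS4CROPS has not published its controller code; the short Python sketch below, with hypothetical field-section names and weed-pressure numbers, illustrates the kind of “what needs to happen next, and where” decision the farming controller makes.

    # Toy sketch of a "farming controller" choosing where to send a weeding
    # robot next -- not ROBS4CROPS code; all names and numbers are invented.
    from dataclasses import dataclass

    @dataclass
    class FieldSection:
        name: str
        weeds_per_m2: float   # estimated weed pressure from sensor data
        done: bool = False

    def next_task(sections):
        """Pick the unfinished section with the heaviest weed pressure."""
        todo = [s for s in sections if not s.done]
        return max(todo, key=lambda s: s.weeds_per_m2, default=None)

    field = [
        FieldSection("north strip", weeds_per_m2=4.2),
        FieldSection("south strip", weeds_per_m2=1.1),
        FieldSection("east strip", weeds_per_m2=2.7, done=True),
    ]

    task = next_task(field)
    if task:
        print(f"Send weeding robot to: {task.name}")   # -> north strip

A real controller would also fold in map data, battery state and the farmer’s instructions, but the core loop – rank the remaining work, dispatch a machine – is the same.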

Crop protection

FLEXIGROBOTS, based in Spain, aims to help farmers use existing robots for multiple tasks including disease detection.

Take drones, for example. Because they can spot a diseased plant from the air, drones can help farmers detect sick crops early and prevent a wider infestation.

‘If you can’t detect diseases in an early stage, you may lose the produce of an entire field, the production of an entire year,’ said De Santos. ‘The only option is to remove the infected plant.’

For example, there is no treatment for the fungus known as mildew, so identifying and removing diseased plants early on is crucial.

Pooling information is key to making the whole system smarter, De Santos said. Sharing data gathered by drones with robots or feeding the information into models expands the “intelligence” of the machines.

Although agronomist Pinel doesn’t believe that agriculture will ever be solely reliant on robotics, he’s certain about their revolutionary impact.

‘In the future, we hope that the farmers can just put a couple of small robots in the field and let them work all day,’ he said.

Research in this article was funded by the EU.

Watch the video


This article was originally published in Horizon, the EU Research and Innovation magazine.

]]>
Smart ‘Joey’ bots could soon swarm underground to clean and inspect our pipes https://robohub.org/smart-joey-bots-could-soon-swarm-underground-to-clean-and-inspect-our-pipes/ Fri, 06 Jan 2023 12:40:12 +0000 https://robohub.org/?p=206288

Joey’s design. Image credit: TL Nguyen, A Blight, A Pickering, A Barber, GH Jackson-Mills, JH Boyle, R Richardson, M Dogar, N Cohen

By Mischa Dijkstra, Frontiers science writer

Researchers from the University of Leeds have developed the first mini-robot, called Joey, that can find its own way independently through networks of narrow pipes underground, to inspect any damage or leaks. Joeys are cheap to produce, smart, small, and light, and can move through pipes inclined at a slope or over slippery or muddy sediment at the bottom of the pipes. Future versions of Joey will operate in swarms, with their mobile base on a larger ‘mother’ robot Kanga, which will be equipped with arms and tools for repairs to the pipes.

Beneath our streets lies a maze of pipes: conduits for water, sewage, and gas. Regular inspection of these pipes for leaks, or their repair, normally requires them to be dug up. Excavation is not only onerous and expensive – with an estimated annual cost of £5.5bn in the UK alone – but causes disruption to traffic as well as nuisance to people living nearby, not to mention damage to the environment.

Now imagine a robot that can find its way through the narrowest of pipe networks and relay images of damage or obstructions to human operators. This is no longer a pipedream, as a study in Frontiers in Robotics and AI by a team of researchers from the University of Leeds shows.

“Here we present Joey – a new miniature robot – and show that Joeys can explore real pipe networks completely on their own, without even needing a camera to navigate,” said Dr Netta Cohen, a professor at the University of Leeds and the final author on the study.

Joey is the first robot that can navigate entirely by itself through mazes of pipes as narrow as 7.5 cm across. Weighing just 70 g, it’s small enough to fit in the palm of your hand.

Pipebots project

The present work forms part of the ‘Pipebots’ project of the universities of Sheffield, Bristol, Birmingham, and Leeds, in collaboration with UK utility companies and other international academic and industrial partners.

First author Dr Thanh Luan Nguyen, a postdoctoral scientist at the University of Leeds who developed Joey’s control algorithms (or ‘brain’), said: “Underground water and sewer networks are some of the least hospitable environments, not only for humans, but also for robots. Sat Nav is not accessible underground. And Joeys are tiny, so have to function with very simple motors, sensors, and computers that take little space, while the small batteries must be able to operate for long enough.”

Joey moves on 3D-printed ‘wheel-legs’ that roll through straight sections and walk over small obstacles. It is equipped with a range of energy-efficient sensors that measure its distance to walls, junctions, and corners; navigational tools; a microphone; and a camera with ‘spot lights’ to film faults in the pipe network and save the images. The prototype cost only £300 to produce.

Mud and slippery slopes

The team showed that Joey is able to find its way, without any instructions from human operators, through an experimental network of pipes including a T-junction, a left and right corner, a dead-end, an obstacle, and three straight sections. On average, Joey managed to explore about one meter of pipe network in just over 45 seconds.

To make life more difficult for the robot, the researchers verified that the robot easily moves up and down inclined pipes with realistic slopes. And to test Joey’s ability to navigate through muddy or slippery tubes, they also added sand and gooey gel (actually dishwashing liquid) to the pipes – again with success.

Importantly, the sensors are enough to allow Joey to navigate without the need to turn on the camera or use power-hungry computer vision. This saves energy and extends Joey’s current battery life. Whenever the battery runs low, Joey will return to its point of origin, to ‘feed’ on power.
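The study’s control code is not reproduced in this article; the toy Python function below, with assumed sensor names and thresholds, shows how far three distance readings alone can go at a junction.

    # Toy camera-free junction handling, loosely in the spirit of Joey's
    # sensor-only navigation -- not the Leeds team's code.
    def choose_turn(left_mm, front_mm, right_mm, wall_threshold_mm=100.0):
        """Pick a direction from three distance readings (a left-hand rule)."""
        if left_mm > wall_threshold_mm:
            return "turn_left"
        if front_mm > wall_threshold_mm:
            return "go_straight"
        if right_mm > wall_threshold_mm:
            return "turn_right"
        return "reverse"   # dead end: back out the way we came

    # Example: a T-junction with open left and right arms and a wall ahead
    print(choose_turn(left_mm=450.0, front_mm=60.0, right_mm=420.0))   # turn_left

Paired with a memory of junctions already visited, a systematic rule like this lets a robot explore a whole network without ever switching on its camera.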

Currently, Joeys have one weakness: they can’t right themselves if they inadvertently turn on their back, like an upside-down tortoise. The authors suggest that the next prototype will be able to overcome this challenge. Future generations of Joey should also be waterproof, to operate underwater in pipes entirely filled with liquid.

Joey’s future is collaborative

The Pipebots scientists aim to develop a swarm of Joeys that communicate and work together, operating from a larger ‘mother’ robot named Kanga. Kanga, currently under development and testing by some of the same authors at Leeds School of Computing, will be equipped with more sophisticated sensors and repair tools such as robot arms, and will carry multiple Joeys.

“Ultimately we hope to design a system that can inspect and map the condition of extensive pipe networks, monitor the pipes over time, and even execute some maintenance and repair tasks,” said Cohen.

“We envision the technology to scale up and diversify, creating an ecology of multi-species of robots that collaborate underground. In this scenario, groups of Joeys would be deployed by larger robots that have more power and capabilities but are restricted to the larger pipes. Meeting this challenge will require more research, development, and testing over 10 to 20 years. It may start to come into play around 2040 or 2050.” 

Top half: navigating through a T-junction in the pipe network. Bottom half: encountering an obstruction and turning back. Image credit: TL Nguyen, A Blight, A Pickering, A Barber, GH Jackson-Mills, JH Boyle, R Richardson, M Dogar, N Cohen

Top half: moving through sand, slippery goo, or mud. Bottom half: moving through pipe sloped at an angle. Image credit: TL Nguyen, A Blight, A Pickering, A Barber, GH Jackson-Mills, JH Boyle, R Richardson, M Dogar, N Cohen

]]>
Science Magazine robot videos 2022 (+ breakthrough of the year) https://robohub.org/science-magazine-robot-videos-2022-breakthrough-of-the-year/ Tue, 03 Jan 2023 12:01:48 +0000 https://robohub.org/?p=206272

Image generated by DALLE 2 using prompt “a hyperrealistic image of a robot watching robot videos on a laptop”

Did you manage to watch all the holiday robot videos of 2022? If you did but are still hungry for more, I have a few more videos from Science Magazine, released over the past year, featuring robotics research. Enjoy!

Extra: breakthrough of the year

]]>
Five ways drones will change the way buildings are designed https://robohub.org/five-ways-drones-will-change-the-way-buildings-are-designed/ Mon, 02 Jan 2023 10:00:07 +0000 https://robohub.org/?p=206252

elwynn/Shutterstock

By Paul Cureton (Senior Lecturer in Design (People, Places, Products), Lancaster University) and Ole B. Jensen (Professor of Urban Theory and Urban Design, Aalborg University)

Drones are already shaping the face of our cities – used for building planning, heritage, construction and safety enhancement. But, as studies by the UK’s Department for Transport have found, swathes of the public have a limited understanding of how drones might be practically applied.

It’s crucial that the ways drones are affecting our future are understood by the majority of people. As experts in design futures and mobility, we hope this short overview of five ways drones will affect building design offers some knowledge of how things are likely to change.

Infographic showcasing other ways drones will influence future building design. Nuri Kwon, Drone Near-Futures, Imagination Lancaster, Author provided

1. Creating digital models of buildings

Drones can take photographs of buildings, which are then used to build 3D models in computer-aided design software.

These models have accuracy to within a centimetre, and can be combined with other data, such as 3D scans of interiors using drones or laser scanners, in order to provide a completely accurate picture of the structure for surveyors, architects and clients.

Using these digital models saves time and money in the construction process by providing a single source that architects and planners can view.

2. Heritage simulations

Studio Drift are a multidisciplinary team of Dutch artists who have used drones to construct images through theatrical outdoor drone performances at damaged national heritage sites such as Notre-Dame in Paris, the Colosseum in Rome and Gaudí’s Sagrada Familia in Barcelona.

Drones could be used in the near-future in a similar way to help planners to visualise the final impact of restoration or construction work on a damaged or partially finished building.

3. Drone delivery

The arrival of drone delivery services will see significant changes to buildings in our communities, which will need to provide for docking stations at community hubs, shops and pick-up points.

Wingcopter are one of many companies trialling delivery drones. Akash 1997, CC BY-SA

There are likely to be landing pads installed on the roofs of residential homes and dedicated drone-delivery hubs. Research has shown that drones can help with the last mile of any delivery in the UK, Germany, France and Italy.

Architects of the future will need to add these facilities into their building designs.

4. Drones mounted with 3D printers

Two research projects have been experimenting with drones carrying mounted 3D printers: one from architecture, design, planning, and consulting firm Gensler, and another from a consortium led by Imperial College London (comprising University College London, University of Bath, University of Pennsylvania, Queen Mary University of London, and Technical University of Munich) together with Empa, the Swiss materials science laboratory. These drones would work at speed to construct emergency shelters or repair buildings at significant heights, without the need for scaffolding, or in difficult-to-reach locations, providing safety benefits.

Gensler have already used drones for wind turbine repair, and researchers at Imperial College are exploring bee-like drone swarms that work together to build structures from a blueprint. The drones coordinate with each other to follow a pre-defined path in a project called Aerial Additive Manufacturing. For now, the work is merely a demonstration of the technology, and is not being used on a specific building.

In the future, drones with mounted 3D printers could help create highly customised buildings at speed, but how this could change the workforce and the potential consequences for manual labour jobs is yet to be understood.

5. Agile surveillance

Drones offer new possibilities for surveillance away from the static, fixed nature of current systems such as closed circuit television.

Drones with cameras and sensors relying on complex software systems such as biometric indicators and “face recognition” will probably be the next level of surveillance applied by governments and police forces, as well as providing security monitoring for homeowners. Drones would likely be fitted with monitoring devices, which could communicate with security or police forces.

Drones used in this way could help our buildings become more responsive to intrusions, and adaptable to changing climates. Drones may move parts of the building such as shade-creating devices, following the path of the sun to stop buildings overheating, for example.


This article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
Robot Talk Podcast – November & December episodes (+ bonus winter treats) https://robohub.org/robot-talk-podcast-november-december-episodes-bonus-winter-treats/ Fri, 30 Dec 2022 09:30:54 +0000 https://robohub.org/?p=206245

Episode 24 – Gopal Ramchurn

Claire chatted to Gopal Ramchurn from the University of Southampton about artificial intelligence, autonomous systems and renewable energy.

Sarvapali (Gopal) Ramchurn is a Professor of Artificial Intelligence, Turing Fellow, and Fellow of the Institution of Engineering and Technology. He is the Director of the UKRI Trustworthy Autonomous Systems hub and Co-Director of the Shell-Southampton Centre for Maritime Futures. He is also a Co-CEO of Empati Ltd, an AI startup working on decentralised green hydrogen technologies. His research is about the design of Responsible Artificial Intelligence for socio-technical applications including energy systems and disaster management.

Episode 25 – Ferdinando Rodriguez y Baena

Claire chatted to Ferdinando Rodriguez y Baena from Imperial College London about medical robotics, robotic surgery, and translational research.

Ferdinando Rodriguez y Baena is Professor of Medical Robotics in the Department of Mechanical Engineering at Imperial College, where he leads the Mechatronics in Medicine Laboratory and the Applied Mechanics Division. He has been the Engineering Co-Director of the Hamlyn Centre, which is part of the Institute of Global Health Innovation, since July 2020. He is a founding member and great advocate of the Imperial College Robotics Forum, now the first point of contact for roboticists at Imperial College.

Episode 26 – Séverin Lemaignan

Claire chatted to Séverin Lemaignan from PAL Robotics all about social robots, behaviour, and robot-assisted human-human interactions.

Séverin Lemaignan is Senior Scientist at Barcelona-based PAL Robotics. He leads the Social Intelligence team, in charge of designing and developing the socio-cognitive capabilities of robots like PAL TIAGo and PAL ARI. He obtained his PhD in Cognitive Robotics in 2012 from the CNRS/LAAS and the Technical University of Munich, and worked at Bristol Robotics Lab as Associate Professor in Social Robotics, before moving to industry. His research primarily concerns socio-cognitive human-robot interaction, child-robot interaction and human-in-the-loop machine learning for social robots.

Episode 27 – Simon Wanstall

Claire chatted to Simon Wanstall from the Edinburgh Centre for Robotics all about soft robotics, robotic prostheses, and taking inspiration from nature.

Simon Wanstall is a PhD student at the Edinburgh Centre for Robotics, working on advancements in soft robotic prosthetics. His research interests include soft robotics, bioinspired design and healthcare devices. Simon’s current project is to develop soft sensors so that robotic prostheses can feel the world around them. In order to develop his skills in this area, Simon is also undertaking an industrial placement with Touchlab, a robotics company specialising in sensors.

Episode 28 – Amanda Prorok

Claire chatted to Amanda Prorok from the University of Cambridge all about self-driving cars, industrial robots, and multi-robot systems.

Amanda Prorok is Professor of Collective Intelligence and Robotics in the Department of Computer Science and Technology at Cambridge University, and a Fellow of Pembroke College. She is interested in finding practical methods for hard coordination problems that arise in multi-robot and multi-agent systems.

Episode 29 – Sina Sareh

Claire chatted to Sina Sareh from the Royal College of Art all about industrial inspection, soft robotics, and robotic grippers.

Sina Sareh is the Academic Leader in Robotics at Royal College of Art. He is currently a Reader (Associate Professor) in Robotics and Design Intelligence at RCA, and a Fellow of EPSRC, whose research develops technological solutions to problems of human safety, access and performance involved in a range of industrial operations. Dr Sareh holds a PhD from the University of Bristol, 2012, and served as an impact assessor of Sub-panel 12: Engineering in the assessment phase of the Research Excellence Framework (REF) 2021.

Episode 30 – Ana Cavalcanti

Claire chatted to Ana Cavalcanti from the University of York all about software development, testing and verification, and autonomous mobile robots.

Ana Cavalcanti is a Royal Academy of Engineering Chair in Emerging Technologies. She is the leader of the RoboStar centre of excellence on Software Engineering for Robotics. The RoboStar approach to model-based Software Engineering complements current practice of design and verification of robotic systems, covering simulation, testing, and proof. It is practical, supported by tools, and yet mathematically rigorous.

Bonus winter treats

What is your favourite fictional robot?

What is your advice for a robotics career?

What is your favourite machine or tool?

Could you be friends with a robot?

A day in the life

]]>
Holiday robot videos 2022 updated (+ how robots prepare an Amazon warehouse for Christmas) https://robohub.org/holiday-robot-videos-2022-how-robots-prepare-an-amazon-warehouse-for-christmas/ Thu, 29 Dec 2022 09:00:21 +0000 https://robohub.org/?p=206216

Image generated by OpenAI’s DALL-E 2 with prompt “a robot surrounded by humans, Santa Claus and a Christmas tree at Christmas, digital art”.

Happy holidays everyone! And many thanks to all those that sent us their holiday videos. Here are some robot videos of this year to get you into the spirit of the season. We wish you the very best for these holidays and the year 2023 :)

And here are some very special season greetings from robots!

Recent submissions

Extra: How robots prepare an Amazon warehouse for Christmas


Did we miss your video? You can send it to daniel.carrillozapata@robohub.org and we’ll include it in this list.

]]>
In search for the intelligent machine https://robohub.org/in-search-for-the-intelligent-machine/ Wed, 28 Dec 2022 10:49:52 +0000 https://robohub.org/?p=206238

Elvis Nava is a fellow at ETH Zurich’s AI Center as well as a doctoral student at the Institute of Neuroinformatics and in the Soft Robotics Lab. (Photograph: Daniel Winkler / ETH Zurich)

By Christoph Elhardt

In ETH Zurich’s Soft Robotics Lab, a white robot hand reaches for a beer can, lifts it up and moves it to a glass at the other end of the table. There, the hand carefully tilts the can to the right and pours the sparkling, gold-coloured liquid into the glass without spilling it. Cheers!

Computer scientist Elvis Nava is the person controlling the robot hand developed by ETH start-up Faive Robotics. The 26-year-old doctoral student’s own hand hovers over a surface equipped with sensors and a camera. The robot hand follows Nava’s hand movement. When he spreads his fingers, the robot does the same. And when he points at something, the robot hand follows suit.

But for Nava, this is only the beginning: “We hope that in future, the robot will be able to do something without our having to explain exactly how,” he says. He wants to teach machines to carry out written and oral commands. His goal is to make them so intelligent that they can quickly acquire new abilities, understand people and help them with different tasks.

Functions that currently require specific instructions from programmers will then be controlled by simple commands such as “pour me a beer” or “hand me the apple”. To achieve this goal, Nava received a doctoral fellowship from ETH Zurich’s AI Center in 2021: this program promotes talent that bridges different research disciplines to develop new AI applications. In addition, the Italian – who grew up in Bergamo – is doing his doctorate at Benjamin Grewe’s professorship of neuroinformatics and in Robert Katzschmann’s lab for soft robotics.

Developed by the ETH start-up Faive Robotics, the robot hand imitates the movements of a human hand. (Video: Faive Robotics)

Combining sensory stimuli

But how do you get a machine to carry out commands? What does this combination of artificial intelligence and robotics look like? To answer these questions, it is crucial to understand the human brain.

We perceive our environment by combining different sensory stimuli. Usually, our brain effortlessly integrates images, sounds, smells, tastes and haptic stimuli into a coherent overall impression. This ability enables us to quickly adapt to new situations. We intuitively know how to apply acquired knowledge to unfamiliar tasks.

“Computers and robots often lack this ability,” Nava says. Thanks to machine learning, computer programs today may write texts, have conversations or paint pictures, and robots may move quickly and independently through difficult terrain, but the underlying learning algorithms are usually based on only one data source. They are – to use a computer science term – not multimodal.

For Nava, this is precisely what stands in the way of more intelligent robots: “Algorithms are often trained for just one set of functions, using large data sets that are available online. While this enables language processing models to use the word ‘cat’ in a grammatically correct way, they don’t know what a cat looks like. And robots can move effectively but usually lack the capacity for speech and image recognition.”

“Every couple of years, our discipline changes the way we think about what it means to be a researcher,” Elvis Nava says. (Video: ETH AI Center)

Robots have to go to preschool

This is why Nava is developing learning algorithms for robots that teach them exactly that: to combine information from different sources. “When I tell a robot arm to ‘hand me the apple on the table,’ it has to connect the word ‘apple’ to the visual features of an apple. What’s more, it has to recognise the apple on the table and know how to grab it.”

But how does Nava teach the robot arm to do all that? In simple terms, he sends it to a two-stage training camp. First, the robot acquires general abilities such as speech and image recognition as well as simple hand movements in a kind of preschool.

Open-source models that have been trained using giant text, image and video data sets are already available for these abilities. Researchers feed, say, an image recognition algorithm with thousands of images labelled ‘dog’ or ‘cat.’ Then, the algorithm learns independently what features – in this case pixel structures – constitute an image of a cat or a dog.

A new learning algorithm for robots

Nava’s job is to combine the best available models into a learning algorithm, which has to translate different data, images, texts or spatial information into a uniform command language for the robot arm. “In the model, the same vector represents both the word ‘beer’ and images labelled ‘beer’,” Nava says. That way, the robot knows what to reach for when it receives the command “pour me a beer”.
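Nava’s actual models are not reproduced here; the numpy toy below, using invented three-dimensional vectors, illustrates how a single shared space can tie the word ‘beer’ to an image of a beer can.

    # Toy shared text/image embedding space -- the vectors are invented for
    # illustration and are not taken from Nava's models.
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Pretend a text encoder and an image encoder were trained to map
    # matching concepts to nearby vectors in one shared space.
    text_vec = {"beer":  np.array([0.9, 0.1, 0.0]),
                "apple": np.array([0.1, 0.9, 0.1])}
    image_vec = {"can on table":  np.array([0.8, 0.2, 0.1]),   # beer can photo
                 "fruit in bowl": np.array([0.2, 0.8, 0.0])}   # apple photo

    command = "beer"
    best = max(image_vec, key=lambda k: cosine(text_vec[command], image_vec[k]))
    print(best)   # -> can on table: the robot knows what to reach for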

Researchers who deal with artificial intelligence on a deeper level have known for a while that integrating different data sources and models holds a lot of promise. However, the corresponding models have only recently become available and publicly accessible. What’s more, there is now enough computing power to get them up and running in tandem as well.

When Nava talks about these things, they sound simple and intuitive. But that’s deceptive: “You have to know the newest models really well, but that’s not enough; sometimes getting them up and running in tandem is an art rather than a science,” he says. It’s tricky problems like these that especially interest Nava. He can work on them for hours, continuously trying out new solutions.

Nava spends the majority of his time coding. (Photograph: Elvis Nava)

Nava evaluates his learning algorithm. The results of the experiment in a nutshell. (Photograph: Elvis Nava)

Special training: Imitating humans

Once the robot arm has completed preschool and has learnt to understand speech, recognise images and carry out simple movements, Nava sends it to special training. There, the machine learns to, say, imitate the movements of a human hand when pouring a glass of beer. “As this involves very specific sequences of movements, existing models no longer suffice,” Nava says.

Instead, he shows his learning algorithm a video of a hand pouring a glass of beer. Based on just a few examples, the robot then tries to imitate these movements, drawing on what it has learnt in preschool. Without prior knowledge, it simply wouldn’t be able to imitate such a complex sequence of movements.

“If the robot manages to pour the beer without spilling, we tell it ‘well done’ and it memorises the sequence of movements,” Nava says. This method is known as reinforcement learning in technical jargon.
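The actual training setup is far richer than this, but the essence of that ‘well done’ signal is a reward loop. The Python toy below compresses it into a bandit over a few discrete pour angles, with an invented success model standing in for the real robot.

    # Toy reinforcement-learning loop: reward 1.0 when the simulated pour
    # succeeds. Angles and the success model are invented for illustration.
    import random

    angles = [30, 45, 60, 75]            # candidate tilt angles (degrees)
    value = {a: 0.0 for a in angles}     # running estimate of success
    count = {a: 0 for a in angles}

    def pour(angle):
        """Stand-in for a real trial: pretend 60 degrees spills least."""
        p_success = max(0.0, 1.0 - abs(angle - 60) / 50)
        return 1.0 if random.random() < p_success else 0.0   # 1.0 == "well done"

    for episode in range(500):
        # Epsilon-greedy: mostly exploit the best angle, sometimes explore.
        a = random.choice(angles) if random.random() < 0.1 else max(value, key=value.get)
        reward = pour(a)
        count[a] += 1
        value[a] += (reward - value[a]) / count[a]   # running average of reward

    print(max(value, key=value.get))   # converges towards 60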

Elvis Nava teaches robots to carry out oral commands such as “pour me a beer”. (Photograph: Daniel Winkler / ETH Zürich)

Foundations for robotic helpers

With this two-stage learning strategy, Nava hopes to get a little closer to realising the dream of creating an intelligent machine. How far it will take him, he does not yet know. “It’s unclear whether this approach will enable robots to carry out tasks we haven’t shown them before.”

It is much more probable that we will see robotic helpers that carry out oral commands and fulfil tasks they are already familiar with or that closely resemble them. Nava avoids making predictions as to how long it will take before these applications can be used in areas such as the care sector or construction.

Developments in the field of artificial intelligence are too fast and unpredictable. In fact, Nava would be quite happy if the robot would just hand him the beer he will politely request after his dissertation defence.

]]>
ep.363: Going out on a Bionic Limb, with Joel Gibbard https://robohub.org/going-out-on-a-bionic-limb/ Wed, 21 Dec 2022 19:35:49 +0000 https://robohub.org/?p=206175

Many people associate prosthetic limbs with nude-colored imitations of human limbs: something built to blend into a society where people have all of their limbs, while serving functional use cases. On the other end of the spectrum are the highly optimized prosthetics used by athletes, built for speed and low weight, and looking nothing like a human limb.

To a child under 12, neither of these categories of prosthetics particularly appeals. Open Bionics, founded by Joel Gibbard and Samantha Payne, was started to create a third category of prosthetics: one that targets the fun, imaginative side of children while still meeting daily functional requirements.

Through partnerships with Disney and Lucasfilm, Open Bionics has built an array of imagination-capturing prosthetic limbs that are straight-up cool.

Joel Gibbard dives into why they founded Open Bionics, and why you should invest in their company as they are getting ready to let the general public invest in them for the first time.

Joel Gibbard

Joel Gibbard lives in Bristol, UK and graduated with a first-class honors degree in Robotics from the University of Plymouth, UK.

He co-founded Open Bionics alongside Samantha Payne with the goal of bringing advanced, accessible bionic arms to the market. Open Bionics offers the Hero Arm, which is available in the UK, USA, France, Australia, and New Zealand. Open Bionics is revolutionizing the prosthetics industry through its line of inspiration-capturing products.

Links

]]>
CLAIRE and euRobotics: all questions answered on humanoid robotics https://robohub.org/claire-and-eurobotics-all-questions-answered-on-humanoid-robotics/ Tue, 20 Dec 2022 11:42:18 +0000 https://robohub.org/?p=206188

On 9 December, CLAIRE and euRobotics jointly hosted an All Questions Answered (AQuA) event. This one hour session focussed on humanoid robotics, and participants could ask questions regarding the current and future state of AI, robotics and human augmentation in Europe.

The questions were fielded by an expert panel, comprising:

  • Rainer Bischoff, euRobotics
  • Wolfram Burgard, Professor of Robotics and AI, University of Technology Nuremberg
  • Francesco Ferro, CEO, PAL Robotics
  • Holger Hoos, Chair of the Board of Directors, CLAIRE

The session was recorded and you can watch in full below:

]]>
12 years of NCCR Robotics https://robohub.org/12-years-of-nccr-robotics/ Sun, 18 Dec 2022 08:30:51 +0000 https://robohub.org/?p=206155 After 12 years of activity, NCCR Robotics officially ended on 30 November 2022.

We can proudly say that NCCR Robotics has had a truly transformational effect on the national robotics research landscape, creating novel synergies, strengthening key areas, and adding a unique signature that made Switzerland prominent and attractive at the international level.


Our highlights include:

  • Achieving several breakthroughs in wearable, rescue and educational robotics
  • Creating new master’s and doctoral programmes that will train generations of future robotics engineers
  • Graduating more than 200 PhD students and 100 postdocs, with more than 1’000 peer-reviewed publications
  • Spinning out several projects into companies, many of which have become international leaders and generated more than 400 jobs
  • Improving awareness of gender balance in robotics and substantially increasing the percentage of women in robotics in Switzerland
  • Kick-starting large outreach programs, such as Cybathlon, Swiss Drone Days, and Swiss Robotics Days, which will continue to increase public awareness of robotics for good

It is not the end of the story though: our partner institutions – EPFL, ETH Zurich, the University of Zurich, the University of Bern, the University of Basel, Università della Svizzera Italiana, EMPA – will continue to collaborate through the Innovation Booster Robotics, a new national program aimed at developing technology transfer activities and maintaining the network.

Research

The research programme of NCCR Robotics has been articulated around three Grand Challenges for future intelligent robots that can improve the quality of life: Wearable Robotics, Rescue Robotics, and Educational Robotics.

In the Wearable Robotics Grand Challenge, NCCR Robotics studied and developed a large range of novel prosthetic and orthotic robots, implantable sensors, and artificial intelligence algorithms to restore the capabilities of persons with disabilities and neurological disorders.

For example, researchers developed implantable and assistive technologies that allowed patients with completely paralyzed legs to walk again thanks to a combination of assistive robots (such as Rysen), implantable microdevices that read brain signals and stimulate spinal cord nerves, and artificial intelligence that translate neural signals into gait patterns.

They also developed prosthetic hands with soft sensors and implantable neural stimulators that enable people to feel again the haptic qualities of objects. Along this line, they also studied and developed prototypes of an extra arm and artificial intelligence that could allow humans to control the additional artificial limb in combination with their natural arms for situations that normally require more than one person.

Researchers also developed the MyoSuit textile soft exoskeletons, which allow wheelchair users to stand up, take a few steps, and then sit back down without external help.

In the Rescue Robotics Grand Challenge, researchers developed and deployed legged and flying robots with self-learning capabilities for use in disaster mitigation as well as in civil and industrial inspection.

Among the most notable results are:

  • ANYmal, a quadruped robot that won first prize in the international DARPA Subterranean Challenge by exploring underground tunnels and identifying a number of objects
  • K-rock, an amphibious robot inspired by salamanders and crocodiles that can swim, walk, and squat under narrow passages
  • a collision-resilient drone that has become the robot most widely used by rescue teams, governments, and companies worldwide for inspection of confined spaces, bridges, boilers, and ship tankers, to mention a few
  • a whole family of foldable drones that can change shape to squeeze through narrow passages, protect nearby persons from propellers, carry cargo of various sizes and weights, and twist their arms to get close to surfaces and perform repair operations
  • an avian-inspired drone with artificial feathers that approximates the flight agility of birds of prey

In addition, researchers developed powerful learning algorithms that enabled legged robots to walk up mountains and over grassy land by adapting their gait, and flying robots that learned to fly and avoid high-speed moving objects using bio-inspired vision systems, and even to race through a circuit, beating world-champion humans. Researchers also proposed new methods to let inexperienced humans and rescue officers interact with, and easily control, drones as if they were an extension of their own body.

In the Educational Robotics Grand Challenge, NCCR researchers created Thymio, a mobile robot for teaching programming and elements of robotics that has been deployed in more than 80'000 units in classrooms across Switzerland, and Cellulo, a smaller modular robot that allows richer forms of interaction with pupils, as well as a broad range of learning activities (physics, mathematics, geography, games) and training methods for teaching teachers how to integrate robots in their lectures.

Researchers also teamed with Canton Vaud on a large-scale project to introduce robotics and computer science into all primary-school classes and have already trained more than one thousand teachers.

Outreach

Communication, knowledge (and technology) transfer to society and the economy

Over 12 years, NCCR Robotics researchers published approximately 500 articles in peer-reviewed journals and 500 articles in peer-reviewed conferences, and filed approximately 50 patents (one third of which have already been granted). They also developed a tech-transfer support programme to help young researchers translate research results into commercially viable products. As a result, 16 spin-offs were supported, of which 14 have been incorporated and are still active. Some of these start-ups have become full scale-up companies with products sold all over the world, have raised more than five times the total funds of the NCCR over its 12 years, and have generated several hundred new high-tech jobs in Switzerland.

Several initiatives were aimed at the public to communicate the importance of robotics for the quality of life.

  • For example, during the first phase NCCR Robotics organized an annual Robotics Festival at EPFL that attracted at its peak 17’000 visitors in one day.
  • In the second phase, Cybathlon was launched, a world-first Olympic-style competition for athletes with disabilities and supported by assistive devices, which was later taken over by ETH Zurich that will ensure its continuation.
  • Additionally, NCCR Robotics launched the Swiss Drone Days at EPFL that combine drone exhibitions, drone races, and public presentations, and were later taken over by EPFL and most recently by University of Zurich.
  • NCCR Robotics also organized the annual Swiss Robotics Day with the aim of bringing together researchers and industry representatives in a full day of high-profile technology presentations from top Swiss and international speakers, demonstrations of research prototypes and robotics products, carousel of pitch presentations by young spin-offs, and several panel discussions and networking events.

Promotion of young scientists and of academic careers of women

The NCCR Robotics helped develop a new master’s programme and a new PhD programme in robotics at EPFL, created exchange programmes and fellowships with ETH Zurich and top international universities with a program in robotics, and issued several awards for excellence in study, research, technology transfer, and societal impact.

NCCR Robotics has also been very active in addressing equal opportunities. Activities aimed at improving gender balance included dedicated exchange and travel grants, awards for supporting career development, master study fellowships, outreach campaigns, promotional movies and surveys for continuous assessment of the effectiveness of actions. As a result, the percentage of women in the EPFL robotics masters almost doubled in four years, and the number of women postgraduate and assistant professors doubled too. Although much remains to be done in Switzerland, the initial results of these actions are promising and the awareness of the importance of equal opportunity has become pervasive throughout the NCCR Robotics community and in all its research, outreach, and educational activities.

Beyond NCCR Robotics

In order to sustain the long-term impact of NCCR Robotics, EPFL launched a new Center of Intelligent Systems where robotics is a major research pillar and ETH Zurich created a Center for Robotics that includes large research facilities, annual summer schools, and activities to favor collaborations with industry.

Furthermore, EPFL built on NCCR Robotics’ educational technologies and further programmes to create the LEARN Center that will continue training teachers in the use of robots and digital technologies in schools. Similarly, ETH Zurich built on the research and competence developed in the Wearable Robotics Grand Challenge to create the Competence Centre for Rehabilitation Engineering and Science with the goal of restoring and maintaining independence, productivity, and quality of life for people with physical disabilities and contribute towards an inclusive society.

As the project approaches its conclusion, NCCR Robotics members applied for additional funding for the National Thematic Network Innovation Booster Robotics that was launched in 2022 and will continue supporting networking activities and technology transfer in medical and mobile robotics for the next 4 years. Finally, a Swiss Robotics Association comprising stakeholders from academia and industry will be created in order to manage the Innovation Booster programme and offer a communication and collaboration platform for the transformed and enlarged robotics community that NCCR Robotics has contributed to create.

]]>
Soft robots gain new strength and make virtual reality gloves feel more real https://robohub.org/soft-robots-gain-new-strength-and-make-virtual-reality-gloves-feel-more-real/ Fri, 16 Dec 2022 09:30:12 +0000 https://robohub.org/?p=206140

Soft robots, or those made with materials like rubber, gels and cloth, have advantages over their harder, heavier counterparts, especially when it comes to tasks that require direct human interaction. Robots that could safely and gently help people with limited mobility grocery shop, prepare meals, get dressed, or even walk would undoubtedly be life-changing.

However, soft robots currently lack the strength needed to perform these sorts of tasks. This long-standing challenge — making soft robots stronger without compromising their ability to gently interact with their environment — has limited the development of these devices.

With the relationship between strength and softness in mind, a team of Penn Engineers has devised a new electrostatically controlled clutch that enables a soft robotic hand to hold 4 pounds – about the weight of a bag of apples – 40 times more than the hand could lift without the clutch. In addition, this task, which requires both a soft touch and strength, was accomplished with only 125 volts of electricity, a third of the voltage required by current clutches.

Their safe, low-power approach could also enable wearable soft robotic devices that would simulate the sensation of holding a physical object in augmented- and virtual-reality environments.

James Pikul, Assistant Professor in Mechanical Engineering and Applied Mechanics (MEAM), Kevin Turner, Professor and Chair of MEAM with a secondary appointment in Materials Science and Engineering, and their Ph.D. students, David Levine, Gokulanand Iyer and Daelan Roosa, published a study in Science Robotics describing a new, fracture-mechanics-based model of electroadhesive clutches, a mechanical structure that can control the stiffness of soft robotic materials.

Using this new model, the team was able to realize a clutch 63 times stronger than current electroadhesive clutches. The model not only increased force capacity of a clutch used in their soft robots, it also decreased the voltage required to power the clutch, making soft robots stronger and safer.

Current soft robotic hands can hold small objects, such as an apple. Being soft, the robotic hand can delicately grasp objects of various shapes, gauge the energy required to lift them, and become stiff or tense enough to pick an object up, much as we grasp and hold things in our own hands.

An electroadhesive clutch is a thin device that enhances the change of stiffness in the materials, allowing the robot to perform this task. The clutch, like a clutch in a car, is the mechanical connection between moving objects in the system. In an electroadhesive clutch, two electrodes coated with a dielectric material become attracted to each other when voltage is applied. The attraction between the electrodes creates a friction force at the interface that keeps the two plates from slipping past each other.

The electrodes are attached to the flexible material of the robotic hand. Turning the clutch on with an electrical voltage makes the electrodes stick to each other, and the robotic hand holds more weight than it could previously. Turning the clutch off allows the plates to slide past each other and the hand to relax, so the object can be released.

Traditional models of clutches are based on a simple assumption of Coulombic friction between two parallel plates, where friction keeps the two plates of the clutch from sliding past each other. However, this model does not capture how mechanical stress is nonuniformly distributed in the system, and therefore, does not predict clutch force capacity well. It is also not robust enough to be used to develop stronger clutches without using high voltages, expensive materials, or intensive manufacturing processes. A robotic hand with a clutch created using the friction model may be able to pick up an entire bag of apples, but will require high voltages which make it unsafe for human interaction.
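For a sense of the magnitudes involved, the parallel-plate friction model can be written down in a few lines. All parameter values below are illustrative assumptions, not figures from the paper.

    # Back-of-envelope estimate using the simple parallel-plate friction
    # model that the Penn team argues against. All values are assumed.
    EPS0 = 8.854e-12   # vacuum permittivity, F/m

    def friction_capacity(V, eps_r, d, area, mu):
        """Ideal clutch friction force in newtons, assuming uniform stress.

        Electrostatic pressure across the dielectric: P = eps0*eps_r*V^2/(2*d^2)
        Friction force: F = mu * P * area
        """
        pressure = EPS0 * eps_r * V**2 / (2 * d**2)
        return mu * pressure * area

    # Hypothetical clutch: 125 V, dielectric constant 10, 2-micron film,
    # 10 cm^2 overlap area, friction coefficient 0.3
    print(f"{friction_capacity(125, 10, 2e-6, 10e-4, 0.3):.0f} N")   # ~52 N

The fracture-mechanics approach replaces the uniform-stress assumption baked into the last line: because stress concentrates at the edges of the joint, the real force capacity can differ sharply from what this simple estimate predicts.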

“Our approach tackles the force capacity of clutches at the model level,” says Pikul. “And our model, the fracture-mechanics-based model, is unique. Instead of creating parallel plate clutches, we based our design on lap joints and examined where fractures might occur in these joints. The friction model assumes that the stress on the system is uniform, which is not realistic. In reality, stress is concentrated at various points, and our model helps us understand where those points are. The resulting clutch is both stronger and safer as it requires only a third of the voltage compared to traditional clutches.”

“The fracture mechanics framework and model in this work have been used for the design of bonded joints and structural components for decades,” says Turner. “What is new here is the application of this model to the design of electroadhesive clutches.”

The researchers’ improved clutch can now be easily integrated into existing devices.

“The fracture-mechanics-based model provides fundamental insight into the workings of an electroadhesive clutch, helping us understand them more than the friction model ever could,” says Pikul. “We can already use the model to improve current clutches just by making very slight changes to material geometry and thickness, and we can continue to push the limits and improve the design of future clutches with this new understanding.”

To demonstrate the strength of their clutch, the team attached it to a pneumatic finger. Without the researchers’ clutch, the finger was able to hold the weight of one apple while inflated into a curled position; with it, the finger could hold an entire bag of them.

In another demonstration, the clutch was able to increase the strength of an elbow joint to be able to support the weight of a mannequin arm at the low energy demand of 125 volts.

Future work that the team is excited to delve into includes using this new clutch model to develop wearable augmented and virtual-reality devices.

“Traditional clutches require about 300 volts, a level that can be unsafe for human interaction,” says Levine. “We want to continue to improve our clutches, making them smaller, lighter and less energetically costly to bring these products to the real world. Eventually, these clutches could be used in wearable gloves that simulate object manipulation in a VR environment.”

“Current technologies provide feedback through vibrations, but simulating physical contact with a virtual object is limited with today’s devices,” says Pikul. “Imagine having both the visual simulation and feeling of being in another environment. VR and AR could be used in training, remote working, or just simulating touch and movement for those who lack those experiences in the real world. This technology gets us closer to those possibilities.”

Improving human-robot interactions is one of the main goals of Pikul’s lab and the direct benefits that this research presents is fuel for their own research passions.

“We haven’t seen many soft robots in our world yet, and that is, in part, due to their lack of strength, but now we have one solution to that challenge,” says Levine. “This new way to design clutches might lead to applications of soft robots that we cannot imagine right now. I want to create robots that help people, make people feel good, and enhance the human experience, and this work is getting us closer to that goal. I’m really excited to see where we go next.”

]]>
2nd call for robot holiday videos 2022 (with first submissions!) https://robohub.org/2nd-call-for-robot-holiday-videos-2022-with-first-submissions/ Thu, 15 Dec 2022 10:43:42 +0000 https://robohub.org/?p=206137

That’s right! You better not run, you better not hide, you better watch out for brand new robot holiday videos on Robohub!

Drop your submissions down our chimney at daniel.carrillozapata@robohub.org and share the spirit of the season.

Here are our first two submissions of the roundup:

]]>
Featured video: Creating a sense of feeling https://robohub.org/featured-video-creating-a-sense-of-feeling/ Sun, 11 Dec 2022 09:30:00 +0000 https://news.mit.edu/2022/shriya-srinivasan-dance-research-1127

Shriya Srinivasan as a dancer and researcher | Snapshots taken from ‘Connecting the human body to the outside world’ video on YouTube

“The human body is just engineered so beautifully,” says Shriya Srinivasan PhD ’20, a research affiliate at MIT’s Koch Institute for Integrative Cancer Research, a junior fellow at the Society of Fellows at Harvard University, and former doctoral student in the Harvard-MIT Program in Health Sciences and Technology.

Both a biomedical engineer and a dancer, Srinivasan is dedicated to investigating the body’s movements and sensations. As a PhD student she worked in Media Lab Professor Hugh Herr’s Biomechatronics Group on a system that helps patients with amputation feel what their prostheses are feeling and send feedback from the device to the body. She has also studied the south Indian classical dance form Bharathanatyam for 22 years and co-directs the Anubhava Dance Company.

“The kind of relief and sense of fulfillment I get from the arts is very different from what I get from research and science,” she says. “I find that research often nourishes my intellectual curiosity, and the arts are helping to build that emotional and spiritual growth. But in both worlds, I’m thinking about how we create a sense of feeling, how we control emotion and your physiological response. That’s really beautiful to me.”

Video by: Jason Kimball/MIT News | 5 minutes 34 seconds.

]]>
China overtakes USA in robot density, according to World Robotics 2022 Report https://robohub.org/china-overtakes-usa-in-robot-density-according-to-world-robotics-2022-report/ Sat, 10 Dec 2022 09:33:07 +0000 https://robohub.org/?p=206091 China’s massive investment in industrial robotics has put the country in the top ranking of robot density, surpassing the United States for the first time. The number of operational industrial robots relative to the number of workers hit 322 units per 10,000 employees in the manufacturing industry. Today, China ranks in fifth place. The world´s top 5 most automated countries in manufacturing 2021 are: South Korea, Singapore, Japan, Germany and China.

World average of robot density more than doubles compared to six years ago (2015: 69 units)

“Robot density is a key indicator of automation adoption in the manufacturing industry around the world,” says Marina Bill, President of the International Federation of Robotics. “The new average of global robot density in the manufacturing industry surged to 141 robots per 10,000 employees – more than double the number six years ago. China’s rapid growth shows the power of its investment so far, but it still has much opportunity to automate.”

Robot density by region

Driven by the high volume of robot installations in recent years, Asia’s average robot density has grown at a compound annual growth rate (CAGR) of 18% since 2016, reaching 156 units per 10,000 employees in 2021. European robot density grew by 8% (CAGR) over the same period, reaching 129 units. In the Americas, density reached 117 robots, up 8% (CAGR).
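As a quick sanity check on these figures (the 2016 baseline is back-computed here, not taken from the report):

    # Checking the quoted growth figures; baselines are inferred, not IFR data.
    def cagr(start, end, years):
        """Compound annual growth rate between two values."""
        return (end / start) ** (1 / years) - 1

    # Asia: 156 units in 2021 after ~18% CAGR over five years of growth
    print(f"Implied 2016 Asian density: {156 / 1.18**5:.0f} units")   # ~68

    # World average: 69 units (2015) -> 141 units (2021)
    print(f"World CAGR 2015-2021: {cagr(69, 141, 6):.1%}")            # ~12.7%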

Top countries

The Republic of Korea hit an all-time high of 1,000 industrial robots per 10,000 employees in 2021. This is more than three times the number reached in China and makes the country number one worldwide. With its globally recognized electronics industry and a distinct automotive sector, the Korean economy profits from two large customer industries for industrial robots.

Singapore takes second place with a rate of 670 robots per 10,000 employees in 2021. Singapore’s robot density had been growing by 24% on average each year since 2016.

There is a remarkable gap between them and third-ranked Japan (399 robots per 10,000 employees). Japan’s robot density had grown by 6% on average each year since 2016. Germany, in fourth place (397 units), is the largest robot market in Europe.

China is by far the fastest-growing robot market in the world. The country has the highest number of annual installations and, since 2016, has had the largest operational stock of robots each year.

United States

Robot density in the United States rose from 255 units in 2020 to 274 units in 2021. The country ranks ninth in the world, down from seventh – now head-to-head with Chinese Taipei (276 units) and behind Hong Kong (304 units) and Sweden (321 units). 

Orders for World Robotics 2022 Service Robots and Industrial Robots reports can be placed online. Further downloads on the content are available here.

Videos

FACTS video about ROBOT DENSITY

Video of recorded World Robotics press conference

]]>
Looking beyond “technology for technology’s sake” https://robohub.org/looking-beyond-technology-for-technologys-sake/ Thu, 08 Dec 2022 11:50:00 +0000 https://news.mit.edu/2022/austen-roberson-robots-1130

“Learning about the social implications of the technology you’re working on is really important,” says senior Austen Roberson. Photo: Jodi Hilton

By Laura Rosado | MIT News correspondent

Austen Roberson’s favorite class at MIT is 2.S007 (Design and Manufacturing I-Autonomous Machines), in which students design, build, and program a fully autonomous robot to accomplish tasks laid out on a themed game board.

“The best thing about that class is everyone had a different idea,” says Roberson. “We all had the same game board and the same instructions given to us, but the robots that came out of people’s minds were so different.”

The game board was Mars-themed, with a model shuttle that could be lifted to score points. Roberson’s robot, nicknamed Tank Evans after a character from the movie “Surf’s Up,” employed a clever strategy to accomplish this task. Instead of spinning the gears that would raise the entire mechanism, Roberson realized a claw gripper could wrap around the outside of the shuttle and lift it manually.

“That wasn’t the intended way,” says Roberson, but his outside-of-the-box strategy ended up winning him the competition at the conclusion of the class, which was part of the New Engineering Education Transformation (NEET) program. “It was a really great class for me. I get a lot of gratification out of building something with my hands and then using my programming and problem-solving skills to make it move.”

Roberson, a senior, is majoring in aerospace engineering with a minor in computer science. As his winning robot demonstrates, he thrives at the intersection of both fields. He references the Mars Curiosity Rover as the type of project that inspires him; he even keeps a Lego model of Curiosity on his desk. 

“You really have to trust that the hardware you’ve made is up to the task, but you also have to trust your software equally as much,” says Roberson, referring to the challenges of operating a rover from millions of miles away. “Is the robot going to continue to function after we’ve put it into space? Both of those things have to come together in such a perfect way to make this stuff work.”

Outside of formal classwork, Roberson has pursued multiple research opportunities at MIT that blend his academic interests. He’s worked on satellite situational awareness with the Space Systems Laboratory, tested drone flight in different environments with the Aerospace Controls Laboratory, and is currently working on zero-shot machine learning for anomaly detection in big datasets with the Mechatronics Research Laboratory.

“Whether that be space exploration or something else, all I can hope for is that I’m making an impact, and that I’m making a difference in people’s lives,” says Roberson. Photo: Jodi Hilton

Even while tackling these challenging technical problems head-on, Roberson is also actively thinking about the social impact of his work. He takes classes in the Program on Science, Technology, and Society, which has taught him not only how societal change throughout history has been driven by technological advancements, but also how to be a thoughtful engineer in his own career.

“Learning about the social implications of the technology you’re working on is really important,” says Roberson, acknowledging that his work in automation and machine learning needs to address these questions. “Sometimes, we get caught up in technology for technology’s sake. How can we take these same concepts and bring them to people to help in a tangible, physical way? How have we come together as a scientific community to really affect social change, and what can we do in the future to continue affecting that social change?”

Roberson is already working through what these questions mean for him personally. He’s been a member of the National Society of Black Engineers (NSBE) throughout his entire college experience, which includes serving on the executive board for two years. He’s helped to organize workshops focused on everything from interview preparation to financial literacy, as well as social events to build community among members.

“The mission of the organization is to increase the number of culturally responsible Black engineers that excel academically, succeed professionally, and positively impact the community,” says Roberson. “My goal with NSBE was to be able to provide a resource to help everybody get to where they wanted to be, to be the vehicle to really push people to be their best, and to provide the resources that people needed and wanted to advance themselves professionally.”

In fact, one of his most memorable MIT experiences is the first conference he attended as a member of NSBE.

“Being able to see all these different people from all of these different schools come together as a family and just talk to each other, it’s a very rewarding experience,” Roberson says. “It’s important to be able to surround yourself with people who have similar professional goals and share similar backgrounds and experiences with you. It’s definitely the proudest I’ve been of any club at MIT.”

Looking toward his own career, Roberson wants to find a way to work on fast-paced, cutting-edge technologies that move society forward in a positive way.

“Whether that be space exploration or something else, all I can hope for is that I’m making an impact, and that I’m making a difference in people’s lives,” says Roberson. “I think learning about space is learning about ourselves as well. The more you can learn about the stuff that’s out there, you can take those lessons to reflect on what’s down here as well.”

]]>
Estimating manipulation intentions to ease teleoperation https://robohub.org/estimating-manipulation-intentions-to-ease-teleoperation/ Tue, 06 Dec 2022 11:54:47 +0000 https://robohub.org/?p=206080

Teleoperation is one of the longest-standing application fields in robotics. While full autonomy is still work in progress, the possibility to remotely operate a robot has already opened scenarios where humans can act in risky environments without endangering their own safety, such as when defusing explosives or decommissioning nuclear waste. It also allows one to be present and act even at great distance: underwater, in space, or inside a patient miles away from the surgeon. These are all critical applications, where skilled and qualified operators control the robot after receiving specific training to learn to use the system safely.

Teleoperation for everyone?

The recent pandemic has made even more apparent the need for immersive telepresence and remote action for non-expert users as well: not only could teleoperated robots take vitals or bring drugs to infectious patients, but we could also assist our elderly relatives living far away with chores like moving heavy objects or cooking. Numerous physical jobs could likewise be executed from home.

The recent ANA Avatar XPRIZE finals showed how far teleoperation can go (see this impressive video of the winning team), but in such settings both the perceptual and the control load fall entirely on the operator. This can be quite taxing on a cognitive level: both perception and action are mediated, by cameras and robotic arms respectively, reducing the user’s situation awareness and natural eye-hand coordination. While robot sensing capabilities and actuators have undergone relevant technological progress, the interface with the user still lacks intuitive solutions facilitating the operator’s job (Rea & Seo, 2022).

Human and robot joining forces

Shared control has gained popularity in recent years as an approach championing human-machine cooperation: low-level motor control is carried out by the robot, while the human focuses on high-level action planning. To achieve such a blend, the robotic system still needs a timely way to infer the operator’s intention, so as to assist with the execution accordingly. Usually, motor intentions are inferred by tracking arm movements or motion control commands (if the robot is operated by means of a joystick), but especially during object manipulation the hand closely follows information collected by the gaze. In recent decades, growing evidence from eye-hand coordination studies has shown that gaze reliably anticipates the hand movement target (Hayhoe et al., 2012), providing an early cue about human intention.

Gaze and motion features to estimate intentions

In a contribution presented at IROS 2022 last month (Belardinelli et al., 2022), we introduced an intention estimation model that relies on both gaze and motion features. We collected pick-and-place sequences in a virtual environment, where participants could operate two robotic grippers to grasp objects on a cluttered table. Motion controllers were used to track arm motions and to grasp objects by button press. Eye movements were tracked by the eye-tracker embedded in the virtual reality headset.

Gaze features were computed by defining a Gaussian distribution centered at the gaze position; the likelihood of each object being the target of visual attention was obtained by accumulating this distribution over the object’s bounding box. For the motion features, the hand pose and velocity were used to estimate the hand’s current trajectory, which was compared to an estimated optimal trajectory to each object. The normalized similarity between the two trajectories defined the likelihood of each object being the target of the current movement.
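As a rough illustration of the gaze feature, here is a minimal Python sketch, assuming an isotropic 2-D Gaussian and axis-aligned bounding boxes; the sigma value, coordinates, and object names are illustrative assumptions, not values from the paper.

from scipy.stats import norm

def gaze_likelihood(gaze_xy, bbox, sigma=0.05):
    """Probability mass of a Gaussian centered at gaze_xy inside an
    axis-aligned bounding box (x_min, y_min, x_max, y_max)."""
    gx, gy = gaze_xy
    x_min, y_min, x_max, y_max = bbox
    # For an isotropic Gaussian the mass over a box factorizes per axis.
    px = norm.cdf(x_max, gx, sigma) - norm.cdf(x_min, gx, sigma)
    py = norm.cdf(y_max, gy, sigma) - norm.cdf(y_min, gy, sigma)
    return px * py

# Example: which of two (hypothetical) objects is the attention target?
objects = {"mug": (0.10, 0.20, 0.18, 0.30), "box": (0.40, 0.18, 0.55, 0.35)}
gaze = (0.14, 0.26)
print({name: round(gaze_likelihood(gaze, bb), 3) for name, bb in objects.items()})
# The mug, which contains the gaze point, scores highest.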


Figure 1: Gaze features (top) and motion features (bottom) used for intention estimation. In both videos the object highlighted in green is the most likely target of visual attention and of hand movement, respectively.

These features, along with the binary grasping state, were used to train two Gaussian Hidden Markov Models, one on pick and one on place sequences. For 12 different intentions (picking of 6 different objects and placing at 6 different locations) the overall accuracy (F1 score) was above 80%, even for occluded objects. Importantly, for both actions a prediction with over 90% accuracy was already available 0.5 seconds before the end of the movement for at least 70% of the observations. This would allow an assisting plan to be instantiated and executed by the robot.
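To make this concrete, the sketch below shows one common way to train and score Gaussian HMMs on such sequences using the third-party hmmlearn library. The feature dimensionality, state count, and one-model-per-action layout are illustrative assumptions, not the exact configuration used in the paper.

import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Toy stand-ins for per-frame feature vectors (per-object gaze
# likelihoods, motion similarities, grasp state); real logged
# sequences would replace these.
pick_seqs = [rng.normal(0.0, 1.0, size=(50, 13)) for _ in range(20)]
place_seqs = [rng.normal(0.5, 1.0, size=(50, 13)) for _ in range(20)]

def fit(seqs, n_states=6):
    X = np.vstack(seqs)               # stack all sequences into one array
    lengths = [len(s) for s in seqs]  # tell hmmlearn where sequences end
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

pick_model, place_model = fit(pick_seqs), fit(place_seqs)

# At run time, score the observed prefix of a movement under each model
# and pick the more likely action once the margin is large enough.
prefix = pick_seqs[0][:25]
print("pick:", pick_model.score(prefix), "place:", place_model.score(prefix))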

We also conducted an ablation study to determine the contribution of different feature combinations. While the models using gaze, motion, and grasping features performed best in cross-validation, the improvement over using only gaze and grasping state was minimal. In fact, even when participants first checked nearby obstacles, the gaze was already on the target before the hand trajectory became sufficiently discriminative.

We also ascertained that our models could generalize from one hand to the other (when fed the corresponding hand motion features), hence the same models could be used to estimate each hand’s intention concurrently. By feeding each hand’s prediction to a simple rule-based framework, basic bimanual intentions could also be recognized, as sketched below. For example, reaching for an object with the left hand while the right hand is about to place the same object on the left hand is considered a bimanual handover.
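A minimal sketch of such a rule-based layer follows; the action labels and the handover rule are illustrative assumptions meant to mirror the example above, not the paper’s full rule set.

def bimanual_intention(left, right):
    """Combine per-hand predictions, each an (action, target) pair,
    into a joint bimanual label (illustrative rules only)."""
    l_act, l_tgt = left
    r_act, r_tgt = right
    # Left hand reaching for an object while the right hand places
    # the object on the left hand: a handover.
    if l_act == "pick" and r_act == "place" and r_tgt == "left_hand":
        return "HANDOVER"
    if l_act == r_act == "pick" and l_tgt == r_tgt:
        return "BIMANUAL_PICK"
    return f"{l_act.upper()}_LEFT + {r_act.upper()}_RIGHT"

print(bimanual_intention(("pick", "cup"), ("place", "left_hand")))  # HANDOVER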

Figure 2: Online intention estimation: the red frame denotes the current right-hand intention prediction, the green frame the left-hand prediction. Above the scene, the bimanual intention is shown in capital letters.

Such an intention estimation model could help an operator execute these manipulations without focusing on selecting the parameters for the exact motor execution of the pick and place, something we don’t usually do consciously in natural eye-hand coordination, since we have automated such cognitive processes. For example, once a grasping intention is estimated with enough confidence, the robot could autonomously select the best grasp and grasping position and execute the grasp, relieving the operator of carefully monitoring a grasp without tactile feedback and possibly with inaccurate depth estimation.

Further, even if in our setup motion features were not decisive for early intention prediction, they might play a larger role in more complex settings and when extending the spectrum of bimanual manipulations.

Combined with suitable shared control policies and feedback visualizations, such systems could also enable untrained operators to control robotic manipulators transparently and effectively for longer periods, reducing the overall mental workload of remote operation.

References

Belardinelli, A., Kondapally, A. R., Ruiken, D., Tanneberg, D., & Watabe, T. (2022). Intention estimation from gaze and motion features for human-robot shared-control object manipulation. 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Hayhoe, M. M., McKinney, T., Chajka, K., & Pelz, J. B. (2012). Predictive eye movements in natural vision. Experimental brain research, 217(1), 125-136.

Rea, D. J., & Seo, S. H. (2022). Still Not Solved: A Call for Renewed Focus on User-Centered Teleoperation Interfaces. Frontiers in Robotics and AI, 9.

]]>
Countering Luddite politicians with life (and cost) saving machines https://robohub.org/countering-luddite-politicians-with-life-and-cost-saving-machines/ Sun, 04 Dec 2022 09:30:00 +0000 https://robotrabbi.com/?p=25005 Read More]]>

Earlier this month, Candy Crush celebrated its tenth birthday by hosting a free party in lower Manhattan. The celebration culminated with a drone light display of 500 Unmanned Aerial Vehicles (UAVs) illustrating the whimsical characters of the popular mobile game over the Hudson. Rather than applauding the decision, New York lawmakers ostracized the avionic wonders to Jersey. In the words of Democratic State Senator Brad Hoylman, “Nobody owns New York City’s skyline – it is a public good and to allow a private company to reap profits off it is in itself offensive.” The complimentary event followed the model of Macy’s New York fireworks, which have illuminated the Hudson skies since 1958. Unlike the department store’s pyrotechnics, which release dangerous greenhouse gases into the atmosphere, drones are a quiet, climate-friendly choice. Still, Luddite politicians plan to introduce legislation to ban the technology as a public nuisance, citing its impact on migratory birds – birds that are often more spooked by the skyscrapers in Hoylman’s district.

Beyond aerial tricks, drones are now being deployed in novel ways to fill the labor gap of menial jobs that have not returned since the pandemic. Founded in 2018, Andrew Ashur’s Lucid Drones has been power-washing buildings throughout the United States for close to five years. As the founder told me: “I saw window washers hanging off the side of the building on a swing stage and it was a mildly windy day. You saw this platform get caught in the wind and all of a sudden the platform starts slamming against the side of the building. The workers were up there, hanging on for dear life, and I remember having two profound thoughts in this moment. The first one, thank goodness that’s not me up there. And then the second one was how can we leverage technology to make this a safer, more efficient job?” At the time, Ashur was a junior at Davidson College playing baseball. The self-starter knew he was on to a big market opportunity.

Each year, more than 160,000 emergency room injuries and 300 deaths in the United States are caused by falls from ladders. Entrepreneurs like Ashur understood that drones were uniquely qualified to free humans from such dangerous work. This first required building a sturdy tethered quadcopter, capable of spraying at 300 psi and connected to a ground tank for power and cleaning fluid, for less than the annual salary of one window cleaner. After overcoming the technical hurdle, the even harder task was gaining sales traction. Unlike many hardware companies that set out to disrupt the market and sell directly to end customers, Lucid partnered with existing building maintenance operators. “Our primary focus is on existing cleaning companies. And the way to think about it is we’re now the shiniest tool in their toolkit that helps them do more jobs with less time and less liability to make more revenue,” explains Ashur. This relationship was further strengthened this past month with the announcement of a partnership with Sunbelt Rentals, servicing its 1,000 locations throughout California, Florida, and Texas. Lucid’s drones are now within driving distance of the majority of the 86,000 facade cleaning companies in America.

According to the Commercial Buildings Energy Consumption Survey, there are 5.9 million commercial office buildings in the United States, with an average height of 16 floors. This means there is room for many robot cleaning providers. Competing directly with Lucid are several other drone operators, including Apellix, Aquiline Drones, Alpha Drones, and a handful of local upstarts. In addition, there are several winch-powered companies, such as Skyline Robotics, HyCleaner, Serbot, Erlyon, Kite Robotics, and SkyPro. Facade cleaning is ripe for automation as it is a dangerous, costly, repetitive task that can be safely accomplished by an uncrewed system. As Ashur boasts, “You improve that overall profitability because it’s fewer labor hours. You’ve got lower insurance on a ground cleaner versus an above-ground cleaner as well as the other equipment.” His system, tethered, ground-based, and free of ladders, is the safest way to power wash a multistory office building. He elaborated further on the cost savings: “It lowers insurance cost, especially when you look at how workers comp is calculated… we had a customer, one of their workers missed the bottom rung of the ladder, the bottom rung, he shattered his ankle. OSHA classifies it as a hazardous workplace injury. Their workers comp rates are projected to increase by $25,000 annually over the next five years. So it’s a six-figure expense for just that one business from missing one single bottom rung of the ladder and unfortunately, you hear stories of people falling off a roof or other terrible accidents that are life changing or in some cases life lost. So that’s the number one thing you get to eliminate with the drone by having people on the ground.”

With older construction workers retiring in alarming numbers and a declining younger population of skilled laborers, I pressed Ashur on Lucid’s plans to expand into other areas. He retorted, “Cleaning drones, that’s just chapter one of our story here at Lucid. We look all around us at service industries that are being crippled by labor shortages.” He went on to suggest that robots could inspire a younger, more creative workforce: “When it comes to the future of work, we really believe that robotics is the answer because what makes us distinctly human isn’t our ability to do a physical task in a repetitive fashion. It’s our ability to be creative and problem solve… And that’s the direction that the younger populations are showing they’re gravitating towards.” He hinted further at some immediate areas of revenue growth: “Since we launched a website many years ago, about 50% of our requests come from international opportunities. So it is very much so a global problem.” In New York, buildings taller than six stories are required to have their facades inspected and repaired every five years (Local Law 11). Rather than shunning drones, State Senator Hoylman should be contacting companies like Lucid for ideas to automate facade work and create a new Manhattan-launched industry.
]]>
Call for robot holiday videos 2022 https://robohub.org/call-for-robot-holiday-videos-2022/ Fri, 02 Dec 2022 11:12:05 +0000 https://robohub.org/?p=206016

That’s right! You better not run, you better not hide, you better watch out for brand new robot holiday videos on Robohub!

Drop your submissions down our chimney at daniel.carrillozapata@robohub.org and share the spirit of the season.

For inspiration, here are our lists from previous years.

]]>
The Utah Bionic Leg: A motorized prosthetic for lower-limb amputees https://robohub.org/the-utah-bionic-leg-a-motorized-prosthetic-for-lower-limb-amputees/ Thu, 01 Dec 2022 09:53:25 +0000 https://robohub.org/?p=206051

The Utah Bionic Leg is a motorized prosthetic for lower-limb amputees developed by University of Utah mechanical engineering associate professor Tommaso Lenzi and his students in the HGN Lab.

Lenzi’s Utah Bionic Leg uses motors, processors, and advanced artificial intelligence that all work together to give amputees more power to walk, stand up, sit down, and ascend and descend stairs and ramps. The extra power from the prosthesis makes these activities easier and less stressful for amputees, who normally need to overuse their upper body and intact leg to compensate for the lack of assistance from their prescribed prosthetics. The Utah Bionic Leg will help people with amputations, particularly elderly individuals, walk much longer and attain new levels of mobility.

“If you walk faster, it will walk faster for you and give you more energy. Or it adapts automatically to the height of the steps in a staircase. Or it can help you cross over obstacles,” Lenzi says.

The Utah Bionic Leg uses custom-designed force and torque sensors as well as accelerometers and gyroscopes to help determine the leg’s position in space. Those sensors are connected to a computer processor that translates the sensor inputs into movements of the prosthetic joints. Based on that real-time data, the leg provides power to the motors in the joints to assist in walking, standing up, walking up and down stairs, or maneuvering around obstacles. The leg’s “smart transmission system” connects the electrical motors to the robotic joints of the prosthetic. This optimized system automatically adapts the joint behaviors for each activity, like shifting gears on a bike.
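To make the sense-decide-actuate pipeline above concrete, here is a deliberately simplified Python sketch; every name, threshold, and assistance value is a hypothetical stand-in for illustration and does not reflect the Utah Bionic Leg’s actual firmware or control laws.

def classify_activity(imu_pitch_deg, heel_force, toe_force):
    """Toy activity recognition from fused sensor readings (illustrative)."""
    if heel_force > 0.8 and imu_pitch_deg > 15:
        return "stair_ascent"
    if toe_force > 0.8 and imu_pitch_deg < -15:
        return "stair_descent"
    return "level_walking"

# Hypothetical assistance levels (fraction of maximum motor torque).
ASSIST = {"level_walking": 0.3, "stair_ascent": 0.8, "stair_descent": 0.5}

def control_step(imu_pitch_deg, heel_force, toe_force):
    """One tick of the loop: read sensors, classify activity, command torques."""
    activity = classify_activity(imu_pitch_deg, heel_force, toe_force)
    level = ASSIST[activity]
    return {"knee_torque": level, "ankle_torque": 0.6 * level, "activity": activity}

print(control_step(imu_pitch_deg=20, heel_force=0.9, toe_force=0.1))
# {'knee_torque': 0.8, 'ankle_torque': 0.48, 'activity': 'stair_ascent'}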

Finally, in addition to the robotic knee joint and robotic ankle joint, the Utah Bionic Leg has a robotic toe joint to provide more stability and comfort while walking. The sensors, processors, motors, transmission system, and robotic joints enable users to control the prosthetic intuitively and continuously, as if it was an intact biological leg.

Details of the leg’s newest technologies are described in a recently published paper authored by University of Utah mechanical engineering graduate students Minh Tran, Lukas Grabert, and Sarah Hood, together with Lenzi. You can read the paper here.

Lenzi and the university recently forged a new partnership with the worldwide leader in the prosthetics industry, Ottobock, to license the technology behind the Utah Bionic Leg and bring it to individuals with lower-limb amputations.



This article was originally published here.

]]>
Touch sensing: An important tool for mobile robot navigation https://robohub.org/touch-sensing-an-important-tool-for-mobile-robot-navigation/ Tue, 29 Nov 2022 11:45:10 +0000 https://robohub.org/?p=206044 In mammals, the touch modality develops earlier than the other senses, yet it is less studied than its visual and auditory counterparts. It not only allows environmental interactions but also serves as an effective defense mechanism.

Figure 1: Rat using the whiskers to interact with environment via touch

The role of touch in mobile robot navigation has not been explored in detail. However, touch appears to play an important role in obstacle avoidance and pathfinding for mobile robots. Proximal sensing is often a blind spot for long-range sensors such as cameras and lidars, for which touch sensors could serve as a complementary modality.

Overall, touch appears to be a promising modality for mobile robot navigation, but more research is needed to fully understand its role.

Role of touch in nature

The touch modality is paramount for many organisms. It plays an important role in perception, exploration, and navigation. Animals use this mode of navigation extensively to explore their surroundings. Rodents, pinnipeds, cats, dogs, and fish use it differently than humans. While humans primarily use the touch sense for prehensile manipulation, mammals such as rats and shrews, whose visual systems are poor, rely on touch sensing via the vibrissa (whisker) mechanism for exploration and navigation. This vibrissa mechanism is essential for short-range sensing, which works in tandem with the visual system.

Artificial touch sensors for robots

Artificial touch sensor design has evolved over the last four decades. However, these sensors are not as widely used in mobile robot systems as cameras and lidars. Mobile robots usually employ these long-range sensors, while short-range sensing receives relatively little attention.
When designing artificial touch sensors for mobile robot navigation, we typically draw inspiration from nature, i.e., biological whiskers, to derive bio-inspired artificial whiskers. One such early prototype is shown in the figure below.

Figure 2: Bioinspired artificial rat whisker array prototype V1.0

However, there is no reason to limit design innovation to mimicking biological whisker-like touch sensors with perfect fidelity. While some researchers are attempting to perfect the tapering of whiskers [1], we are currently investigating abstract mathematical models that can inspire a whole new array of touch sensors [2].

Challenges with designing touch sensors for robots

There are many challenges when designing touch sensors for mobile robots. One key challenge is the trade-off between weight, size, and power consumption. The power consumption of the sensors can be significant, which can limit their use on mobile robots.

Another challenge is to find the right trade-off between touch sensitivity and robustness. The sensors need to be sensitive enough to detect small changes in the environment, yet robust enough to handle the dynamic and harsh conditions in most mobile robot applications.

Future directions

There is a need for more systematic studies to understand the role of touch in mobile robot navigation. The current studies are mostly limited to specific applications and scenarios geared towards dexterous manipulation and grasping. We need to understand the challenges and limitations of using touch sensors for mobile robot navigation. We also need to develop more robust and power-efficient touch sensors for mobile robots.
Logistically, another factor that limits the use of touch sensors is the lack of openly available off-the-shelf touch sensors. A few research groups around the world are working towards their own touch sensor prototypes, biomimetic or otherwise, but all such designs are closed and extremely hard to replicate and improve.

References

  1. Williams, Christopher M., and Eric M. Kramer. “The advantages of a tapered whisker.” PLoS one 5.1 (2010): e8806.
  2. Tiwari, Kshitij, et al. “Visibility-Inspired Models of Touch Sensors for Navigation.” 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022.
]]>
Study: Automation drives income inequality https://robohub.org/study-automation-drives-income-inequality/ Sun, 27 Nov 2022 09:50:00 +0000 https://news.mit.edu/2022/automation-drives-income-inequality-1121

A newly published paper quantifies the extent to which automation has contributed to income inequality in the U.S., simply by replacing workers with technology — whether self-checkout machines, call-center systems, assembly-line technology, or other devices. Image: Jose-Luis Olivares, MIT

By Peter Dizikes

When you use self-checkout machines in supermarkets and drugstores, you are probably not — with all due respect — doing a better job of bagging your purchases than checkout clerks once did. Automation just makes bagging less expensive for large retail chains.

“If you introduce self-checkout kiosks, it’s not going to change productivity all that much,” says MIT economist Daron Acemoglu. However, in terms of lost wages for employees, he adds, “It’s going to have fairly large distributional effects, especially for low-skill service workers. It’s a labor-shifting device, rather than a productivity-increasing device.”

A newly published study co-authored by Acemoglu quantifies the extent to which automation has contributed to income inequality in the U.S., simply by replacing workers with technology — whether self-checkout machines, call-center systems, assembly-line technology, or other devices. Over the last four decades, the income gap between more- and less-educated workers has grown significantly; the study finds that automation accounts for more than half of that increase.

“This single one variable … explains 50 to 70 percent of the changes or variation between group inequality from 1980 to about 2016,” Acemoglu says.

The paper, “Tasks, Automation, and the Rise in U.S. Wage Inequality,” is being published in Econometrica. The authors are Acemoglu, who is an Institute Professor at MIT, and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.

So much “so-so automation”

Since 1980 in the U.S., inflation-adjusted incomes of those with college and postgraduate degrees have risen substantially, while inflation-adjusted earnings of men without high school degrees have dropped by 15 percent.

How much of this change is due to automation? Growing income inequality could also stem from, among other things, the declining prevalence of labor unions, market concentration begetting a lack of competition for labor, or other types of technological change.

To conduct the study, Acemoglu and Restrepo used U.S. Bureau of Economic Analysis statistics on the extent to which human labor was used in 49 industries from 1987 to 2016, as well as data on machinery and software adopted in that time. The scholars also used data they had previously compiled about the adoption of robots in the U.S. from 1993 to 2014. In previous studies, Acemoglu and Restrepo have found that robots have by themselves replaced a substantial number of workers in the U.S., helped some firms dominate their industries, and contributed to inequality.

At the same time, the scholars used U.S. Census Bureau metrics, including its American Community Survey data, to track worker outcomes during this time for roughly 500 demographic subgroups, broken out by gender, education, age, race and ethnicity, and immigration status, while looking at employment, inflation-adjusted hourly wages, and more, from 1980 to 2016. By examining the links between changes in business practices alongside changes in labor market outcomes, the study can estimate what impact automation has had on workers.

Ultimately, Acemoglu and Restrepo conclude that the effects have been profound. Since 1980, for instance, they estimate that automation has reduced the wages of men without a high school degree by 8.8 percent and women without a high school degree by 2.3 percent, adjusted for inflation. 

A central conceptual point, Acemoglu says, is that automation should be regarded differently from other forms of innovation, with its own distinct effects in workplaces, and not just lumped in as part of a broader trend toward the implementation of technology in everyday life generally.

Consider again those self-checkout kiosks. Acemoglu calls these types of tools “so-so technology,” or “so-so automation,” because of the tradeoffs they contain: Such innovations are good for the corporate bottom line, bad for service-industry employees, and not hugely important in terms of overall productivity gains, the real marker of an innovation that may improve our overall quality of life.

“Technological change that creates or increases industry productivity, or productivity of one type of labor, creates [those] large productivity gains but does not have huge distributional effects,” Acemoglu says. “In contrast, automation creates very large distributional effects and may not have big productivity effects.”

A new perspective on the big picture

The results occupy a distinctive place in the literature on automation and jobs. Some popular accounts of technology have forecast a near-total wipeout of jobs in the future. Alternately, many scholars have developed a more nuanced picture, in which technology disproportionately benefits highly educated workers but also produces significant complementarities between high-tech tools and labor.

The current study differs at least by degree with this latter picture, presenting a more stark outlook in which automation reduces earnings power for workers and potentially reduces the extent to which policy solutions — more bargaining power for workers, less market concentration — could mitigate the detrimental effects of automation upon wages.

“These are controversial findings in the sense that they imply a much bigger effect for automation than anyone else has thought, and they also imply less explanatory power for other [factors],” Acemoglu says.

Still, he adds, in the effort to identify drivers of income inequality, the study “does not obviate other nontechnological theories completely. Moreover, the pace of automation is often influenced by various institutional factors, including labor’s bargaining power.”

Labor economists say the study is an important addition to the literature on automation, work, and inequality, and should be reckoned with in future discussions of these issues.

“Acemoglu and Restrepo’s paper proposes an elegant new theoretical framework for understanding the potentially complex effects of technical change on the aggregate structure of wages,” says Patrick Kline, a professor of economics at the University of California, Berkeley. “Their empirical finding that automation has been the dominant factor driving U.S. wage dispersion since 1980 is intriguing and seems certain to reignite debate over the relative roles of technical change and labor market institutions in generating wage inequality.”

For their part, in the paper Acemoglu and Restrepo identify multiple directions for future research. That includes investigating the reaction over time by both business and labor to the increase in automation; the quantitative effects of technologies that do create jobs; and the industry competition between firms that quickly adopted automation and those that did not.

The research was supported in part by Google, the Hewlett Foundation, Microsoft, the National Science Foundation, Schmidt Sciences, the Sloan Foundation, and the Smith Richardson Foundation.

]]>
Flocks of assembler robots show potential for making larger structures https://robohub.org/flocks-of-assembler-robots-show-potential-for-making-larger-structures/ Fri, 25 Nov 2022 08:39:00 +0000 https://news.mit.edu/2022/assembler-robots-structures-voxels-1122

Researchers at MIT have made significant steps toward creating robots that could practically and economically assemble nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots. The new system involves large, usable structures built from an array of tiny identical subunits called voxels (the volumetric equivalent of a 2-D pixel). Courtesy of the researchers.

By David L. Chandler

Researchers at MIT have made significant steps toward creating robots that could practically and economically assemble nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots.

The new work, from MIT’s Center for Bits and Atoms (CBA), builds on years of research, including recent studies demonstrating that objects such as a deformable airplane wing and a functional racing car could be assembled from tiny identical lightweight pieces — and that robotic devices could be built to carry out some of this assembly work. Now, the team has shown that both the assembler bots and the components of the structure being built can all be made of the same subunits, and the robots can move independently in large numbers to accomplish large-scale assemblies quickly.

The new work is reported in the journal Nature Communications Engineering, in a paper by CBA doctoral student Amira Abdel-Rahman, Professor and CBA Director Neil Gershenfeld, and three others.

A fully autonomous self-replicating robot assembly system capable of both assembling larger structures, including larger robots, and planning the best construction sequence is still years away, Gershenfeld says. But the new work makes important strides toward that goal, including working out the complex tasks of when to build more robots and how big to make them, as well as how to organize swarms of bots of different sizes to build a structure efficiently without crashing into each other.

As in previous experiments, the new system involves large, usable structures built from an array of tiny identical subunits called voxels (the volumetric equivalent of a 2-D pixel). But while earlier voxels were purely mechanical structural pieces, the team has now developed complex voxels that each can carry both power and data from one unit to the next. This could enable the building of structures that can not only bear loads but also carry out work, such as lifting, moving and manipulating materials — including the voxels themselves.

“When we’re building these structures, you have to build in intelligence,” Gershenfeld says. While earlier versions of assembler bots were connected by bundles of wires to their power source and control systems, “what emerged was the idea of structural electronics — of making voxels that transmit power and data as well as force.” Looking at the new system in operation, he points out, “There’s no wires. There’s just the structure.”

The robots themselves consist of a string of several voxels joined end-to-end. These can grab another voxel using attachment points on one end, then move inchworm-like to the desired position, where the voxel can be attached to the growing structure and released there.

Gershenfeld explains that while the earlier system demonstrated by members of his group could in principle build arbitrarily large structures, as the size of those structures reached a certain point in relation to the size of the assembler robot, the process would become increasingly inefficient because of the ever-longer paths each bot would have to travel to bring each piece to its destination. At that point, with the new system, the bots could decide it was time to build a larger version of themselves that could reach longer distances and reduce the travel time. An even bigger structure might require yet another such step, with the new larger robots creating yet larger ones, while parts of a structure that include lots of fine detail may require more of the smallest robots.

Credit: Amira Abdel-Rahman/MIT Center for Bits and Atoms

As these robotic devices work on assembling something, Abdel-Rahman says, they face choices at every step along the way: “It could build a structure, or it could build another robot of the same size, or it could build a bigger robot.” Part of the work the researchers have been focusing on is creating the algorithms for such decision-making.

“For example, if you want to build a cone or a half-sphere,” she says, “how do you start the path planning, and how do you divide this shape” into different areas that different bots can work on? The software they developed allows someone to input a shape and get an output that shows where to place the first block, and each one after that, based on the distances that need to be traversed.
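As a toy illustration of distance-based placement ordering, the sketch below greedily picks the next voxel closest to the builder’s current position. This is an illustrative stand-in only; the CBA planner additionally decides when to build new or bigger robots and how to split work among a swarm.

import math

def placement_order(voxels, start=(0, 0, 0)):
    """Greedy nearest-first ordering of voxel placements."""
    remaining, pos, order = set(voxels), start, []
    while remaining:
        nxt = min(remaining, key=lambda v: math.dist(pos, v))  # closest voxel
        order.append(nxt)
        remaining.remove(nxt)
        pos = nxt  # the builder now stands at the voxel it just placed
    return order

# A flat 3 x 2 patch of voxels, placed starting from the origin.
shape = [(x, y, 0) for x in range(3) for y in range(2)]
print(placement_order(shape))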

There are thousands of papers published on route-planning for robots, Gershenfeld says. “But the step after that, of the robot having to make the decision to build another robot or a different kind of robot — that’s new. There’s really nothing prior on that.”

While the experimental system can carry out the assembly and includes the power and data links, in the current versions the connectors between the tiny subunits are not strong enough to bear the necessary loads. The team, including graduate student Miana Smith, is now focusing on developing stronger connectors. “These robots can walk and can place parts,” Gershenfeld says, “but we are almost — but not quite — at the point where one of these robots makes another one and it walks away. And that’s down to fine-tuning of things, like the force of actuators and the strength of joints. … But it’s far enough along that these are the parts that will lead to it.”

Ultimately, such systems might be used to construct a wide variety of large, high-value structures. For example, currently the way airplanes are built involves huge factories with gantries much larger than the components they build, and then “when you make a jumbo jet, you need jumbo jets to carry the parts of the jumbo jet to make it,” Gershenfeld says. With a system like this built up from tiny components assembled by tiny robots, “The final assembly of the airplane is the only assembly.”

Similarly, in producing a new car, “you can spend a year on tooling” before the first car gets actually built, he says. The new system would bypass that whole process. Such potential efficiencies are why Gershenfeld and his students have been working closely with car companies, aviation companies, and NASA. But even the relatively low-tech building construction industry could potentially also benefit.

While there has been increasing interest in 3-D-printed houses, today those require printing machinery as large or larger than the house being built. Again, the potential for such structures to instead be assembled by swarms of tiny robots could provide benefits. And the Defense Advanced Research Projects Agency is also interested in the work for the possibility of building structures for coastal protection against erosion and sea level rise.

The new study shows that both the assembler bots and the components of the structure being built can all be made of the same subunits, and the robots can move independently in large numbers to accomplish large-scale assemblies quickly. Courtesy of the researchers.

Aaron Becker, an associate professor of electrical and computer engineering at the University of Houston, who was not associated with this research, calls this paper “a home run — [offering] an innovative hardware system, a new way to think about scaling a swarm, and rigorous algorithms.”

Becker adds: “This paper examines a critical area of reconfigurable systems: how to quickly scale up a robotic workforce and use it to efficiently assemble materials into a desired structure. … This is the first work I’ve seen that attacks the problem from a radically new perspective — using a raw set of robot parts to build a suite of robots whose sizes are optimized to build the desired structure (and other robots) as fast as possible.”

The research team also included MIT-CBA student Benjamin Jenett and Christopher Cameron, who is now at the U.S. Army Research Laboratory. The work was supported by NASA, the U.S. Army Research Laboratory, and CBA consortia funding.

]]>
Holiday robot wishlist for/from Women in Robotics https://robohub.org/holiday-robot-wishlist-for-from-women-in-robotics/ Thu, 24 Nov 2022 11:07:37 +0000 https://svrobo.org/?p=23685

Are you looking for a gift for the women in robotics in your life? Or the up and coming women in robotics in your family? Perhaps these suggestions from our not-for-profit Women in Robotics organization will inspire! We hope these are also good suggestions for non binary people in robotics, and I personally reckon they are ideal for men in the robotics community too. It’s all about the robotics, eh!

Plus OMG it’s less than 50 days until 2023!!! So we’re going to do a countdown with a social media post every day until Dec 31st featuring one of the recent ‘50 women in robotics you need to know about 2022’. It’s in a random order and today we have…

…. Follow us on Twitter, on Facebook, on Linked In, Pinterest or Instagram to find out 🙂

Holiday gift ideas

Visit the Women in Robotics store for t-shirts, mugs, drink bottles, notebooks, stickers, tote bags and more!

 

From Aniekan @_aniekan_

From @mdn_nrbl

From Vanessa Van Decker @VanessaVDecker

From Andra @robotlaunch

Do you have a great robot gift idea?

]]>
TRINITY, the European network for Agile Manufacturing https://robohub.org/trinity-the-european-network-for-agile-manufacturing/ Sun, 20 Nov 2022 09:30:03 +0000 https://robohub.org/?p=205906

Fast-changing customer demands in modern society call for flexibility, innovation and rapid response from manufacturers and organisations which, in order to respond to market needs, are creating tools and processes that embrace change.

That approach is Agile Manufacturing – and the Trinity project is the magnet that connects every segment of agile manufacturing with everyone involved, creating a network that supports people, organisations, production and processes.

The main objective of TRINITY is to create a network of multidisciplinary and synergistic local digital innovation hubs (DIHs) composed of research centres, companies, and university groups that cover a wide range of topics that can contribute to agile production: advanced robotics as the driving force and digital tools, data privacy and cyber security technologies to support the introduction of advanced robotic systems in the production processes.

Trinity network

The Trinity project is funded by Horizon 2020 the European Union research and innovation programme.

Currently, Trinity brings together a network of 16 Digital Innovation Hubs (DIHs) and so far has 37 funded projects with 8.1 million euros in funding.

The network starts its operation by developing demonstrators in the areas of robotics it identified as the most promising for advancing agile production, e.g. collaborative robotics including sensory systems to ensure safety, effective user interfaces based on augmented reality and speech, reconfigurable robot workcells and peripheral equipment (fixtures, jigs, grippers, …), programming by demonstration, IoT, and secure wireless networks.

These demonstrators will serve as a reference implementation for two rounds of open calls for application experiments, where companies with agile production needs and sound business plans will be supported by TRINITY DIHs to advance their manufacturing processes.

Trinity services

Besides technology-centred services – primarily laboratories with advanced robot technologies and the know-how to develop innovative application experiments – the TRINITY network of DIHs also offers training and consulting services, including support for business planning and access to financing.

All the data (including the current network list with partners, type of organisation and contact information) is available to everybody searching for the right type of help and guidance. The list also contains information regarding the Trinity funded projects and can be found on the website.

Robotics solution catalogue

Discover a wide range of Trinity solutions – from use cases like Predictable bin picking of shafts and axles, Robotic solution for accurate grinding of complex metal parts, End-to-end automatic handling of small packages and many more, to modules such as Additive TiG welding, Depth-sensor Safety Model for HRC, Environment Detection, Mobile Robot Motion Control and others – all of which can be found in the robotics solution catalogue on the Trinity website.

Each of the solutions is accompanied by a video that shows what made it successful. So whether it‘s about the catalogue, modules, training materials, SME results, webinars or other Trinity-related topics, the Trinity YouTube channel is where you will find it all.

Join us

Trinity wants to expand its current community made of more than 90 SMEs and around 20 organisations. If you are a Digital Innovation Hub, an innovative SME, a technical university, or a research centre focused on agile manufacturing, do not hesitate to contact us and exploit all the opportunities that Trinity offers.

You will be on board an ecosystem full of sectoral industry experts, facilities to put in practice your innovative ideas, and several partners with whom to develop new projects. Everything is on a user-centric platform and you will receive continuous support from the community!

Discuss with us

Be a part of the Trinity world by joining in on the discussions on agile production on social media or stay in touch with the latest news by signing in for the newsletter or simply by visiting the Trinity website.

Whatever your preferred type of communication is, all the contact information can be found here – so let‘s stay in touch!

]]>
Fighting tumours with magnetic bacteria https://robohub.org/fighting-tumours-with-magnetic-bacteria/ Sat, 19 Nov 2022 10:30:18 +0000 https://robohub.org/?p=205901

Magnetic bacteria (grey) can squeeze through narrow intercellular spaces to cross the blood vessel wall and infiltrate tumours. (Visualisations: Yimo Yan / ETH Zurich)

By Fabio Bergamin

Scientists around the world are researching how anti-cancer drugs can most efficiently reach the tumours they target. One possibility is to use modified bacteria as “ferries” to carry the drugs through the bloodstream to the tumours. Researchers at ETH Zurich have now succeeded in controlling certain bacteria so that they can effectively cross the blood vessel wall and infiltrate tumour tissue.

Led by Simone Schürle, Professor of Responsive Biomedical Systems, the ETH Zurich researchers chose to work with bacteria that are naturally magnetic due to iron oxide particles they contain. These bacteria of the genus Magnetospirillum respond to magnetic fields and can be controlled by magnets from outside the body; for more on this, see an earlier article in ETH News.

Exploiting temporary gaps

In cell cultures and in mice, Schürle and her team have now shown that a rotating magnetic field applied at the tumour improves the bacteria’s ability to cross the vascular wall near the cancerous growth. At the vascular wall, the rotating magnetic field propels the bacteria forward in a circular motion.

To better understand how this crossing of the vessel wall works, a detailed look is necessary: the blood vessel wall consists of a layer of cells and serves as a barrier between the bloodstream and the tumour tissue, which is permeated by many small blood vessels. Narrow spaces between these cells allow certain molecules from the blood to pass through the vessel wall. How large these intercellular spaces are is regulated by the cells of the vessel wall, and they can temporarily become wide enough to allow even bacteria to pass through.

Strong propulsion and high probability

With the help of experiments and computer simulations, the ETH Zurich researchers were able to show that propelling the bacteria using a rotating magnetic field is effective for three reasons. First, propulsion via a rotating magnetic field is ten times more powerful than propulsion via a static magnetic field. The latter merely sets the direction and the bacteria have to move under their own power.

The second and most critical reason is that bacteria driven by the rotating magnetic field are constantly in motion, travelling along the vascular wall. This makes them more likely to encounter the gaps that briefly open between vessel wall cells compared to other propulsion types, in which the bacteria’s motion is less explorative. And third, unlike other methods, the bacteria do not need to be tracked via imaging. Once the magnetic field is positioned over the tumour, it does not need to be readjusted.

“Cargo” accumulates in tumour tissue

“We make use of the bacteria’s natural and autonomous locomotion as well,” Schürle explains. “Once the bacteria have passed through the blood vessel wall and are in the tumour, they can independently migrate deep into its interior.” For this reason, the scientists use the propulsion via the external magnetic field for just one hour – long enough for the bacteria to efficiently pass through the vascular wall and reach the tumour.

Such bacteria could carry anti-cancer drugs in the future. In their cell culture studies, the ETH Zurich researchers simulated this application by attaching liposomes (nanospheres of fat-like substances) to the bacteria. They tagged these liposomes with a fluorescent dye, which allowed them to demonstrate in the Petri dish that the bacteria had indeed delivered their “cargo” inside the cancerous tissue, where it accumulated. In a future medical application, the liposomes would be filled with a drug.

Bacterial cancer therapy

Using bacteria as ferries for drugs is one of two ways that bacteria can help in the fight against cancer. The other approach is over a hundred years old and currently experiencing a revival: using the natural propensity of certain species of bacteria to damage tumour cells. This may involve several mechanisms. In any case, it is known that the bacteria stimulate certain cells of the immune system, which then eliminate the tumour.

“We think that we can increase the efficacy of bacterial cancer therapy by using an engineering approach.”

– Simone Schürle

Multiple research projects are currently investigating the efficacy of E. coli bacteria against tumours. Today, it is possible to modify bacteria using synthetic biology to optimise their therapeutic effect, reduce side effects and make them safer.

Making non-magnetic bacteria magnetic

Yet to use the inherent properties of bacteria in cancer therapy, the question of how these bacteria can reach the tumour efficiently still remains. While it is possible to inject the bacteria directly into tumours near the surface of the body, this is not possible for tumours deep inside the body. That is where Professor Schürle’s microrobotic control comes in. “We believe we can use our engineering approach to increase the efficacy of bacterial cancer therapy,” she says.

E. coli used in the cancer studies is not magnetic and thus cannot be propelled and controlled by a magnetic field. In general, magnetic responsiveness is a very rare phenomenon among bacteria. Magnetospirillum is one of the few genera of bacteria that have this property.

Schürle therefore wants to make E. coli bacteria magnetic as well. This could one day make it possible to use a magnetic field to control clinically used therapeutic bacteria that have no natural magnetism.

]]>
Combating climate change with a soft robotics fish https://robohub.org/tetherless-open-water-swimming/ Thu, 17 Nov 2022 10:00:09 +0000 https://robohub.org/?p=205890

Growing up in Rhode Island (the Ocean State), I lived very close to the water. Over the years, I have seen the effects of sea level rise and rapid erosion. Entire houses and beaches have slowly been consumed by the tide. I have witnessed first hand how climate change is rapidly changing the ocean ecosystem. Sometimes I feel overwhelmed by the inexorability of climate change. What can we do in the face of such a global, almost incomprehensible dilemma? The only way I can overcome this perception is by committing to doing something with my life to help, even if it’s in a small way. I think with such a big issue, the only way forward is by starting small, identifying one niche I can work in, and seeing how I can shape my research around solving that challenge.

One major challenge is the rapid rise in global ocean temperature. When scientists look to make climate associations using temperature data, they generally use fixed temperature loggers attached to buoys or on the ocean floor. Unfortunately, this approach discounts the area between the ocean’s surface and floor. Variable ocean conditions create microclimates, pockets of the ocean that are unaffected by general climate trends. Scientists have shown that most organisms experience climate change via these microclimates. Fish are greatly affected by this rapid increase in temperature, as they can only lay eggs within a narrow range of temperatures. Microclimates are changing temperature so quickly that many species cannot adapt fast enough to survive. At this rate, 60% of fish species could go extinct by 2100.

Of course, fish are not the only organisms affected by the rapid increase in temperature. Coral in the Great Barrier Reef can only survive within a narrow temperature range, and as temperatures rise, reefs are experiencing mass coral bleaching. AIMS, the Australian Institute of Marine Science, the government agency that monitors the Great Barrier Reef, utilizes divers pulled behind boats to record reef observations and collect data. Unfortunately, this has led to some casualties due to shark attacks. The agency has begun deploying large ocean gliders, almost seven feet in length, that can mitigate this risk. These robots come with a hefty price tag of $125,000 to $500,000 and are also too large to navigate portions of the reef.

Our solution in the Soft Robotics Lab at Worcester Polytechnic Institute is to build a free-swimming (tetherless), biologically inspired robotic fish, funded in part by the National Science Foundation Future of Robots in the Workplace Research and Development Program. Our goal is for the robot to navigate the complex environment of the Great Barrier Reef and record dense three-dimensional temperature data throughout the water column. Moreover, we will use non-hazardous and affordable materials for the fish’s body. Since our motivation is to create a tool for climate research, a robot that is cheap and easy to manufacture will increase its effectiveness. Our approach is in stark contrast to traditional autonomous underwater vehicles that utilize propellers, which are noisy and incongruous with underwater life. We chose to mimic the motion of real fish to reduce the environmental impact of our robot and enable close observation of other real fish.

We are, of course, not the first people to build a robotic fish. In 1994, MIT produced the RoboTuna, a fully rigid fish robot, and since then there have been many different iterations of fish robots. Some, like the RoboTuna, have been made of fully rigid materials and use motors to actuate the caudal (tail) fin that propels the fish. However, this does not replicate the fluid motion achieved by real fish as they swim. A possible solution is to use soft materials. Soft designs up to this point have used a silicone tail actuated pneumatically or hydraulically. Unfortunately, these robots cannot operate in rough environments, since any cuts or abrasions to the silicone could cause a leak in the system and lead to total failure of the tail’s actuation. Other robots have combined more durable rigid materials, actuated with cables, with an attached soft silicone end that bends with the force of the water. All of these previous robots are difficult to manufacture and require institutional knowledge to recreate.

MIT Robotuna and MIT SOFI robots

We have fabricated a 3D-printed, cable-actuated wave spring tail made from soft materials that can drive a small robotic fish. The wave spring gives the robot its biologically inspired shape, yet it can bend fluidly like the silicone-based robots and real fish. The wave spring is entirely 3D printed from a flexible material that is affordable and easy to use. This material and method create a very soft yet durable robot that withstands harsh treatment and runs for hundreds of thousands of cycles without any degradation to the robot’s systems. The robot sets itself apart by being very easy to assemble, with only a handful of parts, most of which can be 3D printed.
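
To give a concrete sense of the actuation, here is a minimal sketch in Python of the kind of sinusoidal cable command that could drive a tail like this one. The tail-beat frequency, amplitude, and servo call are illustrative assumptions, not our actual firmware.

```python
# Illustrative only: a sinusoidal cable command of the kind that could
# drive a cable-actuated tail. Frequency, amplitude and the (commented-out)
# servo call are assumptions, not the robot's actual firmware.
import math
import time

FREQUENCY_HZ = 1.5    # assumed tail-beat frequency
AMPLITUDE_DEG = 25.0  # assumed peak servo deflection pulling the cables

def tail_angle(t: float) -> float:
    """Servo angle that tensions the left/right cables alternately,
    bending the wave spring side to side like a swimming fish."""
    return AMPLITUDE_DEG * math.sin(2.0 * math.pi * FREQUENCY_HZ * t)

t = 0.0
while t < 2.0:                # two seconds of commands
    angle = tail_angle(t)
    # send_to_servo(angle)    # hypothetical hardware call
    print(f"t={t:4.2f}s  servo angle={angle:+6.1f} deg")
    time.sleep(0.05)
    t += 0.05
```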

The wave spring has a biologically inspired design. Reef fish are morphologically diverse but share a similar body shape, which we emulate with a tapered oval design. The spring itself is composed of a mesh of diamond-shaped cells that can compress and bend. To restrict our robot to lateral bending only, we added supports down the dorsal and ventral edges of the wave spring.

Using this design, we have successfully created a robotic fish. The robot can swim freely in a fish tank, a swimming pool, and a lake. While testing the fish in these environments, we found that its speed and performance were comparable to other fish robots operating under similar parameters. To waterproof the robot (protecting the electronics required for tetherless swimming), we had to add a latex skin. This does increase the manufacturing complexity of the design, so we will look to improve not only the robot’s performance but also its design, to keep the robot simple yet high-functioning.

Most importantly, we will add the sensors required to collect data such as temperature, which is imperative to a better understanding of the oceans’ rapidly changing microclimates. It’s crucial that we remain focused on this goal, as it drives not only the robot’s design but also our motivation for doing this work. Climate change is the foremost crisis facing our world. I encourage everyone to connect their interests and work, no matter the field, to this issue in some way, as we are the only ones who can do something about it.

]]>
#IROS2022 best paper awards https://robohub.org/iros2022-best-paper-awards/ Mon, 14 Nov 2022 09:00:14 +0000 https://robohub.org/?p=205912

Did you have the chance to attend the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022) in Kyoto? Here we bring you the papers that received an award this year in case you missed them. Congratulations to all the winners and finalists!

Best Paper Award on Cognitive Robotics

Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation. Payam Jome Yazdian, Mo Chen, and Angelica Lim.

Best RoboCup Paper Award

RCareWorld: A Human-centric Simulation World for Caregiving Robots. Ruolin Ye, Wenqiang Xu, Haoyuan Fu, Rajat Kumar Jenamani, Vy Nguyen, Cewu Lu, Katherine Dimitropoulou, and Tapomayukh Bhattacharjee.

Best Paper Award on Robot Mechanisms and Design

Aerial Grasping and the Velocity Sufficiency Region. Tony G. Chen, Kenneth Hoffmann, JunEn Low, Keiko Nagami, David Lentink, and Mark Cutkosky.

Best Entertainment and Amusement Paper Award

Robot Learning to Paint from Demonstrations. Younghyo Park, Seunghun Jeon, and Taeyoon Lee.

Best Paper Award on Safety, Security, and Rescue Robotics

Power-based Safety Layer for Aerial Vehicles in Physical Interaction using Lyapunov Exponents. Eugenio Cuniato, Nicholas Lawrance, Marco Tognon, and Roland Siegwart.

Best Paper Award on Agri-Robotics

Explicitly Incorporating Spatial Information to Recurrent Networks for Agriculture. Claus Smitt, Michael Allan Halstead, Alireza Ahmadi, and Christopher Steven McCool.

Best Paper Award on Mobile Manipulation

Robot Learning of Mobile Manipulation with Reachability Behavior Priors. Snehal Jauhri, Jan Peters, and Georgia Chalvatzaki.

Best Application Paper Award

Soft Tissue Characterisation Using a Novel Robotic Medical Percussion Device with Acoustic Analysis and Neural Networks. Pilar Zhang Qiu, Yongxuan Tan, Oliver Thompson, Bennet Cobley, and Thrishantha Nanayakkara.

Best Paper Award for Industrial Robotics Research for Applications

Absolute Position Detection in 7-Phase Sensorless Electric Stepper Motor. Vincent Groenhuis, Gijs Rolff, Koen Bosman, Leon Abelmann, and Stefano Stramigioli.

ABB Best Student Paper Award

FAR Planner: Fast, Attemptable Route Planner using Dynamic Visibility Update. Fan Yang, Chao Cao, Hongbiao Zhu, Jean Oh, and Ji Zhang.

Best Paper Award

SpeedFolding: Learning Efficient Bimanual Folding of Garments. Yahav Avigal, Lars Berscheid, Tamim Asfour, Torsten Kroeger, and Ken Goldberg.

]]>
Robot Talk Podcast – October episodes https://robohub.org/robot-talk-podcast-october-episodes/ Sat, 12 Nov 2022 09:41:53 +0000 https://robohub.org/?p=205938

Episode 20 – Paul Dominick Baniqued

Claire talked to Dr Paul Dominick Baniqued from The University of Manchester all about brain-computer interface technology and rehabilitation robotics.

Paul Dominick Baniqued received his PhD in robotics and immersive technologies from the University of Leeds. His research tackled the integration of a brain-computer interface with virtual reality and hand exoskeletons for motor rehabilitation and skills learning. He is currently working as a postdoctoral researcher on cyber-physical systems and digital twins in the Robotics for Extreme Environments Group at the University of Manchester.

Episode 21 – Sean Katagiri

Claire chatted to Sean Katagiri from The National Robotarium all about underwater robots, offshore energy, and other industrial applications of robotics.

Sean Katagiri is a robotics engineer who has the pleasure of being surrounded by and working with robots for a living. His experience in robotics comes mainly from the subsea domain, but he has also worked with wheeled and legged ground robots. Sean is very excited to have recently started his role at The National Robotarium, whose goal is to take ideas from academia and turn them into real-world solutions.

Episode 22 – Iveta Eimontaite

Claire talked to Dr Iveta Eimontaite from Cranfield University about psychology, human-robot interaction, and industrial robots.

Iveta Eimontaite studied Cognitive Neuroscience at the University of York and completed her PhD in Cognitive Psychology at Hull University. Prior to joining Cranfield University, Iveta held research positions at Bristol Robotics Laboratory and Sheffield Robotics. Her work mainly focuses on behavioural and cognitive aspects of Human-Technology Interaction, with particular interest in user needs and requirements for the successful integration of technology within workplace and social environments.

Episode 23 – Mickey Li

Claire talked to Mickey Li from the University of Bristol about aerial robotics, building inspection and multi-robot teams.

Mickey Li is a Robotics and Autonomous systems PhD researcher at the Bristol Robotics Laboratory and the University of Bristol. His research focuses on optimal multi-UAV path planning for building inspection, in particular how guarantees can be provided despite vehicle failures. Most recently he has been developing a portable development and deployment infrastructure for multi-UAV experimentation for the BRL Flight Arena inspired by advances in cloud computing.

]]>
The original “I, Robot” had a Frankenstein complex https://robohub.org/the-original-i-robot-had-a-frankenstein-complex/ Wed, 09 Nov 2022 09:32:38 +0000 https://robohub.org/?p=205933 Eando Binder’s Adam Link sci-fi series predates Isaac Asimov’s more famous robots, raising issues of trust, control, and intellectual property.

Read more about these challenges in my Science Robotics article here.

And yes, there’s a John Wick they-killed-my-dog scene in there too.

Snippet for the article with some expansion:

In 1939, Eando Binder began a short story cycle about a robot named Adam Link. The first story in Binder’s series was titled “I, Robot.” That clever phrase would be recycled by Isaac Asimov’s publisher (against Asimov’s wishes) for his famous short story cycle, begun in 1940, about the Three Laws of Robotics. But the Binder series had another influence on Asimov: the stories explicitly related Adam’s poor treatment to how humans reacted to the Creature in Frankenstein. (After the police killed his dog, shades of John Wick again, and put him in jail, Adam conveniently finds a copy of Mary Shelley’s Frankenstein, and the penny drops on why everyone is so mean to him.) In response, Asimov coined the term “the Frankenstein Complex” in his stories[1], with his characters stating that the Three Laws of Robotics gave humans the confidence in robots to overcome this neurosis.

Note that the Frankenstein Complex is different from the Uncanny Valley. In the Uncanny Valley, the robot is creepy because it almost, but not quite, looks and moves like a human or animal; in the Frankenstein Complex, people believe that intelligent robots, regardless of what they look like, will rise up against their creators.

Whether humans really have a Frankenstein Complex is a source of endless debate. In a seminal paper, Frederic Kaplan presented the baseline assessment, still in use today, of cultural differences and the role of popular media in trust of robots[2]. Humanoid robotics researchers have even developed a formal measure of a user’s perception of the Frankenstein Complex[3], so that group of HRI researchers believes the Frankenstein Complex is a real phenomenon. But Binder’s Adam Link story cycle is also worth reexamining because it foresaw two additional challenges for robots and society that Asimov and other early writers did not: what is the appropriate form of control, and can a robot own intellectual property?

You can get the Adam Link stories from the web as individual stories published in the online back issues of Amazing Stories, but it is probably easier to get the story collection here. Binder did a fix-up novel in which he organized the stories into a chronology and added segues between them.

If you’d like to learn more about

References

[1] Frankenstein Monster, Encyclopedia of Science Fiction, https://sf-encyclopedia.com/entry/frankenstein_monster, accessed July 28, 2022

[2] F. Kaplan, “Who is afraid of the humanoid? Investigating cultural differences in the acceptance of robots,” International Journal of Humanoid Robotics, 1–16 (2004)

[3] Syrdal, D.S., Nomura, T., Dautenhahn, K. (2013). The Frankenstein Syndrome Questionnaire – Results from a Quantitative Cross-Cultural Survey. In: Herrmann, G., Pearson, M.J., Lenz, A., Bremner, P., Spiers, A., Leonards, U. (eds) Social Robotics. ICSR 2013. Lecture Notes in Computer Science(), vol 8239. Springer, Cham. https://doi.org/10.1007/978-3-319-02675-6_27

]]>
General purpose robots should not be weaponized: An open letter to the robotics industry and our communities https://robohub.org/general-purpose-robots-should-not-be-weaponized-an-open-letter-to-the-robotics-industry-and-our-communities/ Mon, 07 Nov 2022 09:31:53 +0000 https://robohub.org/?p=205928

Over the course of the past year, Open Robotics has taken time from our day-to-day efforts to work with our colleagues in the field to consider how the technology we develop could negatively impact society as a whole. In particular, we were concerned with the weaponization of mobile robots. After a lot of thoughtful discussion, deliberation, and debate with our colleagues at organizations like Boston Dynamics, Clearpath Robotics, Agility Robotics, ANYbotics, and Unitree, we have co-authored and signed an open letter to the robotics community entitled “General Purpose Robots Should Not Be Weaponized.” You can read the letter, in its entirety, here. Additional media coverage of the letter can be found in Axios, and The Robot Report.

The letter codifies internal policies we’ve had at Open Robotics since our inception and we think it captures the sentiments of much of the ROS community. For our part, we have pledged that we will not weaponize mobile robots, and we do not support others doing so either. We believe that the weaponization of robots raises serious ethical issues and harms public trust in technologies that can have tremendous benefits to society. This is but a first step, and we look forward to working with policy makers, the robotics community, and the general public, to continue to promote the ethical use of robots and prohibit their misuse. This is but one of many discussions that must happen between robotics professionals, the general public, and lawmakers about advanced technologies, and quite frankly, we think it is long overdue.

Due to the permissive nature of the licenses we use for ROS, Gazebo, and our other projects, it is difficult, if not impossible, for us to limit the use of the technology we develop to build weaponized systems. However, we do not condone such efforts, and we will have no part in directly assisting those who do with our technical expertise or labor. This has been our policy from the start, and will continue to be our policy. We encourage the ROS community to take a similar stand and to work with their local lawmakers to prevent the weaponization of robotic systems. Moreover, we hope the entire ROS community will take time to reflect deeply on the ethical implications of their work, and help others better understand both the positive and negative outcomes that are possible in robotics.

]]>
How shoring up drones with artificial intelligence helps surf lifesavers spot sharks at the beach https://robohub.org/how-shoring-up-drones-with-artificial-intelligence-helps-surf-lifesavers-spot-sharks-at-the-beach/ Sat, 05 Nov 2022 11:16:25 +0000 https://robohub.org/?p=205884

A close encounter between a white shark and a surfer. Author provided.

By Cormac Purcell (Adjunct Senior Lecturer, UNSW Sydney) and Paul Butcher (Adjunct Professor, Southern Cross University)

Australian surf lifesavers are increasingly using drones to spot sharks at the beach before they get too close to swimmers. But just how reliable are they?

Discerning whether that dark splodge in the water is a shark or just, say, seaweed isn’t always straightforward and, in reasonable conditions, drone pilots generally make the right call only 60% of the time. While this has implications for public safety, it can also lead to unnecessary beach closures and public alarm.

Engineers are trying to boost the accuracy of these shark-spotting drones with artificial intelligence (AI). While AI systems show great promise in the lab, they are notoriously difficult to get right in the real world, and so remain out of reach for surf lifesavers. And importantly, overconfidence in such software can have serious consequences.

With these challenges in mind, our team set out to build the most robust shark detector possible and test it in real-world conditions. By using masses of data, we created a highly reliable mobile app for surf lifesavers that could not only improve beach safety, but help monitor the health of Australian coastlines.

A white shark being tracked by a drone. Author provided.

Detecting dangerous sharks with drones

The New South Wales government is investing more than A$85 million in shark mitigation measures over the next four years. Of all the approaches on offer, a 2020 survey showed drone-based shark surveillance is the public’s preferred method of protecting beach-goers.

The state government has been trialling drones as shark-spotting tools since 2016, and with Surf Life Saving NSW since 2018. Trained surf lifesaving pilots fly the drone over the ocean at a height of 60 metres, watching the live video feed on portable screens for the shape of sharks swimming under the surface.

Identifying sharks by carefully analysing the video footage in good conditions seems easy. But water clarity, sea glitter (sea-surface reflection), animal depth, pilot experience and fatigue all reduce the reliability of real-time detection to a predicted average of 60%. This reliability falls further when conditions are turbid.

Pilots also need to confidently identify the species of shark and tell the difference between dangerous and non-dangerous animals, such as rays, which are often misidentified.

Identifying shark species from the air.

AI-driven computer vision has been touted as an ideal tool to virtually “tag” sharks and other animals in the video footage streamed from the drones, and to help identify whether a species nearing the beach is cause for concern.

AI to the rescue?

Early results from previous AI-enhanced shark-spotting systems have suggested the problem has been solved, as these systems report detection accuracies of over 90%.

But scaling these systems to make a real-world difference across NSW beaches has been challenging.

AI systems are trained to locate and identify species using large collections of example images and perform remarkably well when processing familiar scenes in the real world.

However, problems quickly arise when they encounter conditions not well represented in the training data. As any regular ocean swimmer can tell you, every beach is different – the lighting, weather and water conditions can change dramatically across days and seasons.

Animals can also frequently change their position in the water column, which means their visible characteristics (such as their outline) changes, too.

All this variation makes it crucial for training data to cover the full gamut of conditions, or that AI systems be flexible enough to track the changes over time. Such challenges have been recognised for years, giving rise to the new discipline of “machine learning operations”.

Essentially, machine learning operations explicitly recognises that AI-driven software requires regular updates to maintain its effectiveness.
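
To make that concrete, here is a minimal sketch of such an update check: it compares the detector’s confidence scores on fresh footage against a validation baseline and flags the model for retraining when the distribution drifts. The statistical test and thresholds are assumptions chosen for illustration, not a description of any deployed system.

```python
# A minimal sketch of that maintenance loop: compare the detector's
# confidence scores on fresh footage against a validation baseline and
# flag retraining when the distribution drifts. The KS test and threshold
# are assumptions made for this example.
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(baseline_scores, new_scores, p_threshold=0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test between confidence distributions."""
    _, p_value = ks_2samp(baseline_scores, new_scores)
    return p_value < p_threshold

rng = np.random.default_rng(0)
baseline = rng.beta(8, 2, size=2000)    # stand-in validation confidences
new_beach = rng.beta(4, 3, size=500)    # stand-in scores from unseen conditions

if needs_retraining(baseline, new_beach):
    print("Confidence distribution has drifted: schedule labelling and retraining.")
```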

Examples of the drone footage used in our huge dataset.

Building a better shark spotter

We aimed to overcome these challenges with a new shark detector mobile app. We gathered a huge dataset of drone footage, and shark experts then spent weeks inspecting the videos, carefully tracking and labelling sharks and other marine fauna in the hours of footage.

Using this new dataset, we trained a machine learning model to recognise ten types of marine life, including different species of dangerous sharks such as great white and whaler sharks.

And then we embedded this model into a new mobile app that can highlight sharks in live drone footage and predict the species. We worked closely with the NSW government and Surf Lifesaving NSW to trial this app on five beaches during summer 2020.
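
As a rough sketch of what the app’s core loop might look like, the example below runs a generic detector frame by frame over drone video and draws the results. The detector callable, its return format, and the species list are placeholders, not our production code.

```python
# Hedged sketch of a frame-by-frame detection loop over drone video.
# The `detector` callable and the species list are hypothetical; OpenCV
# is only used here for frame capture and drawing.
import cv2

SPECIES = ["white shark", "whaler shark", "bull shark", "ray", "turtle",
           "dolphin", "seal", "fish school", "surfer", "seaweed"]  # assumed labels

def run_app(video_source=0, detector=None):
    cap = cv2.VideoCapture(video_source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # detector(frame) is assumed to return (label, confidence, box) tuples
        for label, conf, (x, y, w, h) in (detector(frame) if detector else []):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
            cv2.putText(frame, f"{label} {conf:.0%}", (x, y - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        cv2.imshow("shark spotter", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```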

A drone in Surf Life Saving NSW livery preparing to go on patrol. Author provided.

Our AI shark detector did quite well. It identified dangerous sharks on a frame-by-frame basis 80% of the time, in realistic conditions.

We deliberately went out of our way to make our tests difficult by challenging the AI to run on unseen data taken at different times of year, or from different-looking beaches. These critical tests on “external data” are often omitted in AI research.

A more detailed analysis turned up common-sense limitations: white, whaler and bull sharks are difficult to tell apart because they look similar, while small animals (such as turtles and rays) are harder to detect in general.

Spurious detections (like mistaking seaweed as a shark) are a real concern for beach managers, but we found the AI could easily be “tuned” to eliminate these by showing it empty ocean scenes of each beach.
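
One plausible shape for that per-beach tuning step is simple data preparation: empty-ocean frames from each beach are folded into the fine-tuning set as explicit negatives. The directory layout and manifest format below are assumptions made for illustration.

```python
# One plausible reading of the per-beach "tuning" step: add empty-ocean
# frames as explicit negatives so the detector learns that beach's seaweed,
# glare and bottom texture. Paths and manifest format are assumptions.
from pathlib import Path

def build_finetune_manifest(labelled_dir: str, empty_dir: str) -> list[dict]:
    manifest = []
    for img in sorted(Path(labelled_dir).glob("*.jpg")):
        # in practice, boxes come from the labelling tool's export;
        # a dummy box stands in for them here
        manifest.append({"image": str(img), "boxes": [(0, 0, 10, 10)]})
    for img in sorted(Path(empty_dir).glob("*.jpg")):
        manifest.append({"image": str(img), "boxes": []})  # background only
    return manifest

manifest = build_finetune_manifest("frames/labelled", "frames/empty_ocean")
negatives = sum(1 for entry in manifest if not entry["boxes"])
print(f"{negatives} empty-ocean negatives added to the fine-tuning set")
```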

An example of where the AI gets it wrong: seaweed identified as sharks. Author provided.

The future of AI for shark spotting

In the short term, AI is now mature enough to be deployed in drone-based shark-spotting operations across Australian beaches. But, unlike regular software, it will need to be monitored and updated frequently to maintain its high reliability of detecting dangerous sharks.

An added bonus is that such a machine learning system for spotting sharks would also continually collect valuable ecological data on the health of our coastline and marine fauna.

In the longer term, getting the AI to look at how sharks swim and using new AI technology that learns on-the-fly will make AI shark detection even more reliable and easy to deploy.

The NSW government has new drone trials for the coming summer, testing the usefulness of efficient long-range flights that can cover more beaches.

AI can play a key role in making these flights more effective, enabling greater reliability in drone surveillance, and may eventually lead to fully-automated shark-spotting operations and trusted automatic alerts.

The authors acknowledge the substantial contributions from Dr Andrew Colefax and Dr Andrew Walsh at Sci-eye.

This article appeared in The Conversation.

]]>
ep.362: Precise Navigation using LEO Satellites, with Tyler Reid https://robohub.org/precise-navigation-using-leo-satellites/ Wed, 02 Nov 2022 17:21:28 +0000 https://robohub.org/?p=205310

Dr. Tyler Reid, co-founder and CTO of Xona Space Systems, discusses a new type of global navigation satellite system (GNSS). Xona Space Systems plans to provide centimeter-level positioning accuracy and will serve the emerging autonomous vehicle community, where precise navigation is key. Reid discusses the advantages and technical challenges of a low Earth orbit (LEO) solution.

Tyler Reid

Tyler Reid is co-founder and CTO of Xona Space Systems. Previously, Tyler worked as a Research Engineer at the Ford Motor Company in localization and mapping for self-driving cars. He has also worked as an engineer at Google and as a lecturer at Stanford University, where he co-taught the GPS course. Tyler received his PhD (2017) and MSc (2012) in Aeronautics and Astronautics from Stanford, and his B.Eng. (2010) in Mechanical Engineering from McGill.

]]>
Robots come out of the research lab https://robohub.org/robots-come-out-of-the-research-lab/ Wed, 02 Nov 2022 14:33:52 +0000 https://robohub.org/?p=205875

This year’s Swiss Robotics Day – an annual event run by the EPFL-led National Centre of Competence in Research (NCCR) Robotics – will be held at the Beaulieu convention center in Lausanne. For the first time, this annual event will take place over two days: the first day, on 4 November, will be reserved for industry professionals, while the second, on 5 November, will be open to the public.

Visitors at this year’s Swiss Robotics Day are in for a glimpse of some exciting new technology: a robotic exoskeleton that enables paralyzed patients to ski, a device the width of a strand of hair that can be guided through a human vein, a four-legged robot that can walk over obstacles, an artificial skin that can diagnose early-stage Parkinson’s, a swarm of flying drones, and more.

The event, now in its seventh year, was created by NCCR Robotics in 2015. It has expanded into a leading conference for the Swiss robotics industry, bringing together university researchers, businesses and citizens from across the country. For Swiss robotics experts, the event provides a chance to meet with peers, share ideas, explore new business opportunities and look for promising new hires. That’s what they’ll do on Friday, 4 November – the day reserved for industry professionals.

On Saturday, 5 November, the doors will open to the general public. Visitors of all ages can discover the latest inventions coming out of Swiss R&D labs and fabricated by local companies – including some startups. The event will feature talks and panel discussions on topics such as ethics in robotics, space robotics, robotics in art and how artificial intelligence can be used to promote sustainable development – all issues that will shape the future of the industry. PhD students will provide a snapshot of where robotics research stands today, while school-age children can sign up for robot-building workshops. Teachers can take part in workshops given by the Roteco robot teaching community and see how robotics technology can support learning in the classroom.

In the convention center’s 5,000 m² exhibit hall, some 70 booths will be set up with all sorts of robot demonstrations, complete with an area for flying drones. Technology developed as part of the Cybathlon international competition will be on display; this competition was introduced by NCCR Robotics in 2016 to encourage research on assistance systems for people with disabilities. Silke Pan will give a dance performance with a robotic exoskeleton, choreographed by Antoine Le Moal of the Béjart ballet company. Talks will be given in French and English. Entrance is free of charge but registration is required.

Laying the foundation for future success

The 2022 Swiss Robotics Day will mark the end of NCCR Robotics, capping 12 years of cutting-edge research. The center was funded by the Swiss National Science Foundation and has sponsored R&D at over 30 labs in seven Swiss institutions: EPFL, ETH Zurich, the University of Zurich, IDSIA-SUPSI-USI in Lugano, the University of Bern, EMPA and the University of Basel. NCCR Robotics has given rise to 16 spin-offs in high-impact fields like portable robots, drones, search-and-rescue systems and education. Together the spin-offs have raised over CHF 100 million in funding and some of them, like Flyability and ANYbotics, have grown into established businesses creating hundreds of high-tech jobs. The center has also rolled out several educational and community initiatives to further the teaching of robotics in Switzerland.

After the center closes, some of its activities – especially those related to technology transfer – will be carried out by the Innovation Booster Robotics program sponsored by Innosuisse and housed at EPFL. This program, initially funded for three years, is designed to promote robotics in universities and the business world.

A day for industry professionals only

The first day of the event, 4 November, is intended for robotics-industry businesses, investors, researchers, students and journalists. It will kick off with a talk by Robin Murphy, a world-renowned expert in rescue robotics and a professor at Texas A&M University; she will be followed by Auke Ijspeert from EPFL’s Biorobotics Laboratory, Elena García Armada from the Center for Automation and Robotics in Spain, Raffaello D’Andrea (a pioneer in robotics-based inventory management) from ETH Zurich, Thierry Golliard from Swiss Post and Adrien Briod, the co-founder of Flyability.

In the afternoon, a panel discussion will explore how robots and artificial intelligence are changing the workplace. Experts will include Dario Floreano from NCCR Robotics and EPFL, Rafael Lalive from the University of Lausanne, Alisa Rupenyan-Vasileva from ETH Zurich, Agnès Petit Markowski from Mobbot and Pierre Dillenbourg from EPFL. Event participants will also have a chance to network that afternoon. The day will conclude with an awards ceremony to designate Switzerland’s best Master’s thesis on robotics. The booths and robot demonstrations will take place on both days of the event.

A virtual glimpse of NCCR Robotics research

At NCCR Robotics, researchers have developed a new generation of robots that can work side by side with humans, fighting disabilities, facing emergencies and transforming education. Check out the videos below to see them in more detail.

]]>
Magnetic sensors track muscle length https://robohub.org/magnetic-sensors-track-muscle-length/ Sun, 30 Oct 2022 10:00:00 +0000 https://news.mit.edu/2022/magnetic-sensors-muscle-prosthetics-1025

A small, bead-like magnet used in a new approach to measuring muscle position. Image: Courtesy of the researchers

By Anne Trafton | MIT News Office

Using a simple set of magnets, MIT researchers have come up with a sophisticated way to monitor muscle movements, which they hope will make it easier for people with amputations to control their prosthetic limbs.

In a new pair of papers, the researchers demonstrated the accuracy and safety of their magnet-based system, which can track the length of muscles during movement. The studies, performed in animals, offer hope that this strategy could be used to help people with prosthetic devices control them in a way that more closely mimics natural limb movement.

“These recent results demonstrate that this tool can be used outside the lab to track muscle movement during natural activity, and they also suggest that the magnetic implants are stable and biocompatible and that they don’t cause discomfort,” says Cameron Taylor, an MIT research scientist and co-lead author of both papers.

In one of the studies, the researchers showed that they could accurately measure the lengths of turkeys’ calf muscles as the birds ran, jumped, and performed other natural movements. In the other study, they showed that the small magnetic beads used for the measurements do not cause inflammation or other adverse effects when implanted in muscle.

“I am very excited for the clinical potential of this new technology to improve the control and efficacy of bionic limbs for persons with limb-loss,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, and an associate member of MIT’s McGovern Institute for Brain Research.

Herr is a senior author of both papers, which appear in the journal Frontiers in Bioengineering and Biotechnology. Thomas Roberts, a professor of ecology, evolution, and organismal biology at Brown University, is a senior author of the measurement study.

Tracking movement

Currently, powered prosthetic limbs are usually controlled using an approach known as surface electromyography (EMG). Electrodes attached to the surface of the skin or surgically implanted in the residual muscle of the amputated limb measure electrical signals from a person’s muscles, which are fed into the prosthesis to help it move the way the person wearing the limb intends.

However, that approach does not take into account any information about the muscle length or velocity, which could help to make the prosthetic movements more accurate.

Several years ago, the MIT team began working on a novel way to perform those kinds of muscle measurements, using an approach that they call magnetomicrometry. This strategy takes advantage of the permanent magnetic fields surrounding small beads implanted in a muscle. Using a credit-card-sized, compass-like sensor attached to the outside of the body, their system can track the distances between the two magnets. When a muscle contracts, the magnets move closer together, and when it stretches, they move farther apart.

The new muscle measuring approach takes advantage of the magnetic attraction between two small beads implanted in a muscle. Using a small sensor attached to the outside of the body, the system can track the distances between the two magnets as the muscle contracts and stretches. Image: Courtesy of the researchers
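
The underlying physics can be illustrated with the on-axis dipole law: field strength falls off with the cube of distance, so a field reading can be inverted into a separation estimate. The sketch below does exactly that; the bead’s magnetic moment is an assumed value, and the team’s published algorithm is considerably more sophisticated.

```python
# Illustrative physics, not the published magnetomicrometry algorithm:
# on the dipole axis the field magnitude falls off as 1/r^3, so a field
# reading can be inverted to a separation estimate. The bead's magnetic
# moment is an assumed value.
import math

MU0 = 4.0e-7 * math.pi   # vacuum permeability (T*m/A)
MOMENT = 8.0e-3          # assumed magnetic moment of a small bead (A*m^2)

def on_axis_field(r: float) -> float:
    """Field magnitude (T) on the dipole axis at distance r (m)."""
    return MU0 * 2.0 * MOMENT / (4.0 * math.pi * r**3)

def distance_from_field(b: float) -> float:
    """Invert the on-axis dipole law to recover the separation (m)."""
    return (MU0 * 2.0 * MOMENT / (4.0 * math.pi * b)) ** (1.0 / 3.0)

b = on_axis_field(0.03)  # magnets 3 cm apart
print(f"field = {b * 1e6:.1f} uT  ->  estimated r = {distance_from_field(b) * 100:.1f} cm")
```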

In a study published last year, the researchers showed that this system could be used to accurately measure small ankle movements when the beads were implanted in the calf muscles of turkeys. In one of the new studies, the researchers set out to see if the system could make accurate measurements during more natural movements in a nonlaboratory setting.

To do that, they created an obstacle course of ramps for the turkeys to climb and boxes for them to jump on and off of. The researchers used their magnetic sensor to track muscle movements during these activities, and found that the system could calculate muscle lengths in less than a millisecond.

They also compared their data to measurements taken using a more traditional approach known as fluoromicrometry, a type of X-ray technology that requires much larger equipment than magnetomicrometry. The magnetomicrometry measurements varied from those generated by fluoromicrometry by less than a millimeter, on average.

“We’re able to provide the muscle-length tracking functionality of the room-sized X-ray equipment using a much smaller, portable package, and we’re able to collect the data continuously instead of being limited to the 10-second bursts that fluoromicrometry is limited to,” Taylor says.

Seong Ho Yeon, an MIT graduate student, is also a co-lead author of the measurement study. Other authors include MIT Research Support Associate Ellen Clarrissimeaux and former Brown University postdoc Mary Kate O’Donnell.

Biocompatibility

In the second paper, the researchers focused on the biocompatibility of the implants. They found that the magnets did not generate tissue scarring, inflammation, or other harmful effects. They also showed that the implanted magnets did not alter the turkeys’ gaits, suggesting they did not produce discomfort. William Clark, a postdoc at Brown, is the co-lead author of the biocompatibility study.

The researchers also showed that the implants remained stable for eight months, the length of the study, and did not migrate toward each other, as long as they were implanted at least 3 centimeters apart. The researchers envision that the beads, which consist of a magnetic core coated with gold and a polymer called Parylene, could remain in tissue indefinitely once implanted.

“Magnets don’t require an external power source, and after implanting them into the muscle, they can maintain the full strength of their magnetic field throughout the lifetime of the patient,” Taylor says.

The researchers are now planning to seek FDA approval to test the system in people with prosthetic limbs. They hope to use the sensor to control prostheses similar to the way surface EMG is used now: Measurements regarding the length of muscles will be fed into the control system of a prosthesis to help guide it to the position that the wearer intends.
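
As a toy illustration of that control idea, the sketch below maps a muscle-length estimate onto a clamped joint-angle target through a linear calibration. The constants and the mapping itself are hypothetical, not taken from the studies.

```python
# A toy mapping from muscle length to a prosthetic joint command.
# REST_LENGTH_MM and MM_PER_DEGREE are hypothetical calibration constants.
REST_LENGTH_MM = 60.0   # assumed muscle length at the neutral joint angle
MM_PER_DEGREE = 0.35    # assumed length change per degree of joint rotation

def ankle_command(muscle_length_mm: float,
                  min_deg: float = -20.0, max_deg: float = 30.0) -> float:
    """Map a magnetomicrometry length estimate to a clamped joint target."""
    angle = (muscle_length_mm - REST_LENGTH_MM) / MM_PER_DEGREE
    return max(min_deg, min(max_deg, angle))

for length in (55.0, 60.0, 68.0):  # shortened, neutral, stretched
    print(f"muscle {length:.0f} mm -> ankle target {ankle_command(length):+.1f} deg")
```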

“The place where this technology fills a need is in communicating those muscle lengths and velocities to a wearable robot, so that the robot can perform in a way that works in tandem with the human,” Taylor says. “We hope that magnetomicrometry will enable a person to control a wearable robot with the same comfort level and the same ease as someone would control their own limb.”

In addition to prosthetic limbs, those wearable robots could include robotic exoskeletons, which are worn outside the body to help people move their legs or arms more easily.

The research was funded by the Salah Foundation, the K. Lisa Yang Center for Bionics at MIT, the MIT Media Lab Consortia, the National Institutes of Health, and the National Science Foundation.

]]>