Relay: A delivery robot for the hospitality industry


Delivery robots are not a new concept, and several systems have been in operation in past decades. For example, delivery robots have been used in hospital environments to deliver medicine. In fact, delivery was one of the first imagined practical applications for mobile robots. However, in the early days of robotics, delivery robots were slow, difficult to interact with, and required that the environment be modified to aid their operation, e.g., navigational markers painted on the floor to allow localization.

Today, robotics scientists and engineers have solved (from a practical point of view) some of the most fundamental problems in robotics, especially Simultaneous Localization and Mapping (SLAM) and navigation in complex, dynamic environments. Hardware improvements in sensing have been essential to these advances. Furthermore, a better understanding of Human-Robot Interaction (HRI) principles allows the development of robots that are both easy and pleasant to use. Large decreases in the cost of manufacturing and electronic components, coupled with a more open approach to sharing robotics software via the open source movement, are now making it possible for specialized worker robots to begin entering our lives.

Savioke’s Relay is one such robot. Relay is designed to deliver small items and other amenities efficiently and securely to hotel guests. The robot is stylish, easy to interact with, and frees hotel staff from menial tasks so they can focus on more important activities that improve guests’ satisfaction with their stay.

Relay’s software is derived from the open source Robot Operating System (ROS) and it utilizes some of the most advanced sensors for perception, including Intel’s RealSense 3D camera. In fact, Relay even made an appearance during the keynote speech at the recently held Intel Developer Forum (IDF15) in San Francisco.

So, here is an overview of the robot from Savioke’s CEO and engineering team. It includes some hints about the future of mobile robots and the company’s plans in that space.

Overexposed photos be gone, say MIT researchers!


Modulo Camera

Overexposed photographs may soon become a thing of the past with a new invention out of MIT. The proposed modulo camera overcomes the limited bit resolution of modern light sensors to produce High Dynamic Range (HDR) images with little hassle on the photographer’s side.

As you can see in the example at the top of this post, the results, which come from a combination of clever hardware and a textbook use of machine learning, can be spectacular. The work was presented at the 2015 IEEE International Conference on Computational Photography, where it was the runner-up for the best paper award.
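At its core, recovering the image is an unwrapping problem: the modulo sensor stores each pixel’s brightness modulo its bit depth, and the algorithm must infer how many times each pixel “rolled over” before being read out. The paper solves this over full 2-D images with a learned graphical model; the following 1-D sketch, which assumes only that the signal varies smoothly between neighbouring samples, is meant purely to illustrate the idea:

```python
import numpy as np

def unwrap_modulo(wrapped, bits=8):
    """Recover a high-dynamic-range 1-D signal from modulo-wrapped
    samples, assuming the true signal changes slowly: any jump larger
    than half the wrap period is treated as a sensor rollover."""
    period = 2 ** bits
    wrapped = np.asarray(wrapped, dtype=float)
    diffs = np.diff(wrapped)
    # Large negative jump => the sensor rolled over (+1 wrap);
    # large positive jump => it rolled back (-1 wrap).
    steps = np.where(diffs < -period / 2, 1,
             np.where(diffs > period / 2, -1, 0))
    wraps = np.concatenate(([0], np.cumsum(steps)))
    return wrapped + wraps * period

# Toy example: a bright ramp that saturates an 8-bit sensor twice.
true_signal = np.linspace(0, 600, 50)
wrapped = np.mod(true_signal, 256)
recovered = unwrap_modulo(wrapped)   # matches true_signal
```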

The video below, created by the inventors, describes at a high level how the modulo camera works. It includes several other examples demonstrating the capabilities of the new technology.


Google Project Wing delivery drones


After all the excitement over Amazon working on unmanned drones for package delivery (and with many people doubting to what degree such a delivery system is even realistic, for reasons beyond technological barriers), Google has released information on Project Wing, which aims to develop a similar drone delivery system.

Project Wing is another of the research projects under way at Google [X]. While we still wait for Google’s self-driving cars, a team led by MIT professor Nick Roy, on a two-year sabbatical at Google and in collaboration with Unmanned Systems Australia, developed a drone for package delivery. It was recently tested in Australia, and a video of a successful delivery was posted online.

I don’t know to what degree it is all that useful for drones to deliver packages in a city (do we really need items purchased online at our doorstep within an hour?), but they could be useful for deliveries in remote areas. I can imagine these drones delivering supplies to rural areas hit by some natural catastrophe, or to people trapped in the middle of a war zone with little hope of receiving humanitarian assistance any other way.

Anyway, I think Google’s new delivery drone is pretty cool, especially the system that drops the cargo to the ground without the need for the vehicle to land. The video below shows the fruits of Project Wing and explains a little bit about the technology behind it.

Google Project Tango mobile phone


After acquiring just about any robotics company worth something, Google researchers have now revealed one of their internal projects, which aims to create a mobile phone with all the necessary hardware and software to build beautiful 3D models of indoor environments. Project Tango is the first prototype of a highly customized smart phone with all the sensors required for localization and mapping.

In the past, we have introduced the problem of SLAM (Simultaneous Localization and Mapping) and its significance in robotics.

Project Tango adds two computer vision processors and a depth sensor (it is not clear exactly what this depth sensor is, but a Kinect-like sensor of smaller size and resolution is a likely candidate), along with a motion tracking camera to accompany a standard 4MP phone camera. Given such a rich collection of sensors and the myriad of very advanced SLAM algorithms developed over the last 15 years by the robotics community, the reality of Project Tango should not be surprising.
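As a rough illustration of how such sensors come together, here is a minimal sketch of the core mapping step: back-projecting a depth image through a pinhole camera model and transforming the points by the tracked camera pose, so that clouds from successive frames accumulate into a single 3D model. The function and its parameters are illustrative placeholders, not Tango’s actual API:

```python
import numpy as np

def depth_to_world(depth, fx, fy, cx, cy, cam_to_world):
    """Back-project a depth image (metres) into a world-frame point
    cloud. fx, fy, cx, cy are pinhole intrinsics; cam_to_world is a
    4x4 pose, e.g. from the motion-tracking camera."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx              # pinhole back-projection
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth, np.ones_like(depth)], axis=-1)
    pts = pts.reshape(-1, 4)
    return (pts @ cam_to_world.T)[:, :3]   # homogeneous transform
```

Accumulating the returned clouds over many tracked poses, and fusing them into a surface, is essentially what a device like this must do in real time.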

It will be interesting to see what kinds of applications could be built on such a platform. For the time being, creating 3D models for gaming is an obvious focus, but the team also suggests assistive uses, e.g., a guidance application for the visually impaired, as another possibility. Google is giving away 200 prototype phones to those interested in developing applications.

The following video, released by the Project Tango team, explains some of the reasoning behind the creation of this project and discusses some future directions.

KUKA robot versus Timo Boll


First, robots defeated man in chess.

Second, robots defeated man in the game of Jeopardy.

Third, robots proved to man that they can be better drivers.

Now, they will get the chance to best man in the game of ping pong. I saw it coming a mile away!

So, industrial robot manufacturer KUKA will be hosting a table tennis match between their KR AGILUS robot arm and puny human champion Timo Boll. The purpose of the exhibition is to celebrate the opening of a new robot factory, or so they say! We all know that the robots are finally coming out of the closet and, after embarrassing humans in tasks of intellect, will now also demonstrate their superior physical abilities. All jokes aside, however, the trailer looks pretty cool (see it below) and the game will be broadcast live on March 11th. You can watch it here.

BigDog dynamic manipulation: It throws a cement block!


Boston Dynamics have amazed us more than once over the years, having created some of the most incredible and, at times, scariest (not in the uncanny-valley sense but more in the Terminator-is-real sense) robots. Their creations range from the jumping Precision Urban Hopper to the incredibly realistic, dynamically balanced humanoid PETMAN and, of course, the very familiar four-legged robot mule BigDog.

So, what are they up to recently?

They just released a new video of BigDog equipped with a powerful robotic arm, lifting and throwing a cement block with ease and elegance while maintaining perfect balance. This is what much of the work at Boston Dynamics is focused on: real-time control of the most advanced robots on the planet. It is also the main reason we are so flabbergasted every time they release a new video.

So, watch BigDog picking up and throwing a cement block across the room, and join me in hoping that Daniel H. Wilson will soon release an updated version of his How To Survive a Robot Uprising: Tips on Defending Yourself Against the Coming Rebellion, this time including tips on how to avoid getting knocked out by flying cement blocks :)

Multi-user spatial collaboration using augmented reality on mobile devices


Augmented reality for mobile devices has grown in popularity in recent years partly because of the proliferation of smart phones and tablet computers equipped with exceptional cameras and partly because of developments in computer vision algorithms that make implementing such technologies on embedded systems possible.

So far, augmented reality applications have been limited to a single user receiving additional information about a physical entity or interacting with a virtual agent. Researchers at MIT’s Media Lab have taken augmented reality to the next level by developing a multi-user collaboration tool that allows users to augment reality and share it with other users, essentially turning the real world into a digital canvas for all to share.

The Second Surface project, as it is known, is described as:

…a novel multi-user Augmented reality system that fosters a real-time interaction for user-generated contents on top of the physical environment. This interaction takes place in the physical surroundings of everyday objects such as trees or houses. The system allows users to place three dimensional drawings, texts, and photos relative to such objects and share this expression with any other person who uses the same software at the same spot.
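In other words, content is pinned to a recognized physical spot and served to anyone who later localizes against that same spot. A hypothetical data-model sketch of that idea (the names and fields are mine, not Second Surface’s actual schema):

```python
from dataclasses import dataclass

@dataclass
class SharedAnchor:
    place_id: str   # identifier of the recognized physical spot
    pose: tuple     # content pose relative to that spot's frame
    payload: dict   # the drawing, text, or photo being shared

class AnchorStore:
    """Toy server-side store: every user who localizes against the
    same spot fetches the same shared content."""
    def __init__(self):
        self._by_place = {}

    def publish(self, anchor):
        self._by_place.setdefault(anchor.place_id, []).append(anchor)

    def fetch(self, place_id):
        return self._by_place.get(place_id, [])
```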

If you still have difficulty understanding how this works, and why I believe that, once made available to the masses, it will be a game-changing technology for augmented reality on mobile devices, check out the following explanatory video.

Now, imagine combining this technology with Google Glass and free-form gesture recognition. How awesome would that be?


Quadrocopter inverted pendulum acrobatics


Quadrocopters are all the rage these days, but nobody gets them to perform more impressive acrobatics than the machine learning and control team at ETH Zurich, led by Prof. Raffaello D’Andrea.

Whereas just last year the team demonstrated a single quadrocopter balancing an inverted pendulum, they have now shown that two flying machines can not only do the balancing trick individually but can also throw and catch a pole between them. The quadrocopters use machine learning algorithms to improve their throwing and catching performance over time. The results are rather impressive, as you can tell for yourself from the team’s demonstration video below.
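The video does not detail the learning scheme, but the flavor of improving over repeated trials can be sketched with a toy iterative-learning rule: after each throw, measure the error and nudge the command against it. Everything below (the simulated throw, the target, the gain) is hypothetical:

```python
def simulate_throw(v, target=5.0):
    # Toy "experiment": the landing point grows with launch speed,
    # plus an unmodelled bias the learner must compensate for.
    return (1.2 * v + 0.3) - target      # signed landing error

throw_velocity = 4.0                     # initial command (m/s)
for trial in range(10):
    error = simulate_throw(throw_velocity)
    throw_velocity -= 0.5 * error        # correct against the error
```

After a handful of trials the command converges to the value that cancels the unmodelled bias, which is the essence of trial-to-trial learning.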

I wonder what they will achieve next!


UK’s Oxford robot car is here


UK robot car

I guess English scientists and engineers don’t like to be outdone by their American counterparts, so after years of constantly hearing about Google’s rapid development of self-driving cars, a team from Oxford University recently unveiled their very own robot car. And they make some very big claims about the new vehicle’s capabilities compared to the competition.

So, the UK’s robot car is the brainchild of a small team (22 members) of researchers from Oxford led by Prof. Paul Newman. The vehicle is a modified Nissan LEAF, i.e., all electric. The team outfitted the car with camera and laser sensors as well as an additional on-board computer for number crunching, all (hold on to your hats, folks!) for no more than 5,000 pounds. And they estimate that in just a few years they will bring the cost down to only 100 pounds. Now, if that hasn’t gotten your attention, I don’t know what will.

We all want to spend as little money as possible for access to the newest technology, but does this robot car actually work? The researchers claim it does, and they have published several videos demonstrating the car driving autonomously in an urban environment. It can reach speeds of up to 40 km/h, which is not particularly impressive but certainly a great start.

robot car laser sensor data visualization

And before I forget: the team has developed new vision- and laser-based navigation and localization algorithms that allow the car to drive to a destination without the use of GPS. GPS is not terribly useful when driving downtown, since tall buildings tend to block the satellite signals, so I can understand the need for GPS-free navigation. The experience-based approach the researchers have developed uses stereo vision to localize and track the vehicle. That’s great, but having worked with vision systems in outdoor environments, I am not completely convinced that this would work in all cases, especially when the weather turns bad or the vehicle gets stuck behind a truck or an SUV. Fusing data from multiple sensors, such as cameras, lasers, accelerometers, and, of course, GPS, would be the more reliable navigation solution, and the one more likely to allow autonomous cars to be given legal permission to drive in our cities.
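To make that last point concrete, the textbook way to blend such readings is a Kalman measurement update, which weights each source by its confidence. Here is a one-dimensional sketch with made-up numbers (the Oxford system’s actual estimator is not described in this post):

```python
def kalman_fuse(x, P, z, R):
    """One scalar Kalman measurement update: blend the current
    estimate (x, variance P) with a reading (z, variance R)."""
    K = P / (P + R)                      # gain: trust by inverse variance
    return x + K * (z - x), (1.0 - K) * P

x, P = 10.0, 4.0                         # visual odometry: 10 m, variance 4
x, P = kalman_fuse(x, P, 10.8, 1.0)      # precise laser fix pulls hard
x, P = kalman_fuse(x, P, 9.5, 25.0)      # noisy GPS fix barely moves it
```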

But enough talk. Let us enjoy the videos the Oxford team has published showcasing their new toy :)

The team’s introductory video of the autonomously driving Nissan LEAF vehicle.

Demonstration of laser-based semantic map used for navigation and dynamic obstacle detection, tracking and avoidance.

Experience-based navigation system using vision and laser sensing.

Sir James Dyson on robot vacuums: Not good enough!


Sir James Dyson

Sir James Dyson, the man whose name is synonymous with vacuum cleaners that work (among other wonderful inventions), recently expressed his opinion on the robot vacuums available on the market today. He is not impressed, considering them just a gimmick!

It was iRobot that, just a few years ago, created the Roomba, essentially the first robot vacuum worth paying money for (and reasonably priced at the time as well!). The company has been extremely successful selling Roombas and other related cleaning robots (as well as the very successful and useful PackBot line of military and law enforcement robots). Since the Roomba was introduced just 10 years ago, it has sold more than 3 million units. That’s good for a domestic robot that costs only a few hundred dollars. It also explains why, in the last few years, a plethora of other companies have introduced similar robot vacuum cleaners, often at a higher price but not necessarily doing a better job.

Regardless, Dyson is not shy about declaring that these gimmicks are terrible both as vacuum cleaners (and he knows a lot about vacuuming) and as robots. He thinks the current offerings do a bad job of actually cleaning the floor. And as robots, they are not very intelligent in how they do their job and tend to be quite inefficient at it; this should not be a surprise to those of us familiar with the ideas of Rodney Brooks (co-founder of iRobot) on robot design and programming, which advocate simplistic, insect-like sensing and decision making for robots.

So, other than criticizing others, what is Dyson up to when it comes to robot vacuums? Well, he did not say that the product idea itself is bad; he simply expressed his opinion that there is plenty of room for a better product. And since he did not dismiss the possibility of his company entering this space, I would say that a Dyson robot vacuum is very likely to be introduced within a year or two. The question will then be: is it everything Sir James Dyson thinks it will be compared to the competition?

Time will tell!
