By Nirmit Desai
Sharing photos, videos and one-liners on Instagram and Twitter was a major part of the fun of last week's MTV Video Music Awards. Pop stars traded gibes and images faster than VMA host Miley Cyrus changed outfits–and fans watching from around the globe joined in.
But that kind of willy-nilly sharing isn’t a good fit for every event and venue. The United States Tennis Association, for instance, focuses on providing ticketholders with a rich multimedia experience on site at the US Open in New York, which is building to its crescendo this week.
So, to enrich the fans' enjoyment, IBM Research scientists are testing a new service at the Open called Simulcastr. Fans at the tennis center who download the US Open app to their iPhones can choose real-time video feeds from various parts of the venue–anything from scenes of athletes heading for matches to shots of the queues at the refreshment stands. Unlike with the popular video-streaming services Meerkat and Periscope, the videos can't be seen by anybody outside the tennis center.
Collaborating closely with the USTA, we’re experimenting with a technology that could someday emerge as a game-changer for organizations that run sports and entertainment venues–making it even more enjoyable to attend an event in person rather than watching on TV.
At the US Open this year, we’re just scratching the surface of what might be possible in coming years. For now, fans aren’t permitted to shoot videos and share them with others. The video for Simulcastr is being shot by USTA employees or fixed cameras. But you can imagine the possibilities for sports venues of all types in the future: Fans might be able to choose from dozens or even hundreds of video feeds shot by other fans scattered around a stadium or ballpark. Or a fan may “subscribe” to a channel focused on their favorite athlete and gain access to a constantly updated collection of videos featuring her or him.
Simulcastr is the first harvest of a vision of the future of technology that's emerging within IBM Research. Today, about 90 percent of the digital data in the world is generated at the outer edge of our networks–via smartphones, sensors and the like. As the volume of such data continues to explode, it will become too costly and time-consuming to transport, store and process it all centrally. So, rather than handling it all via the Internet and cloud computing centers, why not use local Wi-Fi networks and nearby computing and storage resources instead? That way we can unlock the value of all that data.
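To make the local-processing idea concrete, here is a minimal, illustrative sketch–not actual Simulcastr code, and the field names and threshold are my own assumptions. An edge node near the data source condenses a batch of raw sensor readings into a small summary and forwards only that summary upstream, instead of shipping every reading to a distant cloud data center.

```python
def summarize_at_edge(readings, threshold=50.0):
    """Condense raw readings locally; only the compact summary
    (counts, an average, and any out-of-range values) would ever
    leave the local network. Threshold is illustrative."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings) if readings else 0.0,
        "anomalies": anomalies,  # only unusual values get flagged upstream
    }
```

For example, a batch of four readings collapses into one small record, so the upstream link carries a few numbers rather than the full stream.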
At the center of this approach is the idea of sharing. Everybody understands the advantages of open source software. Many people contribute to and share software packages–for the benefit of all participants. With our new approach to computing, people will instead share hardware resources–everything from Wi-Fi hotspots to sensors to the underutilized storage and data-processing capabilities on our smartphones and tablets.
This approach presents major technological challenges, though. You're connecting and managing a large collection of different kinds of devices that are temporarily in a specific place. The management system has to be able to identify the devices that are present, confirm that they're ready to participate in peer-to-peer sharing, and then distribute computing tasks to the various devices in a highly efficient way. Then the system has to aggregate the content from different sources, present viewing choices to participants, and serve up pieces of content to the people who choose to see them.
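The pipeline just described–register the devices present, screen out the ones not ready to share, spread work across the rest, then pull the results back together–can be sketched in a few dozen lines. This is a toy model under my own assumptions (class and field names are hypothetical), not how the real system is built:

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    """A participant device at the venue."""
    device_id: str
    ready: bool = False                       # willing and able to share?
    tasks: list = field(default_factory=list)  # work assigned to it

class VenueCoordinator:
    """Toy coordinator: identifies devices, confirms readiness,
    distributes tasks, and aggregates the results."""

    def __init__(self):
        self.devices = {}

    def register(self, device):
        # Step 1: identify a device that has appeared at the venue.
        self.devices[device.device_id] = device

    def ready_devices(self):
        # Step 2: keep only devices confirmed ready for peer-to-peer sharing.
        return [d for d in self.devices.values() if d.ready]

    def distribute(self, tasks):
        # Step 3: round-robin assignment gives each ready device a fair share.
        pool = self.ready_devices()
        if not pool:
            raise RuntimeError("no ready devices to accept work")
        for i, task in enumerate(tasks):
            pool[i % len(pool)].tasks.append(task)

    def aggregate(self):
        # Step 4: collect each device's contributions into one catalog
        # from which viewing choices could be presented.
        return {d.device_id: list(d.tasks) for d in self.ready_devices()}
```

In practice the hard part is that the device population changes minute by minute, so each of these steps has to run continuously rather than once.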
Because of the complexity and real-time nature of these tasks, they must be managed autonomously by computers. That will require the use of powerful cognitive technologies–including image analytics that recognize specific people and activities in video streams, and deep learning algorithms that enable computers to gain knowledge through their interactions with data. I have focused on videos primarily in this blog post, but many other types of data will be addressable in this way, as well, including information collected from accelerometers, GPS devices, connected vehicles and other types of equipment.
I can imagine many scenarios where this approach to computing could come into play. Here are just a few:
–Authorities in a city could harness it when there's a power blackout or a damaging storm–both for understanding what's going on and for telling citizens how to respond. Further, when conventional communications break down in such events, as they often do, personal devices and vehicles can provide alternative channels for communication.
–Media organizations could use it to gather real-time video reports during a storm, a fire, a riot or a crime in progress.
–Insurance companies or police could use videos captured by connected cars to quickly gather evidence about the cause of a traffic accident.
Growing up under modest circumstances in the Indian state of Gujarat, I dreamed of playing a major part in defining where the world goes. Working at IBM Research makes it possible for me to pursue that dream. The Simulcastr project is a small beginning, but I believe we have embarked on the road to a new approach to computing. It may take years or even decades to get there fully, but what a fascinating journey it will be.