888-726-7730
Carnival Corporation recently introduced its Ocean Medallion wearable tech, which adds convenience, information, and fun to cruises. Though it is small (only about the size of a quarter), it connects passengers to shipwide systems. For the best results, passengers should pair the device with a smartphone app, though the tech will work without it. This connectivity enables passengers to access their cabins, locate members of their party, check their onboard schedule on ship screens, and order a poolside beverage without having to get up. Thanks to NFC and Bluetooth technology, waitstaff can easily find passengers using the medallion's location data. This cuts down lines at bars, and since everything is paired to the passenger, payments are easy too.
With the app, passengers can also participate in interactive activities on the ship, such as trivia, with prizes awarded quickly and in a variety of forms.
The location services have some additional perks too - because ship staff can see where passengers are at all times, cleaning staff can tend to a cabin while its occupants are away, without the worry of interrupting an afternoon nap. Plus, muster drills are easier when everyone is accounted for. Security for the tech is handled by connecting a passenger photo to the medallion, which serves as two-factor authentication as well as ID on the ship. This speeds up disembarking and reboarding at ports, and streamlines all points of interaction on board.
So far, only three ships in the fleet have the tech, and two more are slated for its implementation by the end of the year. Plans are in place to have all 117 ships outfitted with the tech in the near future.
This article was based on a May 23, 2019 VentureBeat article by Dean Takahashi.
Take a look at just about any new car these days and it'll be filled with tech - lane departure warning systems, cameras everywhere, and even self-driving features. But new concerns are developing about the infotainment systems now found in cars, their integration with owners' mobile phones, and what happens with the data generated.
In many cases, cars are in constant contact with the manufacturer's home base by way of telematics - always-on wireless transmitters. In theory, the transmissions consist of performance and maintenance data to help engineers know how their product is used and performs, but more and more evidence is showing that new vehicles collect much more. They know when we gain weight, where we work, the size of our families, our incomes, and so on. From a marketing standpoint, this data is gold. From a consumer standpoint, it is troubling. Throw a mobile phone into the mix via Bluetooth and now the car knows who we communicate with, what music we listen to, and more.
So who owns this data? Us, the creators of the information, or them, its collectors? The answer is not clear, and at this time it appears that we sign away the rights to this information before we drive off the lot.
The flip side to the concern about our data being shared with those we don't want to have it is what happens when we want the data but aren't allowed access to it. This is the case at the mechanic - more and more vehicle computers are being locked down so that repairs can only be performed at factory shops or by independent shops who've paid for a software license to access data from a single make of car. Today, mechanics can simply connect misbehaving cars to diagnostic tools to get a readout of error codes, which can direct their diagnostic workflow. That may not be the case in future years, making repairs more expensive and business more difficult for smaller shops. It could also make simple at-home repairs by shade tree mechanics just about impossible.
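Those error codes, by the way, follow a public standard: every OBD-II diagnostic trouble code unpacks from two raw bytes into the familiar five-character label (like P0301 for a cylinder 1 misfire). As a minimal sketch of that decoding, not tied to any particular scan tool or library:

```python
def decode_dtc(byte1: int, byte2: int) -> str:
    """Decode a two-byte OBD-II diagnostic trouble code into its label."""
    # The top two bits of the first byte select the vehicle system.
    systems = ["P", "C", "B", "U"]  # Powertrain, Chassis, Body, Network
    letter = systems[(byte1 >> 6) & 0b11]
    digit1 = (byte1 >> 4) & 0b11   # next two bits: a digit from 0 to 3
    digit2 = byte1 & 0x0F          # low nibble, rendered as a hex digit
    # The second byte supplies the last two (hex) digits.
    return f"{letter}{digit1}{digit2:X}{byte2 >> 4:X}{byte2 & 0x0F:X}"

print(decode_dtc(0x03, 0x01))  # → "P0301"
```

That shared encoding is part of why a generic code reader works across makes - and why locking the diagnostic layer behind proprietary licenses is such a shift.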
This circles back to the gray area of who owns in-car data...and ultimately that choice should be left to the owner of the vehicle. Many of the mobile apps we use, as well as mobile operating systems, offer some degree of data privacy customization...why can't our cars?
This article was based on a May 20, 2019 New York Times article by Bill Hanvey
PaperFree recently had the opportunity to sponsor the Single A Muckdogs little league team's end of season pizza party. The team, a part of the Vista American Little League, enjoyed the celebration and looks forward to next season!
PaperFree is again delighted to be able to support events in the surrounding communities.
Those of us who've had the opportunity to know the deaf or blind/visually impaired have had a small glimpse into an entirely new world where connectivity tools must work differently. The deaf "hear" with their eyes and the blind "see" with their ears. As a consequence, mobile phones that rely on both audio and visual functions rarely serve their needs straight out of the box. To address this, a variety of helpful mobile apps have entered the market and are revolutionizing communications for the deaf and blind.
Before mobile devices with accessibility apps, the blind often had to plan ahead for simple things like shopping lists, or ask for help. Now, with apps like Microsoft Seeing AI, they can have text read aloud to them. This gives the blind independence and lessens the guilt over having to bother others for assistance. Plus, it gives them privacy to handle things like email. Many of these applications use AI to understand what the user is asking them to do.
Not only do these apps make life easier for the disabled, they also open up a market for hardware and software developers. More products are in the works to introduce apps that caption live audio as well as apps that use AI to help those with speech impediments communicate. Online delivery services like InstaCart, which has a mobile app, make shopping easier for those who have difficulty traveling, and there are even more apps that have made the world more accessible.
This article was based on a May 13, 2019 CNet article by Shelby Brown
Recently, a human kidney made its own way to the hospital - by way of drone. The organ, destined for transplant into an ill 44-year-old from Baltimore, was part of a larger project organized by doctors, researchers, aviation experts, and engineers from the University of Maryland School of Medicine. The project has also been supported by The Living Legacy Foundation of Maryland, a nonprofit that helps enable organ and tissue donation and the transportation steps it requires.
The project aimed to investigate whether the problematic hiccups in organ transportation could be reduced or eliminated by taking to the (lower altitude) skies. Currently, organ donations travel by vehicle or commercial air, which is subject to delays and cancellations beyond the doctors' control - and it's critical that organs be brought to their recipients as quickly as possible. The theory is that drones can bypass rush hour traffic, avoid airline delays, and get organs to the hospital faster. Faster delivery means organs arrive in better condition and can come from farther away, which expands the pool of potential matches for very sick patients.
Currently, the technology relies on a specially built drone with expansive GPS tracking features, numerous power and propulsion redundancies, and an emergency parachute. Interestingly enough, this isn't the drone's first time carrying a kidney. Previous test flights involved transporting a nonviable kidney, blood samples, and bags of saline.
The use of drones for critical medical needs reveals an interesting intersection of where one science can help another - and the people waiting at the other end.
This article was based on a May 1, 2019 CNN article by Susan Scutti.
Remember the days of your computer mouse getting a little squirrely and the remedy being to partially disassemble the mouse and do away with all of the lint, dirt, and general desk gunk that had found its way onto the roller ball and its sensors? (Those were also the days when a mousepad was mandatory. And the roller ball made a pretty great toy too.) Thanks to Microsoft, that is fortunately a distant memory: on April 14, 1999, its first optical computer mouse was introduced at a tech convention in Las Vegas, and it eventually revolutionized our mousing habits.
Microsoft's first iteration of an optical mouse, the IntelliMouse Explorer, used LEDs and a digital camera to track movement digitally, with no roller ball or other moving parts to gum up. Despite its steep price tag of $75, or about $115 in today's dollars, the mouse was well received, as it eliminated the frustration of misbehaving mice and delivered consistently smooth, predictable motion (important for digital artists, photo retouchers, and gamers).
The IntelliMouse Explorer was based on tech from Hewlett-Packard, and though it wasn't the first optical mouse, it was the first to reach the masses. Competitors arrived on the scene shortly after, and efforts have been underway for the past 20 years to perfect the tech. Going wireless has been the most significant mouse upgrade since, but mice are now available for specific tasks, with scroll wheels, numerous buttons, and more.
Though a lowly peripheral for most, the computer mouse has been an important development for modern computing. Even if it could be sidelined by some fuzz at one point in its history.
This article was based on an April 26, 2019 Gizmodo story by Andrew Liszewski.
AI researchers from Facebook recently announced that they've developed a method to more closely model video game characters on real-life people. The method, which relies on videos of people going through basic motions, uses AI to learn the nuances of a person's movement and then translate them to a 3D character in a game. Even though many modern video games offer highly customizable characters, none offer the ability to customize a character's movement.
Video footage has been used in the past to establish character movement - as with Mortal Kombat's use of actors on a sound stage who were later digitized. However, it has not been used before to customize characters to a player's preference.
The AI system utilizes two neural networks that each analyze a five- to eight-minute video of a person going through the motions involved in a game - say, playing tennis. The first neural network analyzes the human movement for the rendering engine, while the second analyzes the shadows and reflections to be rendered on a gameplay background. As the technology is still new, the result isn't as smooth as current 3D game characters, but continuing work will hopefully improve that in time.
The tech isn't ultimately limited to video games either - once all of the kinks are ironed out, lifelike renderings of real people could find uses in marketing, education, media, and beyond.
This article was based on an April 19, 2019 Gizmodo article by Andrew Liszewski
Pandora, the largest streaming audio service in the United States, recently selected OpenText Media Management as the storage solution for its audio and display advertising assets. Pandora, as many are familiar, is powered by the Music Genome Project and intelligently selects music based on a user's thumbs-up or thumbs-down on various streamed tracks. That detailed information on user preferences also makes for a great advertising opportunity: with it, Pandora can serve targeted, scalable ads to help advertisers hit their mark. OpenText Media Management will serve the entire enterprise by helping users "extend business processes with digital media workflows and digital asset management services."
OpenText Media Management will handle the production and management of over 35,000 advertisements per year. Pandora also utilizes Amazon Web Services for hosting via Risetime managed services, and Cyangate assists with implementation services.
“Pandora’s creative team helps thousands of advertisers bring their brands to life on our unique platform and maintaining the quality and accessibility of our digital assets in a streamlined fashion is key to scaling our success,” said Casey Baker, Pandora’s Director of Advertising Creative Operations.
“When evaluating our options, OpenText Media Management stood out as a solution to manage creative assets through the entire lifecycle. Its robust customization capabilities and seamless systems integrations were critical in our selection process,” Baker continued.
“Every customer touch point is critically important, including brand engagements and advertising,” said OpenText SVP and CMO Patricia Nagle. “OpenText Media Management solutions enable efficient creation, review and distribution of assets, which Pandora has done a great job leveraging to uplift its advertising operations. It’s a best-in-class solution for this demanding industry.”
The business of media streaming is an ever-evolving one. With a constantly-changing library of assets and more and more demands from users, streaming companies must always be on their toes to adapt to the next wave of change.
This article was based on an April 11, 2019 OpenText press release.
It's estimated that by 2022, the number of jobs in the artificial intelligence field will exceed the number of qualified workers by 30%. Microsoft has taken notice and partnered with OpenClassrooms, a leading online educator, on a collaboration to provide more students with a master's-level education in AI tech.
“As AI is changing the way we work and the nature of jobs, we have a responsibility to ensure graduates are prepared for the workplace of tomorrow,” says Jean-Philippe Courtois, Executive Vice President and President, Global Sales, Marketing and Operations at Microsoft. “We are excited to partner with OpenClassrooms to help equip people with the skills and opportunities they need to thrive in the digital economy.”
Using Microsoft tech, OpenClassrooms will develop a curriculum that includes project-based tasks tailored to up-to-date AI knowledge. In addition, OpenClassrooms has connections to leading employers in the industry and makes those connections available to students who complete the course. As a bonus, OpenClassrooms guarantees that graduates will find a job within six months or their course fees will be refunded.
The course will be held online and is accredited in Europe - US and UK accreditation is in the works.
Currently, the curriculum is only available in Fran
This article was based on an April 3, 2019 Microsoft Press Release
PaperFree partner OpenText recently announced a new project with Canadian pharmaceutical firm Pharmascience. Pharmascience produces over 2,000 products with worldwide distribution.
The project, centered on the OpenText Content Suite solution, manages the firm's complex document management needs. Pharmascience needed a scalable enterprise content management solution to "manage and secure its document lifecycle and help respond to increasing regulatory demands." The solution provides this by way of document governance tools, workflow management, version control, and audit trails. Pharmascience's valuable documentation is now stored in a central repository for easy access across the enterprise.
“Operating with the highest standards for quality and regulatory compliance leaves us no room for error and we needed a partner that could operate at our level,” said Denis Beauchemin, head of IT, Pharmascience. “Our work is high-stakes and highly-regulated, but we need to continue to move quickly and innovate. The solution from OpenText provides us with a single source of truth for our most critical set of documents and ensures our team can access the information they need in the lab, the shop floor, or the boardroom.”
Visit our Lifesciences & Pharmaceuticals page to learn more about how PaperFree can revolutionize your content management strategy too.
This article was based on a March 28, 2019 OpenText press release