Branding not as you know it.
The new 2013 Audi RS 4 Avant Ultimate Paintball Duel. Way too much fun.
Below we outline the steps to building an audience with the help and advice of a handful of industry experts.
1. You exist in a marketplace. Prepare to humble yourself.
We’re often deceived by the Hollywood narrative of being suddenly “discovered” and subsequently rocketing to notoriety. Chances are, we won’t run into a literary agent at Starbucks who wants to hand us a three-book contract and arm us with a team of publicists.
Remember that you exist in a marketplace, and your job is to figure out where you fit in by testing who your audience is and what content resonates with them. With some up-front preparation work, you’ll save hours of heartache later.
But remember: “People can smell inauthentic community building a mile away,” says Pamela Slim, author of the blog and book Escape from Cubicle Nation. “Create something that means something to you and means something to your audience. If you’re in doubt about that, I’d suggest a different topic.”
2. Your goal will help put your work in context.
Many creatives state “getting published” as an end goal, but your creative and professional struggles won’t simply disappear with your project’s completion. Getting published is only the beginning.
“Too many people can’t see past that first book,” says Blank. When that happens, we can set ourselves up for disappointment if our writing doesn’t take off as planned. For long-term projects like a book, the effects of a dud can be especially painful, but there’s hope.
3. Pick your community and leverage communities that already exist.
It’s tempting to say “my work is for everyone,” but all great creative work is a hit with a core audience before it appeals to the masses. To increase the likelihood of success, build a solid base of supporters to refine your work and eventually broaden its reach.
“If you can’t build a small audience, how can you expect to build a large audience?” says Blank.
Blank has a test for forcing creatives to think about choosing the right community: If he offered you a prize of $50,000 to find five people who would be interested in your project in the next three hours, where would you go? Who would you call? What groups would you reach out to? Where are these people already congregating?
4. Share with your community.
The most popular ways to connect with readers typically utilize a blog, a newsletter, or a book trailer. Some authors use all three.
Before she even considered writing a book, Slim had been blogging for over two years, sharing helpful advice with her readers about becoming entrepreneurs based on her years as a career coach. So when it came time to write Escape from Cubicle Nation, Slim shared everything with her readers in advance. She offered them the chance to be on her “advisory council” – a group she often emailed when she hit roadblocks during the writing process. Around 150 people signed up.
Credits: Sean Blanda
With four distinctive broadcast-quality 3D scenes, this augmented reality activation delighted crowds, giving them the chance to place themselves right inside the content, where they were able to interact with the animals of the Polar Region. The event acted as an organic cross between an art installation and advertising. With live streaming from each event, family and friends around the world were able to share the experience. This fantastic example was developed by Appshaker and the BBC.
This AR example demonstrates the capabilities of augmented reality: it can capture minds, engage audiences, generate awareness, enhance brand experiences, increase sales, and entertain, all at the same time. For further information on AR, contact Applause Digital.
A live event for National Geographic promoted the National Geographic Channel in HD. People were invited to ‘step inside the world of National Geographic,’ where, with the help of augmented reality, they could pet leopards, see dinosaurs, and join a conga line with an astronaut on the moon.
Augmented reality (AR) is a familiar concept to moviegoers. Countless films have featured heroes and villains navigating through a world in which virtual images are superimposed on everything from visors to store windows.
Think Minority Report, Avatar or Iron Man and you get the picture.
But today, AR is rapidly moving from fiction to fact in consumer applications.
The technology made its first appearance in the real world in 1968 when computer scientist Ivan Sutherland introduced his virtual reality concept: an optical see-through, head-mounted display complete with trackers. While its limited processing power meant the images were simple wire-frame drawings, this was an important launch for AR innovation.
The most widely used applications in the early stages were for the military, says Maarten Lens-FitzGerald, co-founder of mobile AR application developer Layar in Amsterdam. “Headset and windshield displays showing speed or targets have been going on for a while.”
For the most part, the applications under development, while exciting, tended to be “rough” models that required big computers and were error-prone, he says. “A lot of schools were working with it but there were so many limitations in terms of processing power and mobility.”
In the 1990s, some important technological developments allowed AR to make its way into the consumer sector. Laptops became more powerful, smartphones were introduced and GPS technology was becoming increasingly accessible and popular.
The decade witnessed an extraordinary burst of innovation in which GPS receivers, electronic compasses and processing capabilities were combined to transform everyday experience. The first mobile augmented reality system (MARS) appeared in the mid-1990s. Clunky by today’s standards, it featured a see-through, head-worn display with an orientation tracker and a backpack to hold the computer, differential GPS and radio for wireless Web access.
When the first camera phones were introduced in the late ’90s, it became possible to repackage the various elements in a much more compact format. At that point, AR was able to move beyond backpack-toting users conducting navigation tasks to an accessible handheld technology with limitless options.
“Once the convergence of GPS, compasses and cameras was complete, mobile AR applications started coming to people in big numbers,” Mr. Lens-FitzGerald says. “Finally the phones became powerful enough to combine camera images with information and interact with it in real time.”
He says the three elements had to come together in one device for that to happen. “GPS tracks where you are, a compass will tell you where the phone is pointing and the visual part recognizes what you’re looking at.”
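The sensor fusion he describes can be sketched in a few lines of Python: a GPS fix gives the user’s position, a bearing calculation gives the direction to a point of interest, and the compass heading plus the camera’s field of view decide whether that point should be drawn on screen. This is a minimal illustration under assumed names and a 60-degree field of view, not Layar’s actual code.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def in_camera_view(user_lat, user_lon, heading_deg, poi_lat, poi_lon, fov_deg=60):
    """True if the point of interest falls within the camera's horizontal field of view."""
    b = bearing_deg(user_lat, user_lon, poi_lat, poi_lon)
    # Smallest signed angular difference between camera heading and POI bearing
    diff = abs((b - heading_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2
```

A user in Amsterdam facing due north would see an overlay for a landmark to the north, while one due east would stay off screen until the phone is turned toward it.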
“Mobile AR applications are being optimized and made better every day,” Mr. Lens-FitzGerald says. “It’s amazing what we now have in our pockets and what it can do.”
Credits: Financial Post
On public transit, we all tend to become jerks. It’s a defense mechanism, a means to get from point A to point B with the least amount of social harassment (because, really, when does anyone have anything nice to say to you on the train or bus?). People want money, they want your seat, or they want you to listen to their loud phone conversation with their ex.
B Line Pulse is a social app that’s attempting to buck this trend. Developed by Hornall Anderson and 4Culture for Seattle’s RapidRide B Line bus (the Bellevue-Redmond route that ferries many Microsoft employees to work), it’s a web app that asks the bus-riding community questions and creates artistic visualizations from the collective answers.
The daily questions are icebreakers (“What color do you feel like today?”), plain trivia (“Guess the average age of a Bellevue resident”), and a means to vent about your experience (“How do you feel when a fellow rider talks to you on the bus?”). Using the app is like filling out a comment card for your life, and answers are tallied in colors and shapes, instavisualizations that are as satisfying as the anonymous answers themselves. And at the same time, the app is tracking stats like how often and quickly you answer, awarding badges for participation in a touch of gamification that adds levity to the experience.
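The tally-and-badges mechanics described above can be sketched in a few lines of Python. The badge names and thresholds here are invented for illustration; they are not the app’s actual rules.

```python
from collections import Counter

def tally(answers):
    """Count how often each answer was given -- the raw numbers a
    visualization could map to colors and shape sizes."""
    return Counter(answers)

def award_badges(answer_count):
    """Participation-based badges with hypothetical thresholds."""
    badges = []
    if answer_count >= 1:
        badges.append("first ride")
    if answer_count >= 10:
        badges.append("regular")
    if answer_count >= 50:
        badges.append("commuter of the month")
    return badges
```

For example, `tally(["blue", "red", "blue"])` yields counts of 2 and 1, enough to size two shapes on screen, while a rider with a dozen answers would hold the first two badges.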
So is B Line Pulse a game? Is it art? The designers won’t categorize the experience under one umbrella, and that’s what makes it interesting. At the end of the day, it’s a satisfying phone experience that attempts not to distract you from your daily commute with Angry Birds and Facebook updates, but to improve your whole outlook on it.
And, in a sense, it gets you talking to your fellow a-holes, reminding everyone that, yes, we’re all in this stinky dirty box together. (Plus, who knew: about 50% of riders won’t mind if you strike up a conversation.) If you’d like to try B Line Pulse, you don’t have to fly to Seattle and get a job at or around the Microsoft campus. Just visit the link below on your mobile device.
Furby, AIBO and Pleo might be fantastic robot pets, but can they carry hundreds of pounds, outrun a human or lead a school of fish? We think not.
Wired scoured the world’s laboratories for the coolest and cutest animal robots.
An elastic, flexible robotic worm on wheels can inch its way through a simple set of obstacles.
Mechanical engineer Jordan Boyle modeled the 3-D-printed serpentine ‘bot after Caenorhabditis elegans, one of the most widely used animal models in neuroscience and genetics research.
RoboWorm can adapt to its environment, but it’s not “powerful and robust enough to actually throw out there in the real world,” Boyle said. It still lacks the mechanical and computational prowess to work in search-and-rescue missions, which Boyle hopes it will someday do. For now, the mechanical crawler can neither burrow through rubble nor sense its surroundings — both necessary capabilities for a rescue bot.
“It looks like it’s detecting its environment and responding to it, but it’s actually doing that solely on the basis of proprioception, or one’s sense of body posture,” Boyle said. That’s cool, but not entirely useful for a rescue mission.
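Proprioception-only control of this kind can be illustrated with a toy model: the head joint is driven directly, and every other joint simply follows the posture of the joint ahead of it, so an undulating wave travels down the body without any external sensing. This Python sketch shows the principle only; it is not Boyle’s controller, and all names and constants are invented.

```python
import math

def simulate(n_joints=6, n_steps=20, gain=0.9):
    """Each step, joint i copies joint i-1's previous angle (scaled by
    `gain`), while the head is driven sinusoidally. The result is a
    traveling wave produced purely from the body's own posture."""
    angles = [0.0] * n_joints
    history = []
    for t in range(n_steps):
        prev = angles[:]                    # posture from the last step
        angles[0] = math.sin(0.5 * t)       # head drive
        for i in range(1, n_joints):
            angles[i] = gain * prev[i - 1]  # proprioceptive follow
        history.append(angles[:])
    return history
```

Each joint’s angle is just a delayed, attenuated copy of the head’s, which is why the motion looks responsive even though nothing in the loop reads the outside world.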
Pending funding, Boyle will start working on a new prototype that might actually be able to help emergency responders.
Until recently, the idea of holding a conversation with a computer seemed pure science fiction. If you asked a computer to “open the pod bay doors”—well, that was only in movies.
But things are changing, and quickly. A growing number of people now talk to their smart phones, asking them to send e-mail and text messages, search for directions, or find information on the Web.
“We’re at a transition point where voice and natural-language understanding are suddenly at the forefront,” says Vlad Sejnoha, chief technology officer of Nuance Communications, a company based in Burlington, Massachusetts, that dominates the market for speech recognition with its Dragon software and other products. “I think speech recognition is really going to upend the current [computer] interface.”
Progress has come thanks in part to steady advances in the technologies needed to help machines understand human speech, including machine learning and statistical data-mining techniques. Sophisticated voice technology is already commonplace in call centers, where it lets users navigate through menus and helps identify irate customers who should be handed off to a real customer service rep.
Now the rapid rise of powerful mobile devices is making voice interfaces even more useful and pervasive.
Jim Glass, a senior research scientist at MIT who has been working on speech interfaces since the 1980s, says today’s smart phones pack as much processing power as the laboratory machines he worked with in the ’90s. Smart phones also have high-bandwidth data connections to the cloud, where servers can do the heavy lifting involved with both voice recognition and understanding spoken queries. “The combination of more data and more computing power means you can do things today that you just couldn’t do before,” says Glass. “You can use more sophisticated statistical models.”
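Glass’s point about data and statistical models can be made concrete with the simplest possible statistical language model: a bigram counter that predicts the next word from the previous one. This is only a toy sketch, far cruder than anything Nuance or MIT would deploy, but it shows why more data helps: every additional word pair sharpens the counts the prediction rests on.

```python
from collections import Counter, defaultdict

def train_bigrams(words):
    """Count, for each word, which words follow it in the corpus."""
    follows = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        follows[w1][w2] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower seen in training, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]
```

Trained on a few words, the model guesses badly; trained on millions, the same counting scheme becomes a usable predictor, which is the “no data like more data” effect in miniature.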
The most prominent example of a mobile voice interface is, of course, Siri, the voice-activated personal assistant that comes built into the latest iPhone. But voice functionality is built into Android, the Windows Phone platform, and most other mobile systems, as well as many apps. While these interfaces still have considerable limitations (see Social Intelligence), we are inching closer to machine interfaces we can actually talk to.
Nuance is at the heart of the boom in voice technology. The company was founded in 1992 as Visioneer and has acquired dozens of other voice technology businesses. It now has more than 6,000 staff members at 35 locations around the world, and its revenues in the second quarter of 2012 were $390.3 million, a 22.4 percent increase over the same period in 2011.
In recent years, Nuance has deftly applied its expertise in voice recognition to the emerging market for speech interfaces. The company supplies voice recognition technology to many other companies and is widely believed to provide the speech component of Siri.
Speech is ideally suited to mobile computing, says Nuance’s CTO, partly because users have their hands and eyes otherwise occupied—but also because a single spoken command can accomplish tasks that would normally require a multitude of swipes and presses. “Suddenly you have this new building block, this new dimension that you can bring to the problem,” says Sejnoha. “And I think we’re going to be designing the basic modern device UI with that in mind.”
Inspired by the success of voice recognition software on mobile phones, Nuance hopes to put its speech interfaces in many more places, most notably the television and the automobile. Both are popular and ripe for innovation.
To find a show on TV, or to schedule a DVR recording, viewers currently have to navigate awkward menus using a remote that was never designed for keying in text queries. Products that were supposed to make finding a show easier, such as Google TV, have proved too complex for people who just want to relax for an evening’s entertainment.
At Nuance’s research labs, Sejnoha demonstrated software called Dragon TV running on a television in a mocked-up living room. When a colleague said, “Dragon TV, find movies starring Meryl Streep,” the interface instantly scanned through channel listings to select several appropriate movies. A version of this technology is already in some televisions sold by Samsung.
Apple is widely rumored to be developing its own television, and it’s speculated that Siri will be its controller. The idea has been fueled by Walter Isaacson’s biography of Steve Jobs, in which the late CEO is said to have claimed that he’d “finally solved” the TV interface.
Meanwhile, the Sync entertainment system in Ford automobiles already uses Nuance’s technology to let drivers pull up directions, weather information, and songs. About four million Ford cars on the road have Sync with voice recognition. Last week, Nuance introduced software called Dragon Drive that will let other car manufacturers add voice-control features to vehicles.
Both these new contexts are challenging. One reason voice interfaces have become popular on smart phones is that users speak directly into the device’s microphone. To ensure that the system works well in televisions and cars, where there is more background noise, the company is experimenting with array microphones and noise-canceling technology.
Nuance makes a number of software development kits available to anyone who wants to include voice recognition technology in an application. Montrue Technologies, a company based in Ashland, Oregon, used Nuance’s mobile medical SDK to develop an iPad app that lets physicians dictate notes.
“It’s astonishingly accurate,” says Brian Phelps, CEO and cofounder of Montrue and himself an ER doctor. “Speech has turned a corner; it’s gotten to a point where we’re getting incredible accuracy right out of the box.”
In turn, the kits shore up Nuance’s position, helping the company improve its voice recognition and language processing algorithms by sending ever more voice data through its servers. As MIT’s Glass says, “there has been a long-time saying in the speech-recognition community: ‘There’s no data like more data’.” Nuance says it stores the data in an anonymous format to protect privacy.
Sejnoha believes that within a few years, mobile voice interfaces will be much more pervasive and powerful. “I should just be able to talk to it without touching it,” he says. “It will constantly be listening for trigger words, and will just do it—pop up a calendar, or ready a text message, or a browser that’s navigated to where you want to go.”
Perhaps people will even speak to computers they wear, like the photo-snapping eyeglasses in development at Google. Sources at Nuance say they are actively planning how speech technology would have to be architected to run on wearable computers.
Credits: Will Knight