Visme launched its first “Visualize Me” infographic contest, and we’re now pleased to announce the winners of this fun initiative, which challenged participants to do something they had never done before:
Create a unique and eye-catching visual of their life story or resume for the chance to win $5,000 in prizes.
Meet the winning contestants and check out the cool designs they created.
iKlip AV YouTube video courtesy IK Multimedia
iKlip AV Review by David Cox
The iKlip series by IK Multimedia has a history of delivering sturdy, well-made stands and clip holders for smart devices: from iPad holders for desks, to small folding surfaces that keep smartphones and tablets upright on tables, to now rather advanced devices that keep phones and cameras steady while the user is moving around.
Now along has come the iKlip A/V, the first smartphone broadcast mount for pro-quality audio/video. It enables the user to deploy professional microphones and monitor their levels, all while holding both phone and mount in the right hand.
The device has a great feel: hefty but not heavy. The upright section that receives the multipin plug from the external microphone or radio-mic receiver also doubles as a hand grip.
iKlip AV diagram courtesy IK Multimedia
Any smartphone attached to the mount suddenly feels like a DSLR, and the display of the phone feels like the viewport of a pro camera, all thanks to the horizontal layout and format of the iKlip AV.
The device has a built-in mic preamp that plugs into your device via a 1/8” TRRS analog audio jack. Any mic or similar device can either fit on its grip or fit on iKlip A/V’s standard UNC 1/4”-20 thread tripod/camera mount.
The device has clearly been designed with mobile broadcast professionals in mind, to provide pro-quality audio for mobile videos. It has an integrated microphone preamp with phantom power, providing high-quality audio via an external microphone. Users can insert a microphone or wireless receiver into the preamp’s XLR input, then plug the 1/8” TRRS cable into a smartphone’s headphone/mic jack.
iKlip AV photo courtesy IK Multimedia
Another nice touch is the gain dial on the side of the handle, which enables the user to give the signal a boost. The iKlip A/V provides 48V phantom power from two AA batteries, and also features built-in support for holding most popular wireless microphone receivers. The main function of a grip is, of course, to keep a moving shot steady, and the overall heft of the iKlip A/V certainly facilitates this. Shots that would otherwise be shaky are rendered steady by virtue of the device, and should the user need to steady the assembly further, a handy tripod socket is fitted beneath.
On the whole, a lovely piece of kit that could easily enable a slew of smartphone reporters and filmmakers to finally overcome the two biggest barriers to proper production with phones: steadiness of coverage and quality of the sound signal. For the best recording quality possible, the iKlip A/V also provides real-time monitoring and gain control options: it comes with an input gain knob plus a 1/8” headphone output.
iKlip AV photo courtesy IK Multimedia
iKlip AV Features
- Professional audio and video broadcast recording system for smartphones
- XLR mic preamp with phantom power and gain
- Integrated wireless receiver support
- 1/8” TRRS analog audio output
- Headphone output for real-time monitoring
- Standard UNC 1/4”-20 tripod and camera mount threads
- Powered by two standard AA batteries
For more information:
By David Cox
Augmented World Expo, to be held from the 1st to the 2nd of June, is an extraordinary event that over the past four years I’ve watched grow into a major hub of ideas and activity. Augmented Reality (AR) itself was once a relatively ‘fringe’ notion, the preserve of the research departments of major tech universities, the R&D sections of companies and the basements of serious hackers. In 1998, when I was a visiting scholar at MIT Media Lab, AR and wearables were the kind of thing you would read about in trade journals: the type of technology used by Boeing employees to help them wire fuselages (a use to which it is still put, by the way), by ‘cyborgs’ building communities of advertising-free wireless networks, like Prof. Steve Mann’s students at the University of Toronto, or, alas, by the military.
The people who built wearables and developed AR back then were generally super-advanced researchers and hackers who had the hardware chops to source and assemble embedded computer components that were obscure, difficult and expensive to obtain. Thad Starner’s classic Tin Lizzy wearable computer design at MIT was among the first attempts to establish a standard form factor in the mid-1990s, for example. Such machines needed the builder to kluge batteries from camcorders and custom-wire them to stacked, Linux-installed dedicated embedded computers, the sort usually sold to boat builders and light aircraft makers. Wearable computing folks built one-handed chording keyboards and molded them to their hands using special heat-pliable surgical plastics. The one-off headsets were built from components scavenged from things like camcorder viewfinders, or ordered from obscure companies that normally only did business with large organizations buying in quantities of tens and fifties. This stuff was unique, rare, and you needed to be a jack-of-all-trades to do it well. You needed to be obsessed. Today you can buy a complete wearable computer on eBay for about 500 bucks, or find instructions on instructables.com to make one for half that with a Raspberry Pi, a LattePanda, a BeagleBoard or an Arduino.
Yes, it is all different some twenty-two years later. Today, AR has reached into more and more lives by virtue of the simple and total prevalence of the post-iPhone smart device: tablets, smartphones, smart watches and those small portable embedded computers you see at the Maker Faire. IP addresses seem to apply to everything today, and even socks, keys and belt buckles might have an RFID tag and a website to monitor their position, telemetry and everything else. Today the so-called IoT (Internet of Things) is more than an idea; it is a concept widespread enough to justify its own conferences worldwide and the deployment of a whole new category of IP addressing. The sheer volume of inexpensive Chinese-sourced components and labor, and the resulting ability to manufacture products on a limited basis close to cost, all point to a new set of realities for the AR and wearable computing world. Hence the explosion in popularity and availability that can justify an event as big and as bold as Augmented World Expo 2016, AR showcase to the world.
I spoke last Friday to Ori Inbar, cofounder and executive producer of this year’s Augmented World Expo 2016 – Superpowers to the People! convention at the Santa Clara Convention Center. AWE2016 promises to attract a record crowd of upwards of 4,000 people, who will be arriving to see the very latest in augmented reality hardware, software, ideas and trends. AWE was actually held in Asia last year, in what was the first-ever augmented reality event of its type in the region; it showcased many new startups and companies and attracted over 2,000 people.
The AWE convention this year in Silicon Valley has taken out double the space on the Santa Clara Convention Center exposition floor, and much of that will be focused on what’s known as the enterprise end of the market: the commercial and industrial uses of augmented reality. This means headsets, other devices and software for medical, industrial and official large-scale, big-dollar applications.
Architecture firms, the armed services, any group that can buy big and spend big and needs “fleets” of AR units for groups of people who need data about the building of things, or the viewing of real-time audiovisual data-based phenomena. For example, welders who need data about what they are making. Builders who can see instructions about what they are constructing without recourse to paper plans. Doctors who can have data about a patient superimposed over them during surgery. Drone pilots who need to see not only what the drone sees but also all the other information about what the camera is doing onboard. Actual plane pilots who need 3D floating information about flight controls over the view through the cockpit window, “Iron Man” style. These are the ‘enterprise’ buyers: large groups with deep pockets who need lots of units for whole groups of users, who in turn need training in the use of those units.
Then there is the consumer market. That’s really regular people like you and me, the ‘people’ of the convention’s name, who buy AR apps for our phones, or possibly a set of glasses for lifestyle or productivity software. Serving this market are firms like Meta, maker of the famous “Spaceglasses”. In 2012 Meta was a startup, 3D printing its headsets, based then on the high-quality, prosumer-level “Moverio” glasses by Epson. The distance tracking (which let the user appear to ‘pick up’ virtual objects) was done using a modded Leap Motion sensor and some very well put together custom software.
Google Glass came and went from the market in the space of several years, but as Bruce Sterling (longtime AWE regular and its keynote speaker for many years) has noted, Glass was not so much an AR device as an annotated-reality system. It popped information above and to the right of the viewer, as if someone were constantly putting up small virtual Post-it notes. There was often little to no relationship between what one was seeing in reality and the information displayed, as there is with true AR. Perhaps this is one of the reasons it never took off. That, and the fact that Google underestimated how the population would react to being video recorded by Glass wearers in a way that assumed privacy did not matter. It is likely that more subtle variations on the Glass concept, less intrusive in terms of social relations, may well present themselves this year.
Today Meta has the backing of serious money and is about to put out a computer-connected headset that looks like a cross between a futuristic motorcycle visor and a prop from a science fiction movie. It lets people pass glowing 3D objects to each other, and scale, rotate and pick up those objects. Meta’s aim is to bypass the keyboard and mouse altogether and offer computer users a completely gesture-based interface in which 3D data floats hologram-style in front of the face and is manipulated by one’s hands.
Another dimension of the consumer Augmented Reality market is wearable technology, such as fitness wristbands like Fitbit and the Apple Watch category, serving the ‘quantified self’ idea of personal telemetry. The gadgets of wearable tech, and the market for the data associated with them, are enough to justify Target stores now having a whole “Wearables” section in their consumer electronics departments.
Ori Inbar says that 2016 will be the year in which we see lots of new hardware and software that is “well past the gimmicky stage” prevalent several years ago. Among these we might include the “scan and see” type systems; I’m thinking here of such technologies as Aurasma and Layar, which were simply smartphone apps offering the scanning of printed documents. Today more serious (presumably real-time, data-driven) applications will be on offer. I have nothing against Layar and Aurasma, but apart from popping up some visual data over a printed image, there is little these apps do that is of actual direct use to the consumer, little that adds value to a life experience. It is not indispensable, in other words.
The Waygo translator tool, by contrast, is a good example of an app that translates written Chinese to English in real time and pops up information about that translation for the user. This is an example of what we might call an ‘active’ AR smartphone app: one that processes what it sees and provides the user with information in a way that could only take place by means of AR.
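To make the distinction concrete, here is a minimal sketch of that ‘active’ loop: a camera frame comes in, recognition and translation happen in the middle, and overlay labels come out. The OCR and translation steps are stand-in stubs (Waygo’s internals are not public), and the little phrasebook is invented for the example; only the shape of the pipeline is illustrated.

```python
# Hedged sketch of an 'active AR' translation loop. The recognizer and
# translator below are invented stubs, not Waygo's actual code; a real
# app would run OCR on camera frames and feed a translation model.
PHRASEBOOK = {"出口": "exit", "入口": "entrance", "茶": "tea"}  # hypothetical

def recognize_text(frame: str) -> list:
    """Stub OCR: treat the 'frame' as whitespace-separated glyphs."""
    return [w for w in frame.split() if w in PHRASEBOOK]

def translate(word: str) -> str:
    """Stub translation via the toy phrasebook."""
    return PHRASEBOOK[word]

def annotate(frame: str) -> list:
    """One pass of the loop: detect text, translate it, emit overlay labels."""
    return [f"{w} -> {translate(w)}" for w in recognize_text(frame)]

print(annotate("出口 茶"))  # → ['出口 -> exit', '茶 -> tea']
```

The point is simply that the overlay is computed from whatever the camera currently sees, rather than looked up from a fixed printed marker, which is what separates ‘active’ AR from the scan-and-see systems above.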
Another important development on display this year is smart fabrics. What are smart fabrics? They are a technology on the rise as clothing and apparel converge more and more with smart devices and the cloud. When programmers view a fabric, they often see a busing system for channeling data. Today fabrics can be used as surfaces for display, for input, and even for feedback in the form of pressure to the user as a means of interaction with virtual data. Worn fabrics can be bioluminescent, as the threads used to weave them can have the properties associated with deep-sea fish and glowing insects. This is the brave new world of the intersection of biotech with digital media.
Fabrics might well serve, for example, as foldable, cuttable displays. A fabric could literally be a screen. It’s like projecting a movie onto a dress made of movie-screen material, only there is no projector: the dress is the display. Flaschen Taschen, an LED array screen by the San Francisco hackerspace Noisebridge, is a good example of this type of development at a relatively low resolution, and was ‘all over’ the Maker Faire this year.
A comprehensive demonstration display of smart fabrics will be on show at this year’s Augmented World Expo so anyone attending will be treated to that also. The relationship between augmented reality and virtual reality will also be at the forefront this year.
It’s going to be great.
See you there.
Augmented World Expo
1st – 2nd June 2016
Santa Clara Convention Center
By David Cox
SuperPowers to the People: Augmented World Expo 2015: an introduction to an audio interview with Professor Steve Mann (see link at end of article). The augmented reality conference AWE2015 is coming up, and its theme is “Superpowers to the People”. As usual, the buzz is around Meta AR, the Kickstarter-based firm that developed a headset and developer kit based around UNITY. Since 2013, the first year of META’s development, the company has grown considerably from a 3D-printed prototype housing based on the Epson “Moverio” glasses.
META’s innovation was to add a Kinect- or Leap Motion-style tracker at the front, bridge-of-the-nose area, so that the computer knows where to place objects in your field of view from your ‘point of eye’ (POE), to use the jargon. This tracker also knows to ‘see’ your hand and to interpret it as the device with which objects are manipulated, moved and transformed.
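As a rough illustration of what that placement math involves, consider a toy version in Python: the tracker reports the hand relative to the wearer’s head, the hand position is lifted into world coordinates, and a virtual object becomes grabbable once the hand comes within reach of it. The names, numbers and the simplified no-rotation transform are all my own for illustration, not META’s actual API or algorithm.

```python
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float
    def add(self, o: "Vec3") -> "Vec3":
        return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)
    def dist(self, o: "Vec3") -> float:
        return math.sqrt((self.x - o.x) ** 2 + (self.y - o.y) ** 2 + (self.z - o.z) ** 2)

def hand_to_world(head_pos: Vec3, hand_offset: Vec3) -> Vec3:
    """The nose-bridge tracker sees the hand relative to the head (the
    'point of eye'); adding the head's world position gives the hand's
    world position. A real system would also apply head rotation."""
    return head_pos.add(hand_offset)

def can_grab(hand_world: Vec3, obj_pos: Vec3, reach: float = 0.1) -> bool:
    """Treat a virtual object as grabbable once the tracked hand is
    within `reach` metres of it."""
    return hand_world.dist(obj_pos) <= reach

head = Vec3(0.0, 1.6, 0.0)    # wearer at the origin, eyes ~1.6 m up
hand = Vec3(0.05, -0.3, 0.4)  # hand seen 40 cm ahead, 30 cm below the eyes
cube = Vec3(0.05, 1.3, 0.45)  # a virtual cube floating near the hand
print(can_grab(hand_to_world(head, hand), cube))  # → True
```

Production systems layer head rotation, occlusion and gesture recognition on top of this, but the basic move of registering the hand against virtual objects in a shared coordinate frame is the same.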
Steve Mann, Chief Scientist at META AR is a true pioneer of both wearable computing and Augmented Reality, and has been building his own wearable devices since 1974. I first met him in 1995 at the MIT Media Lab on a research visit.
A strong believer in personal freedom, Mann believes that wearable computing, especially the ability to manage one’s personal space as it pertains to the recorded image, is a path to democracy. He views technology like META as a great equalizer in the war against surveillance. Against the top-down vector of ‘surveillance’ he posits ‘sousveillance’, which is ‘seeing from below’.
Simply put, if we are all wearing devices that enable us to view each other, this effectively neutralizes the one-way vector of power that cameras in the hands of the powerful make possible. Of course, in order for sousveillance to become feasible, there needs to be a social consensus in place first. But one step toward this, to be sure, is an affordable, universal form of wearable technology that facilitates customization and ease of use. The wearer truly should be able to configure their field of view and the nature of all that is augmented over that field of view. With META AR (AKA Spaceglasses), at least the version that has been made available to developers since 2014, the tracking technology works well enough to permit this, as do the developer tools, based as they are around the free 3D and 2D game engine UNITY.
I interviewed Steve Mann in the lead-up to Augmented World Expo 2015, where he will be delivering a speech on the history of Augmented Reality as well as holding workshops on META viewing tools. Mann spoke of the difference between what he called the “Big AR” of the 1960s – the type popularized by Ivan Sutherland and the famous “Sword of Damocles” head-mounted display built in university research labs during the Cold War. These were large, tethered rigs tied by cables to mainframe computers, hooked up to cumbersome-looking binocular visors the size of bike handlebars.
Mann’s own “Little AR”, by contrast, developed in the late 1970s when he was but twelve years old and built from more or less found materials, was aimed squarely at empowering the individual, who, thus untethered, could walk around and have his or her data made available either in motion or in situ.
As the number of AR headsets proliferates almost exponentially and the market becomes saturated, veterans like Steve Mann are in a position to lay down some of the guiding principles as to what makes an AR ecosystem of user-provided content successful. One defining characteristic is openness: the user’s ability to configure their own resources. If a system is closed, it undermines the whole basis of a meaningful AR; hence the failure of Google Glass, according to Mann, as he outlines during my interview (see link below).
Google Glass exudes privacy. Privacy of sight. Privacy of seeing. And through its utterly closed ecosystem of use and apps, it stands in stark contrast to the notion of a democratic and participatory role for what should be as free and open to use as the low-cost pay-as-you-go cellphone. We have a long way to go before any system of AR is truly of ‘power to the people’, but the lowering of costs is a matter of time. A language of AR and a syntax of use, both dependent on the correct management of tools and education in their use, are key here. This is where policy comes in. The relationship of the UK government to the Raspberry Pi Foundation comes to mind: massive subsidy in order to promote broad literacy and creative expression in the population. We need an Arduino-style AR revolution. A Pi-AR, if you will. If Lenin urged Dziga Vertov to make an ‘art of twelve kopeks’, we today need an AR of fifteen dollars.
And the user must be able to customize to their own specifications as much as possible, right down to the hardware where possible. The iPhone and the iPad are closed models that render the user a consumer of prepackaged services. AR also offers a new aesthetic opportunity: a new set of social relations defined by interesting, meaningful relationships based on data, places and people. The experimental possibilities of drifting through open fields of participatory urban space, and of moving toward new ways of working and living together through those less managed open spaces, might become real: a non-neoliberal, technologically mediated commons in which AR assists in the development of newly reimagined urban possibility.
Interacting through this environment, both figuratively and literally, we need to encourage democratic and participatory models of use for AR. Just as Bruce Sterling identified the SPIME, an object trackable through time, space and virtual space, an augmented subject can consider herself self-consciously a spime: she occupies the real world and the virtual world simultaneously, her data influencing her decisions and actions as her body occupies space. It is with the proliferation and deployment of very low-cost wearable computers, based on interoperability and the principle of the user as subject, that Augmented Reality is beginning to mature as a medium and as a technology. And just as with any new technological shift, a new language should logically follow. These and other concepts will be discussed by Steve Mann as part of the general theme of this year’s AWE2015, which is “Superpowers to the People”.
From cinema came the language of the close-up, the long shot and the jump cut; from computers came save-as, cut-and-paste and the selection box. AR is sure to bring with it its own language, with terms such as “flowspace” (the space in which the subject moves such that their data moves with them meaningfully) and objects-as-interface (reaching out to a door handle with AR can have the effect of unlocking the door). Thus a kind of dance, the interplay and overlap of things, places and people with the information pertinent to them, all the time, in real time, will spawn its own new terminology and lingo. It is the performative language partly of theater, of urban planning, of cinema, of dance and of manners. From the world of filmmaking we might call the experience of Augmented Reality, with its floating objects in space and holographic objects interacting with the world around us, a kind of mise-en-scène and directorial scene blocking in real time. Everyone a director of their own real-time experience.
New ways of seeing are thus required, to quote John Berger, in which the age-old Renaissance principle of what Mann calls the ‘point of eye’, the exact position of the iris where the world we view converges on our gaze, needs to be rethought all over again. It’s one thing to have all the data of the world around you converge on your eyes only; quite another to consider these tools for the population beyond yourself and your own personal needs.
Can we strip away the sense of entitlement, ownership and control from the singular point of view of the typical user depicted in the PR materials of Augmented Reality, and, perhaps through the very same tools, replace it with a new set of ways of viewing the world: less possessive, more inclusive, more considerate of the needs of the planet and its all-too-fragile membrane of a surface? Along with the need for a new language of AR comes the need for a new language of being in the world, one that such technologies might just help usher in. If so, Professor Steve Mann is just the kind of progressively minded visionary whose pioneering work in the field gives him the right, quite literally, to light the way.
I interviewed Steve Mann on May 15th, 2015
Here is the link to the audio interview
A link to Augmented World Expo 2015
By David Cox
Years ago, portable speakers were heavy, cumbersome affairs. If there were batteries at all, they generally were not rechargeable and ran out of juice quickly.
The iLoud portable speaker
IK Multimedia, iLoud
The whole point of small speakers was to have powered amplification where you needed it: outside, or in situations where you could not plug in easily. The 6X AAA battery-powered Roland Microcube and its ilk filled a niche about 10 years ago for guitarists and keyboardists, and did the job pretty well, but these were really solid mini stage amps, scaled down for small cafes and busking, not really suitable for, say, DJ-ing in galleries or at a party. If you were trying to play your iPad through them, it was like using a loud-hailer: not much subtlety to the highs and lows, but okay if you were ripping it like Kurt Cobain. The alternative was to bring a small hi-fi, but that again is a different kind of experience, not really a self-contained speaker as such, and you’re still plugged in to that wall socket.
But now both speaker technology AND battery technology have advanced to the point that very powerful, very high-quality speakers can be manufactured that pack a fairly hefty wallop in sound delivery and bass response while leaving a relatively small footprint. Studio monitor speakers, once the sole preserve of high-end recording booths, have escaped into the laptop bags and DJ kits of the smart-device generation and joined the plethora of hardware peripherals that accompany today’s sample-driven music performance world.
IK Multimedia today launched iLoud®, the first portable stereo speaker designed for studio-monitor quality on the go, now available from music instrument and consumer electronics retailers worldwide. The iLoud battery-operated speaker combines superior power, pristine frequency response and amazing low end in an ultra-portable design, making it the perfect alternative to studio speakers for music creation, composition and playback on the go.
Loud, Clear and Bassy, like a Lo-Rider at Night in San Francisco’s Mission District Going by Low and Slow, my Brother.
The iLoud speaker is indeed very loud. In fact, it’s 2 to 3 times louder than comparable-size speakers: a blasting 40 W RMS of power. But iLoud is extremely clear at all volume levels thanks to an onboard DSP processor and a bi-amped, 4-driver array of highly efficient neodymium loudspeakers that provide accurate, even response across the entire frequency spectrum for unbelievable realism of sound. For deep bass response, iLoud’s bass-reflex design allows frequencies to go down to 50 Hz, an amazing low end for this small an enclosure.
I’ve been using the iLoud for a few days now with Netflix and DVDs and have been amazed at how much I can actually hear on these movie soundtracks that would otherwise remain hidden. I’m talking about very densely mixed films like Ip Man (both 1 and 2) and that true litmus test for all movie sound design perfectionists, Dennis Hopper’s 1988 Gang-vs-Gang-vs-Cops film Colors (play it LOUD!!). For more on why this film is so important for understanding the importance of film sound, see this excellent article by Philip Brophy.
iLoud is the ideal speaker for musicians and audiophiles who demand an accurate reproduction of a wide range of musical styles from rock, hip-hop and electronic dance music, to more nuanced and sonically demanding genres like jazz, classical and acoustic.
Portability and the types of gigs this implies.
The iLoud speaker is powered by a high-performance Li-ion rechargeable battery with smart power-management features that reduce its power consumption, so it can be used for up to 10 hours without recharging. This makes iLoud an ideal portable speaker solution for mobile musicians. I find it fits in a backpack very easily, for gigs of a sort that previously would have required different thinking about transport. I’m thinking of playing soft electric guitar via the iPhone at the cafe table or in the backseat of a car. Or playing keyboard with a movie soundtrack and a data projector for a group of 20 visitors in a small gallery, on the sidewalk, or in the alleyway with the barbecue and the beer buckets.
The Real Innovation – Wired and Wireless Connectivity
iLoud supports Bluetooth operation for wireless audio streaming anywhere and everywhere from a mobile device such as an iPhone, iPad, iPod touch, Android smartphone or tablet for casual listening. For sound sources like MP3 players that do not have Bluetooth capabilities, the iLoud also has a stereo 1/8″ mini-jack input for connecting line-level devices such as home stereos, DJ gear, mixers, MP3 players, and more.
Plug and Play Convenience
iLoud also offers the ability to connect a guitar, bass or dynamic microphone directly to the speaker and process the sound with a multitude of real-time effects apps on iOS devices. It features the same circuitry as IK’s iRig – the most popular mobile interface of all time – and allows users to plug in guitars or other instruments and access AmpliTube or other audio apps on their mobile device for practicing, performing and recording. The input also accommodates dynamic microphones, making it possible to run an app like IK’s VocaLive for real-time vocal effects and recording.
I recommend the iLoud for the experience of having a well-made, truly portable, RECHARGEABLE (very important) speaker of genuine studio quality with you whenever you need it. And watch “Colors” with it when you get a chance. LOUD!!
Pricing and Availability
iLoud is priced at $299.99/€239.99 (excl. tax) and is available now from the IK network of music and electronic retailers around the world.
For more information, go to:
For a comprehensive collection of videos that showcases iLoud’s feature set, go to:
By David Cox
It’s a rainy night in the Mission. People move back and forth along Mission Street. There is the smell of burritos, tobacco and perfume, and the effervescent sense that something is happening. There is rain; there is marijuana wafting up and down the street. There are cafes and nightclubs. There are taxis and cop cars crawling up and down the street. There is Gamebridge.
It began three or four years ago at Noisebridge Hackerspace, between 18th and 17th streets in the Mission District, to give those without the means to build electronic inventions at home a place to access resources, converge and share tools. It’s been a hub of activity for anyone interested in putting ideas together: building something, making a robot, 3D printing an object, using fabrics, recycling computers, designing games or simply using a soldering iron when they don’t have the equipment at home. If a hackerspace game club (an adhocracy by its very nature) could be said to have an organizer, it is definitely the sharply intelligent, quickly spoken Canadian programmer Alex Peake, who has a background as a game developer. Peake peppers his descriptions of processes with vivid metaphors and always has a great visual concept to illustrate his ideas. He has an amazing passion for games, for programming and for teaching, and is one of the best in the business. Brennan Hatton and Bud Leiser also contribute detailed lessons, delivered with equal passion, that keep the Gamebridge regulars glued to the screen and to their own laptops in equal measure for hours at a time.
Someone has ordered pizza. It arrives steaming, filling the space with the scent of tomato paste, warm melting cheese and garlic. There is cold Diet Coke, and then the paper cups and napkins are broken out and discussions happen. It’s a great scene and everyone has something to offer.
One of the crowning triumphs of Gamebridge recently is a collaboratively developed augmented reality project called SimBridge, in which the entire Noisebridge space has been replicated in virtual 3D, so that it’s possible to move through it online while wearing a headset. While you are in the building, you can hold up a portable device like an iPhone or tablet and see the same space superimposed over the real one; this 3D game-like metadata annotates the real space, telling you what its sections are and what they are for. It also enables people to share a virtual representation of Noisebridge at a distance. These and other experiments push Noisebridge forward at a time when virtual reality and augmented reality are starting to really push the boundaries of what’s possible with the new technology.
Nobody really expects to make money out of this. The whole thing is really grassroots. This is the spirit of the original ‘homebrew computer clubs’ of the 1970s and it’s about experimentation and ideas for their own sake. To that extent it’s a utopian testing ground and it is made up largely of young people with laptops and passion.
It’s a great thing. It’s Gamebridge.