Twinthesis has been featured on The Next Web, take a look: http://thenextweb.com/shareables/2011/03/21/twinthesis-mac-app-explores-the-sounds-of-twitter/
Twinthesis has been featured on Synthtopia, take a look here: http://www.synthtopia.com/content/2011/03/15/twinthesis/
UPDATE: Official Bournemouth Uni Website Updated: http://bit.ly/kp0ODy
Further Information: Charles Elder, Press & PR Manager
(tel): 01202 961032 email: firstname.lastname@example.org
17 June 2011
How does your room ‘sound’? New app can help!
The creator of the ‘Twinthesiser’ – the unique web-based software which turns posts made on Twitter into real sounds – will present his latest project as part of the 2011 Festival of Design and Innovation at Bournemouth University.
Sam Harman, who is just completing his BSc (Hons) in Music and Audio Technology, will demonstrate his new iPhone application as part of the Festival which opens for a private view on Thursday, 23 June before opening to the general public on Friday, 24 June.
Sam’s iPhone Impulse Response Application is designed to capture the acoustical characteristics of a room (otherwise known as an impulse response), which can then be duplicated through a computer. “It’s really designed for musicians, audio technicians or acousticians, but the application makes it easy for anyone to use,” Sam enthuses. “Previously it’s required a lot of microphones, cables, laptops, etc., but now you can just do it all on your iPhone, then plug in to your computer and use the data collected by the application to make any audio on your computer sound like it was being performed or recorded within the room or environment that you’ve captured.”
Earlier this year, Sam introduced the world to his ‘Twinthesiser’ which he designed to “explore the ‘sound’ of twitter, in an attempt to sonify the human randomness being generated on the service.”
Through the ‘Twinthesis’ software, Sam has assigned each character its own distinctive tone. The software then accesses a Twitter feed every 30 seconds or so, selects one of the top 20 tweets at random and repeats it to produce a kind of rhythm or ‘symphony’ of high-pitched bleeps and deeper humming sounds.
“The Twinthesiser can then go through the tweets a character at a time to produce a sort of melody,” says Sam. “In time I hope we could get to the stage where it could pull data off Twitter at more than 100 times every second and this would produce a sort of global symphony.”
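The exact mapping isn’t spelled out here, but the idea of assigning each character a tone and stepping through a tweet can be sketched in a few lines of Python. The base pitch and the semitone folding below are illustrative assumptions, not the values used in the actual MaxMSP patch:

```python
# Illustrative sketch of the character-to-tone idea behind Twinthesis.
# The base pitch and semitone mapping are assumptions, not the values
# used in the real MaxMSP patch.

def char_to_freq(ch, base=220.0):
    """Map a character to a frequency by treating its code point as a
    number of semitones above a base pitch, folded into three octaves."""
    semitones = ord(ch) % 36
    return base * 2 ** (semitones / 12)

def tweet_to_melody(tweet, note_len=0.125):
    """Step through a tweet one character at a time, producing a
    (frequency in Hz, duration in seconds) pair per character."""
    return [(char_to_freq(c), note_len) for c in tweet]

melody = tweet_to_melody("hello world")
```

Because the mapping is deterministic, the same tweet always plays the same melody, while the stream of new tweets supplies the randomness.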
“Theoretically the application could be configured to draw data from Facebook or Twitter or from any other source of random data,” Sam continues. “You could also apply the engine to groups of people so you could take the tweets from one country and compare them with the sound of tweets from another country.
“It could become a sort of worldwide controllable instrument, which I think is really cool,” Sam concludes. “There are limitless things you can do.”
“Sam’s work on Twinthesis, along with the audio application he developed for the iPhone, is a perfect example of the brilliant work that our students in Music and Audio Technology are able to deliver,” says Dr Alain Renaud (title). “His work, along with that of other students, blends creativity and complex technologies to ultimately deliver products that have commercial potential in the field of Creative Technologies.”
BU’s BSc (Hons) in Music and Audio Technology gives students an opportunity to apply electronic and computer technologies to create contemporary music and audio. Students from the degree will join other emerging designers and innovators from BU’s School of Design, Engineering & Computing to display and demonstrate their creations at the 2011 Festival of Design & Innovation.
Open free to the general public from Friday, 24 June to Monday, 27 June on the University’s Talbot Campus, the 19th annual Festival – sponsored by B&Q, the UK’s leading home improvement retailer – will showcase over 170 designs and prototypes created by talented final year students completing undergraduate degrees in Product Design, Industrial Design, Design Engineering, Fashion & Textiles
(from BU’s partner institution, Wiltshire College, Salisbury), Interior Design, Computer Aided Product Design, Sustainable Graphics & Packaging (from BU’s partner institution, University College Yeovil) and Music and Audio Technology.
Further information on the 2011 Festival of Design and Innovation at BU – including opening times, exhibits and travel directions – is available on the Festival website: www.festival.bournemouth.ac.uk
Further information about the Twinthesis programme can be found at Sam Harman’s website – http://samharman.com/2011/03/twinthesis-twitter-powered-synthesis/
To hear the Twinthesiser ‘in action’, please visit – http://soundcloud.com/theharmonizer/twinthesis
As I’ll be graduating from university later this year, this is just an informal post about some projects I want to pursue once I have a little more time in my life. Throughout education in general, I’ve always thrown myself 100% into anything I’m doing. At college, I was running the local radio station, playing in the orchestra, working as a youth worker, and taking charge of all things technical for the dramatic society. At uni, I’ve been the student representative for the last two years, and taken charge of the design festival for my course at the end of the year. So all in all, I’m looking forward to having a bit of time to work on some projects of my own.
Firstly, I’d like to develop ‘Twinthesis’ further. This is a project I’m starting work on now, as I’m due to perform with it at a creative music technology concert on the 14th May (more details on request!). I’d like to develop some form of graphics engine to give the instrument a visualisation as well as a sonification process. I’d also like to implement emotion recognition, so the sound of a tweet differs depending on certain key words within it. An iPhone / iPad app is the final vision for the product, maybe with location sensing for collaborative performances.
Secondly, I would like to get myself up to speed with modern web development techniques. I am experienced in HTML / Flash development, but since college I really haven’t had the time or inclination to learn any of the new methods, languages or techniques. I’d like to master CSS and PHP development, as well as HTML5. I think these would be very beneficial skills to have in the future.
Finally, I’ve started blogging for a new tech blog. Going by the name of TechRant, it will be launched on May 1st of this year. Blogging and social media are things I’ve always been passionate about. From the first internet connection I had (dial-up speeds, oh yeah!), I was making my own websites and sharing content. I think the resources available for sharing on the web today are vast, but they are only the beginning. I think there’s a whole new wave of services on the horizon (streaming music, cloud-based applications, etc.), and I intend to be blogging about it all for many years to come. So make sure you check out TechRant on May 1st!
Well, that’s all for now really! Just an update on some of the projects I hope to embark on or complete after I graduate. This is just the beginning, so watch this space…
As part of the ‘Management Strategies & Entrepreneurship’ unit in the final year of my degree, I was required to thoroughly research, plan, and put forward a proposition for a contract to develop a virtual drum instrument. The assignment proved challenging, but incredibly useful, introducing me to skills and business acumen relevant to the music and audio technology industry. The contract tender document can be downloaded here. It should be noted that all contracts, entities, or companies appearing in this work are fictitious and included purely for assignment purposes.
Imagine being able to capture and store the sound of an acoustic space, like a local church, cathedral, or even your favourite recording studio. Now imagine being able to use that stored file in a convolution reverb plugin to make any audio sound like it was played in that space. Now imagine being able to capture the sound of that acoustic space right on your iPhone.
Introducing iResponse for the iPhone: the first impulse response iPhone application intended to generate impulse response files for use within reverb plug-ins. The application can record and generate impulse responses for any acoustic space. You can use either impulse-source excitation methods (such as a balloon popping or a starter pistol firing) or record a steady-state excitation signal played through a loudspeaker. The excitation signals to be played back through the loudspeaker consist of swept-sine tones of varying lengths. The process is simple: point your iPhone’s built-in microphone at the sound source within a given environment, select the length of the excitation signal you are using, and hit record. Once the recording has stopped automatically, it’s a ‘one-button’ process to perform the deconvolution processing required to generate the impulse response.
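The deconvolution step at the heart of any swept-sine impulse response tool can be sketched as a regularised frequency-domain division. This is a generic sketch of the technique in Python, not code from the iResponse app; the sweep parameters, signal lengths and regularisation constant are all assumptions:

```python
import numpy as np

def deconvolve(recording, sweep, eps=1e-8):
    """Recover an impulse response from a recorded swept-sine response:
    zero-pad, divide the spectra (with a small regularisation term to
    avoid dividing by near-zero bins), and return the time-domain IR."""
    n = len(recording) + len(sweep) - 1
    R = np.fft.rfft(recording, n)
    S = np.fft.rfft(sweep, n)
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps)
    return np.fft.irfft(H, n)

# Quick self-check with a synthetic 'room': two echoes at known delays.
fs = 8000
t = np.arange(fs) / fs
sweep = np.sin(2 * np.pi * (100 * t + (3000 - 100) / 2 * t ** 2))  # linear sweep
room = np.zeros(200); room[50] = 0.8; room[120] = 0.3              # fake IR
recording = np.convolve(sweep, room)
ir = deconvolve(recording, sweep)   # peaks should reappear at 50 and 120
```

The recovered impulse response is only accurate within the band the sweep actually covers, which is why longer, wider sweeps give cleaner results.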
Twinthesis is a MaxMSP patch I designed to explore the ‘sound’ of twitter, in an attempt to sonify the human randomness being generated on the service. This post is a quick overview of the synthesis engine, as well as a quick video outlining the features and concepts behind the patch. You can then download the synthesiser as a Mac application.
The aim of this project is to create a synthesiser capable of both additive and granular synthesis, using live tweets to generate and manipulate the sound. The synthesiser currently calls Twitter once every 30 seconds, so a new tweet is used to generate the sound every 30 seconds. The synthesis engine has an element of performance to it and can be used to create experimental music. An example of experimental music created by the synthesis engine is here:
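For readers unfamiliar with the granular side of the engine, here is a generic granular process sketched in Python: chop a source signal into Hann-windowed grains and overlap-add them, optionally reading the grains from random positions. This illustrates granular synthesis in general; the grain size, hop and randomisation are assumptions, not the values used in the patch:

```python
import numpy as np

def granulate(source, grain_len=1024, hop=256, randomise=False, seed=0):
    """Chop a source signal into Hann-windowed grains and overlap-add
    them. With randomise=True the grains are read from random positions,
    scrambling the source; with randomise=False the overlapping windows
    reconstruct a (scaled) copy of the input."""
    rng = np.random.default_rng(seed)
    win = np.hanning(grain_len)
    out = np.zeros(len(source))
    for start in range(0, len(source) - grain_len, hop):
        src = int(rng.integers(0, len(source) - grain_len)) if randomise else start
        out[start:start + grain_len] += source[src:src + grain_len] * win
    return out
```

In Twinthesis terms, the characters of an incoming tweet could drive the grain positions and sizes in place of the random number generator.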
A full scientific paper and report about Twinthesis can be downloaded, detailing aspects of how the patch works and certain constraints of the project in its current state. Please note, this synthesiser is still in development and the release below should be considered an experimental BETA version. On a Mac, the sound currently defaults to the ‘core-audio built in output’.
Download – Scientific Paper / Report
Download – Twinthesis Application (Mac OS X Only)
As always, I appreciate all your interest in this project and am more than happy to answer any questions you may have, either in the comments below or via the contact page. I am also willing to share the MaxMSP patches upon request.
Many thanks once again for reading!
Here is an example of a simple vibrato patch in MaxMSP. I was going through some of my really old work and found this little patch; I thought it might be useful for anyone starting out with MaxMSP. It uses the ‘sfplay~’ object to open and play a sound file, and creates a delay unit using the ‘tapin~’ and ‘tapout~’ objects. The user can control the rate and depth of the vibrato effect.
If you are interested in playing around with the patch, download vibrato patch here.
This patch was used as an algorithmic model for the C++ Vibrato Plugin, which you can also download from my site.
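For anyone who’d rather see the algorithm than the patch cables: the tapin~/tapout~ structure is a delay line whose delay time is modulated by a low-frequency sine. A minimal Python sketch of that idea follows; the rate, depth and base delay values are assumptions, not read from the patch:

```python
import numpy as np

def vibrato(x, fs, rate_hz=5.0, depth_ms=2.0, base_delay_ms=5.0):
    """Apply vibrato by reading the input through a delay line whose
    delay time is modulated by a low-frequency sine, with linear
    interpolation between samples (the tapin~ / tapout~ idea)."""
    n = np.arange(len(x))
    delay_ms = base_delay_ms + depth_ms * np.sin(2 * np.pi * rate_hz * n / fs)
    read = n - delay_ms * fs / 1000.0       # fractional read position
    i = np.floor(read).astype(int)
    frac = read - i                          # interpolation weight in [0, 1)
    i = np.clip(i, 0, len(x) - 2)            # guard the start of the buffer
    return (1 - frac) * x[i] + frac * x[i + 1]
```

Depth controls how far the delay time swings (and hence how far the pitch bends), while rate sets how fast it wobbles, matching the two user controls in the patch.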
New speakers are currently being prototyped by Bang & Olufsen. Dubbed the ‘Klang’ speakers, they offer the ability to listen to music as loud as you want without disturbing anyone else. Sounds interesting, so how do they claim to work?
Essentially, they use a 30kHz frequency to beam an ‘audible wave’ to a single point. As we know, humans can only hear within a frequency range of 20Hz to 20kHz. The 30kHz wave produced here is above our audible threshold, hence ‘ultrasonic’. These speakers work by exploiting the ultrasonic wave and splitting it into three parts, effectively producing an audible wave encapsulated by two inaudible waves. The sound will only be heard when it hits an obstruction (your ear, for instance) and the encapsulation is broken.
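B&O haven’t published the details, but the general technique being described matches the ‘parametric array’ idea: amplitude-modulate the audible signal onto an ultrasonic carrier, so the radiated energy sits at the carrier and its two sidebands (the ‘three parts’), and the audio only reappears when something demodulates it. A quick numerical sketch of the modulation side, where the sample rate, carrier frequency and modulation depth are all assumptions:

```python
import numpy as np

fs = 192_000                        # sample rate high enough to represent 30 kHz
t = np.arange(fs) / fs              # one second of time
audio = np.sin(2 * np.pi * 440 * t)         # audible 440 Hz tone
carrier = np.sin(2 * np.pi * 30_000 * t)    # 30 kHz ultrasonic carrier

m = 0.5                             # modulation depth
beam = (1 + m * audio) * carrier    # AM: carrier plus two sidebands

# The radiated spectrum now has peaks at 29,560 / 30,000 / 30,440 Hz
# and nothing in the audible band; demodulation is what brings the
# 440 Hz tone back.
spectrum = np.abs(np.fft.rfft(beam))
```

Because every component of the beam sits above 20kHz, a bystander outside the beam hears nothing, which is exactly the behaviour the Klang speakers are claimed to exploit.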
This technology could potentially change the way we are able to use and interact with sound. For example, a sound wave could be transmitted directly to the ear without being affected by any room modes, potentially enabling us to hear sound without it being ‘coloured’ by an acoustic environment. Another possibility, of course, is a much more vivid stereo listening experience, akin to that of headphones, which in turn would enable binaural recordings to be heard properly through a set of speakers.
An interesting development in the industry, and one to keep an eye on in the future!
This is a tutorial for beginners using Steinberg’s Cubase Studio 5.5. The video below explains the new project assistant window, and also covers how to manually route MIDI through a MIDI track linked to a virtual instrument, which has certain advantages over simply using an ‘instrument’ track. The video is embedded below…
As always, feedback is welcomed either in the comments section below, or via the Contact Me page. This video was produced as part of a university assignment, but I have plans for more advanced video tutorials on iPhone development, and MaxMSP.