Why Apple’s ‘Genius’ doesn’t work for Apps…

So, I’m browsing around the App Store on the iPad today and I hit the ‘Genius’ tab. It’s not a function I’ve really used before now, because I never really saw the point of it… and guess what, I was right. All it does is show me apps that are similar to the ones I already have!

If I have an app, and it’s still installed on my device, doesn’t that indicate that I’m happy that this app fulfils its purpose? Or are people constantly searching for ‘better’ apps to fulfil a purpose? Am I wrong? It seems strange to me, that’s all.

Future Plans…

As I’ll be graduating from university later this year, this is just an informal post about some projects I want to pursue once I have a little more time in my life. Throughout my education, I’ve always thrown myself 100% into anything I’m doing. At college, I was running the local radio station, playing in the orchestra, working as a youth worker, and taking charge of all things technical for the dramatic society. At uni, I’ve been the student representative for the last two years, and organised the design festival for my course at the end of the year. So all in all, I’m looking forward to having a bit of time to work on some projects of my own.

Firstly, I’d like to develop ‘Twinthesis’ further. This is a project I’m starting work on now, as I’m due to perform with it at a creative music technology concert on the 14th of May (more details on request!). I’d like to develop some form of graphics engine, to give the instrument a visualisation as well as a sonification process. I’d also like to implement emotion recognition, so that the sound of a tweet differs depending on certain key words within it. An iPhone / iPad app is the final vision for the product, maybe with location sensing for collaborative performances.

Secondly, I would like to get myself up to speed with modern web development techniques. I am experienced in HTML / Flash development, but since college I really haven’t had the time, or the inclination, to learn any of the new methods, languages, or techniques. I’d like to master CSS and PHP development, as well as HTML5. I think these would be very beneficial skills to have in the future.

Finally, I’ve started blogging for a new tech blog. Going by the name of TechRant, it will be launched on May 1st of this year. Blogging and social media are things I’ve always been passionate about; from my very first internet connection (dial-up speeds, oh yeah!) I was making my own websites and sharing content. I think the resources that are available for sharing on the web today are vast, but they are only the beginning. There’s a whole new wave of services on the horizon (streaming music, cloud-based applications, etc.), and I intend to be blogging about it all for many years to come. So make sure you check out TechRant on May 1st!

Well, that’s all for now really! Just an update on some of the projects I hope to embark on or complete after I graduate. This is just the beginning, so watch this space…

iResponse Application

Imagine being able to capture and store the sound of an acoustic space, like a local church, cathedral, or even your favourite recording studio. Now imagine being able to use that stored file, in a convolution reverb plugin to make any audio sound like it was being played in that space. Now imagine being able to capture the sound of that acoustic space right on your iPhone.

Introducing iResponse for the iPhone: the first impulse response iPhone application, intended to generate impulse response files for use within reverb plug-ins. The application can record and generate impulse responses for any acoustic space. You can use impulse-source excitation methods (such as a balloon popping, or a starter pistol firing), or you can record a steady-state excitation signal played through a loudspeaker. The excitation signals to be played back through the loudspeaker are swept-sine tones of varying lengths. The process is simple: point your iPhone’s built-in microphone at the sound source within a given environment, select the length of the excitation signal you are using, and hit record. Once the recording has stopped automatically, it’s a ‘one-button’ process to perform the deconvolution required to generate the impulse response.
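The app’s actual DSP code isn’t published here, but the underlying idea of swept-sine deconvolution can be sketched in Python with NumPy. The sweep parameters below are arbitrary illustrative values, not the ones iResponse uses: the recording is (approximately) the room’s impulse response convolved with the sweep, so dividing the spectra recovers the response.

```python
import numpy as np

def linear_sweep(f0, f1, duration, fs):
    """A swept-sine excitation whose frequency rises linearly from f0 to f1 Hz."""
    t = np.arange(int(duration * fs)) / fs
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2 * duration))
    return np.sin(phase)

def deconvolve(recording, sweep):
    """Estimate an impulse response from a recording of the sweep.

    recording ~= impulse_response (convolved with) sweep, so we divide the
    spectra, with a small regularisation term to keep near-empty bins stable.
    """
    n = len(recording) + len(sweep) - 1
    R = np.fft.rfft(recording, n)
    S = np.fft.rfft(sweep, n)
    eps = 1e-8
    return np.fft.irfft(R * np.conj(S) / (np.abs(S) ** 2 + eps), n)
```

Deconvolving a synthetic ‘recording’ (a known impulse response convolved with the sweep) recovers that response; in the app, the microphone capture would take the place of the synthetic signal.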


MaxMSP Vibrato Patch

Here is an example of a simple vibrato patch in MaxMSP. I was going through some of my really old work and found this little patch, and I thought it might be useful for anyone starting out with MaxMSP. It uses the ‘sfplay~’ object to open and play a sound file, and creates a delay unit using the ‘tapin~’ and ‘tapout~’ objects. The user can control the rate and depth of the vibrato effect.


If you are interested in playing around with it, you can download the vibrato patch here.

This patch was used as an algorithmic model for the C++ Vibrato Plugin, which you can also download from my site.
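For anyone curious about the algorithm rather than the patch itself, the same structure (a delay line whose read position is swept by a low-frequency oscillator, which is what tapin~ and tapout~ provide in Max) can be sketched in Python with NumPy. The rate, depth, and base delay values are illustrative defaults, not the ones in the patch:

```python
import numpy as np

def vibrato(x, fs, rate_hz=5.0, depth_ms=3.0, base_delay_ms=5.0):
    """Vibrato: read from a delay line at a position swept by a sine LFO."""
    n = np.arange(len(x))
    # LFO-modulated delay time, converted from milliseconds to samples.
    delay = (base_delay_ms + depth_ms * np.sin(2 * np.pi * rate_hz * n / fs)) * fs / 1000.0
    read = n - delay
    # Linear interpolation between the two nearest delay-line samples.
    i0 = np.floor(read).astype(int)
    frac = read - i0
    lo = np.clip(i0, 0, len(x) - 1)
    hi = np.clip(i0 + 1, 0, len(x) - 1)
    return (1 - frac) * x[lo] + frac * x[hi]
```

With the depth set to zero the LFO does nothing and the output is just a fixed delay; raising the depth sweeps the read position, which is what produces the pitch wobble.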

Klang Ultrasonic Speaker

These new speakers are currently being prototyped by Bang & Olufsen. Dubbed the ‘Klang’ speakers, they offer the ability to listen to music as loud as you want without disturbing anyone else. Sounds interesting, so how do they claim to work?

Essentially, they use a 30kHz frequency to beam an ‘audible wave’ to a single point. As we know, humans can only hear within a frequency range of roughly 20Hz – 20kHz; the 30kHz wave produced here is above our audible threshold, hence ‘ultrasonic’. These speakers work by exploiting the ultrasonic wave and splitting it into three parts, effectively producing an audible wave encapsulated by two inaudible waves. The sound will only be heard when it hits an obstruction (your ear, for instance) and the encapsulation is broken.

This technology could potentially change the way we are able to use and interact with sound. For example, a sound wave could be transmitted directly to the ear without being affected by any room modes, potentially enabling us to hear sound without it being ‘coloured’ by the acoustic environment. Another possibility, of course, is a much more vivid stereo listening experience, akin to that of headphones, which in turn would enable binaural recordings to be heard properly through a set of speakers.

An interesting development in the industry, and one to keep an eye on in the future!

Cubase Studio 5.5 Tutorial

This is a tutorial for beginners using Steinberg’s Cubase Studio 5.5. The video below explains the new Project Assistant window, and also covers how to manually route MIDI through a MIDI track linked to a virtual instrument, which has certain advantages over simply using an ‘instrument’ track. The video is embedded below…

As always, feedback is welcomed either in the comments section below or via the Contact Me page. This video was produced as part of a university assignment, but I have plans for more advanced video tutorials on iPhone development and MaxMSP.

MIDI comes to iOS devices

That’s right, it’s exciting news for people like me all over the world. Information has just leaked that Apple has implemented the MIDI standard in the next version of its iOS operating system for the iPhone, iPad, and iPod Touch. Apple allows you to send data over USB (via the Camera Connection Kit) or over WiFi, which presents some very interesting opportunities (as long as there is no lag!).
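Whatever the transport (USB or WiFi), the data being carried is still plain MIDI: mostly three-byte channel messages. As a quick illustration of the wire format (sketched in Python rather than the iOS API itself), a Note On message is a status byte carrying the channel number, followed by the note number and velocity:

```python
def note_on(channel, note, velocity):
    """Raw MIDI Note On: status byte 0x90 | channel (0-15), then note and velocity (0-127)."""
    if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("MIDI value out of range")
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Raw MIDI Note Off: status byte 0x80 | channel, with velocity 0."""
    if not (0 <= channel <= 15 and 0 <= note <= 127):
        raise ValueError("MIDI value out of range")
    return bytes([0x80 | channel, note, 0])

# Middle C (note 60) at full velocity on channel 1 (channels are 0-based on the wire).
msg = note_on(0, 60, 127)
```

The compactness of these messages is one reason MIDI suits low-powered mobile devices so well: an entire note event fits in three bytes.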

It’s brilliant news, and although some say MIDI may be a dying standard, I believe it has found a new lease of life on lower-powered mobile devices such as these. I think it won’t be long before we see some really creative uses of the new API, and I personally cannot wait to try developing with it! Below is a video showing how one developer has used the new API.

[via Engadget]

MAT Promotional Video

The brief for this piece of work was to create a promotional video on a subject of your choice. I chose to promote the course I am studying at Bournemouth University. The video was shot in pairs, and over half an hour of footage was edited and mixed down to fit into a strict 2-minute time slot. The video is embedded below and is hosted on YouTube.

The video was shot and edited in 1080p high-definition video; all editing and sound design was performed using Final Cut Pro and Soundtrack Pro.

Please note, the embedded video is only 360p resolution. You can click through to YouTube to view the 720p HD version. This work was produced in my first year at Bournemouth University.

What is Sound?

This post will cover the very basic rules of sound. One of the most important things to realise is that if you do not know the basic definition of sound and how it works, your career as a sound engineer will be very limited! So, without further ado, what exactly is sound?

A sound is generated by vibration. Any moving object can cause sound to be created, and this sound is transferred by the vibration of the air particles around the object. Think of it as the ripple effect you get if you drop a pebble into some water: the same thing happens with the air around the source of a sound. The image below should help to visualise this.

These ripples are more commonly referred to as sound waves. To understand how sound waves are plotted on a graph, we must first look at how the air particles are affected by the source of the sound. If you look at the image above, you can see the ripples clearly, and you can see the spaces in between them. Thinking back to the water example, the ripples actually contain more water than the spaces in between them, creating the visual effect you see above. The same is true of the air particles affected by the source of a sound, except of course there is no visual effect.

It is important to note at this point that sound waves and ripples in water are technically different. Ripples in water are known as transverse waves, whereas sound waves are actually longitudinal waves. The difference is that in a transverse wave (water) the particle displacement is perpendicular to the direction of wave propagation, whereas in a longitudinal wave the particle displacement is parallel to the direction of wave propagation.

So, air particles bunch together at the peaks of the wave and move further apart between the peaks, creating alternating high and low pressure. This is known as compression (bunching together to create high pressure) and rarefaction (moving apart to create low pressure), and it is the fundamental reason that we are able to hear sound.

The diagram above shows the compressed (or condensed) air as the darker, denser specks that correspond with the peaks of the sound wave. The rarefaction can be seen as the sparser, lighter specks corresponding with the troughs of the sound wave. These specks represent the density of air particles, but it is important to note that it is NOT the air particles themselves that travel; it is the disturbance. The individual air particles simply oscillate back and forth around their original position (known as their equilibrium).

So, these waves of alternating high and low pressure are what travel through the air, at a speed of roughly 340 metres per second, towards your ear. We will talk about exactly how these sounds are captured, both by your ear and by a microphone, in more depth in a later post. But for the sake of completeness: your eardrum is a very fragile membrane which moves in and out with the alternating air pressure, and your brain then interprets this signal as an audible sound. Again, this is an incredibly simplistic explanation of a very intricate and complex process, so more information on this will be coming soon.
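A handy consequence of that 340 m/s figure, as a small worked example (the calculation is mine, not from the post): dividing the speed of sound by a frequency gives the wavelength, so the audible range spans wavelengths from about 17 metres down to 17 millimetres.

```python
SPEED_OF_SOUND = 340.0  # metres per second in air (approximate, at room temperature)

def wavelength_m(frequency_hz):
    """Wavelength in metres of a sound wave at the given frequency."""
    return SPEED_OF_SOUND / frequency_hz

low = wavelength_m(20)       # lowest audible frequency  -> about 17 m
high = wavelength_m(20_000)  # highest audible frequency -> about 0.017 m (17 mm)
```

This enormous range of wavelengths is part of why sound behaves so differently at low and high frequencies, something we will come back to when discussing room acoustics.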

I think this post sums up the very basics of what sound is, so I shall leave it there. Remember that sound is a very complex subject, so I will try to cover things one small step at a time. In my next post, I will go into more detail about sound waves, and the various properties and elements that eventually translate into a pitch that you can hear.

For now, thank you very much for reading. If you spot any mistakes or have any feedback, I cannot encourage you enough to let me know in the comments below or via my Twitter account @sammio2.


Hello there,

I’d like to welcome you to my portfolio / blog. I am an audio technologist and a developer of applications for the iOS platform. This website will contain a portfolio of my recent work, as well as my thoughts and opinions on relevant technical subjects. This is really just a short post to let you know that content is still being developed for the launch of this site, so please check back at a later date. In the meantime, please feel free to subscribe to my updates through the RSS feed, or you can find me on Twitter @sammio2.

For now, thank you once again for reading, and I hope you come back soon!