Hardware Trumps Software Each and EVERY Time


At the end of the day, every software success relies on hardware modifications.

I am a bit fed up with people telling me that software codecs are just good enough – they consume the same battery life (they don’t), they provide the same quality (they don’t), they can run on small devices (they can’t). You can spam to your heart’s content in the comments below, but please – read this one through before you do.

The model of running our code faster with each new hardware iteration is breaking. We have had 2 or even 3 GHz machines for several years now. To stay ahead of the curve, we've moved to packing more computational units (cores) into our chipsets (the Intel way of life) or to accelerating specific tasks with dedicated hardware (the ARM/MIPS way of life in embedded devices).

Every time a software technology becomes mainstream, it gets accelerated by way of hardware. Here are a few examples from recent years.


Virtualization

Virtualization is a great concept: you take the hardware and virtualize it, running the software on top of it wherever you want. Instead of running a single software instance on a single physical machine, you can now run multiple software instances on one piece of hardware, sharing its capacity.

Guess what – this whole virtualization thing requires hardware that explicitly supports it to be of any real use without huge penalties on performance and capabilities. This is why Intel and other chipset vendors have invested in virtualization support in their chipsets. While there are still debates as to the advantages of hardware-assisted virtualization, you can rest assured that this trend will only grow.
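To make this concrete, here is a minimal sketch of how software can check for that hardware support. It assumes a Linux host on x86 (my assumption, not something from the post): the kernel exposes the CPU's capability flags in /proc/cpuinfo, where "vmx" indicates Intel VT-x and "svm" indicates AMD-V.

```python
# Sketch only, assuming Linux on x86: parse /proc/cpuinfo and look for the
# "vmx" (Intel VT-x) or "svm" (AMD-V) flag that signals hardware-assisted
# virtualization support.

def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises VT-x or AMD-V support."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "vmx" in flags or "svm" in flags
    except OSError:
        pass  # file missing (non-Linux host) -> assume no support
    return False

if __name__ == "__main__":
    print("hardware virtualization supported:", has_hw_virtualization())
```

Hypervisors such as KVM refuse to run (or fall back to slow paths) exactly when this flag is absent – which is the "huge penalty" the hardware support exists to avoid.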

3D rendering

Guess what? That realistic video game you are playing? Or the nice transitions your iPhone does for the user interface? They get accelerated on a GPU.

The iPhone and most other smartphones today use a PowerVR GPU, which appears as part of their system-on-chip. The code that runs the visualization simply offloads all the heavy lifting to the GPU – you can't do it in software on the host chip if you want responsiveness from your phone or a full day of battery life.

Video compression

We’ve been doing video compression in hardware for ages – the reason is simple – there was no other way.

And then Intel, riding Moore's Law, decided to catch up with our needs. But then our needs increased: we decided we want more resolution and higher frame rates for our videos, and we want to compress them better with modern standards.

So yes. You can use an Intel chip and run video compression on it. But even Intel understands the futility of it, so their latest chipsets do hardware-based encoding of video. If you own an Android device or an iPhone – your camera recordings and YouTube playback are done with hardware acceleration as well.


Security

When it comes to security, hardware can both cause problems and solve them.

Troy Hunt did an interesting test where he tried to generate hash values using a GPU. Here’s the result:

You see, using hashcat to leverage the power of the GPU in the 7970 you can generate up to 4.7393 billion hashes per second. That’s right, billion as in b for “bloody fast”.

Let's see you do that without GPU acceleration – a CPU won't get to a billion hashes per second, that's for sure.
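A rough back-of-the-envelope measurement makes the gap tangible. The sketch below times MD5 hashing on a single CPU core with Python's hashlib; the absolute number is illustrative only (an optimized native cracker is much faster than interpreted Python), but either way a CPU lands orders of magnitude below the billions per second a GPU running hashcat delivers.

```python
# Rough single-core hash-rate measurement, for illustration only.
import hashlib
import time

def cpu_hash_rate(seconds=0.5):
    """Hash incrementing strings with MD5 for `seconds` and return hashes/sec."""
    count = 0
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        hashlib.md5(str(count).encode()).hexdigest()
        count += 1
    return count / seconds

if __name__ == "__main__":
    rate = cpu_hash_rate()
    print(f"~{rate:,.0f} MD5 hashes/second on one core")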

On the other end of it, when you want to encrypt bulk data – either for storage or network – you will shift to hardware to do it efficiently. As Adrian Kingsley-Hughes notes about one solid state drive:

This drive features AES 256-bit hardware encryption to allow you to encrypt and protect your sensitive data while at the same time getting the performance, reliability and power benefits of a solid state drive.

In other words – if heavy lifting is done by hardware, then there are no penalties on the software you still need to run on the same hardware.
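The same "no penalty" effect shows up on the CPU side with AES-NI, the dedicated AES instructions in modern x86 chips. A minimal sketch (again assuming Linux on x86, my assumption) of how software detects that capability:

```python
# Sketch only, assuming Linux on x86: check whether the CPU advertises the
# "aes" flag (AES-NI). When present, crypto libraries such as OpenSSL use the
# dedicated instructions for bulk encryption instead of a pure software path.

def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises AES-NI support."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "aes" in line.split(":", 1)[1].split()
    except OSError:
        pass  # file missing (non-Linux host) -> assume no support
    return False

if __name__ == "__main__":
    print("AES-NI available:", has_aes_ni())
```

When the flag is there, bulk encryption runs on dedicated silicon and the rest of your software keeps its CPU budget – which is exactly the point above.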


Java

The most fascinating example of all is probably that of taking Java and accelerating it – running pieces of it on the GPU itself.

Hardware acceleration is going to grow. We have reached a point where there are thoughts of accelerating higher-level programming languages such as Java, which only goes to show that however far Moore's Law takes us, our needs remain insatiable.

So you see, even if you do think you are running pure software for your super uber algorithm – it is most likely accelerated somewhere to make it feasible for the use cases you need.

In other words – hardware trumps software each and EVERY time.


Serge L. says:
September 27, 2012

The WebM project invests heavily in free, open hw designs (RTL) for hw acceleration and we will be seeing many products hit the shelves very soon. The RTL supports not only VP8 but a ton and a half of other codecs. http://www.webmproject.org/hardware/

But… you can’t upgrade a hw codec, plus the manufacturing cycle is long, so they will always be a little bit behind on bandwidth use and raw quality (although the cpu offload offsets this often).

Also, hw layers such as OpenMax or specialized drivers are very tricky to program for, so this requires expertise that is not widespread. In many cases, these layers are not open and need to be licensed, adding another difficulty.

So, if you have money and the right engineers, hw layer optimisation is a winner. But the costs are real, so for many, software is the only option.

    Tsahi Levent-Levi says:
    September 27, 2012


    That’s the thing – if you have the chipset vendors doing the codec in an accelerated hardware, the rest of the ecosystem is free of that nuisance.

    And… if you take it a step further, and have the whole media processing stuff and package it nicely, you get rid of a bit more complications.

    Add into it network stuff and you end up with a nice solution that can fit a huge crowd of people who don’t understand VoIP.

    But you know all that – as this is what you decided on doing with WebRTC 🙂

Lennie says:
September 27, 2012

Running the Java VM on hardware is an old concept, not something new.


Hardware only trumps software if you have time and money to design/build/test/produce it and the improvement is significant enough, it’s a trade off.

Dave Michels says:
September 28, 2012

Hardware trumps software. So what?
You are missing the fact that hardware continues to get better. We are talking video over IP networks to $600 iPads. This was impossible a decade ago and unfathomable twenty years ago. The modern smartphone has more computational power than glass houses did when I was in high school.

When I studied telecom, they taught us voice could never work over packet networks because packet networks are not deterministic. No one argued. The IPv4 addressing scheme was not designed for a computer in every pocket. Architectures need to be more conceptual and less tied to specific hardware realities, because realities change.

Hardware is moving fast, and what's special today is a commodity tomorrow. My first smartphone sits on a shelf – it can't be sold, it's obsolete. I have a drawer full of hardware relics – SCSI CD-ROM, ZIP drives, FireWire devices, floppy drives – all obsolete. (I backed up important files to the ZIP drive, so I naturally wonder what's on those disks.) I have a box of cassette audio tapes, and another of VHS tapes.

WebRTC is very interesting. In a decade it will be really interesting. In 30 years it will be prevalent or replaced by the next generation. Hardware trumps software will still apply then too – but the software in five years will trump the hardware today.

    Tsahi Levent-Levi says:
    September 29, 2012


The thing is that the $600 iPads have acceleration for all the cool things they really do. The hardware architecture is what allows it.

    Whenever you want to do things properly, you need to tie software into hardware that fits the needs.

    And as for WebRTC, I think it will take a bit less than a decade to be interesting – but that’s just me – maybe I am still a bit too young and optimistic for my own good 🙂