26 Nov

Does DRM promote piracy?

Like it or not, there are people out there who devote their lives to cracking software, games and applications for free distribution over the internet. Piracy is costing the industry millions of pounds a year because the one asset it has to sell- its intellectual property- is being ripped off.

Those who are responsible for piracy tend to bandy about the terms “open” and “free”. They attempt to excuse their illegal actions by arguing some sort of political or philosophical motive, when in reality, it comes down to one thing. Greed. Why pay hundreds of pounds for a piece of software that you could download for free through BitTorrent?

Much software and games can be freely and anonymously downloaded using peer-to-peer technology. In an attempt to cut down on piracy, development companies are using increasingly aggressive methods to keep tabs on their products.

In their efforts to tackle piracy, I believe that software companies are actually incentivising it in their implementation of Digital Rights Management, or DRM.

One of my favourite ironies is the patronising “you wouldn’t steal a car” appeal at the beginning of many DVDs. We’ve just BOUGHT the DVD, and now we’re being told not to steal it. A pirated DVD would come without the warning and take us straight to the film. Does anyone else see the flaw here?

Games are consistently the worst offenders in adopting aggressive anti-piracy techniques. I’ve played a number of games which limit you to 3 installs before effectively “locking” the disc, rendering it useless. This cripples the second-hand games market, and those of us with more than one machine or operating system reach the installation limit pretty quickly. You’re then required to ring up the games company and request that your disc be remotely unlocked- yes, you’re asking permission to play your own game!

Half-Life requires you to input a serial number to activate your hard copy of the game. I got my copy from a charity shop and it didn’t come with a manual, so it didn’t come with a serial number either. I was forced to download a keygen to activate my legitimately purchased game!

Recently, I bought a hard-copy of Shogun 2 Total War for PC. I expected to install it on my machine and start playing it within half an hour or so, but the DRM had other plans.

I had to download Steam, create an account on it and register my copy of Shogun 2. Steam then proceeded to download a mandatory 700MB update to the game software. I live in West Wales, and at the time my broadband was barely faster than dial-up- I had to let the full file download at a shaky 12Kbps.

After 3 failed installations and several days of frustration, I managed to complete the installation and play the game. On a seemingly unrelated note, a couple of days later our internet died. Wanting to relieve my boredom, I booted up Shogun 2, but was unable to play it without logging into Steam… which requires an internet connection.

I’ve been told that sometimes Steam’s servers are down, during which time you cannot play your games. You may have a hard copy of the game, it’s all installed, and you have fibre-optic broadband, but you can’t play your own game because you’re unable to communicate with Steam’s servers.

I had struggled through all of this DRM when it hit me- I could have downloaded a readily-available pirated copy of the game for free, and I wouldn’t have all of the authorisation rubbish that I’m stuck with now!

DRM is one issue; price is another. How many people can really afford the full version of Adobe Photoshop, or Sibelius? When companies price themselves out of the market, piracy prevails again.

No matter how secure the DRM, there will be a hacker somewhere in the world who can crack it. The moment that happens, your software is out in the wild, free for any thieves to download a copy- it’s too late to claw your intellectual property back.

Nobody has yet built a completely secure system, so instead, I would advise companies to cut out or at least tone down the DRM. Price yourselves affordably. Offer a level of freedom from DRM that rivals a cracked copy. Hopefully your consumers have the moral fibre to pay for your software.

Technology and standards are evolving- we no longer expect to see popup advertisements on websites we visit, nor do we expect random programs and browser toolbars to be secretly installed alongside our main program. Let DRM join these examples in the graveyard of historical technology annoyances.

29 Aug

The demise of BlackBerry

Just two years ago, everyone I knew seemed to have a BlackBerry- they loved it too. Now, I feel like the only person in the world who still has one. BlackBerry has really struggled to stay ahead of the competition, and with the recent release of the Samsung Galaxy SIII, BlackBerry will soon be crushed into the ground.

Cast your thoughts back a decade and you’ll remember that the indisputable biggest name in mobile manufacturing was Nokia. Where are they now? Motorola is another company that was once mighty and is now desperately struggling. BlackBerry is not far behind them, and in my humble opinion will not be able to turn around the rapid decline in market share they’ve experienced this year.

Mobile phones are constantly changing as the demands of the consumer change. Mobile phones followed landline phones as a method of vocal communication between two people over a long distance. SMS text messaging was an extension of this functionality, allowing one person to send a quick message to another. Despite the insanely high cost-per-character of an SMS, texting really took off.

Desperate to move away from the clunky, oversized mobiles of the 90s, manufacturers created smaller and smaller phones, culminating in the Alcatel OT-E100. There was even a phone half the size of that, but I cannot for the life of me remember which one it was; I only remember its girly-blue colour. Mobile phone companies were selling phones that were so small they were actually impairing the usability of their product!

A couple of touchscreen phones started coming on the market. They weren’t great phones- the technology was primitive and the screens were unresponsive. Nevertheless, early investors knew that touchscreen phones were the way forward.

Until then, phoning and texting had been the phone’s main functions. The arrival of the BlackBerry, and subsequently BlackBerry Messenger, further evolved the mobile phone’s messaging abilities. Suddenly people could send unlimited messages to one another using this Messenger service by adding a relatively cheap £5-a-month BlackBerry services option to their contract. When each text used to cost 30p, this was a godsend.

However, the BlackBerry Messenger that proved to be so popular would later contribute to the phone’s demise. SMS texting got cheaper, cheaper, and cheaper again, and now most contracts include unlimited texts. But even today, anyone wanting BlackBerry Messenger still needs to add a £5 monthly BlackBerry services option to their tariff.

We had phone calls, then texts, then BlackBerry Messenger, then a reversion back to texting. The next evolution in connectivity came with the growing popularity of social networking sites: Facebook and Twitter. People wanted to access them on their phones as well as their desktops.

Facebook and Twitter were designed for the web. They were designed for full-size monitors, keyboards, and a quick, responsive mouse. Full-screen touchscreen phones became more popular, partly because of the better web browsing experience they offered. iPhones really started taking off, and Google got to work on its Android software. BlackBerry didn’t recognise the demand for social networking and continued to refine its messenger service, its push email and its calendar.

At one time, inbuilt cameras were rare, not installed as standard. Games were Snake and little else. Phones began to evolve to become more entertaining- they are no longer devices simply for staying in touch with people. Now they’re an entertainment centre; people use their phones to watch movies, listen to music, browse the web, play games. They use their phones as satnavs. As trackers to see how far they’ve jogged that morning. As personal assistants, letting them know the nearest McDonalds restaurant by asking their phone a quick question.

The fact is, BlackBerry phones just can’t keep up with this. Speaking as the two-year owner of a BlackBerry Bold 9780 – which isn’t a particularly old phone- I have to say I’m tempted to quit my contract early and get a Galaxy SIII.

My BlackBerry offers an appallingly inadequate web browsing experience. It has a smaller library of apps than Apple and Android, many of which are costly. Even if I held an unquestioned loyalty to BlackBerry and stayed with them forever, I wouldn’t want the £5 monthly BlackBerry services add-on, yet I can’t shake the paranoia that without it I’d run up some hefty charges or be left with an unusable phone, even though I don’t plan on using their services. It’s just too complicated and scary.

The phone can just about cope with multitasking but is slow in general. Games are pathetically unresponsive, due to the limitations of the touchpad and a non-touch screen. The notification LED, in conjunction with the WhoIsIt Blinking app (yes, you actually need to download apps to get the sort of functionality you should have by default), often continues blinking away long after you’ve read your received text.

Trying to read through old text conversations is sometimes an impossibility, as the BlackBerry gets “stuck” halfway through a text, leaving you no option but to give up and return to the homescreen.

Finally, the BlackBerry just isn’t as cool as its Apple and Android rivals. They offer some awesome features- like the Galaxy SIII’s “Smart Stay”, which knows when you’re looking at the screen and prevents the phone from locking- that make the BlackBerry look centuries old.

I’m afraid it’s a downward spiral for BlackBerry, unless they can offer a real full screen touchscreen alternative to Android. Within a year their handsets will be joining the cheap plastic remnants of Nokia and Motorola on a Taiwanese rubbish heap without even offering the seagulls a good last meal.

30 Jun

If humans were computer components – an analogy

There are a lot of computer components that reflect certain human traits or organs. I thought it would be fun to list them, along with a real life comparison!

Processor

The brain. This controls the speed of our arithmetic, puzzle solving and reaction times. There isn’t a processor out there that is as powerful as the human brain; it would need to have 38 petahertz of processing power (or, in today’s SI multiple, 38 000 000 GHz)! It would also need 3,854 terabytes of memory. [1]

Stephen Hawking: Intel Core i7 (990X) 3.46GHz Extreme Edition

Paris Hilton: Intel Celeron Processor 600 MHz

RAM (Random Access Memory)

This is our short term memory. When we’re given a postcode, we store the postcode in RAM until it has been put into the satnav ready for the journey. This information is removed from memory within minutes of it being put there.

The faster our RAM, the faster we can respond to questions. The more RAM we have, the more temporary information we can store. Waiters and waitresses have a lot of RAM and can remember everything ordered, while the rest of us struggle to remember what day it is.

Tom Morton [2]: Corsair CMZ8GX3M2A1600C9

Texas Gov. Rick Perry [3]: Infineon 128MB DDR SDRAM PC2700

Hard Drive

Our long term memory – friends’ birthdays, the date of the Battle of Hastings, memories of our first kiss, the perils of our examinations, the knowledge that raw meat needs to be cooked before being eaten- all of that is stored here. Files corrupt over time and use.

Louise Owen [4]: INTEL SSDSC2CW240A3

Shaun Ryder [5]: SSDPAMM0008G1

Graphics Card

Defines our visual creativity, how artistic we are, our imagination, our dreams, our inventions.

Carl Warner [6]: GeForce GTX 680

Comfort Wipe [7]: Trident CyberBlade XP

Sound Card

Audio Creativity- our taste in music, ability to compose, ability to play an instrument or sing.

Claude Debussy: ASUS Xonar Essence STX

The Plasmatics: Rocketfish 5.1 PCI Sound Card

Power Supply

How much power (energy) we have. Some people have excellent power supplies and can live on as little as four hours sleep. Other people sleep all day and still have no energy for anything.

Usain Bolt: OCZ StealthXStream2 600W

Sehwang Jung [8]: DA Series PSDA250 250W ATX Power Supply

08 May

What is ‘The Surprise App’ for BlackBerry?

ShaoSoft have released an app for the BlackBerry which they consider to be somewhat unique because, put simply, nobody knows what it does. Well, nobody knows what it does until they pay £1 for the privilege of downloading it. Welcome to ‘The Surprise App’.

This is not a new marketing ploy; people have been selling ‘mystery bags’ on eBay for years. However, this is the first time such a ploy has been used in the app world- and it seems to be working. Humans share some attributes with cats, namely curiosity. It is in our nature to want to know things, and it is this human nature that ShaoSoft have cleverly exploited to make themselves a small fortune. I only wish I’d got there first!

I must admit, I was curious myself, but I didn’t want to spend £1 on something I would later regret. For now I’m a humble student, and money is tight! So I spent a number of minutes flicking through the reviews of the app. None of them, I repeat, NONE of them gave the game away. ShaoSoft themselves told users not to “spoil the surprise in a review”, and for once the community of BlackBerry users actually played along!

At this, I fired up my PC and had a look on Google. Zilch. Nada. Not a single explanation anywhere. There were dozens of comments on one article- anonymous users practically begging people to let them know what the app does- but the author of the article removed any helpful replies before they could be read.

Okay, I have a sense of humour and I see the “fun” of downloading a mystery app, but I was disappointed that there wasn’t a single explanation on the web. Surprises are all well and good, but ultimately this is a business to consumer transaction and there should be information about the app SOMEWHERE. So I decided to take the plunge, for your anonymous benefit, and download the app. If you want to keep the surprise a surprise, stop reading now. If, like me, you feel that you should at least have the option of knowing what the app is before you make a purchase, then read on, for I am about to reveal everything.

App store screen

Application download screen

This is the application download screen.


Reviews aren’t giving much away.

As you can see, none of the reviews gave any clue as to what the application does.


Installing…

I had no choice but to download it for myself!


So I had to download it!

And here it is, downloaded and ready to run. I wonder what it does?


App welcome screen

The application welcome screen.


Here it is!

The app is simply a BBM notification popup. Those that use BBM might find this useful, but I’m a fan of traditional SMS.


Testing

This is me pressing the ‘Test’ button. It’s quite a snazzy popup; you wouldn’t fail to notice a new BBM!


Surprises!

Oooooh, extra feature! Again, utterly useless for me, but some might appreciate this shortcut.


More?

More you say? Yes! Get two apps for the price of one- it also installs ‘Super Popup’, which gives you a big pop-up with the content of your SMS message or email whenever a new one is sent to you. This works quite well and could be considered an asset, or it could also be a privacy issue (as anybody could read a new text you are sent without needing to know the PIN to your phone).


Install completed.

I selected Install, as it was the only option I had available. Whether you want it or not, you need to install both apps.


Surprise

At least they give you the option to turn it off.


Configuration options

Here are some of the configurable options for the ‘Super Popup’ part of the app.

I kept the app for a while and tested it with BBM and SMS. It worked well and looked different to the normal notifications, but I couldn’t really see the point of it. Sure, I could read my new text at-a-glance on the home screen, but I’d then have to go into my messages and read it again to stop the ‘New Message’ default notification LED flashing away at me. I uninstalled ‘The Surprise App’ within 15 minutes of setting it all up.

Am I annoyed that I paid for an app I will never use? Yes and no. I see nothing wrong with “Mystery Apps” but there should be an option, somewhere, hidden away in helpfiles or forums, which actually explains the purpose of the app. That removes the mystery element, sure, but as long as people don’t click on “SPOILER ALERT- HERE IS THE APP’S EXPLANATION”, then they don’t have to spoil the surprise.

People deserve to have the option to know what they’re buying, even if they choose not to. And now they know.

03 Apr

Redesigning IP Datagrams for a faster internet

Recently our first year computer science department had a lecture on IP Datagrams. While people were Facebooking and falling asleep around me, I paid attention. It’s definitely worth learning the science behind data transfer over the internet, or we risk being locked into a consumerist bubble where we’ve all been trained to use obsolete products and protocols, no longer able to define our own.

Data transfer is fundamentally simple, but has its complications. I will relay to you the information I learned in the lecture. Only when you have an understanding of the current protocol for data transfer will I tell you my suggestion for its improvement. So, how do you send a file from a computer in London to a computer in Manchester?

Let’s say we have a small text file. The data from this file, made up of binary 1s and 0s, is stored in something called an IP Datagram, which travels with its IP Header. Amongst other things, the IP Header contains the source address and the destination address. These are IP addresses, 32 bits in length each (in Internet Protocol version 4).
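
As a concrete (if simplified) illustration, here is a short Python sketch of packing the fixed 20-byte IPv4 header. The field layout follows RFC 791, but the addresses and values are purely examples of my own:

    import socket
    import struct

    def build_ipv4_header(src_ip, dst_ip, payload_len, ttl=30):
        version_ihl = (4 << 4) | 5            # IPv4, header length of 5 x 32-bit words
        tos = 0                               # type of service
        total_length = 20 + payload_len       # header plus data
        identification = 0                    # used to group fragments together
        flags_fragment_offset = 0             # no fragmentation in this example
        protocol = socket.IPPROTO_TCP
        checksum = 0                          # normally calculated just before sending
        src = socket.inet_aton(src_ip)        # the 32-bit source address
        dst = socket.inet_aton(dst_ip)        # the 32-bit destination address
        return struct.pack("!BBHHHBBH4s4s",
                           version_ihl, tos, total_length,
                           identification, flags_fragment_offset,
                           ttl, protocol, checksum, src, dst)

    header = build_ipv4_header("192.0.2.1", "198.51.100.7", payload_len=100)
    print(len(header))   # 20 bytes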

The IP Datagram (containing the IP Header and the data) is sent from your computer to your router, via an ethernet cable or wireless connection. Your router has awareness of a number of nearby routers/exchanges. It will look at the destination address in the IP Header and decide which router or exchange is the most applicable to send the datagram to, in order to get closer to its destination. The same thing happens at the next router; checking the destination address and calculating which router to send the datagram to. This continues until, eventually, the datagram reaches its destination.

What happens if the data is too large to be carried across a subnetwork- a large video file, for example? In cases like these, the datagram is fragmented. The data is split across multiple datagrams with almost identical IP Headers, which are then “linked up” upon reaching their destination. I say almost identical- there is a subtle difference in the IP Header, called the Fragment Offset, which records where the fragment’s data sits in the original message. This ensures that the fragments of data are put back together in the correct order.
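
A toy sketch of reassembly by Fragment Offset (real offsets are counted in blocks of 8 bytes; I am just using byte positions here for clarity):

    fragments = [
        {"offset": 7, "data": b"world"},     # arrived first, but belongs second
        {"offset": 0, "data": b"hello, "},
    ]

    def reassemble(fragments):
        ordered = sorted(fragments, key=lambda f: f["offset"])
        return b"".join(f["data"] for f in ordered)

    print(reassemble(fragments))   # b'hello, world'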

Let’s take a closer look at the IP Header. In addition to the source and destination address, the header stores a header checksum, which checks whether the information provided in the header was corrupted at any point on its journey between two routers. It does not protect the user’s data at all; it only validates the header itself. This doesn’t matter much for my argument, but it shows that the IP Header can store lots of different information, which is fundamental to a couple of the ideas I am about to suggest.
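
For the curious, the header checksum is the standard Internet checksum: the one’s complement of the one’s complement sum of the header’s 16-bit words. A minimal sketch, assuming the header is passed in as raw bytes:

    def ipv4_checksum(header: bytes) -> int:
        if len(header) % 2:                        # pad to a whole number of 16-bit words
            header += b"\x00"
        total = 0
        for i in range(0, len(header), 2):
            total += (header[i] << 8) | header[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
        return (~total) & 0xFFFF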

With IP Datagrams being transferred from router to router all over the place, problems can arise. If one router thinks one route is best and another router disagrees, the datagram could be stuck in an infinite loop, never reaching its destination but using valuable network bandwidth. The current solution to this problem is to use something called “Time To Live” (TTL).

When one router passes a datagram to another, that counts as one “hop”. The idea of TTL is to allocate each datagram a set number of hops, after which the datagram is considered lost and is discarded from the network. This value is typically around 30, but can go as high as 255. Consider the following example, which has a TTL of 3.

Time to live = 3

The datagram is passed from the source to Router A- this uses one hop. Router A sends the datagram to Router B, which uses another hop, bringing the total number of hops to two. If Router B now transfers the datagram to the destination, the transfer is completed successfully, as it has used 3 hops. If, however, it sends the datagram to Router C, which would then send the datagram to its destination, the datagram would be discarded. Why? Because going via that route would use 4 hops, but the datagram only has a TTL (Time To Live) of 3.
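
The same example as a small simulation- the router lists are assumed, and each link traversed costs one hop:

    def forward(route, ttl=3):
        hops = 0
        for _ in route:
            hops += 1
            if hops > ttl:
                return "discarded"           # ran out of hops before arriving
        return "delivered in %d hops" % hops

    print(forward(["Router A", "Router B", "destination"]))               # delivered in 3 hops
    print(forward(["Router A", "Router B", "Router C", "destination"]))   # discarded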

This is a good, working system, though it means that those building our network infrastructure have to think carefully. If somebody wanted to establish a connection between the North and South poles, this typically has to be done in less than 30 hops, which could prove difficult. Also, the further the data has to travel between routers, the more likely it is that the data will be corrupted. This is why, in my opinion, the TTL solution isn’t necessarily the ideal one- there should not be a limit on the number of routers used to transfer information.

My first idea is simple- rather than having a Time To Live, we keep track of the routers that have been passed through. The IP address of each router is stored cumulatively in the IP Header of the datagram. Quite simply, we do not allow the datagram to pass through the same router twice. I realise that this could potentially lead to some long, convoluted route being taken, but it would stop infinite loops from happening, and the data would not be discarded unless every single router connected to the current router has already been passed through.
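
Here is a sketch of that first idea- the datagram carries the list of routers it has already visited and is never forwarded to one of them twice. The next_hops() function, which would return a router’s neighbours, is assumed:

    def forward(datagram, current_router, destination, next_hops):
        datagram["visited"].append(current_router)
        if current_router == destination:
            return "delivered"
        candidates = [r for r in next_hops(current_router)
                      if r not in datagram["visited"]]
        if not candidates:
            return "discarded"               # every neighbouring router already visited
        return forward(datagram, candidates[0], destination, next_hops)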

This leads on to a second idea, a more ambitious one. When the datagram reaches its destination, the header containing the tracked IP addresses is sent back to the source computer. The route of IPs, and the time taken to transfer the data, is stored somewhere on the computer, probably the RAM. The router then sends “dummy data” of one or two bits to the previous destination address and attempts to find a more efficient route. Again, route details are sent back to the source and stored in the source computer.

When the client wants to send or request data to/from the same destination again, the computer scans its RAM for saved routes and chooses the most efficient/quickest route so far. The list of IP addresses and the order they come in are stored in the IP Header of the datagram so that each router knows where to pass the datagram.
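
And a sketch of the route cache behind this second idea (the names and structure are purely illustrative): routes and their timings are recorded per destination, and the fastest known route is reused for later requests:

    route_cache = {}    # destination IP -> list of (route, seconds_taken)

    def record_route(destination, route, seconds_taken):
        route_cache.setdefault(destination, []).append((route, seconds_taken))

    def best_route(destination):
        known = route_cache.get(destination)
        if not known:
            return None                      # nothing cached: fall back to normal routing
        return min(known, key=lambda entry: entry[1])[0]

    record_route("198.51.100.7", ["10.0.0.1", "172.16.3.9", "198.51.100.7"], 0.042)
    print(best_route("198.51.100.7"))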

In this way, the client’s internet speed increases the more they use their favourite websites. The computer would periodically keep checking for new, efficient routes and updating its database. When the client’s computer is turned off, the list of routes is transferred from RAM to the computer’s hard drive, and when the computer is started up again the list is loaded back into RAM. The routes need to be available in RAM, as reading a route from the hard drive would be too slow for modern internet applications.

Websites that the user hasn’t visited in over a month could have their routes deleted, to save on space and RAM usage. This should also alleviate some privacy concerns. Users could even “opt in” to having their routes uploaded to an international database. Their computer would then periodically download more efficient routes, as found by the community that has opted in to this database.

The end result, ideally, would be a faster internet for everyone, simply by adopting new protocols.

21 Mar

Trinary – is it the future?

During one of my lectures, I began to question the assumption that computers can only understand 1s and 0s. Lecturers tell us our applications and protocols must work only in binary. We, the latest generation of computer scientists, are slowly being brainwashed into worshipping the binary form, and we rise up in anger at the blasphemous rantings of anybody exploring the possibility of an alternative.

Computers have been around since the 1940s and binary has long been the computing industry standard. Binary is fantastic, a simple solution to the problem of data transfer between and within computers. But it also has its limitations. Consider a piece of data 2 bits long. Assuming a bit can only be 1 or 0, there are a total of four possible combinations: 00, 01, 10, or 11. If each digit could also take a third value, which we shall call 2, we would have nine possible combinations (00, 01, 10, 11, 02, 20, 21, 22, 12).
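
A quick Python check of how fast the number of representable values grows with each base:

    # Distinct values representable with n digits in base 2 versus base 3.
    for n in (2, 8, 32):
        print(n, "digits:", 2**n, "binary combinations,", 3**n, "trinary combinations")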

The number of bits in our data has an impact on hard drive space, network bandwidth, and processing time. We should be considering all options to reduce the impact data length has on these things- trinary would dramatically reduce the problem.

UNIX timestamps (used across the world wide web and in many software systems) represent time as the number of seconds since the 1st of January, 1970. The time in seconds elapsed since that date is stored in a signed 32-bit integer, and this means that the timestamp can only hold time up until 03:14:07 UTC on Tuesday, 19th January 2038. This is because a signed 32-bit integer can only count up to 2^31 - 1, which equates to 2,147,483,647 seconds. And that equates to just over 68 years.

The 2038 problem won’t be a problem for over 25 years, but in computing, we should make sure our systems are stable and have longevity. The proposed solution to the 2038 problem is using a 64-bit int, which would not overrun until Sunday, 4th December 292,277,026,596. As this is over 20 times the theorised age of the universe, we wouldn’t have to worry about running out of space anytime soon. But this also means using twice as much space to hold exactly the same information.

Storing seconds in a 32-trit TRINARY integer would give 3^32 possible values, or 1,853,020,188,851,841 seconds since January 1st, 1970. This equates to over 58,758,884 years- which is still a pretty good long term system! Coming up with a trinary solution would reduce or remove countless problems in computing. Applications would run faster, data would take up much less space on hard drives, download speeds would be quicker (not only due to smaller file sizes, but as a result of a new Internet Protocol with smaller IP Headers). Rather than coming up with new and better ways of storing and transporting binary data, perhaps we should look into inventing trinary?
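
The arithmetic behind those figures, as a quick sanity check:

    SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25

    print((2**31 - 1) / SECONDS_PER_YEAR)   # ~68 years: the 2038 limit for a signed 32-bit count
    print((3**32) / SECONDS_PER_YEAR)       # ~58.7 million years for 32 trits
    print((2**63 - 1) / SECONDS_PER_YEAR)   # ~292 billion years for a signed 64-bit count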

With this thought in mind, I got working on trinary the moment I got back from my lecture. The first thought I came up with:

A terrible idea

As you can see, when the switch is open, nothing is being sent, which could be considered a binary 0. When the switch is closed, a current passes down the wire, which could be interpreted as a binary 1. You can either have electricity, or not- what I was trying to come up with was something in between. And my initial thought was light. Electricity could power a bulb or LED of some sort, which would be picked up by a sensor on another circuit, which would treat the signal as a trinary “2”.

I wasn’t hopeful about this idea, so I continued scribbling down my thoughts, and came up with a second concept:

Only slightly less terrible

This is a little more tricky. Electricity is passed down the wire and goes through an electromagnetic coil. It passes through this coil and into a logical OR gate, coming through as a binary 1. However, a magnetic field is created, forcing down a second switch in the circuit. This second switch allows another current to flow through to the OR gate, which is still treating this as a 1. Eventually, the current flows through a second electromagnetic coil, which pushes open the original switch, cutting off the original current. This kills off the original magnetic field, meaning the second switch opens and no current flows anymore. At this point, no current is coming from the OR gate, so this could be a binary 0.

Okay, so I didn’t think the design through properly, but the idea is that electricity flows and stops and flows and stops and flows through the OR gate thousands of times per second. This constant fluctuation in current could be considered a trinary 2, provided there is some intelligence in the circuit.

Again, I didn’t hold high hopes for the idea, so I kept thinking.

Octary?

The basic principle behind this idea was that some sort of motor could rotate and change which circuit to send a current down. There could theoretically be an unlimited number of potential circuits to connect to, but let’s say there were 8. Hey presto, I’ve just invented octary! Depending on which circuit the current flows through, that could represent an octary 0, 1, 2, 3, 4, etc.

Of all the weird and wonderful ideas, this was the least likely to work. So I had another go.

My final idea

Let’s imagine we have a hollow cable- this can be metal, synthetic, whatever, but it has to conduct electricity. Let us also imagine we can send light down this cable. Suddenly, we have more options than even trinary can possess.

No electric signal, no light. OUTCOME = 0

Electric signal, but no light. OUTCOME = 1

No electric signal, but we do have light. OUTCOME = 2

Both electric and light signal. OUTCOME = 3

Theoretically, if we can perfect this type of technology, we could have four different values per signal- a base-four digit, carrying two bits of information at once.
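
A tiny sketch of how a receiver might decode those four outcomes, using the encoding listed above:

    # Two independent signals (electric current, light) give four symbols per time slot.
    def decode(electric: bool, light: bool) -> int:
        return (2 if light else 0) + (1 if electric else 0)

    print(decode(False, False))  # 0: no signal at all
    print(decode(True,  False))  # 1: electricity only
    print(decode(False, True))   # 2: light only
    print(decode(True,  True))   # 3: both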

Satisfied with my work, I went to bed, before I realised I’d skipped a fundamental step- checking trinary on Google! Alas, somebody has beaten me to it. In the late 1950s, Nikolay Brusentsov produced the Setun- the only modern, electronic trinary (or ternary) computer. It had a lower electricity consumption and lower production cost than the binary computers that replaced it.

According to computer-museum.ru, “Setun has an one-address architecture with one index-register. The contents of it, in dependence of value (+,0,-) of address modification trit, may be added to or subtracted from the address part of instruction. The instruction set consists only of 24 instructions including performing mantissa normalization for floating-point calculation, shift, combined multiplication and addition. Three instructions are reserved but have never been used because of the lack of necessity.”

“Simplicity, economy and elegance of computer architecture are the direct and practically very important consequence of the ternarity, more exactly – of representation of data and instructions by symmetrical (balanced) code, i.e. by code with digits 0, +1, -1. In opposite to binary code there is no difference between “signed” and “unsigned” number. As a result the amount of conditional instructions is decrease twice and it is possible to use them more easily; the arithmetic operations allow free variation of the length of operands and may be executed with different lengths; the ideal rounding is achieved simply by truncation, i.e. the truncation coincides with the rounding and there is the best approximation the rounding number by rounded.”
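
To illustrate the balanced code the quote describes, here is a small sketch of my own (nothing to do with Setun’s actual circuitry) that converts an ordinary integer into balanced ternary digits 0, +1 and -1. Note that negative numbers need no separate sign:

    def to_balanced_ternary(n):
        if n == 0:
            return [0]
        digits = []
        while n != 0:
            r = n % 3
            n //= 3
            if r == 2:       # represent 2 as -1 and carry one into the next trit
                r = -1
                n += 1
            digits.append(r)
        return digits[::-1]  # most significant trit first

    print(to_balanced_ternary(8))    # [1, 0, -1]  (9 - 1)
    print(to_balanced_ternary(-8))   # [-1, 0, 1]  (-9 + 1)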

Here is another quote I found interesting: “In ‘Setun 70’ the peculiarities of ternarity are embodied with more understanding and completeness: the ternary format for symbols encoding – “tryte” (analog of binary byte) consisting of 6 trits (~9.5 bits) is established; the instruction set is updated of auxiliary ternary logic and control instructions; arithmetic instructions now allow more variation of operand length – 1, 2 and 3 trytes and length of result may be up to 6 trytes.”

It appears trinary already exists. So why did it fail?

There seems to be no reason other than the increasing popularity of binary at the time. Binary components were already being mass produced, and the officials in charge of computer production in the USSR disapproved of the unplanned and unusual proposal of trinary as an alternative to existing systems. The cost, financially and in terms of time lost, of replacing all existing computing technology in favour of trinary systems was simply too high.

The future for trinary is uncertain, but Donald Knuth, author of “The Art of Computer Programming”, predicts trinary’s return, due to its elegance and efficiency. A possible modern trinary solution could be to use fibre optics, where dark is 0 and the two orthogonal polarizations of light are 1 and -1.

I can’t help feeling that if it were feasible for the world’s computing industry to use trinary, it would be using it by now. And if the cost and time taken to replace the world’s computers and networks seemed daunting back in 1960, imagine the mammoth task we are presented with today! In the same way that the internet was built on top of existing technologies (read: 4kHz telephone cables), I think we’re stuck with what we’ve got for a good couple of decades. New, state-of-the-art systems are continually being built, and those systems may soon be being built on a trinary foundation. The rest of the world then needs to catch up, and that will take some time.
