Smart Grid Privacy
December 2nd, 2009 under Digital Rights, Distributed, InfoSec, Politics, World, rengolin. [ Comments: none ]

I have recently joined the IETF Smart Grid group to see what people were talking about and to allay my fears on security and privacy. What I saw was a group of experts discussing the plethora of standards that could be applied (very important), but few people seemed interested in the privacy issue.

If you look at the IEEE page on Smart Grids, besides the smart generation / distribution / reception (very important), there is a paragraph on the interaction between the grid and the customers, which is very careful not to mention invasive techniques that would allow the grid to control customers’ appliances:

“Intelligent appliances capable of deciding when to consume power based on pre-set customer preferences.”

Here, they focus on letting the appliances decide what will be done to save power, not the grid or the provider. Later in the same paragraph:

“Early tests with smart grids have shown that consumers can save up to 25% on their energy usage by simply providing them with information on that usage and the tools to manage it.”

Again, stressing that the providers will only “provide [the customer] with information”. In other words, the grid is smart only up to the smart meter (which is controlled by the provider); inside people’s houses, it’s the appliances that have to be smart. One pertinent comment from Hector Santos in the IETF group:

“Security (most privacy) issues, I believe, has been sedated over the years with the change in consumer mindset. Tomorrow (and to a large extent today) generation of consumers will not even give it a second thought. They will not even realize that it was once considered a social engineering taboo to conflict with user privacy issues.”

I hate to be a pessimist, but there is a very important truth in this. Not only do people allow systems to store their data for completely different reasons, they don’t even care whether the owner of the system will distribute their information or not. I myself, always paranoid, have signed contracts with providers knowing that they would use and sell my data to third parties. British Telecom is one good example. He continues:

“Just look how social networking and the drive to share more, not less has changed the consumer mindset. Tomorrow engineers will be part of all this new mindset.”

There is no social engineering any more, at least not like it used to be. Who needs to steal your information when it’s already there, on your Facebook? People share willingly, and a lot of them know what problems it may cause, but for them the benefit is greater. Moreover, millions bought music, games and films with DRM, allowing a company to control what you do, see or listen to. How many Kindles were bought? How many iPhones? People don’t care what’s going on as long as they have what they want.

That is the true meaning of sedated privacy concerns. It’s a very distorted kind of selfishness, where you don’t care about yourself as long as you are happy. If that makes no sense to you, don’t worry, it makes no sense to me either.

Recently, the Future of Privacy Forum published an excellent analysis (via Ars) of smart grid privacy. Several concepts whose dangers are easy to understand have become things we no longer think about, or even consider a silly worry, given that no one cares anyway.

An evil use of a similar technology is “Selectable Output Control”. Just like a Kindle, the media companies want to make sure you only watch what you pay for. It may seem fair, and even cheaper, as it allows “smart pricing”, like some smart-grid technologies.

But we have all seen what Amazon did to Kindle users, or what Apple did to its App Store: taking down content without warning, removing things you paid for from your device, allowing or disallowing you to run applications or content on your device as if you hadn’t paid enough money to own the device and its contents.

In the end, “smart pricing” is like a tax cut: they reduce tax A but introduce taxes B, C and D, which double the amount of tax you pay. Of course, you only knew about tax A and went on happily with your life. All in all, nobody cares who they pay or how much, as long as they can get the newest fart app.

Phasers anyone?
November 21st, 2009 under Fun, Physics, rengolin. [ Comments: none ]

Star Trek seems a long way off, and yet a few news items have made the headlines exposing achievements that might lead us closer to Roddenberry’s universe.

Some researchers just found anti-matter in an unusual place: lightning! It might be easier to produce a warp core than we originally thought. Given, of course, that sub-space exists and can be reached by a matter/anti-matter reaction.

Another group, from the University of California, has just found a way to create a medical tricorder. That, for me, is the best achievement so far. Not to mention time travel, teleportation, quantum computers and faster-than-light communication, all already achieved since the series was created.

Finally, the University of Canada just made the first phaser. Though it’s still only set to stun…

But I have to say that I’m a bit worried. The Temporal Prime Directive might be needed a bit sooner than the 29th century.

Linux is whatever you want it to be
November 5th, 2009 under OSS, Software, Unix/Linux, rengolin. [ Comments: 5 ]

Normally, Linux Magazine has great articles: impartial, informative and highly technical. Unfortunately, not always. In a recent article, some perfectionist zealot stated that Ubuntu makes Linux look bad. I couldn’t disagree more.

Ubuntu is a fast-paced, fast-adapting Linux distribution. I was one of the early adopters and I have to say that most of the problems I had with the previous release have been fixed. Some bugs slipped through, of course, but they were reported and quickly fixed. Moreover, Ubuntu has support from hardware manufacturers, such as Dell, and that makes a big difference.

Linux is everything

Linux is excellent for embedded systems, great for network appliances, wonderful for desktops, irreplaceable as a development platform, marvellous on servers and the only choice for real clusters. It also sucks when you have to hunt down configuration by hand, it’s horrible for newbies, it breaks whenever a new release is out, and it takes longer to get new software (such as Firefox), but it also helps a lot with package dependencies. That is something neither Mac nor Windows has managed to do properly over the past decades.

Linux is as great as any piece of software could be, and as horrible as every operating system released since the beginning of time. Some Linux distributions are stable, others not so much. Debian takes 10 years to release, and when it does, the software it contains is already 10 years old. Ubuntu tries to be a bit faster, and that obviously breaks a few things. If you’re fast enough fixing them, the early adopters will be pleased that they helped the community.

“Unfortunately what most often comes is a system full of bugs, pain, anguish, wailing and gnashing of teeth – as many “early” adopters of Karmic Koala have discovered.”

As does any piece of software, open or closed, free or paid. It takes time to mature. A real software engineer should know better: a system is only fully tested when it reaches the community, the user base. Google has used its own users (your granny too!) as beta testers for years and everyone seems to understand it.

Debian zealots hate Red Hat zealots, and both hate Ubuntu zealots, who probably hate other zealots anywhere else. It’s funny to see how greatly opinions vary from one zealot clan to another about what Linux really is. All of them have great knowledge of what Linux is comprised of, but few seem to understand what Linux really is. Linux, or better, GNU/Linux, is a big bunch of software tied together with so many different points of view that it’s impossible to state in less than a thousand words what it really is.

“Linux is meant to be stable, secure, reliable.”

NO, IT’S NOT! Linux is meant to be whatever you make of it; that’s the real beauty. If Canonical launched it, it’s because they thought that whatever bug passed the safety net was safe enough for the users to catch and report, which we did! If you’re not an expert, wait for the system to cool down. A non-expert will not be an “early adopter” anyway, that’s for sure.


Each Linux has its own idiosyncrasies, that’s what makes it powerful, and painful. The way Ubuntu updates/upgrades itself is particular to Ubuntu. Debian, Red Hat, Suse, all of them do it differently, and that’s life. Get over it.

“As usual, some things which were broken in the previous release are now fixed, but things which were working are now broken.”

One pleonasm after another. There is no new software without new bugs. There is no software without bugs. What was broken was known; what is new is unknown. How can someone fix something they don’t know about? When users eventually tested it, found it broken and reported it, the developers fixed it! Isn’t it simple?

“There’s gotta be a better way to do this.”

No, there isn’t. Ubuntu is like any other Linux: Like it? Use it. Don’t like it? Get another one. If you don’t like the way Ubuntu works, get over it, use another Linux and stop ranting.

Red Hat charges money, Debian has uber-stable decade-old releases, Gentoo is for those who have a lot of time on their hands, etc. Each has its own particularities; each is good for a different set of people.

Why Ubuntu?

I use Ubuntu because it’s easy to install, use and update. The rate of bugs is lower than on most other distros I’ve used, and the rate of updates is much faster and more stable than on some other distros. It’s a good balance for me. Is it perfect? Of course not! There are lots of things I don’t like about Ubuntu, but that won’t make me use Windows 7, that’s for sure!

I have friends who use Suse, Debian, Fedora and Gentoo, and they’re all as happy as I am: not too much, but not too little. Each distro has problems and solutions; you just have to choose the ones that are best for you.

Hitchhiker’s Guide to the Galaxy has arrived
October 14th, 2009 under Fun, Gadgets, Hardware, rvincoletto. [ Comments: 1 ]

The Wikimedia Foundation has just launched the first release of the Hitchhiker’s Guide to the Galaxy. I hope that in the next version they’ll use the Sub-Etha network to update the contents automatically. It could also come with a babel fish or a Federation tricorder…

Gtk example
September 26th, 2009 under Devel, OSS, Software, Unix/Linux, rengolin. [ Comments: none ]

Gtk, the graphical interface behind Gnome, is very simple to use. It doesn’t have an all-in-one IDE such as KDevelop, which is very powerful and complete, but it features a simple and functional interface designer called Glade. Once you have the widgets and signals done, filling the blanks is easy.

As an example, I wrote a simple dice throwing application, which took me about an hour from installing Glade to publishing it on the website. Basically, my route was to apt-get install glade, open it and create a few widgets, assign some callbacks (signals) and generate the C source code.

After that, the file src/callbacks.c contains all the signal handlers you have to implement. Adding just a bit of code and browsing this tutorial to get the function names was enough to get it running.

Glade generates all the autoconf/automake files, so it was extremely easy to compile and run the mock window right at the beginning. The rest of the code I added was even less than I would write for a console-based application doing just the same. Also, because of the code generation, I was afraid it would replace my already-changed callbacks.c when I changed the layout, but I was pleased to see that Glade was smart enough not to mess with my changes.

My example is not particularly good looking (I’m terrible at design), but that wasn’t the intention anyway. It’s been 7 years since I last built graphical interfaces myself, and I had never done anything with Gtk before, so it shows how easy the library is to use.

Just bear in mind a few concepts of GUI design and you’ll have very little problems:

  1. Widget arrangement is normally not fixed by default (to allow window resizing). So either work out how tables, frames, boxes and panes work (which is a pain) or use fixed positions and disallow window resizing (as I did),
  2. Widgets don’t do anything by themselves; you need to assign them callbacks. Most signals have meaningful names (resize, toggle, set focus, etc.), so it’s not difficult to find them and create callbacks for them,
  3. Side effects (numbers appearing at the press of a button, for instance) are not easily done without global variables, so don’t be picky about that from the start. Work your way towards a global context later on, when the interface is stable and working (I didn’t even bother).

If you’re looking for a much better dice rolling program for Linux, consider using rolldice, probably available via your package manager.

September 13th, 2009 under Computers, Corporate, rengolin. [ Comments: 3 ]

To start a new idea and make it profitable is much more of an art than logic. There is no recipe, no fail-proof tactic. The most successful entrepreneurs are either lucky or have a good gut feeling. Hard work, intelligence and the right idea are seldom useful if they don’t come with luck or a crystal ball. After you have started up, however, they’re the only things that matter.

I may not know how to start a business and succeed, but I do know how to make them fail miserably. I have done it myself and seen many (many) friends fail for different (but similar) reasons. Yet, I still see other friends trying or the same friends still thinking they could’ve done better next time, so this is my message to all of them.

Do you have a crystal ball?

I really mean it: one of those that actually works the way it’s supposed to. If the answer is no, think twice. Seriously, I’m not joking. The only people who partially succeeded were the ones who had nothing to lose, as they had enough money to keep them going for years, but (unfortunately) they’re not filthy rich today. The rest are employees somewhere in the world…

Hard work

One thing they all had in common was the idea that they could do it with hard work and a good idea. How wrong they were… Let’s put it simply: if hard work took you anywhere, the world would be dominated by dockers. If good ideas had any impact, the world would be dominated by scientists. But the world is dominated by bankers… Q.E.D.

Working hard won’t help; you have to work just right. That usually means very little in the beginning, a bit more afterwards, and finally hiring some hard workers to do the work for you. Simple, stupid.

Picture this: a salesman comes to your door to sell you a pair of scissors. You have many at home, but he assures you it’s the best pair of scissors in the world, that it has twenty patents and the guys behind the design like to work very hard on their ideas. Would you buy it? No! On the other hand, lots and lots of people go to the supermarket and buy scissors just because they’re cheap (and they assume they lost their own).

Don’t expect people to understand your hard work. They couldn’t care less how much work you do; they just care about what benefits you can give them. The supermarket scissors give them the benefit of being cheap and “being there”; the salesman is already annoying by definition. No matter how good yours is, they simply won’t buy it.

Ingenious crafts

Now, at this point the friends I mentioned are certainly thinking: “but my product was much better. It was new, there wasn’t any one like that in the market”. The truth is, who cares?!

Novelty doesn’t sell, quality doesn’t sell (at least not yours, anyway). If Apple started selling toothbrushes, people would buy them by the millions; if you sell a crystal ball that actually works, they’ll ignore it completely. Who are you, anyway? Unless it carries some kind of status, and their friends (and other posh people) are buying it too, they won’t even bother.

If your product is really good, you have to put a high price on it. Poor people won’t buy it, and rich people will buy from the fancy brand. If you sell it cheap, poor people won’t buy it (because it’s neither fancy nor necessary) and rich people won’t even see you. Poor people only buy superfluous stuff from fancy brands (or fakes), and rich people only buy from the real (sometimes fake too!) brand.

If your item is not an every-day necessity, like food, you are in deep trouble. Being the best is not enough, ordinary products sell more than state-of-the-art ingenious crafts.

Do it the right way ™

Some of my failed friends (no hard feelings, ok?), that are now really pissed off, are thinking: “But I didn’t put all that effort, and my product was clearly better than any other, and it was for free! How could it go wrong?”. Capitalism 101: No demand, no production.

Don’t yell just yet: when I say demand, I mean demand driven by desire. There was always a demand for the internet, but people only started desiring it a few decades ago. There was always a demand for a decent search engine, but no one desired one that much until all the failed attempts from Yahoo, AltaVista etc. When there was a desire for instant communication and email was not enough, that’s when ICQ had its chance.

Doing it right is not enough; you need to do it at the right time. The right time is not when there is no other option like yours; that fact is irrelevant. The right time is when many others are failing. This, my friends, is the crucial point. You can have a million ideas; if none of them coincides with the utter failure of one or more other ideas, they’re worthless.

Don’t trust your brain

The recipe for disaster is simple: trust your brain. Trust that your intelligence will lead you to success. Trust that your ideas are better than others’ and that this will lead you to success. Trust that hard work will lead you to success. People who trust that their empire is unbreakable are already breaking. To trust is to fail.

The most simple rule for success, as I picture it, is to use other people’s failures for your success. If someone is doing it wrong and people are complaining, there is a high demand by desire for that particular thing. If you can identify it and do what they want, it’s likely that you will succeed.

Again, don’t do more than what they need, nor better than you have to. Keep it simple, keep it stupid. Hard work won’t lead you anywhere, remember? You have to be fast, noisy and sometimes ridiculed. It’s part of the game. Good buzz and bad buzz are both buzz, and buzz is good anyhow.

In a nutshell

  • Minimum work, maximum opportunity: Do as little as possible before the window opens, make connections, prepare demos and mockups of several different projects, multiply your chances.
  • Wait for a major failure: Investigate where others are failing and take action immediately, put anything on the market, no matter how ugly or failing, Beta is always Beta (thanks Google!).
  • Don’t let the window close: After you got your opportunity, work hard as hell, buzz, spam, be ridiculous.
  • Don’t use your brains too much: Good ideas are no better than bad ones, your idea is no better than any other. Failing ideas are important, non-existent ideas are irrelevant.

So, my failed friends, it is very simple: you will fail, unless you step on top of other people’s failures and don’t let them do the same to you. Now you understand why I won’t ever try again… This is absolutely not my style! I’d rather have friends than be rich.

A bit of history

Nothing is better than a good bit of history to show us how important other people’s failures are to some people’s success…

Microsoft’s success

IBM was dealing with Digital Research to put CP/M on its new architecture, the PC. Digital was sloppy, negotiations failed, and Microsoft (so far completely irrelevant) got a CP/M clone, called it MS-DOS and gave it to IBM. You know the rest…

Microsoft had previously worked on a Unix version for micro-computers, called Xenix, which was then sold to SCO, who ported it to the PC, where it failed. Unix is, as we all know, the best operating system ever. There was no Unix for micro-computers; it was a perfect market, right?

Wrong. The first move (on top of a failure), and not the second (with a bright idea), is what made Microsoft the number one software company in the world today. For better or worse, they won big time.

Yahoo vs. Google

In the beginning, the internet was a bunch of Gopher sites. When it turned to HTTP and people started using HTML and the commercial boom came in, it was impossible to find anything decent.

Several people started compiling directories of cool websites, but it was Yahoo who consolidated them into one big site. They bought several other companies, most notably for their directory contents and search engines. No matter how hard they tried, it was still too bad. In 2000 they were about to close a search deal with Google. For a short time, Google actually provided search results for Yahoo, but pride won out: they bought Inktomi (who?) and dropped Google’s technology, which obviously brought no value at all to their users.

The search was still no better than Google’s, and Google saw Yahoo’s pride as Yahoo’s biggest mistake. Google started low, using basically word of mouth as buzz and making really cool (but stupid, simple and easy to implement) features. Even their search engine was not a novelty, as others had done similar things in the past, and they had spent their college time working on it.

Yahoo’s mistake was Google’s gain. Google now has more than half of the internet passing through it, leaving Yahoo with second- (or third-) class, outdated products. The company is now all but destroyed.

To make things even more interesting, Microsoft tried to compete with them, but failed miserably. Their products were even worse than Yahoo’s and, to cement once and for all Yahoo’s mistake, they’re now using Microsoft’s technology as their search platform.

There are obviously many more stories of failures and successes, but I leave those as an exercise for the reader. My final and most important point is: commercial success has nothing to do with quality, only with timing and a good deal of bad behaviour.

SLC 0.2.0
September 12th, 2009 under Algorithms, Devel, OSS, rengolin. [ Comments: none ]

My pet compiler is now sufficiently stable for me to advertise it as a product. It should deal well with the most common cases if you follow the syntax, as there are some tests to assure minimum functionality.

The language is very simple, and is called “State Language“. It has only global variables and static states. The first state is executed first; all the rest run only if you call goto state;. If you exit a state without branching, the program quits. This behaviour is consistent with the State Pattern, and that’s why it is implemented this way. You can model any computer problem as a state machine, so this new language should be able to tackle them all.

The expressions are very simple, only binary operations, no precedence. Grouping is done with brackets and only the four basic arithmetic operations can be used. This is intentional, as I don’t want the expression evaluator to be so complex that the code will be difficult to understand.

Like all code I write in my spare time, this one has an educational purpose. I learn by writing, and hopefully teach by providing the source, comments and posts, and by making it available on the internet so people can find it through search engines.

It should work on any platform you can compile it for (currently only Linux and Mac binaries are provided), but the binaries are still huge (several megabytes) because they contain all the LLVM libraries statically linked in.

I’m still working on it and will update the status here at every .0 release. I hope to have binary operations in if statements, string printing and all PHI nodes calculated for the next release.

The LLVM compilation infrastructure
August 25th, 2009 under Algorithms, Devel, Software, rengolin. [ Comments: none ]

I’ve been playing with LLVM (Low-Level Virtual Machine) lately and have produced a simple compiler for a simple language.

The LLVM compilation infrastructure (much more than a simple compiler or virtual machine) is a collection of libraries, tools and programs that allows one to create simple, robust and very powerful compilers, virtual machines and run-time optimizers.

Like GCC, it’s roughly separated into three layers: the front-end, which parses the files and produces the intermediate representation (IR); the optimization layer, which acts on the language-independent IR; and the back-end, which turns the IR into something executable.
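These three layers can actually be driven separately from the command line. A rough sketch, assuming the clang, opt and llc tools are installed (exact flag spellings can vary between LLVM releases):

```shell
# a trivial source file to push through the pipeline
printf 'int main(void) { return 0; }\n' > hello.c

# front-end: C source to human-readable LLVM IR
clang -S -emit-llvm hello.c -o hello.ll

# middle layer: language-independent optimizations on the IR
opt -O2 -S hello.ll -o hello.opt.ll

# back-end: IR to native assembly for the host target
llc hello.opt.ll -o hello.s
```

Each intermediate file (.ll) is plain text, so you can inspect what each layer did to the program.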

The main difference is that, unlike GCC, LLVM is extremely generic. While GCC is struggling to fit broader languages inside the strongly C-oriented IR, LLVM was created with a very extensible IR, with a lot of information on how to represent a plethora of languages (procedural, object-oriented, functional etcetera). This IR also carries information about possible optimizations, like GCC’s but to a deeper level.

Another very important difference is that, in the back-end, not only code generators to many platforms are available, but Just-In-Time compilers (somewhat like JavaScript), so you can run, change, re-compile and run again, without even quitting your program.

The middle layer is where the generic optimizations are done on the IR, so it’s language-independent (as all languages are converted to IR). But that doesn’t mean optimizations happen only in that step. All first-class compilers optimize heavily from the moment they open the file until they finish writing the binary.

Parser optimizations normally include useless code removal, constant expression folding, among others, while the most important optimizations on the back-end involve instruction replacement, aggressive register allocation and abuse of hardware features (such as special registers and caches).

But LLVM goes beyond that: it can optimize at run-time, even after the program is installed on the user’s machine. LLVM can keep information (and the IR) together with the binary. When the program is executed, it is profiled automatically and, when the computer is idle, the code is re-optimized and re-compiled. This optimization is per-user, which means that two copies of the same software can end up quite different from each other, depending on how each user uses them. Chris Lattner’s paper about it is very enlightening.

There are quite a few very important people and projects already using LLVM, and although there is still a lot of work to do, the project is mature enough to be used in production environments or even completely replace other solutions.

If you are interested in compilers, I suggest you take a look at their website… It’s, at the very least, mind-opening.

40 years and full of steam
August 23rd, 2009 under Computers, OSS, Software, Unix/Linux, rengolin. [ Comments: 3 ]

Unix is turning 40 and BBC folks wrote a small article about it. What a joy to remember when I started using Unix (AIX on an IBM machine) around 1994 and was overwhelmed by it.

At that time, the only Unix that ran well on a PC was SCO, and that cost a fortune, but there were some others, not as mature, built on the same concepts. FreeBSD and Linux were the two that came into my sight, and I chose Linux because it was a bit more popular (and therefore it was easier to get help).

The first versions I installed didn’t even have an X server, and I have to say that I was happier than when using Windows. Partially because of the whole open-source-free-software-good-for-mankind thing, but mostly because Unix has a power that is utterly ignored by other operating systems. So much so that Microsoft used good bits from FreeBSD (whose license allows it) and Apple rebuilt its graphical environment on top of FreeBSD to make OS X. The GNU folks certainly helped my mood, as I could find on Linux all the power tools I had on AIX, most of the time even more powerful ones.

The graphical interface was lame, I have to say. But in a way that was good: it reminded me of the interface I used on Irix (SGI’s Unix), and that was OK. With time it got better and better, and by 1999 I was using it full time, at work and at home.

The funny thing is that now I can’t use other operating systems for too long, as I start missing some functionality and eventually feel locked in, or at least extremely limited. Mac OS is said to be nice and tidy, with a full FreeBSD inside, but I still lacked agility on it, mainly when searching for and installing packages and configuring the system.

I suppose each OS is for a different type of person… Unix is for those who like to fine-tune their machines or who need its power (servers as well), and Mac OS is for those who need something simple, where the biggest change they make is the background colour. As for the rest, I fail to see the point, really.

Online gaming experience
August 15th, 2009 under Fun, Games, InfoSec, Media, Politics, rengolin. [ Comments: none ]

Why is it so hard for the game industry to get the online experience right? I understand the media industry being utterly ignorant about how to make sense of the internet, but gaming is about pure fun, isn’t it? The new survey done in the UK is yet more proof of the obvious fact that people will use all the resources of the internet to get what they want, whether it’s legal or not.

After all, who defines what’s legal and what’s not? The UK government has already said that it’s OK to invade one’s privacy for the sake of general security, even though everybody knows that no government has a clue about what security really is. Not to mention that the Orwellian attitudes of certain US companies seem not to raise an eyebrow from the local government or the general public…

That said, games are a different matter. Offline games still need some kind of protection, but online games should rely on online commerce, and that can only be complete if the user has a full online experience. So, what do I mean by a full online experience?

You don’t always have access to your own computer. Sometimes you have just a remote connection, sometimes only your mobile phone or a web browser. Sometimes you have an old laptop with no decent graphics card, and then there are those golden times when you have a brand new gaming machine with four graphics cards. Ten years ago, mobile phones were nothing like today’s; even though my current mobile has a 3D graphics chip in it, it’s still at the lower end compared to desktops or even laptops.

So, what’s the catch? Imagine being able to play exactly the same game irrespective of where you play it.

There are lots of new online games, so-called ORPGs (online RPGs), and their bigger brothers (MMORPGs, massively multiplayer ORPGs), but all of them rely on a Windows machine with OpenGL 2 and DirectX 10, even though half of them don’t really need that kind of realism to be fun.

Moreover, when you’re at the toilet and you want to keep playing your battles, you could easily get your mobile and use a stripped down version with little graphic elements but with the same basic principles. When you’re at your parent’s and the only thing you have is dial-up, you can connect via SSH and play the console version. At least to manage your stuff, talk to your friends or plan future battles.

The hard part in all this, I understand, is to manage different players playing with different levels of graphic detail. Scripts in online games are normally prohibited because they make cheating too easy, and scripting would be the way to battle over an SSH connection… Players with better graphics cards would have the advantage of seeing more of the battlefield than their friends on a mobile phone, or of using a much better mouse/joystick and a much bigger keyboard (shortcuts are *very* important in online gaming).

With the new mobiles and their motion sensors and GPS interfaces, the difference wouldn’t be that big, as you could wave the mobile for a quicker glance and even use voice control, which still lacks support on the desktop but is surprisingly popular on mobile devices. All in all, having at least three platforms, high-end graphics, low-end graphics and a mobile version, would be a major breakthrough in online gaming. I just wonder why game makers aren’t even hinting in that direction…

The console version is pushing a bit, I know, I just love the console… ;)
