My lifetime has spanned many of the important developments in the Age of Computers. Back in 1969, when I entered college, I was a frequent visitor to the Kiewit Computing Center, the lair of a GE-635 computer that filled several rooms. Students had access to the computer via noisy teletypes and a multiuser operating system known as the Dartmouth Time-Sharing System. We wrote simple programs in BASIC, a language created by two Dartmouth professors, John Kemeny and Tom Kurtz. In 1969, even the hoary old operating system Unix was still a year or two in the future. There have been huge changes in computers since then. The smartphone I carry in my pocket today is light-years more powerful than that huge old-time computer. It has been an interesting journey from those distant days to the present.
With the 1980s came the personal computer. Microcomputers they were called then, to distinguish them from the previous generation of minicomputers (which were about the size of a refrigerator). The Apple II was a breakthrough system, followed by the more business-oriented IBM PC. There were other systems from various companies, some of which don’t exist anymore. Many of the systems were incompatible with each other, so special versions of software were required for each one. Microsoft’s MS-DOS, modeled on an earlier disk operating system called CP/M, won the operating system battle, and eventually all PCs were pretty much interchangeable, running MS-DOS. Apple was the outlier, hanging on to a small market share after abandoning the Apple II and Steve Jobs. The Macintosh, incorporating a graphical user interface (GUI) that was ahead of its time, was the inspiration for Microsoft Windows, and through the 90s the GUI became dominant. This was also the era of the rise of the Internet and the dot-coms. Microsoft bundled Internet Explorer with Windows, making it difficult for other browsers to compete, which helped drive browser pioneer Netscape out of business and led to antitrust suits against Microsoft. Desktop PCs were dominant. Laptops were fairly primitive and clunky. Microsoft was at the height of its hegemony.
Then along came the millennium, and with the iPod, Apple, now back under the direction of Jobs, made a complete turnaround. Since then we have seen a revolution driven by mobile computing: smartphones and tablets. This is disruptive technology at its finest. The playing field and the rules of the game have changed since the 1990s, when Microsoft was dominant. Apple is a major player, as is Google. Apple has succeeded through tight integration and control of both hardware and software. Google went the route of web-based applications and computing in the cloud. Microsoft, the least nimble of the three, has struggled. Giving Windows a facelift every few years and expecting everyone to upgrade to the new version doesn’t cut it anymore. More and more people are using their phones and tablets as their primary computing devices, platforms that for the most part are not running Microsoft software. Microsoft is putting all its eggs in one basket, betting that laptops and tablets will converge into a single device. I’m not sure they are wrong. Laptop sales have fallen.
But I personally still see tablets as devices for consuming content (reading e-books and email, browsing the web), whereas for creating content (writing blogs like this one, or programming) a laptop is far easier to use. So I end up using both. Apple seems to realize that, at least for now, both devices play a role, and so it maintains two operating systems tailored to the two classes of device. Yet the upcoming versions of Mac OS and iOS also show signs of convergence. Clearly having one device to do both jobs would be nice; I just can’t envision what that device would look like.
So competition is back in the computing business, which is good. There are all sorts of directions computing can go at this point, a lot of choices, and there have been a lot of changes. App stores with small, free or inexpensive apps compete with the old paradigm of expensive, bloated, monolithic software programs. It seemed for a while that web-based apps would dominate. These are apps that run in a browser and so are platform-independent: a good idea, especially for developers, who only need to write the code once. But it turns out this is not what consumers want on their smartphones and tablets. They want native apps on each platform. So the developer (I include myself here) is forced to write two versions of each app: one in Objective-C (and soon in Apple’s new Swift language) for iOS, and one in Java for Android. Oh well, such is life.
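To make that duplication concrete, here is a minimal sketch of the kind of trivial app logic that ends up being written twice. The type and function names are hypothetical, not taken from any real app; this is only the iOS half, in Swift, and an Android version would have to reimplement the same thing separately in Java, sharing none of this code.

```swift
// A purely illustrative sketch of the iOS side of a tiny feature.
// The names are hypothetical; the point is that the Android version
// must be rewritten from scratch in Java.
import Foundation

struct HeartRateReading {
    let beatsPerMinute: Int
    let recordedAt: Date
}

// Format a reading for display on screen.
func describe(_ reading: HeartRateReading) -> String {
    let formatter = DateFormatter()
    formatter.dateStyle = .short
    formatter.timeStyle = .short
    return "\(reading.beatsPerMinute) bpm at \(formatter.string(from: reading.recordedAt))"
}

let sample = HeartRateReading(beatsPerMinute: 72, recordedAt: Date())
print(describe(sample))   // e.g. "72 bpm at 6/2/14, 9:15 AM"
```

Even something this small has to exist once per platform, which is exactly the overhead the write-once web-app approach was supposed to eliminate.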
Obviously all these changes have affected health care as well. The Internet of Things — the linking together of smart devices — shows great potential for application to health care. Not only can we monitor our individual activities with devices such as FitBit, but we also have the potential to link together all those “machines that go ping” in the hospital. The hemodynamics monitors, the ventilators, the ECG machines, and so on could be all accessible by smart phone or tablet. Integration of health care technology and patient data is certainly feasible, but, like everything else in health care, innovation is bogged down by over-regulation and the vested interests of powerful players who certainly don’t welcome competition. I hope this situation eventually improves so that health care too can take advantage of the cutting edge of the technological revolution we are experiencing today.
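As a purely hypothetical illustration of what “accessible by smartphone or tablet” might look like, here is a short Swift sketch that fetches vital signs from an imagined bedside monitor publishing its readings as JSON on the ward network. The URL, the field names, and the very existence of such an interface are assumptions made for the example; no real monitor API is being described.

```swift
// Hypothetical sketch only: assumes a bedside monitor that publishes its
// latest vital signs as JSON at a known address on the hospital network.
// None of these names refer to a real device or API.
import Foundation

struct VitalSigns: Decodable {
    let heartRate: Int   // beats per minute
    let systolic: Int    // mm Hg
    let diastolic: Int   // mm Hg
    let spO2: Int        // percent oxygen saturation
}

// Fetch the monitor's current readings and hand them to the caller.
func fetchVitals(from url: URL, completion: @escaping (VitalSigns?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data,
              let vitals = try? JSONDecoder().decode(VitalSigns.self, from: data) else {
            completion(nil)
            return
        }
        completion(vitals)
    }.resume()
}

// Usage from within an app (the address is invented for this example):
if let url = URL(string: "http://monitor-bed7.example/vitals.json") {
    fetchVitals(from: url) { vitals in
        if let v = vitals {
            print("HR \(v.heartRate) bpm, BP \(v.systolic)/\(v.diastolic), SpO2 \(v.spO2)%")
        }
    }
}
```

In an app this would run in the background and update the display when the data arrives; the hard part, as noted above, is not the code but getting the devices, the regulators, and the vendors to expose such an interface in the first place.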
David Mann is a retired cardiac electrophysiologist and blogs at EP Studios.