Imagine, if you will, that all owners of data centers and agents representing buyers of computing cycles get together daily and buy and sell commodity computing units (we’ll call them containers) in an open exchange. Now imagine another group of buyers and sellers who are not just exchanging those containers, but buying and selling options on the containers, the right to buy or sell those containers at a future date. This exchange would be trading the 21st-century equivalent of the pork belly. Pork bellies were introduced as a commodity on the Chicago Mercantile Exchange in the early 1960s, in tradable units of 40,000 pounds.

In a strange way, the world of data center virtualization is inching tantalizingly closer to this kind of concept. Here the tradable unit would have to be a package of code and/or data ready to move. Fortunately, we have two virtualization technologies, Xen and VMware, that dominate the packaging market. All we need is chip consistency across manufacturers, or at least enough package portability for seamless shipping of containers across different hardware platforms. And with live migration capabilities, imagine shipping long-running, high-performance computing jobs mid-run. This is sort of like trading pork bellies in a slow cooker, ready to continue cooking at a place to be determined.
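
To make the unit of trade concrete, here is a minimal sketch of what a standardized container contract might look like. This is purely hypothetical Python: the class, its fields, and the "lot" sizing are my own invention, not any existing exchange or packaging standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ComputeContainerContract:
    """One hypothetical tradable unit -- the computing analog of a
    40,000-pound pork belly lot. All fields are illustrative only."""
    vcpus: int             # standardized CPU allocation per unit
    memory_gb: int         # standardized memory per unit
    hours: int             # how long the buyer may run the container
    image_format: str      # e.g., a Xen or VMware package format
    delivery_date: date    # when the capacity must be made available
    price_per_unit: float  # spot or agreed futures price, in dollars

# A futures-style lot: a "small" container, deliverable at the start of a quarter.
lot = ComputeContainerContract(
    vcpus=2, memory_gb=8, hours=720,
    image_format="xen-pv", delivery_date=date(2026, 1, 1),
    price_per_unit=42.50,
)
print(lot)
```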

We would also need container portability across multiple providers, and all providers would need a consistent cloud infrastructure enabling easy transport of the containers. Of course, this would require the requisite legal arrangements, contracts, terms, and conditions to be hammered out and made consistent. Perhaps whoever establishes the exchange can also set the legal and technical standards for transferring containers, if not supply some of the infrastructure to coordinate the shipments.

Sensitive data would need special security wrapping around the container, sort of like foil preventing x-rays from peering inside. It could also mean that the networks inside the provider’s data center would need a form of dynamic virtualization so that when the container arrives, it can’t gain access via any electronic connection to any other container in the container farm, and vice versa. I’m sure governments will want to ensure some of this information can’t be virtualized outside a country’s borders. Countries will be eager to prevent these containers from being dispersed, like oil droplets scattered across the ocean. In some cases, data and code will travel together in the container being shipped around. In other cases, it might be a container of code being shipped closer to data, or data being shipped closer to code, depending on which route has the cheaper fare.

The enterprise resource planning (ERP) vendors could benefit here. In most ERP systems, the data and code are very tightly coupled. Most ERP vendors have at least two tiers: an application tier and a data tier. The application tiers are often horizontally scalable and have redundant paths; the database typically is not redundant. A code container can be procured in these environments and added to the existing application tier (aka cloud bursting), provided all the network connections can be dynamically managed (network bursting). The database container can typically reside in the company’s own or outsourced data center, though with truly high-speed, virtual secure networks, physical location may not matter. The ERP vendors can find new ways of pricing their code via leasing models in a cloud-bursting environment as well as in a completely outsourced environment. Buy a container, rent a container, heck, maybe sublease a container. In a computing container exchange, the ERP vendor as well as the company could take advantage of many computing providers. In fact, all parties can both sellers and buyers be. Confusing, huh?
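
As a rough sketch of that cloud-bursting decision, assuming a hypothetical exchange and provider API (none of the names below refer to a real product), the logic might look something like this:

```python
# Hypothetical sketch of "cloud bursting" an ERP application tier.
# The exchange, network, and app-tier objects are invented for illustration.

BURST_THRESHOLD = 0.80  # burst when average app-tier utilization exceeds 80%

def should_burst(utilizations: list[float]) -> bool:
    """Decide whether the horizontally scalable application tier needs
    an extra code container from an outside provider."""
    return sum(utilizations) / len(utilizations) > BURST_THRESHOLD

def burst(exchange, app_tier, network):
    """Buy the cheapest spot container on the exchange, wire it into the
    application tier, and leave the database tier where it is."""
    offer = min(exchange.spot_offers(vcpus=4, memory_gb=16), key=lambda o: o.price)
    container = exchange.buy(offer)   # procure the code container
    network.extend_to(container)      # "network bursting"
    app_tier.add_node(container)      # scale out the application tier only

if should_burst([0.91, 0.88, 0.85]):
    print("Burst: lease one more application-tier container from the exchange")
```

The design point the sketch tries to capture is the one above: only the horizontally scalable application tier grows, while the database container stays put.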

Of course, there will be potential losers in a transition to a completely standardized, transparent computing exchange. Hardware vendors who want to lock in customers will not like the transition. Software vendors who also provide hardware, or whose code base is not ready for the architectural shift, will not like this transition either. The legal community would probably have to rethink the contracts that normally apply, making them much more dynamic (shorter terms, contracts for all sorts of different container sizes, etc.) and respectful of the technical standards that facilitate the exchange. Lots of salespeople may lose out, as prices will be managed via an exchange and the futures market would essentially provide price visibility into the future.

CIOs will most likely have to change their thinking. Rather than actually managing infrastructure, they will be managing the logistics of dynamic infrastructure. Because of the speed of light, which these days seems slower than it used to, I expect there will be enough problems for which local processing will still be useful, and for which the optimal blend of local and distributed processing will be both hard and advantageous for companies to calculate. But clearly, in this thought experiment, most data processing and information extraction activities can be moved outside the company’s data center.

The CIO will have to work harder at more elaborate cost optimization schemes that also consider business agility and the time value of future profits. While the CIO won’t have to worry about the container itself, I do expect a whole lot of engineering inside the container to remain quite valuable. After all, some of these containers will have top-secret stuff in them and proprietary forms of data directly tied to a company’s competitive advantage. Since this data and the extracted knowledge it represents are the new pork belly, CIOs will need to be experts in the cost of making and shipping information between container processing centers. They will also need to know quite a bit about knowledge diffusion (how knowledge gets absorbed by people, an organization, or markets), which by itself can be quite difficult to understand and manage. After all, the expense of containers must be justified by actual competitive advantage created by real human beings.
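
A back-of-the-envelope version of that cost question, ship the data to cheaper compute or run the code next to the data, might look like the toy calculation below. Every price and size here is invented purely for illustration.

```python
# Toy comparison of the two "fares": move the data to a remote container
# farm with cheap compute, or run the job locally next to the data.
# All numbers are made up for illustration only.

DATA_TB              = 50       # size of the dataset to be processed
TRANSFER_COST_PER_TB = 40.0     # network transfer cost, $/TB
REMOTE_COMPUTE_COST  = 1_500.0  # price of the job at the remote provider, $
LOCAL_COMPUTE_COST   = 4_000.0  # price of the same job run next to the data, $

ship_data_total = DATA_TB * TRANSFER_COST_PER_TB + REMOTE_COMPUTE_COST
ship_code_total = LOCAL_COMPUTE_COST  # the code itself is small; shipping it is ~free

cheaper = "ship the data" if ship_data_total < ship_code_total else "ship the code"
print(f"Ship data: ${ship_data_total:,.0f} | Ship code: ${ship_code_total:,.0f}")
print(f"Cheaper route: {cheaper}")
```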

But for all these difficulties, any one or combination of which could undermine this concept, there stands one fantastic new opportunity too good to go unnoticed. For all the financial wonks who burned both us and themselves with the last round of whiz-bang complex financial products, redemption is near. Now they will have the chance to sell options on these computer pork bellies and create a whole new class of complex financial products. Imagine a futures market that can help price expected computing demand, which would also be, albeit indirectly, a futures market on new kinds of computing environments. We are not only interested in the volume of containers transferred; we ought to be interested in the technology standards that describe the container and perhaps its innards, which include possible future technology shifts. Such a market would certainly capture some of this in the futures pricing. As these new data processing providers design new containers (new forms of data, algorithms, and standards), potential investors might want to bet directly on the future of these innovations. I am not an investment expert, but I can sense this might prove attractive to many cunning Wall Street types who might want to play around with some advanced math (chaos theory?) to try to predict the unpredictable. Move over, credit default swaps. There could be a new player in town. The sky is the limit! Do you see another bubble coming? Another round of complex government regulations?

Just as not all pork bellies are created equal, neither are computer architectures. The type of computing container would probably either be factored into the overall pricing scheme or perhaps spawn a separate exchange. In the far future, will we have one exchange for silicon-based containers, one for graphene-based containers, and one for optical computing containers? Or will future evolutions of computing architectures be compatible with one exchange standard? Will the exchange standards evolve to accommodate new forms of basic computing architectures? Curious minds want to know.

Since information is the lifeblood of all things military and industrial, this exchange and the connections between all players in it would emerge as a critical national asset and would of course need to be protected by the respective militaries across the globe. To be self-sufficient, each country would have to invest in building enough infrastructure on its own or with close allies to ensure it can’t be cut off. Physical protection of this knowledge manufacturing equipment could be its own subdiscipline, with new versions of companies such as Halliburton working globally to help companies and countries ensure these assets are protected. I recently visited Apple and noticed the young, large, and very serious security guards at the door. Apple’s secrets could be chump change compared to the kinds of information that would flow through these container processing centers. I can see the want ad on Craigslist: “Data security mercenaries wanted. Expert in Muay Thai, Jujutsu, weaponry, fractals, and biometrics.” A 21st-century Pinkertons?

As the world gets more computationally driven, it will take a lot of energy to power some of these facilities. The next generation of high-performance computing coming online in 10-plus years may have yearly energy budgets in the hundreds of millions of dollars. Remember, what has been driving virtualization forward so far is data center managers’ desire to reclaim unused CPU cycles so more data can be meaningfully processed. In the future, the war between data and algorithms will not be won by either side. We keep inventing new kinds of data at enormously growing sizes and rates, and we keep inventing enormous data processing capabilities of different sorts to extract information out of some of this data. That race will probably continue for our lifetimes (I give the slight edge to data; algorithms never seem able to catch up). The conclusion is that in the future, further virtualization will be driven by the energy costs to store and move data as well as to execute algorithms on that data. This alone could force partnerships between countries as the costs to build, power, and cool these processing centers escalate. Maybe these will be the most expensive kinds of manufacturing facilities on the planet. Maybe they will have their own nuclear reactors or equivalent power sources next door.

Only time will tell if these ruminations become real. I am not sure whether the prospect of data and algorithms encapsulated in a secure container, shipped seamlessly in part or in whole across the world for further processing, and openly traded in a commodities market and an options exchange should have me eagerly awaiting the future or terribly nervous about it.

Vince Kellen, Ph.D.

Vince Kellen, Ph.D. is a Senior Consultant with Cutter's Business Technology Strategies practice. Dr. Kellen's 25+ years of experience involves a rare combination of IT operations management, strategic consulting, and entrepreneurialism. He is currently CIO at the University of Kentucky, one of the top public research institutions and academic medical centers in the US.
