Understanding The Innovation Lifecycle

Information technology is a rapidly moving space, and understanding the innovation lifecycle is critical to making sound business decisions. Whether you are a creator or a consumer, knowing how your products, technologies, approaches, and methods line up with the evolving marketplace better positions you for successful outcomes. Clearly, not every innovation follows the cycle below. Some companies go bankrupt, some products become vaporware, and some technologies never seem to move past a certain stage (or hover in one stage indefinitely). However, the vast majority of new and disruptive changes in this space follow this lifecycle and complete their journey through it.

 

[Figure: the innovation lifecycle]

 

What Is Innovation?

Innovation is a much overused and often cringeworthy buzzword. However, it does have a distinct meaning. While that meaning may vary slightly from person to person, in the context of this article we will use the definition constructed below. To do this we will start with the dictionary definition and then add three important caveats.

According to the Oxford English Dictionary, an innovation is:

“a new method, idea, product, etc.” 

That is certainly short and imprecise, and it falls well short of how we use “innovation” in common parlance. Innovation is not just a new idea, method, or product. The same model car from last year with a different paint job is not innovation, because it doesn’t represent a wholly new idea. Neither is a phone which only works when standing on the surface of the Sun, because it has no realistic application. Nor would a commuter train made entirely of slowly melting nacho cheese be innovative, because it doesn’t solve any real problem.

To me, innovation as it applies to business and technology requires at least three distinct clarifications and additions to this definition:

  • It must be something wholly new, even if assembled from pre-existing ideas, methods, and technologies.
  • It must be something which has application. An idea alone, or a new method which cannot be realistically applied, is not an innovation.
  • Innovation must solve a problem. In business this means it must impact “the bottom line.” To do that, it needs to affect one or more of the following: speed to market (agility), reduction of effort (rigor), ability to generate capital (investment), ability to reduce cost (savings).

An innovation is also not “a product” alone. Certainly, we can categorize many new products as innovative, but generally when we use the term “innovation” we should be referring to an entire category of products as opposed to a single product. For instance, one could discuss Docker and how it fits in the lifecycle above. However, that would miss the mark; one should instead reference “containers” or “containerization,” which would include Docker, rkt, other Open Container Initiative (OCI) technologies, and any closed-source or vendor-protected products in that space which fulfill the same role. This is important, because products themselves have distinct lifecycles, and those operate independently of the innovation lifecycle. One product may fail, and its competitor may succeed. So we will remove “product” from the dictionary definition and replace it with “technology” in our accepted definition.

All these points aggregate to the following working definition:

“An innovation is a wholly new idea, technology, or method which can be applied to solve a problem.”  

To be even more specific in the context of business, we could say:

“An innovation is a wholly new idea, technology, or method which can be applied in my company to increase agility, reduce effort, generate capital, or cut costs.”

Components Of The Lifecycle

The first stage in the lifecycle is called inception. It represents the birth of a new idea and lasts from the initial spark of a new approach until the point where this technology or method is generally adopted. This is a rocky time in the innovation lifecycle, and it exists as a proving ground for new ideas. Technology in this state can be consumable, but it often isn’t easily usable or accessible to laymen. With iterative development being the most common model in modern programming, this first stage may represent only the barest foundations of a final “production ready” technology. Features are missing. Things are broken. It is difficult to support without specially trained IT warlocks who don’t bathe and dream in code. Often at this stage there are few to no competing products or suites of products, because no one has a gauge on whether this innovation will be successful or even find a market. Many new, and occasionally amazing, innovations die on the vine in this phase because they cannot make the leap to being readily consumable.

The second stage is adoption. This phase lasts from early adoption of the technology by brave, fast-moving companies who are able to take the risks, up to Enterprise recognition and large-scale deployments. Technologies at the beginning of this phase are past the “beta releases” seen in the first stage; however, they are still testing the waters, adapting to general consumption, and often making large course corrections to become more easily consumable. This is followed by relatively stable development, which is often marked by numerous competing products that seek to deliver the same innovation in a different package. Large outside vendors often begin imitating this technology, or adapting it into their own ecosystem of products. The middle of this stage is a great place to become an adopter. You have a market full of choices, and many big names lending credence to this new idea. Finally, as this phase comes to a close, it moves into Enterprise adoption. Enterprises, which are often seen as late adopters, begin to test the waters with this new innovation, though many choose to keep these technologies in development and non-production areas until later in the innovation lifecycle. Here at the very end we begin to see large-scale deployments, and a level of soundness and trust which tells the marketplace that this innovation has made it.

The third stage is disruption. Every new innovation which comes along fully believes it is the disruptor, and that it can itself never be disrupted. This is a critical mistake. In fact, clever innovators should begin to disrupt themselves as soon as they see their core technology being widely used. This is an eventuality every innovation faces, as technology is always progressing. This phase lasts from the first recognition of a disruptor until the disruptor moves into its own adoption stage of the lifecycle. However, this isn’t yet a period of decline for the innovation being disrupted. The core technology will adapt and learn from its disruptor. Often it will grow stronger, because it has to change and see the world in new ways that it didn’t anticipate. However, there is no adapting out of being disrupted. Once it happens, we know that the disruptor will move along its own innovation lifecycle and our technology will be supplanted.

The fourth stage is recession. As the disruptor (or disruptors) of the previous stage begin to move into adoption, companies utilizing this technology will start to see it as “last gen” and begin to focus efforts on the coming “next gen.” Vendors will refocus efforts, fast-moving companies will abandon this “ancient tech”, and Enterprises will start hiring folks who have the skills for adapting their processes to embrace the new kid on the block. At this point, there is always some innovation panic. This is the mid-life crisis of the innovation cycle. Quite often products using this technology will resort to bolt-ons to prolong life. Large vendors begin to utilize “Embrace, Extend, Assimilate” thinking to keep their variation of this innovation in play with a mix of the new tech. VMware is a master of this. They are like some wicked Iron Chef who can cook a beautiful steak, throw in a dash of blobfish, and still get a standing ovation from their fans. As their core business of traditional on-premises virtualization suffers an overwhelming onslaught from public and private IaaS, PaaS, and CaaS adoption, they start to offer traditional virtualization that also supports Docker containers (VIC), and things like VMware on AWS (so vSphere and NSX masters untrained in the ways of the cloud are not left out in the cold). This is impressive; however, nothing anyone does at this stage truly prevents the inevitable. At some point, our brave new idea is going to become just a footnote in the history books. All IT is written in sand. The tide will come in, and what was will disappear.

The final stage is obsolescence. Obsolescence is inevitable, but it can take a very, very long time. As a COBOL program on a mainframe once told me:

      $ SET SOURCEFORMAT"FREE"
IDENTIFICATION DIVISION.
PROGRAM-ID.  NeverDie.
PROCEDURE DIVISION.

DisplayPrompt.
   DISPLAY "i WiLL NeVeR FRaKKiNG Di3 d00d".
   STOP RUN.

 

This period can run for years or decades. With Enterprise adoption, if wide enough, it may seem to go on forever. Once something gets into the Enterprise, it rarely leaves entirely. It simply gets its desk moved to the basement where it can play with its stapler in peace. The death of a technology is a sad thing, but often the ghost lives on. It may someday be celebrated in small circles by bearded wizards writing LISP and demoing Amiga games written in Lattice C. It may even be run on a futuristic emulator in a computer history museum, so our descendants can see just how archaic things were way back in 2018. The innovation will always hit its EOL, but our memories of it will live on.

There the innovation lifecycle completes its journey. Some may object to the arrow at the end of “obsolescence” pointing back to “inception”, but I believe every death gives way to a new birth. The learnings of this technology, including those missed or misunderstood by the disruptors, will give new light to some future development. Even today, as we walk deeper into a world full of artificial intelligence we are rediscovering the innovations, brilliance, and wisdom of early computer scientists like Joseph Weizenbaum, John McCarthy, and Marvin Minsky. As Newton said, “If I have seen further, it is by standing on the shoulders of Giants.” Let us hope our dead innovations become the giant corpses the future needs to stand on.


How OpenStack Is Changing The Enterprise

It may be cliché these days, but I should point out that obviously nothing I express here represents the views of my current or former employers. I’m extremely grateful to them, and proud that I had, and continue to have, the opportunity to work with so many great minds! However, their views are completely their own and not in any way represented here.

Over the past several years I’ve witnessed first-hand the evolution of the current and coming Enterprise cloud. The cloud’s very concept challenges the traditional notions of how the Enterprise operates today, and has required careful re-examination of how these businesses think and work in the modern era. My zealous interest in this area has led to a great number of ongoing discussions with peers, colleagues, and others working in this space. Thankfully, it has also led to the common observations and trending solutions described below.

What Enterprises REALLY Want

The Enterprise is a demanding entity which, thankfully, is driven solely by business logic. The giant companies of the world are not interested in technology because it is new, cool, or trendy. They are interested in how that technology will impact the bottom line. This is a simple, but key observation. Arguments for cloud technology have to be measured against, and impact, concrete business results.

So what cloud benefits does the Enterprise truly care about?

  • Agility
    • Faster time to market
    • Reduction In Effort Hours
  • Insight And Control
    • Ability to more easily govern a huge landscape
  • Lowering Costs
    • Data center optimization
    • Built-in disaster recovery
  • Rapid Innovation
    • Test ideas faster
    • Ability to try new things and fail with less risk
  • Ultimately, Google-esque IT infrastructure which never goes down
    • Self-healing
    • Auto-scaling
    • Mass Orchestration

Dealing With Enterprise Governance

Why Govern IT?

Enterprise governance is a necessary evil, and often seen as the biggest enemy of the cloud. However, without it, large companies could in no way consider themselves compliant across their infrastructure. Today, these infrastructures are incredibly large, and a standard methodology for maintaining them needs to be in place. ITIL has certainly led the way, but it currently poses some real challenges for the cloud due to the Enterprise interpretation of so many of its tasks (often as manual processes). Fortunately, more and more automation is being accepted, provided that the outcomes are the same and there is full transparency in the process.

A Standard Operating Environment

Maintaining an SOE is absolutely essential to Enterprise computing. When managing a large field of servers (sometimes in the 100,000s), businesses cannot tolerate lots of snowflakes (one-off servers). To provide an ideal landscape, one wants to make certain every server is secure, patched, audited, compliant, and so on. This process involves creating a set of SOEs by combining:

  • A core OS at a specific patch-level (often Linux or Windows)
  • A set of standard required products (Security, etc.)
  • A standard set of products for a server role (Web, Database, etc.)
  • All the necessary configurations for above

Software like Red Hat Satellite 6.x makes this a central tenet of its philosophy, and for good reason. Overcoming snowflakes and keeping servers compliant is critical to successfully managing modern IT. OpenStack opens up new doors for solving this problem. With a new delivery model, and a wide-open cloud landscape, we are free to revisit how we build, deploy, and manage servers. Discarding traditional manual processes and relying on RESTful orchestration, image catalogs, and cloud services, we can carve out new enforceable standards with ease. This leads to an interesting paradigm shift trending in cloud-enabled IT today:

Service Delivery Transformation

Many service delivery organizations are adopting new models for the cloud. Common models involve delivering standardized “ready-to-go” application instances available through a catalog, a parallel to public cloud delivery models. This is quite different from the traditional service delivery work of setting up and micro-managing endless farms of servers. Thankfully, removing that burden opens up new avenues for innovation and broader product support. Using emerging technologies like Docker and Puppet, the delivery process becomes far more streamlined and template-based. Further, adoption of data grid technologies and an Enterprise service bus makes refactoring traditional applications to modern horizontal/elastic models much easier.
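As a rough sketch of that catalog model (the image name, package, and port below are hypothetical, not from any particular product), a “ready-to-go” application instance can be nothing more than a small Dockerfile built once and then stamped out identically on demand:

cat > Dockerfile <<'EOF'
# Hypothetical catalog item: a minimal, pre-configured web server
FROM centos:7
RUN yum -y install httpd && yum clean all
EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
EOF
docker build -t catalog/webapp:1.0 .            # build the template once
docker run -d -p 8080:80 catalog/webapp:1.0     # deliver an identical instance on demand

Every instance delivered from that image is identical, which is exactly the property a catalog-driven delivery model needs.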

Template and Automate

Manually making classic “golden images” to place in Glance would certainly suffice; however, that works against the cloud concept of being inherently agile. We also need to concern ourselves with ease of deployment and absolute consistency. Finally, we need to maintain a careful, verifiable record of these transactions. Therefore, creating and placing these templates in a version-controlled repository like git makes a lot of sense. In the cloud era, these application (or environment) architecture definitions will become the de facto method for powering automation. They become the “single source of truth” from which to blueprint all of IT. Today these are often documented in common formats such as cloud-init files, Dockerfiles, Puppet manifests, and Heat templates. New standards like TOSCA (which is intersecting closely with HOT) are starting to provide an agreed-upon way to define even very complex architectures in a simple YAML file. Not only is the Enterprise becoming entirely virtual, but even the architectures for critical applications and environments are essentially becoming code.
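To make this concrete, here is a hedged sketch (the repo layout, image, and network names are made up) of capturing a trivial Heat (HOT) blueprint as code and committing it to that single source of truth:

mkdir infra-blueprints && cd infra-blueprints
cat > web-server.yaml <<'EOF'
heat_template_version: 2013-05-23
description: Minimal single-server blueprint kept under version control
resources:
  web01:
    type: OS::Nova::Server
    properties:
      image: rhel7-base      # hypothetical Glance image
      flavor: m1.small
      networks:
        - network: private   # hypothetical Neutron network
EOF
git init && git add web-server.yaml && git commit -m "Add web server blueprint"

From here, changes to the environment are made by changing the template and committing, not by touching servers.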

With templates in place, automation becomes easy to accomplish. With all the infrastructure and applications defined in a repository, it is a simple task to invoke tools like diskimage-builder/Oz, Heat and cloud-init, Puppet, and so on to perform the orchestration of your defined infrastructures. Provided that it is all hidden behind a nice service catalog (like OpenStack Murano), you are able to create a simple end-user experience which is wired to an Enterprise-compliant, revisioned, controlled, and transparent automation process.
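Continuing the sketch above, once the blueprint lives in the repository, kicking off the orchestration is only a couple of commands against the Heat API (the repo URL and stack name are hypothetical):

git clone http://git.example.com/infra-blueprints.git
heat stack-create -f infra-blueprints/web-server.yaml web01-stack
heat stack-list     # watch for CREATE_COMPLETE

A service catalog front-end would simply run the same calls on the user’s behalf.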

Pulling It All Together

Moving infrastructure to CI/CD is part of the evolution to the next generation of cloud. Continuous integration and continuous delivery are excellent concepts for developers; however, until recently infrastructure itself has not been defined in code. Through the cloud and this paradigm shift, the industry has encountered a brand new way to automate and deliver environments. Whole static DEV/QA environments can be replaced through integration of DevOps processes with Jenkins and OpenStack. This can enable automated provisioning and testing on an isolated, exact replica of production environments. Further, when testing is complete, this infrastructure can be returned to the pool. Successful applications can be manually promoted, or automatically integrated into production with canary or blue-green deployment patterns. Changes to upstream templates could even be set to trigger automatic (no-downtime) upgrading of infrastructure company-wide. The possibilities are mind-boggling!
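As a hedged illustration, a Jenkins “Execute shell” build step wired to OpenStack might look something like this (the credentials file, template, and test script are all hypothetical):

source /var/lib/jenkins/keystonerc_ci                      # CI tenant credentials
STACK="ci-${BUILD_NUMBER}"                                 # BUILD_NUMBER is supplied by Jenkins
heat stack-create -f templates/prod-replica.yaml "$STACK"  # stand up an exact replica of production
# ...poll 'heat stack-show' until CREATE_COMPLETE, then exercise the environment...
./run-integration-tests.sh "$STACK"
heat stack-delete "$STACK"                                 # return the infrastructure to the pool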

Notes On Event Based Management

When dealing with Enterprise inventory requirements, like integration with CMDBs or auto-ticketing systems, make ample use of the OpenStack message queue (AMQP). Many popular products, including CloudForms/ManageIQ, utilize this for the record keeping necessary to support a constantly changing OpenStack environment. Simple integration with OpenStack event notifications makes writing a custom implementation for most back-ends trivial.
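For example, making sure Nova is actually emitting notifications onto the bus is usually just a few configuration options (a sketch for an RDO-style install; option names are as of the Icehouse timeframe, and openstack-config ships with openstack-utils):

openstack-config --set /etc/nova/nova.conf DEFAULT notification_driver messaging
openstack-config --set /etc/nova/nova.conf DEFAULT notification_topics notifications
openstack-config --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state
systemctl restart openstack-nova-api openstack-nova-compute

A small consumer bound to the notifications topic can then feed instance create/delete/resize events straight into the CMDB or ticketing system.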

The Future: Dawn Of The Immutable World

We are just at the cloud’s opening act of moving the Enterprise away from worrying about servers, and towards caring about workloads. As the idea starts to set in, the obvious implication of a world of transient servers becomes apparent. If these servers are indeed transient (just template-based cogs in a machine), why should we ever access them directly? Wouldn’t we most desire these cogs to be unchanging and untouched? Ideally, they would only be modified through changes to a single Enterprise “source of truth” (git). Modern application-based cloud servers and new container technologies are providing a great path to the clever realization that we only care about “what goes in” and “what comes out.” IMHO, the future Enterprise will eventually want everything “inside the box” to be completely immutable, governed, and transparent. No access directly to servers, and certainly no changes outside of git.

Let me know what you think, and if you have seen other trends (or flaws in the current ones), please point them out in the comments section below!



Running Windows 7 guests on OpenStack Icehouse

VDI is a great way to enable end-users to take their corporate desktop with them on any device, anywhere in the world. Implemented correctly, it is also a great money saver for enterprises. However, to make this real, you will most certainly find yourself dealing with Windows 7 guests and a healthy dose of cloud automation.

Today OpenStack is picking up pace in the VDI sphere. Companies like Virtual Bridges and Leostream are dotting the OpenStack ecosystem, providing VDI brokering platforms. Some companies have also utilized in-house talent to write cloud automation for the VDI basics. Today we won’t get too deep into the roll out of VDI on OpenStack. Instead, we will focus on the first problem: getting a Windows 7 desktop onto the cloud to begin with.

There are some great tools like Oz which are trying to simplify the process of getting every OS into the cloud. However, there are still some bits being worked on in the Windows space there. In light of that, the road to getting a Windows 7 cloud image created and installed is a manual and somewhat tricky chore. To alleviate the pain, I’m going to walk step-by-step through the process I use to create Windows 7 guests.

There are a few things you will need:

  • A Windows 7 installation ISO
  • A Windows 7 product key
  • A Linux box running KVM
  • The KVM Windows Drivers ISO

Once you have those together, it’s time to start the process!

Step 1. Install Windows 7 in KVM

Fire up virt-manager on your Linux server, and you should be greeted with the following friendly GUI:

[Screenshot: the virt-manager main window]

It’s not quite VirtualBox, but it works! :) Click the “Create new virtual machine” button, give the new instance a name and click forward. On the next screen, select your Windows 7 ISO and set the OS properties:

[Screenshot: selecting the Windows 7 ISO and OS properties]

Click forward and give yourself 2 GB of RAM and 1 CPU, per the minimum system requirements. On the next screen select 20 GB of space, and uncheck “Allocate entire disk now”:

[Screenshot: storage settings with “Allocate entire disk now” unchecked]

Click forward and review your setup. Be sure to check the customize button before hitting finish:

[Screenshot: the review screen with customization enabled]

You should now be at a screen where you can be a little more specific in your setup. Switch the network and disk to use virtio as shown:

[Screenshots: switching the disk and network devices to virtio]

Now we need to add a CD-ROM for the KVM Windows drivers. To do this click “Add Hardware”, select Storage, and add a CD-ROM device backed by the virtio ISO:

[Screenshot: adding a CD-ROM device with the virtio driver ISO]

Finally, we are ready to click “Begin Installation”! Go through the usual screens, and you will eventually get to here:

[Screenshot: the Windows installer’s drive selection screen, with no drives listed]

Uh.. where are the drives! No worries, this is what we brought the virtio drivers along for. Click “Load drivers” and browse to E:\WIN7\AMD64:

[Screenshot: browsing to the virtio storage drivers]

Click “OK” and select the “Red Hat VirtIO SCSI controller”. Your 20 GB partition should now appear. Click next, and go grab some coffee while Windows does its thing.

When it finally prompts you for a user name, enter “cloud-user”. Set a password and enter your product key. Then set the time, etc. At some point you will get a desktop and find you are without Internet connectivity. Time to install more drivers! Open the windows device manager and you should see something like this:

[Screenshot: Device Manager showing devices with missing drivers]

Right click the ethernet controller and navigate to the drivers in E:\WIN7\AMD64\. It should auto-detect your device after hitting ok.

[Screenshot: the Windows driver publisher trust prompt]

Always Trust Software From “Red Hat, Inc.”!

Repeat this process for the other two broken devices. Finally, verify the system can reach the Internet. If everything looks okay, then shut down the guest OS and open the info panel:

[Screenshot: the virtual machine’s info panel]

Remove both cdroms, and restart the Windows guest.

Step 2. Install Cloudbase-Init

When the instance comes back up, open a browser in the guest, navigate to http://www.cloudbase.it/cloud-init-for-windows-instances/, grab the latest Cloudbase-Init for Windows, and run the installer:

[Screenshot: the Cloudbase-Init installer]

For now, accept the defaults and continue the install. When everything finishes, don’t let the installer run sysprep. Also, before you shut down, edit the configuration file under C:\Program Files (x86)\Cloudbase Solutions\Cloudbase-Init\conf and make it look something like this:

[DEFAULT]
username=Admin
groups=Administrators
inject_user_password=true
plugins=cloudbaseinit.plugins.windows.sethostname.SetHostNamePlugin,cloudbaseinit.plugins.windows.createuser.CreateUserPlugin,cloudbaseinit.plugins.windows.networkconfig.NetworkConfigPlugin,cloudbaseinit.plugins.windows.sshpublickeys.SetUserSSHPublicKeysPlugin,cloudbaseinit.plugins.windows.extendvolumes.ExtendVolumesPlugin,cloudbaseinit.plugins.windows.userdata.UserDataPlugin
network_adapter=
config_drive_raw_hhd=true
config_drive_cdrom=true
bsdtarpath=C:\Program Files (x86)\Cloudbase Solutions\Cloudbase-Init\bin\bsdtar.exe
verbose=true
logdir=C:\Program Files (x86)\Cloudbase Solutions\Cloudbase-Init\log\
logfile=cloudbase-init.log

Now disable the Windows firewall:

[Screenshot: disabling the Windows Firewall]

All the connections to this server will be controlled by the security groups in OpenStack. Also, we should allow RDP access:

[Screenshot: allowing Remote Desktop connections]

Now we can shut down by manually running sysprep:

C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

Step 3. Upload Image To OpenStack

Now for the easy part! Let’s convert the image to a qcow2, and push it into glance:

# qemu-img convert -c -f raw -O qcow2 /var/lib/libvirt/images/win7.img ./win7.qcow2
# glance image-create --name="Windows 7 (x86_64)" --is-public=True --container-format=bare --disk-format=qcow2 --file=./win7.qcow2

When the upload completes, log into Horizon and verify the image is available:

[Screenshot: the new image listed in Horizon]

Then try creating a new instance — and don’t forget to set the Admin password:

[Screenshot: launching a new instance from the image]

It will take a bit to spin up due to the size (around 4 GB). When the task completes, head over to the instance’s console and verify you have Windows 7 running [Note: you may need to update the product key in the console on the first boot]:

[Screenshot: Windows 7 running in the instance console]

Now you can provision a static IP and edit your OpenStack security group to add port 3389 (RDP). Then sit back and test connecting to your instance from something fun like an iPad :)
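If you prefer the command line for that last step, one possible way to do it with the Icehouse-era nova client looks like this (the pool, group, instance name, and address are all made up):

nova secgroup-add-rule default tcp 3389 3389 0.0.0.0/0   # allow RDP into the "default" group
nova floating-ip-create ext-net                          # allocate an address from the "ext-net" pool
nova floating-ip-associate win7-desktop 203.0.113.25     # attach the allocated address to the instance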

[Photo: an RDP session to the Windows 7 instance from an iPad]

w00t!

Now you have a fully functional Windows 7 OpenStack image! With this you can start down the road to a slick OpenStack VDI solution. The first steps on that path are using this image to make a few customized snapshots for the various user groups in your company. These could include system wide changes particular to each division, like customized software or settings. With a little automation magic, you can take these base images, along with persistent volumes tied to each user, and create a nifty “stateless” VDI environment:
[Diagram: OpenStack VDI provisioning flow]
In the above example, the user requests a VDI instance. A cloud automation tool communicates with OpenStack to provision a new Windows 7 instance and attach the user’s persistent storage. The user then accesses the desktop through RDP, VNC, or SPICE. When they are finished, they log off and the instance is destroyed. The user’s data, living in a Cinder volume, will be reattached on the next session to a fresh new image. The user gets a brand new instance, and a known “perfect state”, every time they log in. This could be bad news for PC support :)

The BYOD movement should not be underestimated either. Employees favor it, it cuts IT costs, and it arguably leads to increased productivity. With cloud VDI, you can answer one of the most important risks in BYOD: maintaining control. No more lost/stolen devices, user-corrupted systems, malware, or viruses. Just transient desktops and data. Anytime, anywhere, any device.
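A hedged sketch of the calls such an automation tool might drive under the hood (the image, flavor, volume, and instance names are all hypothetical):

VOLUME_ID=$(cinder list --display-name jdoe-home | awk '/jdoe-home/ {print $2}')
nova boot --image win7-finance --flavor m1.medium vdi-jdoe   # fresh desktop from the division's snapshot
nova volume-attach vdi-jdoe "$VOLUME_ID"                     # reattach the user's persistent data
# ...the user works over RDP/VNC/SPICE, then logs off...
nova volume-detach vdi-jdoe "$VOLUME_ID"
nova delete vdi-jdoe                                         # destroy the transient desktop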


OpenStack Icehouse Feature Review

I’ve been playing with devstack over the past few months, and I’ve been really impressed with the progress on Icehouse leading up to its release last week. There are some key new features, and updates, which I will touch on below:

Compute (Nova)

  • The improved upgrade support is great, and will allow upgrades of the controller nodes first, and rolling updates of compute nodes after (no downtime required!)
  • The KVM / libvirt driver now supports reading kernel arguments from Glance metadata.
  • KVM / libvirt also got some security boosts. You can now attach a paravirtual RNG (random number generator) for improved encryption security. This is also enabled through Glance metadata with the hw_rng property.
  • KVM /libvirt video driver support. This allows specification of different drivers, video memory, and video heads. Again, this is specified through Glance metadata (hw_video_model, hw_video_vram, and hw_video_head)
  • Improved scheduler performance
  • Scheduling now supports server groups for affinity and anti-affinity (see the sketch after this list).
  • Graceful shutdown of compute services by disabling processing of new requests when a service shutdown is requested but allowing requests already in process to complete before terminating.
  • File injection is now disabled by default! Use ConfigDrive and metadata server facilities to modify guests at launch.
  • Docker driver removed from the Icehouse release. :-( The driver still exists and is being actively worked on, however it now has its own repo outside Nova
  • Important note: Nova now requires an event from Neutron before launching new guests. The notifications must be enabled in Neutron for this to work. If you find guests failing to launch after a long wait and an error indicating “virtual interface” issues, give the following a shot to disable this check in Nova:

    vi /etc/nova/nova.conf
    vif_plugging_is_fatal=False
    vif_plugging_timeout=0
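Regarding the server group scheduling mentioned in the list above, here is a quick hedged example of using it from the CLI (group, image, flavor, and server names are made up; syntax per the Icehouse-era client):

nova server-group-create web-anti anti-affinity                            # create a group with an anti-affinity policy
GROUP_ID=$(nova server-group-list | awk '/web-anti/ {print $2}')
nova boot --image rhel7 --flavor m1.small --hint group="$GROUP_ID" web01
nova boot --image rhel7 --flavor m1.small --hint group="$GROUP_ID" web02   # scheduled onto a different host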

Object Storage (Swift)

  • The new account level ACLs in Swift allow for more fine grained control of object access.
  • Swift will now automatically retry on read failures. This makes drive failures invisible to end-users during a request.

Image Service (Glance)

Nothing has been reported in the official changes, but there has been some activity on github. Much of the work seems to be stability and cleanup related.

OpenStack Dashboard (Horizon)

  • Live Migration Support
  • Disk config option support
  • Support for easily setting flavor extra specs
  • Support explicit creation of pseudo directories in Swift
  • Administrators can now view daily usage reports per project across services

Identity Service (Keystone)

  • There is now separation between the authentication and authorization backends. This allows holding identity information in a source like LDAP, and using authorization data from a separate source like a database table.
  • The LDAP driver updates added support for group based role assignments.

Network Service (Neutron)

  • New OpenDaylight backend.
  • Most work on Icehouse’s Neutron went towards improved stability and testing.

OpenStack Orchestration (Heat)

  • HOT template format is now the recommended format for authoring Heat templates.
  • The OS::Heat::AutoScalingGroup and OS::Heat::ScalingPolicy now allow the autoscaling of any arbitrary collection of resources.
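For example, a hedged fragment of a HOT template using these resources might look like the following (image and flavor are placeholders, and wiring the policy to a Ceilometer alarm or webhook is a separate step):

cat > autoscale-example.yaml <<'EOF'
heat_template_version: 2013-05-23
resources:
  web_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: rhel7-base   # placeholder image
          flavor: m1.small
  scale_up:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      scaling_adjustment: 1
      auto_scaling_group_id: { get_resource: web_group }
EOF
heat stack-create -f autoscale-example.yaml autoscale-demo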

Database as a Service (Trove)

  • Experimental support for MongoDB, Redis, Cassandra, and Couchbase

Overall, there are a ton of features and changes beyond what I documented here. Check out the official release notes for more info.


Installing OpenStack Icehouse On RHEL 7

The public “release candidate” of RHEL 7 (Red Hat Enterprise Linux) came out yesterday, and I decided to take a shot at installing the latest OpenStack RDO on it. The install was smooth, and surprisingly easy. To try it out yourself, follow the steps below.

Install RHEL 7

Grab the RHEL 7 Release Candidate from here [ Note: You must have a current Red Hat Enterprise Linux subscription. ] You can also download an OpenStack / KVM ready qcow2 image to quickly get up and running. Install RHEL 7 on your host server, or in a VM. Make sure to register with:

# subscription-manager register --auto-attach

Update the system:

# yum -y update

Reboot if necessary (kernel update, etc.)

If you are running an instance using the rhel7 qcow2, you should log in and edit root’s ssh authorized_keys. This will allow ssh to root, and generally make things easier when we run packstack:

cloud-user$ sudo -i
# vi /root/.ssh/authorized_keys (remove everything on the first line before "ssh-rsa")

Install EPEL 7

Add the EPEL 7 beta repo on each host with:

# yum -y install http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-0.1.noarch.rpm

Install Icehouse RDO

For each host install the Icehouse RDO repo:

# yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm

On the controller node run:
# yum install openstack-packstack

Create ssh keys (optional)

If you have multiple hosts you should create root ssh keys, and add them to the authorized_keys for each host. Log into the host where you will be running packstack (the cloud controller node), and execute the following as root:

# ssh-keygen (accept defaults)
# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

For each hostN:

# perl -e '$pub=`cat /root/.ssh/id_rsa.pub`; chomp $pub; print "ssh root\@hostN echo \"$pub >>/root/.ssh/authorized_keys\"\n"' | sh

When you are finished test logging into one of the other servers as root. You shouldn’t be prompted for a password.

Run packstack

The best approach to using packstack is to run:

# packstack --gen-answer-file=config.txt

Edit config.txt for your environment, then execute:

packstack --answer-file=config.txt

If you are in a hurry a packstack --allinone will get you up and running all on one node.
Likewise, a packstack --install-hosts=host1,host2 will install on two hosts, making host1 the cloud controller, and host2 a compute node.

packstack will take a while to run, but on a clean install of RHEL 7 you should soon see:

**** Installation completed successfully ******

Congratulations! It’s a cloud!

Check Out Your Cloud!

Source your keystonerc_admin file and verify services are up:

# source /root/keystonerc_admin
# openstack-status

You should see a lot of “active” components, and some additional info. If you have no errors, then it is time to connect to the dashboard!

First, allow connections to the OpenStack dashboard (horizon):

# vi /etc/openstack-dashboard/local_settings

(Add the hosts you like to ALLOWED_HOSTS. Be sure to add the floating IP if you are running this on top of another OpenStack install!)
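For example, something like the following would do it (a hedged sketch: the address and hostname are placeholders, and it assumes ALLOWED_HOSTS sits on a single line in the file):

# sed -i "s/^ALLOWED_HOSTS.*/ALLOWED_HOSTS = ['192.0.2.10', 'horizon.example.com']/" /etc/openstack-dashboard/local_settings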

# systemctl restart httpd

At this point you should be able to log into the dashboard. Go to http://the-address-of-the-controller-node/dashboard/ and you should see the below:

[Screenshot: the OpenStack dashboard login page]

Cat the keystonerc_admin created by packstack, and log in as the admin user with the supplied password.


Fedora: Encrypting Your Home Directory

There are a number of steps for encrypting your home directory in Fedora, and enabling system applications like GDM to decrypt your files on login. I’ll walk through the process of how I got this set up on my own machine.

First, make sure you have ecryptfs and related packages installed:

# yum install keyutils ecryptfs-utils pam_mount

Then you can either go the easy way:

    # authconfig --enableecryptfs --updateall
    # usermod -aG ecryptfs USER
    # ecryptfs-migrate-home -u USER
    # su - USER
    $ ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase (write this down for safe keeping)
    $ ecryptfs-insert-wrapped-passphrase-into-keyring ~/.ecryptfs/wrapped-passphrase

[All done! Now you can log in via GDM or the console (“su – user” will not work without running ecryptfs-mount-private)]

OR the hard way, which I followed. There are some benefits of going this route. It is a much more configurable install which allows you to select the cipher and key strength:

First enable ecryptfs:

# authconfig --enableecryptfs --updateall

Move your home directory out of the way, and make a new one:

# mv /home/user /home/user.old
# mkdir -m 700 /home/user
# chown user:user /home/user
# usermod -d /home/user.old user

Make a nice random-ish passphrase for your encryption:

# < /dev/urandom tr -cd \[:graph:\] | fold -w 64 | head -n 1 > /root/ecryptfs-passphrase

Mount the new /home/user with ecryptfs:

# mount -t ecryptfs /home/user /home/user
(choose passphrase, any cipher, any strength, plain text pass through, and encrypt file names)
# mount | grep ecryptfs > /root/ecryptfs_mount_options

Add to the /etc/fstab (with the mount options from ecryptfs_mount_options above) like so:

/home/syncomm /home/syncomm ecryptfs rw,user,noauto,exec,relatime,ecryptfs_fnek_sig=113c5eeef8a05729,ecryptfs_sig=113c5e8ef7a05729,ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough,ecryptfs_unlink_sigs 0 0

Wrap up the passphrase with the user’s login password:

# ecryptfs-wrap-passphrase /root/.ecryptfs/wrapped-passphrase
(enter the generated passphrase, then the user’s login password as the wrapping passphrase)

Copy over files to the new home dir:

# su - user
$ rsync -aP /home/user.old/ /home/user/

Unmount /home/user and set up the initial files for ecryptfs and pam_mount:

# umount /home/user
# usermod -d /home/user user
# mkdir /home/user/.ecryptfs
# cp /root/.ecryptfs/sig-cache.txt /home/user/.ecryptfs
# cp /root/.ecryptfs/wrapped-passphrase /home/user/.ecryptfs
# touch /home/user/.ecryptfs/auto-mount
# touch /home/user/.ecryptfs/auto-umount
# chown -R user:user /home/user/.ecryptfs
# su - user -c "ecryptfs-insert-wrapped-passphrase-into-keyring /home/user/.ecryptfs/wrapped-passphrase"

Now it gets interesting! Edit /etc/pam.d/postlogin, adding the pam_ecryptfs and pam_mount lines, so that it looks like this:

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        optional      pam_ecryptfs.so unwrap
auth        optional      pam_permit.so
auth        optional      pam_mount.so
password    optional      pam_ecryptfs.so unwrap
session     optional      pam_ecryptfs.so unwrap
session     [success=1 default=ignore] pam_succeed_if.so service !~ gdm* service !~ su* quiet
session     [default=1]   pam_lastlog.so nowtmp silent
session     optional      pam_lastlog.so silent noupdate showfailed
session     optional      pam_mount.so

Edit /etc/security/pam_mount.conf.xml and replace the whole file with:

<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE pam_mount SYSTEM "pam_mount.conf.xml.dtd">
<pam_mount>
<debug enable="0" />
<luserconf name=".pam_mount.conf.xml" />
<mntoptions allow="*" />
<mntoptions require="" />
<path>/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin</path>
<logout wait="0" hup="0" term="0" kill="0" />
<lclmount>/bin/mount -i %(VOLUME) "%(before=\"-o\" OPTIONS)"</lclmount>
</pam_mount>

Finally,

# su - user -c "vi /home/user/.pam_mount.conf.xml"

And add this:

<pam_mount>
<volume noroot="1" fstype="ecryptfs" path="/home/user" />
</pam_mount>

Now you can login and see your decrypted files! (“su – user” will not work without running ecryptfs-mount-private.)

You should setup swap encryption for both of these methods with:

# ecryptfs-setup-swap

If you want to go that extra mile, you can symbolically link your /home/user/.ecryptfs/wrapped-passphrase to a flash drive and mount it at boot, or use autofs or some scripting to mount it on login (and just in time for PAM to access it). However, if you are going to go that far, you should look into more CIA-level disk encryption, like TrueCrypt.


The next generation of consoles is almost upon us. Before we save up for that glorious new $500 gaming experience, it’s a good idea to understand just what we are paying for. A big part of the next generation is Cloud Gaming and Streaming Games. These innovations are very exciting, but not everyone understands what they actually mean.

 

All previous generations of consoles have been restricted by the power of the client (the actual hardware device.) This is because our console is dedicated to doing all the required work to get the game to function. It needs to be powerful enough to process the physics, control the AI, perform collision detection, render complex HD scenes, etc. So it is only reasonable to assume that every device has to be powerful enough to actually run the game we want to play… right?

 

Not anymore. Average network speeds are moving up around the globe, and cloud technologies are stabilizing, standardizing, and taking hold in every industry, including gaming. We are entering a new era in what is possible by leveraging these strengths. My first brush with this concept was way back in the 1990s. At that time, it was common to see “dumb terminals” in schools, computer labs, and libraries across the US. These were very simple machines that hooked up to a central computer via a serial port and provided a rudimentary text console. The devices themselves lacked much capability. They could turn on, proxy data through the serial port, and print things to the amber or green monochromatic display. All the work was done on the back-end server, and the thin client had just enough horsepower to allow user interaction. That simple concept, “pushing all the work to the server,” is the basis of Cloud Gaming.

 

In the 2000s we began to see some kewl browser-based games powered by Flash or Java. Unfortunately, there was no elegant way at that time to leverage the graphics hardware capabilities of the host. Finally, WebGL was introduced in 2010 (the first stable release was in 2011). It provided a new standard with OpenGL and HTML5 Canvas integration, a JavaScript API, and hardware GPU acceleration. It’s now a cross-platform, royalty-free standard built into most web browsers. I became interested in seeing the possibilities of WebGL right away. I scoured the net looking for something to provide a good showcase, and I came across a nifty project called quake2-gwt-port. I have a screencast below which I made in April 2010. I was running the server on localhost, using a test release of Chrome, and while there is no sound in the video, it was playing perfectly for me through HTML5 <audio> elements!

 

[Video: WebGL Quake II screencast]

 

This is a great example of “how” Cloud Gaming will work. Your console will have to shoulder much less of the responsibility. It will communicate through some proprietary protocol to servers in the cloud which do all of the heavy lifting. Your device just needs enough power to display the interface, and transmit user interaction. If a web browser can do this, imagine what a specifically cloud-designed console could do! The technology evolution to cloud gaming will allow these future devices to be cheaper, smaller (think iPhone sized), and have a much longer life span. Their internal technology could remain static (even get cheaper), while the content they provide has the potential to become infinitely more complex and powerful.

 

Cloud Streaming is how companies like Sony plan to tie this into a business model. They will most likely provide a subscription service which gives users access to a huge library of games, much like Netflix does for movies. When a user selects a game to play, a properly sized cloud instance will spin up (in a nearby availability zone) and begin transmitting the content to the user’s console. This provides some deeply interesting cloud-based cost models for the provider. Time will tell if those models pay off, but I have a feeling they will.

 

If you are like me, you’re probably wondering how you can check out some of that cloud gaming awesomeness right now! Well, you can download the Quake II port at the link above and stick it on a cloud instance. I’ll be doing that myself later in the week, and I’ll post a brief howto. I’m also playing around with a tool called Emscripten that compiles C and C++ into JavaScript. I want to get a cloud-ified ScummVM (or some other emulator) up and running in the cloud, and see what the end-user experience is like. I’ll keep the blog updated with my adventures.

 
