VCIO – Better IT for Small-Medium Organizations

The Problem of CIOs in Small-Medium Organizations

Organizations are designed around responsibilities and the personnel capable of carrying them out.  These are then placed in the hierarchical structure we see reflected in an “org chart”.  One of the boxes an organization eventually sees a need for on this chart is a CIO or CTO, reflecting the need for somebody who is responsible for the information technology function and has the capabilities to carry it out.

But small organizations rarely start off with a CIO; instead they start with a junior help desk person or perhaps a network administrator.  The cost of starting with a highly skilled CIO is too high, and having them do help desk work or fix printers is not an effective use of budget.  But the entry-level skills of a help desk person do not give the organization the ability to align business and technology. IT personnel early in their careers tend to focus more on the technology and gadgets than on the business needs the technology should solve.  They request a lot of budget, but bring few new ideas into production that will help the business. With only a junior person on staff, the organization not only lacks the necessary skills, it must also manage a skill set it understands little about.  The result is ineffective decisions and misspent funds.

You might ask at this point, “Why not hire a CIO?”  For organizations of fewer than 500 users, a CIO would be a luxury. They are a fairly expensive proposition, and at the organization’s revenue level there would not be enough funds to invest to take advantage of their skill set. They would have great ideas about implementing new sales force automation, e-commerce systems, inventory management, and other exciting prospects for the business, but would then face a budget that does not allow them to implement any of it.

What is the solution then? How can an organization have these essential skills and yet spend the right amount for their size?

Solution: A VCIO

A “Virtual CIO”, or as we will refer to it, a VCIO, is a fractional or part-time resource who works with the business leaders of the organization and oversees IT operations.  They may work between 16 and 40 hours per month depending on the investment level and project status.  Occasionally they may be close to full time, but generally it will be much less than that.  They will be present for key meetings about business strategy, or perhaps the weekly operational meeting to review business status, and will work on their own to review options and to speak with IT staff or end users about key issues.  As an organization, you pay for only what you need.

Some capabilities and services a VCIO can offer you are:

  • IT Strategy: develop an overall plan for information technology based on the discussions with business leaders and understanding of the industry and market conditions.
  • Understand the business: the key point here is that the VCIO becomes part of your management team and listens to the business’s problems from a technology perspective. Every person on the management team brings a different focus: a sales manager talks about how to increase sales, a production manager about how production works with the rest of the organization. The VCIO listens from a technology perspective, so when they hear problems they contemplate and suggest options that employ technology in ways others on the team have not considered.
  • Design the technology platform, policy and process, and personnel for the current and future needs of the business. These are the three key aspects of IT: the technology put in place such as which ERP system or servers to use, the process IT uses to support and manage systems and users, and the personnel and their roles within the IT department. The VCIO can align these to provide a fully effective IT function that meets the needs of the business.
  • Business Process Mapping – people with an IT skill set are often very keen on the business process and flow of data through the organization. A skilled VCIO can help you map the business process and identify bottlenecks and points where efficiencies can be gained either by implementing technology or changing a business process.
  • IT Roadmap aligned to the business: they can develop a roadmap of technology changes and suggested projects with business analysis included to show you how the future can look with technology options.
  • Benchmarking IT capabilities against industry standards: most markets are competitive, and technology is an enabler to increase the velocity of the business. To the extent a business uses IT well compared to others, it will have a greater velocity of revenue generation, product creation and production, and cash management. These key aspects of the business must exceed the competition or you will look up from the bottom of the market. A VCIO can help you benchmark and improve this.

Having a VCIO will provide your business with a single point of contact for IT needs that aligns to the business, a partner invested in your long-term needs, and a technology leader to sit with the CEO, CFO, and other business champions and make sure the technology is an enabler for the business.

Contact us today to discuss how we can help you for a manageable cost and maximum results.

Bimodal IT – Maintain the Now and Plan for the Future

 


Bimodal IT is a methodology organizations are adopting to divide focus and effort between taking care of current or legacy technology and looking to the future at what is needed to sustain the organization as business and technology cycles continue to ripple.

Every organization follows a methodology, whether it is formalized or not. This is true of the organization and the key components or departments. Your accounting department follows a methodology according to Generally Accepted Accounting Principles (GAAP), your production and quality departments may follow methodologies directed by ISO standards for documentation, and Lean or Six Sigma for means to manage and measure production. The information technology department is no different though it often lags behind other departments in terms of formality in methodology. If a methodology is identified, trained, communicated, and followed it can assist the IT department in working with the business to align the technology to the needs of the organization.

One of the trends in Information technology management methodologies is Bimodal IT.

What is Bimodal IT?

Bimodal IT is an approach to information technology where two areas are in focus, with expected results established for each.

The first area is the traditional IT function which remains highly valuable: the normal “keep the current systems reliable, secure and performing” so the business can deliver on its plans and promises. The emphasis here is on safety, accuracy, reliability, and scalability.

The second area is innovative (or fast mode), and emphasizes speed and agility.

Even a great CIO will struggle to compete with small, disruptive startups that threaten the business. The startups do not carry the overhead an existing IT operation must maintain, and they are free to focus on something new.  They can be fast and agile. But a good CIO can shift some resources to focus on innovation.

Gartner research has studied this trend. Peter Sondergaard, senior vice president and global head of research, said, “CIOs can’t transform their old IT organization into a digital startup, but they can turn it into a bimodal IT organization. Forty-five percent of CIOs state they currently have a fast mode of operation, and we predict that 75% of IT organizations will be bimodal in some way by 2017.”

Bimodal IT is simply a shift in some resources, with goals of speed and agility to develop options and solutions for current and future problems.

What Problems does Bimodal IT Address?

There are several problems this methodology is addressing.

  • Keeping IT current so the organization does not fall behind – a key business case for the danger of neglecting upgrades and innovation in IT is the New York subway system (discussed here). The subway system was designed and built in the 1930s to provide for safety and largely avoid collisions between trains.  The article’s author, Bob Lewis, points out the details; the estimate to replace the system was set at 20 billion dollars.   The obsolete technology’s issues had been known for a long time, so surely discussions to plan its replacement in an orderly fashion across budget and operational cycles had been thoroughly designed and vetted, right?  No. The discussions followed the same path they do in most organizations facing a potentially expensive replacement of a legacy system (the following italicized text is from Mr. Lewis’s article): Does any of this sound familiar — a legacy system that would be good enough except its architecture is obsolete, the platforms it runs on aren’t around anymore, and:
    • “Lift-and-shift” replacement provides no new features, and so no business-driven value to justify the expense?
    • Nobody can describe important new features that would justify anything more than a lift-and-shift replacement?
    • Investing in any replacement system would drain needed capital away from other efforts that are also important for the organization’s ongoing survival and success?
  • Increasing Value – the most persistent complaint from business leadership about IT is that it is unreliable. Once that is solved, the second most persistent complaint is that it does not add value to the business. The IT department spends its budget and focus to “keep the lights on,” but never comes to the table with investment opportunities with a clear ROI that will help the business. Many IT shops spend 85% of their budget maintaining what is, rather than thinking about what could be.  This holds back 70% of IT leaders from focusing on innovative projects that would increase business value.  Bimodal IT allocates a set percentage of the IT function to future needs, and should be paired with accountability for developing innovative options for the business.
  • Attracting great talent – Great talent in technology likes to work on interesting projects, so having projects that are more than just point upgrades will attract and retain people with better skills and ability to deliver innovation. They will add value in multiple ways in all areas of IT.

Is Bimodal IT a Fad, or Will It Help Me?

Maybe it is a fad term; the phrase Bimodal IT may go the way of “zero defects”, “Total Quality Management”, and other names for lost methodologies. But the concept of planning the orderly replacement of obsolescent technologies and developing new options is a good thing, no matter what it is called. This requires a focus on new solutions to the changing technology landscape and business challenges.

We focus on the actual goals, not the terms. There are other methodology options that people are passionate about, such as DevOps or Agile, which are also good.  The main point we raise is that some of your efforts should be focused on the future, where technology can make an impact negatively if not dealt with (the New York subway), or make a positive impact on a growing organization (disruptive technologies that provide a competitive advantage, e.g. news blogs vs. traditional newspapers).

To maintain the now, and plan for the future, you will need a strategy to generate new options (innovations) which can be implemented (accountability).  This will help you avoid the negative and benefit from the positive.

Keystone’s Bimodal IT

We spend a lot of time looking at the current systems via monitoring tools, reports, and visual review.  We also look at the future: we just returned from the Consumer Electronics Show (CES) in Las Vegas, where we witnessed numerous trends in robotics, product development, monitoring with connected devices, and so much more.

We also published two articles on Technology Frontiers you may enjoy, part 1 and part 2.

One of the features we have at Keystone is a technology museum.  You may wonder what a museum has to do with Bimodal IT.  It matters for two primary reasons.

The museum has items that come and go to keep it fresh, but it starts in the 1800s with old journals of a store’s transactions and accounts (“the books”), which were filled out with a pen dipped in ink.  This was “technology”. It then moves to the typewriters that replaced the pen, and the PCs that replaced the typewriter.  These were shifts that had to be planned for, or the risk of going out of business was real. These past shifts give us insight into how to plan for future shifts.

We always reserve the last section of the museum for future technology; something that represents what comes next that can make a difference and must be planned for. We see things here that are part of the Internet of Things (IoT), 3D printing for product development and someday delivery, voice command technology, and so much more.

It is all a continuum of technology we help clients understand and implement. The past into the future.  Maintain the now, and plan for the future.

If you want to know how to implement this approach, contact Keystone today to start that discussion!

 

Technology Frontiers

The word frontier can be defined as “areas near or beyond a boundary”, and when we think of “frontiersmen”, we may think of early explorers, or the crew of the Enterprise on Star Trek exploring “space, the final frontier.”  These frontiers are new and exciting, but also fraught with risks and unknowns. We have gone through many frontiers in the information technology industry.  Looking back over my life we have had several: the move from mainframes to PCs, from character-based operating systems to Windows and Mac graphical user interfaces, from local area network client/server applications to web-based applications, and from PCs to tablets and other mobile devices. There are always new technologies, and they drive change in how we operate, live, and communicate.  Think of how the Pony Express and the telegraph allowed people to settle out west in the 1800s, far away from the civilization they knew in the eastern cities.  Similarly, today I write this at home while connected to my office and team via web, email, cell phone, and Skype for Business for chatting and sharing documents and screens. I am not 2,000 miles away, but I could be, and it would be fine for what I need to do now.  This was not possible 20 years ago, and yet it has become commonplace; in it we see that I am using several of the technology frontiers of my lifetime.

And even now, we at Keystone are working with new technology that you may not even be aware of.  Why do we do this?  Inherently we love technology, so even if you did not need us to do it, we would still geek out over the latest mobile phone, backup software, security patch, and other technologies that are fun to some and mundane to many. We just can’t help ourselves, but we know that not everybody can stomach the pain of a new frontier.  They call it “cutting edge” for a reason, and sometimes it means “bleeding edge”.  We would not subject our clients to a new technology unless we have a good sense of the inherent risks and how to overcome them to get the best value in the safest way possible.

Here are some technology frontiers we are exploring now in our Research and Development (R&D) that you may see as commonplace in your future.

3D Printing

So far, this feels like early paper printing technology.  Have you ever sent a job to a printer and nothing happens?  Or have you ever sent a 2 page document only to get 100 pages of what looks like alien communication?  That is what 3D printing feels like now.

3D printing takes a data file with instructions for how an object is shaped and combines it with plastic extrusion technology to “print” the object.  You load plastic filament into the 3D printer and send the job to it as a set of instructions.  The printer is supposed to print the object by feeding the filament through a heated extrusion nozzle (the “print head”) and depositing it onto a flat surface.  The print head moves up and down, the flat surface (“the bed”) moves back and forth, and eventually your object is sitting there, ready to use.

But it does not quite work that well.  Sometimes it runs for a while and stops, sometimes it slams into the bed and melts a hole, sometimes nothing, sometimes a big mess of plastic, etc.  But when it works it is great!

Think about some of the implications for your life.

  • You want to create a 3D representation of a new factory floor plan to test your kaizen or lean model more fully – just design it in the 3D software by dragging and sizing objects, and send to the printer. This reduces the time to prototype saving costs and improving flexibility.
  • Can’t find the battery cover to the remote control – just go online and download the design file and print a new one. No trip to the store, no tape over the batteries, etc.
  • Your client is not able to visualize what you are describing for your latest design for their building, and you are not going to make the sale because they lack a perspective needed to decide. Perhaps a 3D representation will help?
  • You need a new towel hook for the bathroom, but instead of buying online and waiting, you go browse designs, select and pay for one, and download and print.

At this point 3D printing has already been used to print experimental tissue and organ structures, prototype new cars, create functional desk accessories, and help sell ideas.

It is new, it is exciting, and we are testing it now!

Clustered Computing

Most computing is one computer doing one or more jobs, and reliability and performance are based on what is in the machine’s box.  If you need more power, you open the machine and add more memory or disk space.  If the CPU is a few years old and not keeping up, you buy a new machine and rebuild everything. If you need reliability you buy one with at least 2 of everything you can: multiple drives, power supplies, and network cards. Performance and redundancy in this model are built on what is in the machine.

But what if you could just add another machine and have it do half the work?  Or three more machines so they all share?  You would then have 4 machines – up to 4x the performance – and if one goes down you run on 3 machines and replace the failed one as needed.  This is “clustered computing”.

It is not particularly new, and is sometimes called “supercomputing”, “parallel processing”, or “high-performance computing”.  It was first conceived in the 1960s, but it required incredibly expensive hardware and custom software, and was accessible only to organizations like the National Oceanic and Atmospheric Administration (NOAA) for use in weather studies.  In the mid-1990s new technologies allowed computer clusters to be built from commodity servers (search for “Beowulf cluster”).  Suddenly organizations could build their own.  So at the same time the internet was becoming available to everybody, the power of clustered computing became available to build search engines like Google and Yahoo!  (For a quick view of Google’s first cluster, which looks like a Lego system, see this: http://infolab.stanford.edu/pub/voy/museum/pictures/display/0-4-Google.htm).

These capabilities are now becoming available in two ways:

  • Build your own local supercomputer from off-the-shelf parts. We are doing this now, using about $200 in parts based primarily on Raspberry Pi boards. By linking 4 of these credit-card-sized boards, each with 4 “cores”, into a clustered network and using special software, we have what looks like one computer to a software application. In testing, it takes about 35 seconds to calculate the value of Pi to 16 digits on one core of one Raspberry Pi, but when we use 4 Raspberry Pi units (16 cores) we see times of less than 9 seconds!
  • Rent space on a cloud provider’s platform and use it while letting somebody else (Microsoft, Amazon, Google, etc.) do the dirty work of managing the platform and the networking. See this for a quick description of Google’s current platform for this (https://cloud.google.com/solutions/architecture/highperformancecomputing).
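The speedup described above comes from splitting one calculation across many cores and combining the partial results. Here is a minimal sketch of that idea in Python, using worker processes on a single machine rather than a real multi-board cluster (the function names and the series-based method are our own illustration, not the software used on the Raspberry Pi rig):

```python
# Illustrative only: split one big calculation across worker processes,
# then combine the partial results. We approximate pi with the Leibniz
# series, dividing its terms among workers. A real cluster would spread
# this work across machines (e.g. with MPI) instead of one host.
from multiprocessing import Pool

def partial_leibniz(bounds):
    """Sum one worker's slice of the Leibniz series for pi/4."""
    start, stop = bounds
    return sum((-1.0) ** k / (2 * k + 1) for k in range(start, stop))

def parallel_pi(total_terms=1_000_000, workers=4):
    """Approximate pi by splitting the series across `workers` processes."""
    chunk = total_terms // workers
    ranges = [(i * chunk, (i + 1) * chunk) for i in range(workers)]
    with Pool(workers) as pool:
        # Each worker computes its slice; the partial sums are combined.
        return 4.0 * sum(pool.map(partial_leibniz, ranges))

if __name__ == "__main__":
    print(parallel_pi())
```

On a machine with multiple cores, raising `workers` from 1 to 4 cuts the wall-clock time roughly in proportion, which is the same effect the Raspberry Pi cluster shows at larger scale.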

One caveat: your software must be developed to run in a multi-node, multi-core environment.  You can’t just grab a copy of Microsoft Excel and expect it to calculate your budget faster (although, oddly enough, Microsoft has extensions to support this: https://technet.microsoft.com/en-us/library/ff877825(v=ws.10).aspx).  Your software has to be designed for multi-threading and multicore support (you may see terms like “HPC”, for “High-Performance Computing”).  The leaders in this area now are big-data database packages like Hadoop that have to process incredibly high volumes of data in a short time.

This technology may not be ready for the average small to medium sized business, but it shows what is possible and could help with growth and seasonal needs.

Latest Applications, Operating Systems, and Devices

This is the most basic technology we test – the thing you see next week, or next month, or next year.  We have multiple devices and many different operating systems and applications that are in beta form, and we are trying them out so we have a perspective on what you may see, when you should move to it, and what the risk and reward will be.

In fact, I just had a lockup while writing this article using an advance preview copy of Windows 10 and the latest version of Microsoft Word 2016; it does not happen often, but it does happen.  We are evaluating the features and capabilities, the user interface, and the reliability (in this case I lost a few minutes, but no data).

Some of the tools we are testing now include:

  • The latest suite of Office 365 Products, including Skype for Business
  • SharePoint and OneDrive for Business
  • Apple MacBook 12” with a beta version of Apple OS X
  • Cloud Based Information Security Systems
  • Amazon Echo
  • Beta versions of iOS (on iPhones and iPads)

Summary of Technology Frontiers

There are waves of technology shifts that represent new frontiers for users and business organizations, and each represents some questions: What is this?  How can it help me?  What are the risks? We are looking at these so you know we have an eye on what may make a difference for you!

Next time we will catch up some more, and include some other technology frontiers like Internet of Things (IoT) and Voice Recognition!

Learning from the New York Stock Exchange’s Technology Failure

The New York Stock Exchange (NYSE) experienced a serious technology failure this week, halting trading for approximately 3.5 hours after operating with reduced functionality for the first 2.5 hours of the trading day.  The NYSE is, of course, a very high-profile, internationally critical component of our financial systems.  System-wide failures are extremely rare, and when they do occur they are widely publicized.  This allows us to consider what happened, and what we can learn from it that may help you.

What was the Technology Failure?

The NYSE has numerous software applications that are integrated to provide a cohesive system for access and control.  There is the core record keeping system, systems to manage the process, customer systems to control accounts and execute trades, systems that monitor activity for fraud, etc. These systems exchange data with each other at various levels, and are dependent on being compatible and reliable.

On Tuesday evening, July 7, 2015, NYSE administrators applied an update to one of these systems to support a change in how the industry timestamps transactions. On Wednesday morning, July 8, 2015, the NYSE started noticing issues with communications between systems and applied an update to the customer system; this in turn created more issues.

The problem was not resolved, and at 11:30am the NYSE shut down trading and continued to work on the issue. At just after 3:00pm, non-updated backup systems were brought up in place of the production systems and operations resumed.

A quick synopsis can be seen here: http://www.cio.com/article/2946354/software-update-caused-nyse-suspension.html

What do we learn that is applicable?

What should your SMB-sized organization take away from this?

We may be able to continue operations. The NYSE must maintain a level playing field so that everybody can execute trades at the same time; otherwise fraud or inequality of opportunity becomes an issue.  Your business, however, may be able to continue operations without a complete shutdown if one function is limited and not creating data issues.  For example, if your customer ordering system is down and orders cannot be taken via the web, it may be possible to post a placeholder message informing customers they can call customer service to place an order.  You may need to temporarily reallocate staff to handle more call volume, but customers can still be serviced, and a more personal conversation can take place during the transaction.

Systems are complex, especially multiple systems that communicate with each other.  Software, especially software designed for a specific organization and use, can be complex. The luxury of waiting for others to test it in the real world is not present.  So testing is essential, and it must reflect the real world: real data, real transactions, real systems that mirror the production system with the changes under test applied.  The testing must be broad, rigorous, and deliberate, and results must be tracked.  Automated test tools can make the process more efficient, but they are just pieces of software and must be set up and used correctly. When multiple systems are involved and dependent upon each other, they all must be exercised.
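As a small illustration of why tests must exercise real system-to-system output rather than hand-written samples, consider a hypothetical producer system whose timestamp format is updated (as in the NYSE change) and a downstream consumer that must still parse it. The systems, record layout, and format strings below are invented for illustration; they are not the NYSE's actual code:

```python
# Hypothetical producer/consumer pair. The failure mode: the producer's
# timestamp format is updated, but a consumer still expects the old one.
# A test that feeds the consumer the producer's *actual* output catches
# the incompatibility before production does.
from datetime import datetime, timezone

PRODUCER_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"  # the format being rolled out
CONSUMER_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"  # must stay in sync with it

def produce_trade_record(symbol, price):
    """Upstream system: emits a record stamped in its current format."""
    ts = datetime.now(timezone.utc).strftime(PRODUCER_FORMAT)
    return {"symbol": symbol, "price": price, "timestamp": ts}

def consume_trade_record(record):
    """Downstream system: must parse whatever the producer emits."""
    return datetime.strptime(record["timestamp"], CONSUMER_FORMAT)

def test_producer_consumer_compatible():
    """Integration check: parse the producer's real output, not a
    hand-written sample that may silently lag behind a format change."""
    record = produce_trade_record("KEY", 101.25)
    parsed = consume_trade_record(record)
    assert parsed.year >= 2015  # parsed into a real datetime

test_producer_consumer_compatible()
```

If someone updates `PRODUCER_FORMAT` without updating the consumer, this test fails immediately, which is exactly the kind of cross-system exercise the paragraph above calls for.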

Disaster recovery works, but it is a choice to execute.  In this case, the NYSE decided to cut over to the backup systems to continue operations.  This is not the same thing as pulling a server out of the closet, installing everything, and going back to operations.  This was a “hot system”, one that had all of the live data but had not been updated with the errant code.  Cutting over is not a small decision, since there is normally a cut-back process once issues are resolved, but it was one they could make because they had designed the systems for it.  This allowed them to resume operations while still dealing with the issue. Most small organizations do not have this capability, but they can, and for a very economical price.  My firm, Keystone Technology Consultants, offers this even for very small clients of 20 users. It is not just a peace-of-mind issue; it literally allows an organization to continue operations, keep the flow of work and money moving, and maintain its reputation and client relationships. It is essential.

I would love to hear your view of this, feel free to comment below.

 

 

World Class IT

Ever see a mission statement? Your own or another organization’s? Most reflect the same sentiments – “world class” in everything we do… unmatched service… excellence.  Setting lofty goals is admirable – hyperbole or not – but it is always important to examine the outcomes and goals you have for each initiative. Businesses today, from small to large, are stretched thin – and losing touch with what really impacts your business goals can be detrimental to your company’s success, if not deadly.

So we make choices every day: what is critical to success and what can we live with as “good enough?” But be careful when deciding which is which, because it is not always clear cut. It is easy to know what is “core” to your business – but do you know all of the other areas that are critical to deliver excellence in your core business?

Technology and infrastructure inefficiencies are often where organizations settle for “good enough.” What we lose sight of are the efficiencies in time and money that the right technology solutions can create across the organization.  “Good enough” just isn’t good enough to deliver your products, services, and support with excellence.  Your competitors can achieve better than you – in everything you do – simply by having an infrastructure that strives to be world class, enabling them to be faster, better, and cheaper.

World-class does not have to be expensive, but it does require thought. More specifically, it requires strategies and people committed to rightsizing the IT function and making sure it is aligned with your business goals. An organization that is small, or that does not have information as a core part of its offering, may consider outsourcing to gain the advantage of a world-class IT function without the cost of a wholly owned unit. The best news is that the cost savings and gains in efficiency turn IT from a cost center into a partner in your business.

Researching and considering what makes a “World Class IT” function has led me to believe that it embodies the following characteristics:

  1. It creates and maintains a platform that is stable and robust that business units can reliably employ through all business cycles and functions.
  2. It has people who are skilled in their roles and committed to the organization’s success, forming a stable team with low turnover.
  3. It provides the needed systems and tools to enable all business units to understand processes, and measure and manage them for improvement.
  4. It spends just the right amount, with measures in place to understand costs.  In other words, it has a budget in alignment with the rest of the organization – and has demonstrable value.
  5. It takes its knowledge of the organization through the lens of data movement and helps define and improve business process for the betterment of the whole organization.

I came across a book recently that has been helpful to many CIOs, and it is aptly titled for our discussion – World Class IT by Peter High.  In it he develops five principles for IT which differ from how I had thought of them.  His five principles are:

  1. People form the foundation of an organization. Without the right people doing the right jobs at the right time, it is difficult to achieve excellent performance.
  2. Infrastructure distinguishes between a reactive organization and a proactive one. If software, hardware, networks, and so on are not consistently performing their tasks, the IT organization will become lodged in reactive mode. If the infrastructure works reliably, then a greater percentage of the organization can think about the future.
  3. Project and Portfolio Management is the engine through which new capabilities can emerge within the company. It is important to ensure that the portfolio collectively supports the goals of the business and that projects are delivered on time and on budget.
  4. IT and Business Partnerships are vital. It is the IT executive’s role to ensure that different groups within IT function as a team, communicating efficiently and effectively. It is equally important that IT develop partnering relationships with executive management, lines of business and key business functions to ensure ownership of and success for IT initiatives.
  5. External Partnerships are increasingly important as outsourcing becomes more common. By contributing to the discussion about business strategy, IT is in a strong position to determine which aspects of IT are best handled by external partners. Further, IT must be adept at managing those relationships to be sure the company gains the expected value from its outsourcing activity.

There are some overlaps with my list, but the differences point to areas that we need to make certain are not taken for granted.  For example, disaster recovery does not “technically” add business value – but neither does insurance.  Money to protect your business may someday be the best money you’ve ever spent. No one wants to need insurance – but you cannot accept being without it.

I especially appreciated the way High promotes Project and Portfolio Management, as it leads me to think back to the fact that the truly outstanding IT functions I have had the privilege to work with not only did that well for IT-centric projects, they actually led the rest of the organization in this discipline for non-IT projects.

Regardless of how you think of “World Class IT” – it is clear that often good enough just isn’t, and that your IT needs to be:

  • Encompassing of all functions in the department, and outside of it.
  • Measurable and actionable
  • Customized to your organization. World class is not the same for everyone as each organization has different goals and processes that support these goals.
  • Stable and reliable – never a barrier and always an enabler to great work.
  • Contributing to the growth of the business in a true partnership.

And that brings me to a concluding thought – metrics are important.  They should be aligned to the overall business goals, and organized according to whatever system you choose to use when discussing and designing your IT function (e.g. High’s World Class IT or some other system).  And to that end, metrics should also be:

  • Assigned to a person responsible for their improvement
  • Given specific targets
  • Limited in number – too many metrics causes a loss of focus
  • Backed by specific projects and initiatives created and assigned to reach and exceed the metrics’ goals.

Over the next few months I will be spending time developing this and publishing it here.  I hope you find it helpful, and I welcome your comments below.