
Exiting (the Expense of) Email

In the 1980s email was an exciting new collaborative tool, mostly limited in its use to academic and research settings.  Since email was then a novel application, IT organizations in higher education had to design and manage their own email systems.  Over the years email has become a key communication tool for the campus community and can now be considered an institutional utility, as critical to the function of the campus as the water and power utilities.

Over time technologies tend to either die off or become a commodity. And after nearly 30 years email is now very much a commodity. While different institutions may use different underlying email technologies, the basic email service provided to the customer is remarkably similar.

As a service becomes more standardized or commoditized there are more effective and less expensive ways to offer that service. Provisioning and managing a commodity is very different from how you manage a customized service. Continuing to provide email as a customized service will cost you a premium; likely a premium your institution can no longer afford.

In the US, many institutions have abandoned their home-grown or home-managed systems. For several years providers such as Google and Microsoft have delivered their web-based email services to educational institutions at no cost and with no advertising. This is now often called Software-as-a-Service (SaaS) or moving the service to the “cloud”. But regardless of the label, the significant change is that you simply have someone else provide a service for you. No longer do you care about details such as the type of server and disk; you simply purchase (or receive for free) a service that meets your needs.

The debate about whether or not an external organization can provide the required email services for the institution occurred several years ago in the US, with hundreds of institutions concluding that the service level, features and security were better using Microsoft or Google. Their experiences have shown that these providers of free email can supply robust, easy-to-use and highly functional email offerings to both large and small campuses.

In Canada very few organizations took advantage of these free “cloud” email services due to concerns that the US Patriot Act could compromise the security and privacy of email. Lakehead University was one of the early Canadian adopters of Google email and this past summer an arbitrator ruled (http://bit.ly/64whI8) that the university had the right to switch to Google email, despite faculty concerns about privacy.  Other Canadian universities, including the University of Alberta (http://bit.ly/73kxhT), are currently contemplating the move to Google for campus email.

Before jumping up to place your call to Google or Microsoft, there are three main areas that need to be explored in the context of your institution:

Cost      What is the annual cost of your current email services? What is the cost of migration? Perhaps you have a solid IT Service Costing model that will allow you to measure the total annual expenditure on email, but if you don’t have that level of financial detail available you should be able to make some reasonable back-of-the-envelope estimates of the operating and capital costs. Alternatively, you could let the market tell you what an efficient large-scale provider will charge for the set of services – currently $50/year/user for Google Apps Premier and $5/month/user for Microsoft’s Exchange On-Line. But whatever figure you arrive at for the cost of your current in-house email services, it is almost certainly higher than the $0/year charged by Google or Microsoft for educational customers!
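To make the back-of-the-envelope exercise concrete, here is a small sketch of the comparison. The vendor prices ($50/user/year for Google Apps Premier, $5/user/month for Exchange On-Line, $0 for the education offerings) are the figures quoted above; the user count and all in-house cost lines are hypothetical placeholders you would replace with your own estimates.

```python
# Back-of-the-envelope email cost comparison.
# All in-house figures below are illustrative placeholders, not real data.

users = 40_000  # hypothetical campus population

# Hypothetical annual in-house costs
in_house = {
    "staff": 400_000,       # admins and support (placeholder)
    "hardware": 150_000,    # amortized servers and disk (placeholder)
    "licenses": 100_000,    # software licensing (placeholder)
    "facilities": 50_000,   # power, cooling, space (placeholder)
}

in_house_total = sum(in_house.values())
in_house_per_user = in_house_total / users

# Market prices quoted in the post (commercial tiers)
google_apps_premier = 50 * users   # $50/user/year
exchange_online = 5 * 12 * users   # $5/user/month
education_offering = 0             # free for educational customers

print(f"In-house total:  ${in_house_total:,} (${in_house_per_user:.2f}/user/year)")
print(f"Google Premier:  ${google_apps_premier:,}")
print(f"Exchange Online: ${exchange_online:,}")
print(f"Education tier:  ${education_offering:,}")
```

Even with generous guesses plugged in, the per-user arithmetic usually makes the point of the paragraph above on its own.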

Functionality   Do the “cloud” email services meet your institutional needs? Likely the current Google and Microsoft offerings meet the basic email, mailing lists and calendaring needs of the campus. In fact, in many cases, the functionality provided by these vendors will exceed what can be offered on-campus. In particular, not many institutions currently provide the massive 7GB of email storage offered by Google. But when considering the offered functionality, it is important to think about email as part of a larger set of collaborative tools and therefore, when making directional decisions about email, you should also consider your directions for services such as Instant Messaging (IM), web conferencing and file sharing. Many of the “cloud” email vendors have collaborative tool offerings too.  Also, it may be helpful to think about segmentation of these services, as not all groups on campus necessarily need the same set of email or collaborative services.

Security  Does your institutional email need to be stored on campus servers or stay within Canada? This seems to be the most contentious issue in moving email to the cloud. There tends to be some comfort in knowing that your email servers reside somewhere on-campus. But is that really providing better security and privacy? This is the area where you need to spend some time and effort in doing a risk assessment and a Privacy Impact Assessment. IT Leaders need to open a conversation with their Privacy Office and/or legal counsel.

Your institution’s answers to the above will allow you to chart the best course of action. But be warned, moving to a new email system is a significant change to a critical system and the project needs to be properly planned, managed and communicated. As well, as you move to a cloud provider for a key service such as email, you should review your IT Service Management approaches. It is quite likely that your current Help Desk, Problem Management and Change Management processes will need to be revised to work in the new environment.

Cloud computing for email and other services isn’t something an IT Leader can ignore. If the IT Leadership at your campus does not work on setting the institutional direction, given the current fiscal environment you can be sure that someone on the financial side of the organization will advocate moving to $0 cost email.

And one final caution. The movement of services to the cloud is accompanied by a strong wind of change. A wind of change that goes well beyond the technical aspects of a specific service. The winds might even be gale-force in service management and governance aspects of IT. Be prepared!


Why Be Green?

There is a lot of talk these days about “Green Computing” and what an organization should do to be green. 

At last week’s Summit 09 — a conference on Cyberinfrastructure — I was part of a panel talking about Green activities in Higher Education.  In addition to talking about specific projects, some of the questions (during and after the session) were about why organizations should be putting effort into Green IT. Not that I sensed people disagreed with Green IT as an important initiative, but some seemed uncertain about how to frame or prioritize Green IT within their organizations.

From my perspective there are several, overlapping drivers for organizations putting effort into Green IT. 

One obvious driver, especially for an organization in the public sector, is that greening of IT is just the socially responsible thing to do. Higher Education has an obligation to its stakeholders — that is, the people of the province, country and world — to ensure that the environment is considered and supported in all the institution’s actions.  The desire (perhaps need!) to be socially responsible is especially visible in the students at the University. The students are firmly behind sustainability (and Green IT) because they know that it is the right thing to do. They’ve put dollars behind this view by using their money to purchase clean energy for computer labs.

While it is important to be socially responsible, organizations also have an obligation to be fiscally responsible. Fortunately, many Green IT initiatives also provide an excellent return on investment. For some this may be viewed as a by-product, but I think it is actually very important for Green IT to be able to present a good financial business case. Without a solid business case it is difficult to convince the bean-counters to release the seed funding that is necessary for many Green IT projects. That fiscal reality was true several years ago, but it is much more important in these difficult financial times.

A third, sometimes overlooked, driver is service to the customer or client. Some of the significant Green IT initiatives are also approaches that can be used to improve the quality of service provided by IT. This can mean improved capability, capacity or reliability. There is an increased demand for and dependency on IT in organizations. It is easier to get support for a plan that includes tangible improvements for the user than one relying only on social or cost improvements.

Let’s take a simple example of one “Green IT” initiative — virtualization of servers.

From a pure green perspective the main gain is a decrease of power requirements. From a financial perspective, cost savings are reported by some organizations to be over 20% per year. From a service perspective, once you’ve virtualized you can have a more agile and flexible computing environment.

It is easier to build the business case if there are wins all around!

What is Green?

While outside of my window the world is currently white with fresh snow, I’ve been spending my morning thinking about green … that is, Green IT.

From my perspective “Green IT” is about putting together a strategy and implementation plan to improve the environmental footprint of IT within an organization (or for oneself). This includes the full life-cycle; from manufacturing to daily use to disposal. It seems to me many discussions about Green IT are reduced to consumption of electricity, but it is important to have a broader perspective.

Given that Green IT can be such a broad topic it is useful to have some taxonomy to help divide the issue into actionable areas. One possible view is the division of Green IT issues into manufacturing, use, and disposal.

Using that framework it follows that we need to explicitly include “Green IT” into our purchasing processes. We need to ask vendors detailed questions about their processes and how they measure their “Greenness” or environmental sustainability practices. I think most organizations (at least in Canada) have procurement processes that include sustainability in their evaluations, but I think within most organizations there remains a gap in the ability to objectively assess the responses from the vendors.

Similarly, when we look to the other end of the process – disposal – most organizations have some methodology to ensure that the old technology isn’t just sent to the local landfill. High commodity prices helped develop an industry to receive and recycle old technology. I toured such a facility a few years ago and was impressed by the noisy yet sophisticated process of mechanically pulverizing old computers into shards of metal and plastic then sorting the debris automatically.

It is probably the area of “use” that still requires the most effort by IT leaders. There are some aspects of the use of technology that are (almost) directly controllable; there are other aspects in which our main tool is influence on others.

The area of greatest control by IT leaders is what I would call “behind the walls”. This is the equipment in wiring closets and data centres. Already many organizations have utilized technologies such as virtualization to reduce the footprint and increase flexibility, often without our customers even realizing we made significant changes in infrastructure.

The area of least control by IT leaders is the end-user devices and end-user behaviours. That is, the myriad of printers, computers, phones and other equipment that pervades the organization and a similarly complex set of ways in which individuals choose to use the equipment. It may be one of the areas of least control by IT leaders, but it may also provide the largest potential return on investment. While changing a particular user action may save only pennies, the cumulative effect can save millions. My favourite personal example is moving an organization to a shared print service, thereby decreasing paper usage from 72 to 50 million sheets per year with an overall savings of over $2M in a few years. One of the lessons from this “greening of print” initiative was that it required a complex set of standards, education, and innovation to make substantive change.
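The "pennies add up to millions" point is easy to check with simple arithmetic. The sheet counts come from the example above; the per-sheet cost here is a hypothetical all-in figure chosen only to illustrate the order of magnitude.

```python
# Cumulative savings from reducing print volume.
# Per-sheet cost is a hypothetical placeholder; sheet counts are from the post.

sheets_before = 72_000_000  # sheets per year before the shared print service
sheets_after = 50_000_000   # sheets per year after
cost_per_sheet = 0.03       # hypothetical all-in cost per sheet ($): paper, toner, wear

sheets_avoided = sheets_before - sheets_after
annual_savings = sheets_avoided * cost_per_sheet
years = 3                   # "a few years"

print(f"Sheets avoided per year: {sheets_avoided:,}")
print(f"Annual savings:          ${annual_savings:,.0f}")
print(f"Savings over {years} years:    ${annual_savings * years:,.0f}")
```

At roughly three cents a sheet, 22 million avoided sheets lands in the neighbourhood of the $2M figure over a few years, which is the whole argument for sweating the small per-unit costs.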

The topic of Green IT is broad and continually changing. I suspect I will have more to say on the topic after I have the pleasure of presenting, discussing and debating Green IT issues with several esteemed colleagues on a panel at the Cybera Summit next week.

Information Technology and the Economic Downturn

I am writing this blog while on an airplane to Texas. As I often do when I travel, I bought a copy of The Economist in the airport and plan on reading the magazine cover-to-cover during the flight. As one might expect given today’s economic environment, a central theme of the issue I hold in my hands is the global economic downturn and the pending impact on industry and society.

The words in the magazine lead me to reflect on the upcoming challenges for IT and, since I am generally an optimist, I also wonder about what opportunities we have for using information technologies to ease some of the negative impacts of the global economic downturn.

In a broad sense IT can either be run (or perceived) as a cost centre or an innovation centre. If IT is a cost centre then the organizational focus will be on determining how to reduce the current IT spending. If IT is an innovation centre then we should be contemplating investments in projects that identify (and implement) ways to streamline business processes to reduce costs, improve services, or enter new markets.

Clearly IT will need to find ways to reduce current IT expenditures to assist with balancing the books. IT will need to restructure and reduce costs. But just making our share of cuts and waiting for this fiscal unpleasantness to pass would be, in my opinion, missing an opportunity. I think now, more than ever, we need to find ways to use IT to ensure we use our scarce resources to get maximum value.

The economic situation has brought with it a sense of crisis (or potential pending crisis) in many organizations. This can be a significant catalyst for change. We are likely entering a period of time where organizations will be willing to more critically examine past business practices and approaches to drive out costs and improve services.

However, in these economic times it may be difficult for organizations to invest in large IT projects. Capital will be scarce. The risks of any large project will be difficult to justify. So instead of envisioning large monolithic projects that have a positive but perhaps rather long-term return to the organization, I think we need to rely on a more incremental approach.

Big vision, small steps. And each step must provide distinct value to the organization.

Incrementalism can be a sound philosophical approach even in good times and certainly a good practical approach when times are tougher. However, sometimes I feel some people would really rather talk about the big wins.

In these tough economic times our challenge in IT will be to convince people that a lot of aligned smaller wins are more likely and more valuable than that one big project. No matter how exciting that big project may be!

Are you a CIO or an IT Director?

As of the time of publishing this blog I have been a CIO for 3.578 years.

As an old CIO joke goes, not so many years ago the typical duration of someone in the CIO role was thought to be about 2-3 years, so by at least one measure I am now an above average CIO. I strive to also be successful by other (more useful) measures but the joke leads me to wonder what are the appropriate responsibilities and measures of success for a CIO in Higher Education.

The letters “CIO” are an abbreviation for Chief Information Officer but another common title for the “chief” in this area is “IT Director”. How does an IT Director differ from a CIO? Exploring this difference leads me to at least two significant measures of success for a CIO.

First, let’s explore what an IT Director might do. In most organizations an IT Director has the responsibility for the centralized IT function. In an academic setting that generally includes the key business systems, networks, the common applications used by most staff and students (e.g., campus email, calendar, learning management system, desktop applications, etc.), and often the campus phone services. It typically does not include any responsibility for the distributed IT services within the Faculties, Departments or other units. In fact the distributed IT units may often offer competing services such as local email. In a large research-intensive University the combined distributed IT shops can easily be as large as the central IT organization. If you’ve got a Medical School, distributed IT might even be larger than central IT.

In some organizations a CIO is seen as a relabeled IT Director. Same responsibility, shorter title. But, in my opinion, a real CIO has several key differences in responsibilities from an IT Director.

The first key difference is scope. An IT Director is often responsible for just the central IT; a CIO is responsible for IT across the institution. Wait a moment, some of you are saying to yourselves! In most higher education institutions with a CIO there is still a large segment of distributed IT that reports through different lines — typically up through a Dean. So? The CIO is still responsible for the overall IT directions of the campus. Regardless of budget or HR authority, they are still responsible for moving the complete IT organization in the appropriate direction. Through a variety of means — developing a shared vision, creating incentives, developing relationships, funding, and, if desperate, use of corporate politics (in only the nicest of ways!) — the CIO is responsible for developing alignment and direction that enables the institution to best use IT to achieve the institutional goals. In addition to delivering on a central set of IT services, the real CIO is also responsible for delivering on a set of localized IT services.

The second major difference can be seen in the title. The title “CIO” doesn’t contain the word “technology”. The IT Director is often seen as the chief tech person and, especially in some of the smaller IT shops, is often a person with a very strong technical orientation. While the CIO generally has the central IT Department as one of their units, that is not the sole, or perhaps ultimately even the most important, function. I see the CIO’s responsibility as one in which they need to meet the organization’s “information” needs. Technology just happens to be one of the key mechanisms by which the “Information Officer” can meet those needs. That said, if the basic technology is not functioning, a CIO might as well pack their bags and go. Unless basic services function, there is nothing on which to lay the more valuable “information” services. The provision and management of information is a hard thing to measure. But basically the provision of quality “information” should result in better decision making across the organization.

So what does the above mean with respect to measures for the success of the CIO? Obviously there should be a suite of measures of success of the CIO, and the specific items will tend to differ for different organizations. However, two key measures arise from the above:

1) Alignment of services. A successful CIO should be moving the institution toward common sets of services (where they make sense) regardless of the distributed nature of the IT environment. Some indicators could be the reduction in number of email systems, file services, directories, web services or help desks.

2) Success of the organization. Simply measure the CIO on the overall organizational ability to achieve stated goals. That is, the same general measures as one might expect to use for the President, CEO, VP(Finance) or other senior corporate officers.

By the way, another old CIO joke (that stems from the same time period when a CIO’s career with a firm lasted about 2 years) is that the letters CIO actually stand for “Career Is Over”. However, this “above average” CIO expects to continue in his career for at least another decade before retiring to some place where he can fly fish every day of the year.

IT and Groundhog Day

[Originally posted Feb 1st 2008]

Tomorrow is Groundhog Day!

If it is sunny the groundhog will see its shadow and winter will continue for six more weeks. In Calgary I can pretty well guarantee, despite sunny or cloudy weather today, we’ll have snow a few more times before we see the end of this winter.

But Groundhog Day is also, according to the Internet Movie Database, the name of the 186th best movie of all time. In that movie Bill Murray plays a character forced to live February 2nd over and over and over until he gets it right. The idea of endless repetition provides a good framework from which to view a few current IT trends.

What trends are worthy of discussing on the day of Punxsutawney Phil or, to give a Canadian slant, on the day of glory for Wiarton Willie or Balzac Billy? Three topics come to mind: SOA, Cyberinfrastructure, and ITIL.

Are these new ideas or are these IT versions of Bill Murray waking up yet again to “I Got You Babe”? I think the image really isn’t one of Bill waking to a Sonny and Cher song on the clock radio, but perhaps he is waking to the same song now performed by Lucky Dube on YouTube. Mostly the same words and musical score, but a remarkably different performance on remarkably different media in a remarkably different cultural and political environment. So let us explore some ideas.

SOA. Service Oriented Architecture. There is a lot of hype and marketing around these three letters. Is the concept completely new? Not really. Many, many years ago students, like me, were taught new programming languages (such as SP/k) in an attempt to move us from spaghetti code to a concept called structured programming. Reusable code was the mantra. But, as in many things in life, the vision of what was possible back then pales compared to what is possible now. Back then we talked about individual programmers developing reusable code. Then, as we progressed from Structured Programming to Object Oriented Programming, we worked on building reusable and sharable libraries. Now Service Oriented Architecture envisions people reusing sets of services, where the services themselves might be distributed across a network. The key repetition in the cycle is the goal of reuse. A worthy goal, no doubt. Reuse can save time, increase efficiency, reduce costs. Look on the web and you’ll find many exciting applications being developed via SOA. But look into your organization, especially within your enterprise business applications, and you are unlikely to find many examples of service oriented architecture.

Why don’t we see many mainstream business applications available under SOA frameworks? Perhaps the main reason is the cycle time for the development and implementation of new technologies. It may be easier to build and deploy and gain mindshare for new products that are exploring new needs. It is somewhat more difficult to build and deploy and gain mindshare for something that manages the general ledger. But, technology cycle time aside, moving to SOA for corporate business functions will create some new challenges for IT. And, ultimately, the key issue won’t be a technology issue but rather will be about application and data ownership. We will need to create new service agreements and frameworks to ensure that these SOA applications — which may have embedded services that aren’t run by your own IT shop, or even understood by people within your own organization — will continue to function and meet your business needs.

Cyberinfrastructure. The Alberta Cyberinfrastructure Task Force describes Cyberinfrastructure (CI) as the integration of Information Communications Technology infrastructure needed to support advanced, internationally competitive and groundbreaking research. They (which included me, by the way) go on to say that Cyberinfrastructure uses grid middleware and advanced research networks to integrate large numbers of distributed computers, data storage facilities, visualization tools, remote sensors and collaboration facilities. Is the concept completely new? Not really. Years ago I recall conversations that included many of those elements as we planned Super Computing initiatives that were focused on determining new models for distributed computer systems, improving the capacity and capability of storage infrastructure, visualizing scientific data and finding ways to enable many researchers to not only work on a shared set of hardware but also to enable new sharing of applications and research. We even had a decade or so after Super Computing where we retagged it as High Performance Computing — not only because we had decreased our aspirations of being “super” but also because a marked separation emerged between the High Performance Computing most facilities could offer and some of the amazing increases in capacity and capability being made available by select (mostly government) sites.

Perhaps a superficial read of the description of Cyberinfrastructure makes it sound like a rebranding of an existing product to make it sound exciting. Cyberinfrastructure, the New and Improved High Performance Computing! But digging deeper into the aspirations and requirements of Cyberinfrastructure shows it to be a new day for technology in support of research. The difference can be drawn from several of the words in the description — integrate, distributed, remote, collaboration — plus from one key word that didn’t make it into the above description — service. Cyberinfrastructure is not just about more computers, more storage and more data. It is a fundamental change. In some ways it is the Service Oriented Architecture of infrastructure. We should no longer care where the compute is located, storage doesn’t need to be in the same place as the compute, data is derived in near real-time from various sources, and the running of the infrastructure and the location of the associated support services and software is irrelevant to the customer. This is a radically different model from the creation of the Super Computing Centres of the past. In some ways we might think of this as the “Virtual Super Computer Centre”.

There are many technical challenges in building the Cyberinfrastructure. But, like SOA, I think the critical issues will be centred on developing the appropriate support and service models. Who makes the decisions about service levels and service availability? How do we utilize the infrastructure in an optimal way? How do we even understand the dependencies of services that span across multiple organizations?

That brings us to our third topic.

ITIL. Information Technology Infrastructure Library. This is one of the IT acronyms that doesn’t gain any clarity by expanding out the words. It refers to IT creating coherent, correct and repeatable processes. ITIL will be the unlikely hero in our IT Groundhog Day scenario.

ITIL has roots in the 1980s. IBM developed various Systems Management concepts and published a set of books. These concepts weren’t about technical implementation, but rather about the management of IT and, over time, ITIL was created from these concepts. Via a long and rocky path ITIL has evolved to become a significant force in the world of IT process management. I don’t want to oversell ITIL. It doesn’t actually provide much with respect to how you should manage IT processes, but it does provide a strong framework for what you need to do. There will be more discussion about ITIL in future blogs!

So, to tie all this back to Groundhog Day, today’s key IT issues share some aspects with yesterday’s, but changes like SOA and Cyberinfrastructure will make tomorrow remarkably different from yesterday. Our new day in IT will contain complexities and dependencies we couldn’t imagine a few years ago. Much of what you rely on to run your business won’t be directly managed and controlled by your local IT shop. The only way to ensure the new IT meets your needs will be to make significant headway in both how we manage our processes and our ability to focus on supporting the customer.

IT organizations that don’t understand and respond to these changes will find that when they wake up on Groundhog Day they have become irrelevant.

Happy Groundhog Day!