Thursday, April 23, 2009

Open Source Web Store for System i

I've been working with a user on replacing their current web store with one that actually integrates with their System i POS system. Their old one printed off an order form; someone had to pick up the printer output, manually check inventory, grab the goods, complete the order, and then notify the customer. If anything went wrong, they had to notify the customer of that, too. It was all asynchronous, which is not what most customers expect from a web transaction.

This is not a new task for us. We've done this for other customers in the past. You typically need to access two things: data and logic. Raw data is usually not too hard to get at; logic is a bit trickier. Our customer was price conscious (who isn't, these days), so we looked at ways to reduce costs for them.

One way we did this was to go with a lot of open source software.

osCommerce is an open source web store product that uses MySQL as its database. We've found a way to hook the calls to the DB and, where relevant, pull/push data from/to the POS system through PHP web service calls.

In some cases it's data that we process directly, such as inventory levels for product availability. But in other cases we provide a web services layer that calls directly into RPG programs, allowing the POS to process order completion, including credit card validation and all that fun stuff.
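
To make this concrete, here's a rough sketch of the kind of PHP glue involved on the store side. The endpoint URLs, field names and JSON payloads are invented for illustration (the real services are built with our FusionWare tooling and could just as easily be SOAP); the point is simply that the store page waits for a live answer from the POS.

<?php
// Hypothetical sketch only: the URLs, field names and payloads below are
// invented for illustration, not the customer's actual services.

// Ask the POS system for live inventory before the store shows availability.
function pos_inventory_level($sku)
{
    $ch = curl_init('https://pos.example.com/services/inventory?sku=' . urlencode($sku));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    $body = curl_exec($ch);
    curl_close($ch);

    if ($body === false) {
        return null;                    // POS unreachable; caller decides how to degrade
    }
    $result = json_decode($body, true);
    return isset($result['on_hand']) ? (int) $result['on_hand'] : null;
}

// Hand a completed cart to the POS for order completion (credit card
// validation, order numbering, etc.) instead of printing an order form.
function pos_complete_order($order)
{
    $ch = curl_init('https://pos.example.com/services/orders');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($order));
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    curl_close($ch);

    return ($body === false) ? array('status' => 'error') : json_decode($body, true);
}
?>

The win over the old print-and-phone workflow is that the shopper gets a synchronous answer: the page either confirms the order with the number the POS assigned, or reports the problem right away.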

We have an amazingly simple process for creating a web service that calls an RPG program, using the FusionWare Integration Server Designer.

The customer is installing osCommerce, on top of Apache and PHP, on top of Linux, on commodity hardware. They are running the FusionWare Server on additional commodity hardware, which keeps the load off the System i (we could run there, but most System i machines are already running at close to full load). The end cost is a fraction of what IBM quoted them for a WebSphere and Global Services-based solution.

We've been doing this kind of data/logic integration for web sites since about 1995, when we had two customers start using web servers (one used Netscape, one used IIS) to put up web stores. These were custom jobs. They came up quickly with minimal functionality, then more was added over time. With products like osCommerce, you can quickly and easily bring up a full-featured web store, saving you the effort and pain of gradually creating this presence and getting it right. Now, with FusionWare, you can bring the freedom of open-source osCommerce to your System i POS system.

For anyone who is going, we'll be at the COMMON trade show next week in Reno. Booth number 214.

Tuesday, April 21, 2009

A Little Birdy Told Me

I've just discovered another use for Twitter.

You tweet a question about something you're researching, and almost as fast as Google comes back with an answer to your query, you get an answer from someone in the know.

How does this work?

Well, as near as I understand it, when I put my question in, it got indexed by Twitter.

Now, either the person at the other end did a Twitter search, or perhaps there is some kind of software out there that will notify you if a tag term is used somewhere.

In any case, I'm blown away with how quickly the response comes back!

Thursday, April 16, 2009

To Tweet or Not to Tweet

I've been on Twitter for over a month now, and I think I'm only now starting to "get" it. At first I was lurking, looking, and seldom tweeting, but now I'm starting to get an idea of what I can do with it and what it can do for me.

Interestingly enough, this is another case of moving from somewhere to nowhere. Originally, all business communications happened by paper or fax, using company letterhead, dates and signatures. Then came email, and communication became a bit more "virtual", but still went out through a corporate email server, usually backed up by information on a corporate portal.

But now, savvy companies have figured out that Facebook, Twitter, YouTube and a host of other social networking sites can drive significant interest, and ultimately business, to them. The interesting thing is that now your corporate communication is no longer controlled by a corporate server. While you can block a Twitter account from following you, and you can refuse to accept a Facebook invite, at the volumes one hopes to get from these sites there is simply no way to do this practically.

Setting up these sites and getting started using them can be done very quickly, and with minimal cost, yet with huge benefits, so many companies are jumping on the bandwagon. In some cases, they jump on it because their competitors are there and they have to, in order to survive. But then, there are risks involved in using these things.

Where the company originally controlled all communication, and who we did it with, we now find that our communications are controlled by a series of other sites, and that our customers consist of anyone who can find us by any means and chooses to subscribe to our updates. We've lost control of who we do business with, we've lost control of our marketing medium, but we've gained infinitely more customers as a result. No longer are our customers people that we approach. They are people who approach us!

Most IT people still don't get it. Kids are getting it. They don't really think about it, they just use it, and the common uses become obvious. Business people really hate not having control, so this type of free-wheeling business-by-experimentation model really scares them, or simply pisses them off. Many won't embrace it - to their disadvantage, I might add. Those who do embrace it will find that it opens new opportunities to them. I can't really say what these are, nor can I predict which ones will work and which ones will turn out to be insecure or unsafe, but for those willing to experiment, the opportunities await.

I suspect that, at first, the push to use these technologies is about as likely to come from management as from IT people, and just as likely to be blocked by either. Misinformation, concern over the risk of the unknown, a desire to micromanage everything in fine detail, and people being protective of their job security are all factors that can inhibit the move to social networking.

So, my questions are:
  • Are you willing to yield some control, in favor of reach and savings?
  • Are you willing to experiment with new marketing opportunities?
  • Are you stuck in the past, or ready to embrace the future?
As for me, I've begun to tweet, and you can't stop me now!

From Firmware to Nowhere...

When I first started with computers, I was working on Microdata systems running an O/S called Reality that supported 16 users on 64K of core memory. This worked because the system used a virtual memory model and a virtual machine model, but also because most of the operating system was burned into a memory chip, called a firmware chip. This chip had its contents burned in at the factory, and there was no way to change them once it was produced. To upgrade firmware, a technician had to show up, turn off the computer, open its refrigerator-sized case, and replace the physical chip. I think they actually had to use a soldering gun to do this.

Not too far into my experience, there was a big change. They came out with a new technology that the technicians labeled "Mushware". It was really the same thing, but they could update the contents of the chip without having to create a new chip. This was similar to flash ROM, probably the precursor to it, or a variant of it. You might think of this as having a virtual O/S. That's how it felt at the time.

Over time, more and more of the operating system moved out of these specialized chips and into regular RAM, which had to be bootstrapped.

When IBM PCs came out, with MS-DOS, computer systems for a time seemed to move away from virtual memory and virtual code, but with the advent of Windows NT and Java, virtualization started a comeback. More recently, products like VMware, Xen and Microsoft Virtual PC/Virtual Server have provided further options for virtualization. And now we have Cloud Computing.

We can see that some of the trends that have recently been taking IT by storm involve taking a machine that used to require a physical host, and moving it into a virtual environment, where you could actually move it around, clone it, save a snapshot, and do lots of powerful things. Of course, it became possible to wind up with so many systems that managing them, finding them, or even knowing they existed became next to impossible. There are always trade-offs, but in general, virtualization has been a good thing.

Along the same lines, we have virtualization of applications.

At first, an application had to live on an O/S, and that meant hardware. In fact, the cost of hardware was such a large component of an application that, in the early days, customers often bought the hardware first, then found someone to write the application for it. I remember forward-thinking salespeople trying to convince customers to think about the application first, then work backward to the best system to host it on. At the time, this was novel thinking!

But now, with cloud computing, your application can live anywhere. In fact, it may be distributed across multiple systems in data centers around the globe. These systems probably implement virtual machines that provide a slice of your functionality, and they use multi-tenant applications that allow multiple customers to share a virtual machine instance safely and securely. You really don't know, and probably don't care for the most part, what hardware this resides on. Your focus is the application: Its functionality, availability, performance, reach, and ultimately its value to you.

The other nice thing is that you don't have to provision a system, or possibly even a data center, in order to bring up an application. This offers huge cost savings and can speed deployment dramatically.

Of course, your data and applications may also disappear if the vendor goes out of business, and if they do, you don't really have any recourse. There are real risks with a new technology like this. One way to mitigate those risks is to stay with larger vendors.

As one example of how risk can impact the adoption of new technology: Canadian government agencies cannot put private data about Canadian citizens into the cloud, because if that data winds up on computers in foreign jurisdictions, those foreign entities (notably the US) may seize it under laws that violate Canadian privacy laws. So, no cloud computing, for now, for Canadian government agencies.

So, there are lots of potential benefits to virtualization and cloud computing, but there are also risks. The benefits will belong to those willing to take thoughtful risks. I believe that many companies are unaware of the costs they could be saving. Others are not realizing the benefits they should because they don't have proper control over their virtualization initiatives (or are exercising overly tight control in the wrong places).

So, what is your company doing with virtualization, cloud computing and SaaS? Your answer may range from "Nothing" or "Watching and Waiting" to "Trailblazing".

Monday, April 6, 2009

IBM Optical Data Conversion (EBCDIC)

Here at FusionWare we love a good challenge involving disparate systems, data and business logic. We've been doing this for so long that very few things can stump us.

Recently we had a customer with a large amount of data on an AS/400 on optical drives. They had an application on the AS/400 that would let them read and process this data so that they could view historical information. They had migrated their application to another system, but compliance regulations (and collections) required them to retain access to their historical data.

Unfortunately, this meant that they kept paying maintenance on the AS/400. Worse, their hardware was old enough that it was about to drop off IBM maintenance, so they were also facing a hardware upgrade.

The customer visited trade shows like COMMON and contacted all sorts of companies, but everyone they spoke to said "No, we can't migrate this data off - you're stuck with the AS/400". Their own, incredibly creative efforts were gradually getting them there, but they simply didn't have the time to do all the conversions themselves, and really needed tooling to make it efficient.

Finally, the customer found us. We discussed what they were trying to do and provided them with a proposal and estimate covering the following:

  • Conversion of their historical data to a SQL Database.
  • A web-based application to access this data, providing at least the same functionality as the current AS/400-based lookup program.
  • Good performance when accessing the database.

In consultation with the customer, we decided to use SQL Server for the converted data (any SQL database that could handle the volume would have done) and IIS with ASP.NET to rapidly create the web GUI for accessing the new data store. Again, other web options could have been used. We work with our customers to find the solution that will give the best results and meet their corporate standards.

In doing the work, there were a number of interesting challenges that we had to work through:

  • The optical data was in EBCDIC format. We needed special tools to handle the conversion from EBCDIC to ASCII, including packed decimal and other special formats (there's a small sketch of these two pieces after this list).
  • The optical data was huge: over 60 GB of raw data. We needed a target database that could handle the volume, and indexing was critical to ensure reasonable performance for the resulting application.
  • The optical files consisted of three parts: header, metadata and data. Over time, the format of the data written changed, so that there were six variations of metadata for one file type.
  • Occasionally, garbage files were written to the optical drive and/or garbage data was written to some of the files. The only way to know this was to process the file and detect the problem when processing the converted data.
  • Because the data set was so large, it turned out that attempts to anticipate data problems by sampling data were largely unsuccessful. You really had to go for it and deal with anomalies as you encountered them. A good restart approach was critical.
  • Some critical data was embedded in free-format text fields whose placement changed over time, so a fairly involved algorithm had to be devised to extract it reliably.
  • There were several collections of data. We started with one of the better defined, but larger, sets. One objective was to come up with reusable components and code that would make subsequent collections easier to work with.
  • Security and privacy. The customer's data included data with privacy concerns, so we transferred it between the customer's office and ours using Maxtor Black Armor secure USB drives (http://www.maxtor.com/en/hard-drive-backup/external-drives/maxtor-blackarmor.html). We did our development and testing work locally with all the data (including the SQL Server database) on our own Black Armor drives, ensuring maximum security and protection of the customer's data.
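
Since EBCDIC and packed decimal are the parts that tend to trip people up, here's a minimal sketch of those two pieces from the first bullet above, assuming IBM code page 037 for the character data. It's purely illustrative; the actual conversion was driven by VEDIT layout files and the FusionWare Integration Server, as described below.

<?php
// Convert an EBCDIC character field to ASCII. iconv's name for code page 037
// varies by platform ("IBM037", "IBM-037", "CP037"), so adjust as needed.
function ebcdic_to_ascii($ebcdic)
{
    $ascii = @iconv('IBM037', 'ASCII//TRANSLIT', $ebcdic);
    return ($ascii === false) ? '' : rtrim($ascii);
}

// Decode an IBM packed-decimal (COMP-3) field: two digits per byte, except the
// last byte, whose low nibble is the sign (0xD = negative, 0xC/0xF = positive).
function unpack_packed_decimal($bytes, $scale = 0)
{
    $digits = '';
    $sign   = '';
    $last   = strlen($bytes) - 1;

    for ($i = 0; $i <= $last; $i++) {
        $b = ord($bytes[$i]);
        if ($i < $last) {
            $digits .= (($b >> 4) & 0x0F) . ($b & 0x0F);
        } else {
            $digits .= (($b >> 4) & 0x0F);
            $sign    = (($b & 0x0F) === 0x0D) ? '-' : '';
        }
    }

    if ($scale > 0) {
        $digits = str_pad($digits, $scale + 1, '0', STR_PAD_LEFT);
        $digits = substr($digits, 0, -$scale) . '.' . substr($digits, -$scale);
    }
    $digits = ltrim($digits, '0');
    if ($digits === '' || $digits[0] === '.') {
        $digits = '0' . $digits;
    }
    return $sign . $digits;
}

// Example: the three bytes 0x12 0x34 0x5D with two decimal places decode to -123.45
// echo unpack_packed_decimal("\x12\x34\x5D", 2);
?>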

The solution involved a number of tools and steps:

First, we used a product called VEDIT from Greenview Data, Inc. (http://www.vedit.com/), including their Level 2 EBCDIC conversion tools, to facilitate the conversion. This product allows you to inspect the data and view it in both ASCII-converted and hex mode on the same screen (split window). It also supports a macro mode so you can automate operations from a command line. VEDIT uses something called a layout file to do its EBCDIC conversion.

We used the FusionWare Integration Server (our own product) to orchestrate the steps, transfer the resulting ASCII-delimited files, run SQL DML scripts, and create layout files and SQL DDL scripts.

Because formats changed over time, we had to do the conversion in several steps:

The first step was a preprocess phase. We started by breaking the EBCDIC files up into header, metadata and data portions. Then we processed the metadata, and used XSLT to create layout and SQL DDL scripts. We had to associate each converted file with the appropriate layout files. When this pass was done, we had numerous variations of both the SQL DDL scripts and the layout files.
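
To illustrate the metadata-to-DDL idea (the real layout files and DDL scripts were generated with XSLT inside the FusionWare Integration Server), here's a sketch with invented field names and a deliberately simplified type mapping:

<?php
// Turn a parsed metadata layout into a SQL Server CREATE TABLE statement.
// The field names and type mapping are hypothetical, for illustration only.
function ddl_from_layout($table, $fields)
{
    $columns = array();
    foreach ($fields as $f) {
        switch ($f['type']) {
            case 'packed':   // packed decimal -> DECIMAL(precision, scale)
                $columns[] = sprintf('[%s] DECIMAL(%d,%d)', $f['name'], $f['digits'], $f['scale']);
                break;
            case 'date':
                $columns[] = sprintf('[%s] DATETIME', $f['name']);
                break;
            default:         // character data
                $columns[] = sprintf('[%s] VARCHAR(%d)', $f['name'], $f['length']);
        }
    }
    return sprintf("CREATE TABLE [%s] (\n    %s\n);", $table, implode(",\n    ", $columns));
}

// One (hypothetical) layout variation might produce:
echo ddl_from_layout('Statement_V3', array(
    array('name' => 'AccountNo', 'type' => 'char',   'length' => 10),
    array('name' => 'StmtDate',  'type' => 'date'),
    array('name' => 'Balance',   'type' => 'packed', 'digits' => 11, 'scale' => 2),
));
?>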

We used these initial steps to create the SQL Server tables and to build the application for viewing the data. This was an ASP.NET application, accessed through a browser, using Windows authentication and role-based access.

Then we started the conversion itself. The conversion process had to detect which variation of the layout file to use to create the ASCII comma-delimited data. Because of some data issues, we also had to add a data-cleansing step for some of the files. Then we transferred the data into the SQL Server tables.
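
Here's a rough sketch of the shape of that loop, assuming a hypothetical version field in the metadata identifies the layout variation (the per-file callback would call pick_layout, then cleanse, convert and load the file), with a plain checkpoint file providing the restart approach mentioned earlier. The real process was orchestrated by the FusionWare Integration Server, not hand-written PHP.

<?php
// Pick the layout variation that matches this file's metadata.
// "format_version" is an assumed signature field, for illustration.
function pick_layout($metadata, $layouts)
{
    $version = isset($metadata['format_version']) ? $metadata['format_version'] : 'v1';
    if (!isset($layouts[$version])) {
        throw new RuntimeException('Unknown layout variation: ' . $version);
    }
    return $layouts[$version];
}

// Convert every file in a collection, recording progress in a checkpoint file
// so that a failed run can be restarted without redoing completed files.
function convert_collection($files, $convertOne, $checkpoint)
{
    $done = file_exists($checkpoint)
        ? file($checkpoint, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES)
        : array();

    foreach ($files as $file) {
        if (in_array($file, $done, true)) {
            continue;                                // already loaded in a previous run
        }
        call_user_func($convertOne, $file);          // split, cleanse, convert to CSV, bulk load
        file_put_contents($checkpoint, $file . PHP_EOL, FILE_APPEND);
    }
}
?>

The idea is simply that a garbage file stops the run; you fix or exclude it, then rerun without redoing the files that already loaded cleanly.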

Actually, the above three steps were iterative. As we got to the data phase we discovered data issues that required us to restart the final phase, and in some cases redo parts of the preprocessing phase as well. Some of this required changes to the SQL tables, as we discovered that at one point the original application had added new fields to the database.

The application that we built for accessing their data provided them with greater searchability than their original AS/400 application, and performance was not only better than accessing the AS/400 Optical data, but it was actually faster than accessing historical data on their new system.

The end result of this process was a set of reusable code components that can be applied to the additional collections.

The customer now has their initial collection (statements) sitting in a SQL Server database (about half a billion rows worth, taking up about 130 GB of data, index and tempdb space) and a process and components that they can repurpose to convert their other collections of historical data. Once we complete the conversion of their other collections, they will be able to decommission the old AS/400 system with its optical drives, while still meeting their legal obligations.