Year 2000 Summit Meeting, June 26-28, 1996 -- Report


This is a "living" file in that I'm trying to capture my own thoughts and impressions from this conference. I have no doubt that I learned a number of useful and interesting things at the conference and that some of them should be shared. I'll keep adding to it, especially new links as they become available.

Contacts and URL's

I met a number of people at the conference, including an ex-CAS member who has worked on Rigi as part of the Program Understanding project, Scott Tilley. Scott now works for the Software Engineering Institute at Carnegie Mellon University. The link to the Reengineering Center is particularly relevant.

Another interesting person was Jeffrey Martin from Matridigm Corporation. [Hopefully, they'll change their name to something more palatable eventually.] Jeff is the Director of Marketing and Sales and is currently on leave from IBM where he was in Marketing until he went back for his MBA. The interesting part is that Matridigm is run by ex-IBM Fellow Jim Brady and they are claiming to be able to extract all the relevant facts from a module in days or weeks. The part that they seem to be missing is the visualization and presentation stuff. He seemed interested in the things we're working on. I suggested that he look at Hausi's Rigi pages and I've got Jeff's card. As far as I can tell, Matridigm doesn't have a web presence but you can email Jeff as JAMartin@MatriDigmUSA.com.

A third person that I talked with at length was Gary McKinney from Interactive Development Environments. They produce a product/service called "Software Through Pictures". As far as I could tell from their demos, they're showing Rigi-like images based on similar ideas. Their website doesn't give much of the flavour of what they do but I am hoping to invite them to CASCON later this year.

Vendors Not Otherwise in This Report

Presentation Reports

The following is a partial list of presentations I attended and things I found interesting. Some of the links that I came up with are included and a few of them are highlighted since I think they are particularly interesting.

  1. The Re-engineers Full-Employment Act of 1999
  2. An Overview of Available Year 2000 Tools and Services
  3. Don't Say You Weren't Warned
  4. Tag, You're Responsible for Year 2000!
  5. USAF's Strategy for the Year 2000 Problem
  6. Year 2000 Compliance of Commercial Software Products
  7. Strategy Workshop
  8. Governments and the Year 2000
  9. No Surprises: Better than 90% Solution
  10. State of the Year 2000 Solutions
  11. Year 2000 Users' Groups
  12. Get a Process, Then Get Going

  1. The Re-engineers Full-Employment Act of 1999

    The initial talk was presented by the Reengineering Forum conference organizer, Elliot Chikofsky. It seemed to be mostly a justification of why we were there.

    One very interesting thing they did to highlight the problem was a bit of theatre built around a 500-piece jigsaw puzzle. It went more or less like this:

    This is a nice metaphor for the way the Year 2000 problem is viewed. People will not see that they have a part of the problem or they'll abdicate their responsibility or they'll focus on little else or whatever. In the end, almost all of the job gets done anyway.

    As a piece of theatre, this worked out very well. Over the course of the morning breaks, the puzzle started to take shape. By the end of the day, most of the puzzle was built and, by the end of the conference, all but 13 of the 500 pieces were in place.

    Other than the cute demo, Elliot pointed out that the Year 2000 problem took on a much higher profile after the Congressional Hearings 3 months ago. He predicts that there is going to be a "feeding frenzy" late this year as the 1997/8 budgets get allocated. This, in turn, will lead to more "snake oil" solutions, including old CASE tools being dusted off and sold as "Year 2000 Solutions".

    There seemed to be a lot of agreement from the audience over this point and over the subsequent point that a lot of contracts will be signed early in 1997 which may effectively lock up the talents of the best people, leaving only "the fringe" who try bandaid-style solutions. The conclusion seemed to be that society will need a plan in place to handle the failures.

    There were a number of points raised about strategies but they mostly came down to:

    The conclusion is that there will be a "Lawyer's Full Employment Act of 2000".

  2. An Overview of Available Year 2000 Tools and Services

    It appears that the USAF is far ahead of most other groups in dealing with the upcoming crisis. Among other things, they offer a Data Base of Y2K Tools. This database has been further augmented and will eventually be part of the information on Year 2000 offered by the Mitre Group. As of the conference, Mitre projected that their page would be available late this month.

    The USAF presentation was one of the best and was an excellent talk to schedule at the beginning of the conference. The topic was the USAF 5-Phase approach to the Year 2000 problems and specifically how they find, group and evaluate tools. The five phases are:

    1. Awareness - if your organization doesn't know it has a problem, you won't get resources to fix it.
    2. Assessment - how big is the problem and how can you start to quantify it? Tools in this group are mostly viewed as:
       - configuration management
       - impact analysis
       - simulators (pretend the date is Jan. 1, 2000)
       - reverse engineering (eliminate dead code; program understanding; derive source code from object)
       - code-slicers (forward slicing for everything affected by a criterion; backward slicing for everything that affects a criterion; static slicing of the source code into subsystems, modules, etc.; dynamic slicing of where data is pulled in during execution)
       - data name rationalizers (if the name seems like it might be a date, flag it -- based on AI inference)
    3. Renovation - automate as much as possible to change 2-digit to 4-digit years but recognize that the Gartner Group estimates you can only get a 50% speed up.
    4. Validation - ensure critical functionality, testing tools, simulators.
    5. Implementation - get the fixed code into production.
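
    The "data name rationalizer" idea above is easy to sketch. This is purely an illustration of the concept, not any vendor's tool; real products reportedly use AI inference rather than a simple pattern list, and the field names below are made up:

```python
import re

# Hypothetical patterns a "data name rationalizer" might use: flag any
# identifier whose name hints that it may hold a date.
DATEISH = re.compile(r"date|year|century|birth|expir|yy|yr", re.IGNORECASE)

def flag_date_names(identifiers):
    """Return the identifiers whose names suggest they may hold dates."""
    return [name for name in identifiers if DATEISH.search(name)]

fields = ["CUST-ID", "EXPIRY-YY", "ORDER-DATE", "QTY-ON-HAND", "BIRTH-YR"]
print(flag_date_names(fields))  # ['EXPIRY-YY', 'ORDER-DATE', 'BIRTH-YR']
```

    The hard part in practice is the false positives and negatives (e.g., a field called "DT" or "EFF"), which is presumably where the AI inference comes in.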

    There are a number of classification strategies but the best bet is to check out the data base of Y2K Tools.

    Their plan is to focus on some specific tools starting this month. They figure that they can evaluate a tool with two person-months of effort and they have two people working on a total of 3 tools, 2 for correction and 1 for validation.

    During the follow-on discussion, they mentioned that they now include a Year 2000 Warranty in every contract that they're signing. Apparently AT&T is doing the same thing. Basically, a supplier has to guarantee that the product will not fail in the year 2000. One observation is that older systems are going to be a hot area for replacement. The idea of a repository for all of this information (warranted products, data base of tools, etc.) was well received.

  3. Don't Say You Weren't Warned

    This presentation featured Bill Goodwin, publisher of "Tick,Tick,Tick...", a newsletter devoted to Y2K. Based on what I saw and Bill's talk, I'd recommend a subscription to this! The talk started with Bill reminiscing about his first experiences on Wall Street, with people sleeping at desks after the crash of 1929.

    Most of the talk focussed on Bill's experiences from the stock market crash and from a paperwork crunch that hit in the 1940's. He made it relevant by pointing out the connection between Wall Street's problems then and now.

    One story he told was about being a junior member of a stock brokerage in the 1940's. They were all taken "upstairs", given 20 minutes of training, and set to work balancing the books for the day's trading. 12 hours a day, 6 days a week. Unbalanced books meant a liability for the brokerage, and these liabilities added up to enough that some major brokerages went under. The reason there were so many problems was put down to Infoglut: too many trades done too fast, with some resulting bad data. For details, he recommended "Wall Street Security Risk" by Herb Barouk.

    One legacy from the 40's was the conversion from 3-day to 5-day float (the time between the purchase of stock and when certificates had to be delivered). In 1995, this went back to 3-day because so much was automated.

    Bill's take is that Wall Street will collapse if they don't handle Y2K and that, while they are working on the problem, they aren't working on it fast enough. One key problem is the large number of mergers in the financial community in the 1990's. Chase Manhattan is still digesting Chemical Bank, and the impression I got from the talk and discussion is that the data centres are fighting over formats and conversions and there isn't much energy left for Y2K.

    The insurance business is doing better, mostly because they've been forced to face the issues over 35-year annuities (starting in 1965!). However, they still don't do well on the day-to-day stuff like writing new business, billing, etc. In one example, he cited a contract for a subset of functions totalling 29MLOC. Because they needed to have a budget *fast*, they took only 6 weeks to do the impact analysis and required companies to bid for this work. It ended up taking 6 weeks just to download the information from the various file systems. When they finally looked, 50% of the programs had hard-coded "19" for the century. Another insurance broker planned a complete rewrite of all their code as part of a move to client/server. They added Y2K to the workload and started in 1994. Currently, this effort is deemed a big failure and the company is now considered to be at significant risk.

    As part of a theme for the conference, Bill said that if a company's annual report does not contain a special line item for Y2K, he'll be buying PUT-CALLS on that company.

    It isn't all doom-and-gloom. The Chicago Futures Market started its Y2K work in 1989; they run a test every 3rd Sunday of each month and are on track to finish in Q4 1997. The NYC Transit Authority has finished their inventory (it took 18 months). The U.S. Social Security agency has recognized the problem and acknowledged that even a few days of delay would affect the entire economy. During the Congressional Hearings, they estimated 300 man-years of effort, but recent publications have jumped this to 400 man-years as of last month. Since they plan to finish in Q1 1999, they must be planning to put nearly 200 people to work on the conversion.

  4. Tag, You're Responsible for Year 2000!

    Michael Carver from EDS/Technology Architecture gave this presentation. The theme was to recognize the opportunities and to gather the information. Once again, the process was described as starting with awareness, moving to assessment, and leading to an informed decision based on the assessment.

    The most relevant point he made is that he feels "Legacy to Object Oriented" tools do not form a valid subset of Y2K tools. The focus is wrong. This sentiment was echoed by many people who feel (some from hard experience) that Y2K solutions must be kept separate from any other development, maintenance or conversion work. A few people had stories of failures because too many things were being done at once.

    I noticed that the auditors were taking notes about this and I suspect that the notes were summed up by one who said (the next day) that they would be red-flagging any financial audit result where Y2K was claimed as part of "regular, on-going maintenance".

  5. USAF's Strategy for the Year 2000 Problem

    The USAF was a major participant at this conference and, frankly, they gave the best talks. I feel sorry for some of those people, who are now being put under the DoD Y2K Committee, which is itself part of another committee, which is part of the main committee in charge of Y2K for the U.S. federal government. This group has been doing a lot of work and apparently doing it quite well. But now that Congress has heard about it, there are three new layers of politically motivated committees telling them how to do their job and, although it was not said aloud, the people who have done such a good job are clearly afraid of being pushed aside and ignored. This was one of the scariest things I took away from the conference.

    Tom Ashton (the speaker), works for the USAF Software Technology Support Center, like some of the previous speakers. He separated USAF efforts into "Old Testament" (prior to Oct '95) and "New Testament" (Oct '95 onwards). The first six months of real effort (Feb-Sep '95) were not successful but in October they got recognition for their work and funding after which they organized all USAF efforts and built an organization.

    The history is interesting. Apparently the date standards were initiated by a (since retired) Lt. General and a civilian working with the air force. Because both were highly respected, their policy letter was not totally ignored. In the letter (Sep 8, 1994), they set out a definition of Y2K compliance, including a requirement for 4-digit years.

    Not surprisingly, the first result was that people started looking for loopholes instead of solutions. This led to an 8-month effort to close all the loopholes. The ultimate hole-closer is the policy that, if you choose not to use 4-digit years, then you are responsible for all costs associated with interfacing your code and data with compliant code and data.

    USAF plans talk about "Short Term" and "Long Term" levels of compliance. Short term is also known as "Survival" and requires each Major Command (MAJCOM) to be able to manipulate, store and translate correctly all dates. Long term refers to the time after January 1, 2000 and requires complete adaptation of all systems to 8-digit dates.

    The plan is to have a central group in the air force that is responsible for Y2K. This group formulates general policy and provides a package (still incomplete) of guidance for MAJCOM's. The central group also sets the broad timelines for compliance but leaves the details to the MAJCOM's. Each MAJCOM has its own group that takes guidance and formulates it into plans of action. Currently these groups are performing complete assessments for their MAJCOM.

    Having surveyed approximately 20% of the Air Force Systems, they found 225 systems with 74MLOC. These are just the management information systems (MIS's) and not the embedded or weapons systems. This translates to a workload of approximately 370MLOC. Based on studies, they estimate 80% of the code is actually executed and that it will cost $1.70/ELOC (executed line of code). This translates to $500M which is not in budget!
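
    The extrapolation can be checked with a little arithmetic (all figures as reported in the talk; the arithmetic itself is mine):

```python
# Back-of-envelope check of the USAF numbers.
surveyed_mloc = 74                     # MLOC found in ~20% of systems surveyed
total_mloc = surveyed_mloc / 0.20      # ~370 MLOC across the MIS portfolio
executed_loc = total_mloc * 1e6 * 0.80 # est. 80% of code actually executed
cost = executed_loc * 1.70             # $1.70 per executed line of code
print(f"${cost / 1e6:.0f}M")           # $503M -- close to the quoted ~$500M
```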

    Cost estimates are that Awareness and Assessment will account for 40% of the budget, "Renovation" (coding) will cost 20% and the Validation and Implementation phases will add up to the final 40%. The plan is to reserve all of 1999 for the final two phases which requires all coding to be complete by the end of 1998.

    Each MAJCOM is required to have an assessment of their total exposure by July 5, 1996. To accommodate the timeframe, the database requirements were reduced below the original expectations, but Tom feels that 07/05 is doable. (Since I'm writing this report on July 4, I expect he'll know better soon.)

    Tom showed an MS-Project style chart with timelines out to January 1, 2000. Intriguingly, the chart did not match the statements he'd made earlier but I didn't have a chance to ask him a question about it then or later. I suspect that a lot of the differences are related to his problems getting funding (no committed dollars) and to the problems they are having selling MAJCOM's.

  6. Year 2000 Compliance of Commercial Software Products

    This talk was presented by representatives of Concept Five Technologies, a wholly owned subsidiary of Mitretek. It was one of the few talks that I found unilluminating: mostly a recital of known facts and what I felt was an incoherent approach to thinking about COTS.

    With so many companies buying software from vendors or licensing products such as MS-Excel, this seems like the sort of risk that needs complete evaluation. Some well-known problems with Excel's handling of dates were trotted out but no suggestions were provided for dealing with them.
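
    The flavour of these COTS date problems is easy to demonstrate: a 2-digit year is inherently ambiguous, and every product resolves the ambiguity with its own pivot window. A small sketch using Python's date parser as a stand-in for any COTS product (the specific window shown is Python's, not Excel's):

```python
from datetime import datetime

# Python's %y maps 00-68 to 2000-2068 and 69-99 to 1969-1999 (the POSIX
# convention). A spreadsheet or other COTS product may choose differently,
# which is exactly why compliance can't be a simple Yes/No.
for text in ("01/01/00", "01/01/69", "01/01/99"):
    print(text, "->", datetime.strptime(text, "%m/%d/%y").year)
# 01/01/00 -> 2000
# 01/01/69 -> 1969
# 01/01/99 -> 1999
```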

    The conclusion of the presentation is that COTS products don't yield simple Yes/No answers for compliance. Rather, compliance issues must be reduced to risk evaluations (value of product and cost versus Y2K risk). The approach seemed to be to evaluate each COTS product yourself for your personal risk. To me, this seems like a massive waste of manpower since Excel is the same product coming out of a box in a bank or in a military office.

  7. Strategy Workshop

    There were three strategy workshops but they quickly became gab-fests with some anecdotes but lots of "me too" comments. The intended sessions were: "Can't Get Done in Time: How Much is Good Enough?", "Working with the Press on Year 2000", and "If Year 2000 Failures are Inevitable, Which Systems Must Society Concentrate On". In the end, I think they blurred into a single session that could have been titled "How Can We Make People Aware of the Year 2000 Problem".

    The sessions eventually yielded a short list of reasons why management appears to be ignoring the problem (thanks to Peter de Jager for getting people to focus).

    The first root cause appears to be a short-term focus of companies. Either they have other, more pressing goals to focus on (such as the problems merging data centres for Chase Manhattan and Chemical Banks) or there is more interest in keeping the bottom line up at year's end and Y2K conversion costs don't yield profits.

    The second most commonly cited cause is that there aren't any hard metrics for the problem. Outside of the oft-quoted $1.10-$1.70 per executed line of code (NOT per date reference!) and some vague idea of how many LOC might be in a company's portfolio, there isn't much. In fact, this may be the biggest surprise when bigger companies start to do their assessments. The USAF was apparently very surprised to find that they manage 350MLOC just for MIS operations.

    Another common reason is that Y2K doesn't cause any pain right now. You could paraphrase this as Y2K is just theoretical. This is starting to be overcome as more organizations start to run what-if tests. This appears to be the underlying reason for some people feeling that there will be a stampede to the altar of Year 2000 Quick Fixes at the end of this year -- some critical mass of reputable reports will be reached and suddenly everyone will try to act on Y2K. In turn, the fear of a stampede is suspected of holding up the Wall Street Journal coverage (either they're afraid of triggering the stampede or they're looking forward to blowing the clarion horn).

    The fourth reason the group settled on is that management feels that this is a technical issue that will be solved by technicians as part of the course of their regular work. Echoes of this were heard in comments that management feels that it is not related to business. Some people report that management basically says, "Don't bother me, just do it." but doesn't back the command with funding.

    If I had a nickel for everyone who trotted out variations on the five stages of dying (anger, denial, fear, bargaining, acceptance), I could have paid for the trip! The common conclusion is that management knows just enough to be scared and is currently in denial. There was also some concern expressed that the press (and hence the public) is about to start the process by getting angry at the computer programmers who caused the problem.

    The only "solution" to come out of the 4 hours of talking was a suggestion that each person at the conference write to their company and to every company where they have shares asking for an explanation of how the company plans to address the Y2K problem. Alternatively, tell a lawyer to point out to the directors of the company that they will be personally liable for failing to deal with the risk.

  8. Governments and the Year 2000

    This was one of two talks presented by Stan Price from Phoenix, Arizona. Stan is the chairman of the Arizona Millennium Group, and we talked a fair bit at lunch earlier in the day, so some of the anecdotes I'm including come from before or after this talk but are more relevant to the topic here.

    Phoenix got interested in Year 2000 problems in January 1995, when a ?senator? was sentenced to 5 years of probation and the end date (01/00) crashed the court system. I wasn't entirely clear that it was a senator, but it was obvious that whoever it was had enough media presence that the crash itself became a news item.

    A HUGE DISASTER LOOMS is the title of the third slide. From talking with Stan, I think that he personally feels the disaster may be inevitable, although professionally he is working to stop it and sounds optimistic. The disaster comes mostly from the cessation of most government services, including communications, for some period of time, and also from the costs of trying to prepare, of recovering where preparations failed, and of litigation due to failures. Stan didn't set specific timeframes, but he feels that the recovery will be long and expensive.

    Phoenix handles all airport services on a long-term contract basis. A few years ago, they started having all contracts terminate in 12/99 because the system couldn't handle dates beyond that. It still can't.

    The Phoenix traffic light system is a sophisticated collection of rules based on date, day of the week, holidays, time of day, etc. This system will fail after 01/01/00. The plan is to replace it within the next 18 months, on the theory that problems due to the hurried replacement will be smaller than problems due to failure or late implementation.

    The Phoenix 911 system (and probably most others in the world) logs only 2-digit years. There will be some liabilities accrued for incorrect or missing log entries. There are some known problems but the system hasn't been fully tested.
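
    The failure mode for 2-digit-year logs is easy to sketch: compared as strings (or numbers), "00" sorts before "99", so events from January 2000 appear older than events from December 1999. The YYMMDD-HHMM stamp format below is hypothetical, not the actual 911 system's:

```python
# Two log stamps, one from Dec 31 1999 and one from Jan 1 2000.
stamps = ["991231-2359", "000101-0001"]

# Sorting puts the year-2000 entry *first*, i.e. "before" 1999 -- exactly
# the kind of ordering error that corrupts incident timelines and audits.
print(sorted(stamps))  # ['000101-0001', '991231-2359']
```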

    Fire station locks have special controls for time-of-day, day-of-week, etc. These control access to the equipment during times when most people don't need access. They have no idea what happens after 01/01/00 and 02/29/00, but conceivably doors could be locked, leaving equipment inaccessible, at the wrong times.

    Phoenix runs all sorts of systems, from DOS, Win3.11, Win/NT and Win95 PC's to IBM mainframes to IBM System 36 systems with key programs in RPG-II. Government systems tend to be very date intensive, particularly at the local government level. Over the years, hardware gets retired and replaced, but the software tends to be ported to the new platform without replacement. As a result, the software is a mess of code that serves no apparent purpose. By Stan's measurements, only 2-3% of the code is date-related, but that code affects 80-90% of the applications, which translates into about 70MLOC ($120M by Gartner Group estimates).
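
    For what it's worth, Stan's dollar figure is consistent with the Gartner per-line rate quoted elsewhere at the conference (the arithmetic here is mine, not from the talk):

```python
# ~70 MLOC of affected code at ~$1.70 per line of code.
cost = 70e6 * 1.70
print(f"${cost / 1e6:.0f}M")  # $119M -- in line with the ~$120M estimate
```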

    At the federal level, the government is counting this as a $30B problem. Social Security has been leading the way, starting their efforts in 1989. The contact name Stan mentioned was ?Judy Draper? (this was in the last session and I didn't have a chance to get the exact contact info). At the Congressional Hearings in April, Social Security reported that they have a long way to go.

    There is now a federal Interagency Council (see USAF notes above) leading the way. They apparently have required agencies to place Y2K requirements in budgets for FY1998 and many tools are listed under the GSA for Year 2000 Tools - a small grace making it possible for agencies to order the tools. The FY1998 date is far too late and is recognized as such. It is possible that agencies will be allowed to temporarily rechannel funds but only taking from existing funds which means ?temporary? cutbacks in programs.

    At the state level, the Virginia Data Centre reports that it is 60% complete, but only after working on the problem for 10-12 years! About 40% of the states report that they haven't started any Y2K projects, and several of the others have held only one conference to identify the problem. Nebraska just instituted a $0.02 tax on cigarettes to pay for their Y2K efforts (expected to raise about $30M).

    Things look poorest at the local level of government. Several larger cities are in the assessment phase and a few have made it past there into planning. Statistics are sparse.

    One special problem governments are finding is the "Design&Build Laws" which basically forbid anyone from both designing and building a solution. Thus, if you assess the problem you cannot even bid on fixing it. Also, almost every government agency is required by law to allow for competitive bidding which is very time consuming. Because of these two problems, and because there is a lot of work out there, most of the better solution providers are just ignoring the governmental problems for now, possibly waiting for the potentially more lucrative "build" or "test" phases of the work. Also, contractors can easily pay more salary than governments and can adjust quicker to the marketplace. With a feeling at the conference that programmer salaries will go through the roof starting at the end of this year, the government agencies are going to have to deal with second-rate or worse talents.

    Without access to first-tier solution providers, independent contractors are being considered. Stan reports that there are lots of names he hasn't seen after years of working in the area, which raises some concern for quality as well as the potential for fly-by-night operators. Doing the work in-house would be nice, but the salary demands may soon make this impossible; moreover, budget constraints mean that other essential projects will effectively be frozen for a few years.

    Some form of triage is going to be necessary. (This was another recurring theme at the conference.) Systems that can wait, will wait over the short term. If this is done, a detailed project plan and a very strong project manager are going to be essential.

    One thing Stan has done is to put all suppliers of packaged software on notice by sending them a letter referencing the product, asking specifically what they are doing about Y2K and when it will be done, pointing out that Phoenix views this as part of normal "maintenance" and warranty of fitness, and requiring that they reply affirmatively within a given timeframe for continuation of the contract.

  9. No Surprises: Better than 90% Solution

    Bill Scully from Viasoft presented a talk that started with the assertion that none of the first-tier vendors will gouge the market. The general view seems to be that this is a good way to cement client relations and that any additional costs will be directly related to the additional costs of doing business (such as the much-feared inflation of programmer salaries).

    Viasoft has conducted over 60 impact analysis studies in the past year, mostly dealing with IBM/MVS systems running in Fortune 1000 companies. Bill portrayed Viasoft as an "expert in legacy system maintenance and transition engineering". They've been in the business of building legacy system maintenance products for 15 years.

    The first step is to establish awareness. Apparently there is now so much work like this available that vendors will walk away from a job if establishing awareness takes too much effort: if the company doesn't want to believe there's a problem, just talk to someone else. Bill also advised that things go more easily if your external information sources are unimpeachable, such as Gartner reports, fact sheets from Andersen or IBM, IEEE and ACM publications, etc.

    There are a number of charts in the slides. One that saw a number of heads nodding in agreement was a table whose rows were labelled Finance, Government, Health Care, Insurance, Telecommunications, etc., and whose columns were labelled Program %, Data Items %, and Logical LOC %. The percentage of programs containing date-related code ranged from 61.4% in Telecommunications up to 93.8% in Retail; affected data items ranged from 0.7% in Utilities up to 2.4% in Manufacturing; and date-related logical LOC ranged from 1.0% in Utilities up to 4.6% in Insurance.

    Among a number of specific steps to take, he listed general costs. As I understood it, an impact study alone should cost $1-2M. Just creating an inventory takes anywhere from 1-4 days per application (where typical applications consist of 800 separate programs on average). He also strongly recommends an implementation "pilot" to validate the results before progressing to the full-scale implementation. Contingency planning is also important at this late date.

    The rest of the talk was an explanation of how Viasoft does its business, stripped of the interesting details and thus not of a lot more interest.

  10. State of the Year 2000 Solutions

    Peter de Jager is an interesting speaker who runs both a Year 2000 mailing list (which I follow) and a website devoted to Year 2000 issues. The website is pretty dynamic and I find it worth visiting every so often. Peter mentioned a planned revamping of the information late this summer.

    If we don't fix this, we are heading into the next great depression.

    I can't attribute this quote but it could easily have been the motto of the conference as a whole and it could have been a title for this talk.

    If we started now, we might somehow muddle through. However, 65% of businesses aren't working on the problem. That's the number Peter gives and it matches what Gartner is saying. Apparently both also agree that the real number is probably more like 85%, but nobody would believe that. Peter bases his estimate on his tour of duty as a speaker, where he usually asks for a show of hands and rarely sees even 10 in a room of 100 people.

    The GM brake plant in Dayton, Ohio went on strike and, within two weeks, this one small plant had shut down all of GM. GM (and the other auto manufacturers) are dependent on 100's or even 1000's of parts from third-party suppliers. What happens if even a few of these are unable to recover from Y2K problems for even a few weeks?

    Peter's talk is the source of my earlier note that the Wall Street Journal has done at least five major interviews and that none have resulted in an article. His opinion is that the WSJ may be afraid that they will cause a crash if they write the report.

    This is not a technical issue, this is a management one.

    The Newsweek article of June 24 quoted an IBM VP as saying that the benefits from 2-digit dates in the past more than compensate for the current costs. This is stupid because people won't be counting past benefits.

    Chase Manhattan (see above) has apparently given their technical support center a "blank cheque" to solve the problem. At least, that's what they have said to Peter, citing the fact that a few million dollars will save a business that deals with $2-3 trillion each day! They told him that they recognize this as a life-or-death matter for the company. I'm not phrasing this as statements of fact because I overheard a hallway conversation between people who should know the facts at that bank; they have talked to people who would know of such a deal, and it just isn't happening that way. In fact, Chase Manhattan is dealing with political infighting over who will run the data centre, and nobody is working on Y2K as a large-scale project.

    The one big hope is that a large, highly public failure is due soon. When it comes, the press will be all over it and it is important that anyone asked to quote make it clear that this is the result of a single company failing to deal with a well-known and well-understood problem. Highlight that responsible companies will already be working to avoid exactly this problem.

  21. Year 2000 Users' Groups
  22. This was the second talk presented by Stan Price, this time in his role as Chairman of the Arizona Millenium Group. This group manages to field monthly meetings with 30-35 people. This particular group does not allow "vendor" representatives although vendors are invited to speak regularly.

    In fact, since this group has representatives from 10 state agencies, 2 cities and 2 counties, vendor speaking slots are booked through the end of this year. The regularly scheduled large meetings are also attracting some press coverage, since 20 people getting together without a product to sell suggests there must be a real problem. This creates some synergy, as vendors are attracted by the press attention and become more open to talking.

    Stan did warn that some companies may not want their names associated with such a group since it might appear as if they have a problem that is not otherwise recognized. So a number of people attend as individuals and not as representatives of their companies.

    Stan's advice for getting a group started is to use vendors. Get their mailing lists, ask them to distribute your flyers as they make calls, etc. Also, work through the existing organizations such as DPMA, IEEE, etc. Don't neglect local papers and community calendars.

    One of the first meetings should feature a high-profile speaker (such as Peter de Jager); Stan's group grew roughly 50% after Peter spoke. Even after that growth spurt, Stan felt it took a full six months before he was no longer essential to the continued existence of the group.

    The rest of the talk was good advice for running any user group. If anything was key, it was the combination of two rules: members may never miss more than one meeting in a row, and vendors may not be represented.

  23. Get a Process, Then Get Going
  24. This presentation featured Ron Petrie, Vice President of Avatar Solutions, Inc. The goal was to define a process for dealing with Y2K. I left the talk feeling like that goal wasn't quite reached and I'm not sure that I believe everything Ron offered.

    The idea of having a process in place is a good one. And I certainly agree with the initial premise that existing software maintenance is driven by expediency and that unusual discipline will be required in order to solve the Y2K problem before 01/01/00. I also agreed with a lot of the "motherhood" statements (5% of a software portfolio is highly complex and represents 95% of the problems, match skills to tasks, etc.).

    Where I felt let down was the lack of a well-defined process by the end of the talk. Beyond the Analysis-Plan-Execute strategy already presented (better) in earlier talks, there wasn't much meat here.

The remaining presentations I attended (actually, attended earlier in the day) were rather low-level technical matters. One was by William Brew and Karl Schimpf of Reasoning Systems. They were pushing REFINE as the key to a solution, especially to solving the forward slicing and backward slicing problems. Since we already do the same and better, it wasn't particularly interesting except that they seemed to be getting the technology right (as I'd have expected).
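For anyone unfamiliar with the slicing terms: a backward slice of a program point is the set of statements that can affect it, and a forward slice is the set of statements it can affect. For Y2K work, the backward slice from each date comparison tells you which code actually feeds it. A toy sketch of a backward slice over straight-line code, my own illustration rather than anything REFINE does internally:

```python
# Each statement: (line number, variable defined, variables used).
program = [
    (1, "a", set()),        # a = read()
    (2, "b", set()),        # b = read()
    (3, "c", {"a"}),        # c = a * 2
    (4, "d", {"b"}),        # d = b + 1
    (5, "e", {"c", "a"}),   # e = c + a
]

def backward_slice(program, line):
    """Lines whose definitions the given line transitively depends on."""
    by_var = {var: ln for ln, var, _ in program}  # defining line of each var
    slice_lines, work = {line}, [line]
    while work:
        ln = work.pop()
        _, _, uses = next(s for s in program if s[0] == ln)
        for v in uses:
            dep = by_var[v]
            if dep not in slice_lines:
                slice_lines.add(dep)
                work.append(dep)
    return sorted(slice_lines)

print(backward_slice(program, 5))  # [1, 3, 5]: line 4 is irrelevant to e
```

Real tools must of course handle branches, loops, calls, and aliasing, which is where the engineering difficulty (and the scaling question) lives.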

Another was by Brian Boyle of NOVON/DBStar. He favoured a complexity theory approach to the problem and used the Centers for Disease Control as an example. There were a number of very fancy and detailed slides, graphs, etc., but ultimately it seemed very academic, unfocussed and unrealistic.

Vendors Not Otherwise in This Report

I didn't capture every vendor. Some didn't leave me feeling confident in their product and I'll just leave them out completely. But a few seemed noteworthy and I thought they should be part of this report.

IBS/Solution 2000

I have to admit that I was a little surprised that their literature didn't mention a website since these people seemed very well organized in every other fashion. Perhaps the 'net isn't as pervasive a force as I think.

Interactive Business Systems, Inc. provides an integrated set of tools for identifying Y2K problems and automating the conversion. The tools appear to be based on a successful suite of migration tools.

Their Year 2000 pamphlet is pretty impressive. And they have "White Papers" on the topic from an ex-IBM'er with impressive credentials. The papers are a little on the abstract end but that's part of their nature.

IBS Conversions, Inc.
2625 Butterfield Road
Oak Brook, Illinois 60521
1-800-5555-IBS

Lockheed Martin

Another vendor without a website listed on their material although they do have an email address.

They also have a slick brochure, but it emphasizes their reengineering capabilities, with Year 2000 as only a small part of the package. The product itself looks a lot like Rigi.

After talking with them, I'm not sure whether they have solved the problem of scaling up to millions-of-lines systems. Clearly that size doesn't make for a good demo, but the 10K examples they were showing made me wonder how they handle the scaling. Talking with their representatives didn't give me enough information to be sure. I do think they deserve a closer look, and I am quite sure there is plenty of room for mutual benefit with CSER and this group.

ReEngineer Tech Support
Lockheed Martin Tactical Defense Systems
P.O. Box 64525 MS U2H27
St. Paul, MN 55164-0525
1-612-456-7803

McCabe Associates

I'd previously associated McCabe with complexity metrics rather than with products, but this is a nice-looking package. The idea is to use statically determined complexity metrics to understand a system and visually oriented tools to display it.

In the end, the examples seemed small and the manipulations somewhat forced, which makes me wonder whether the solution would scale even to the 100 KLOC range, let alone the 10 MLOC and greater sizes needed.
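For reference, McCabe's cyclomatic complexity for a single-entry, single-exit routine works out to the number of decision points plus one. A rough sketch of how such a static count could be made; this is my own illustration using Python's ast module, not anything McCabe Associates ships:

```python
import ast

# Node types counted as decision points. Treating each boolean
# operator chain as a single decision is a simplification.
DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    """McCabe's V(G) approximated as decision points + 1."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

src = """
def check(x):
    if x < 0:
        return -1
    for i in range(x):
        if i % 2:
            x += 1
    return x
"""
print(cyclomatic_complexity(src))  # 4: two ifs + one for + 1
```

The metric itself is cheap to compute even on huge systems; my scaling concern above is with the visualization and navigation side, not the counting.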

This is another place where CSER could show synergy in working to combine our strengths into a solid reengineering tools suite.

McCabe Associates
Twin Knolls Professional Park
5501 Twin Knolls Road, Suite 111
Columbia, MD 21045
1-800-638-1528 Info@mccabe.com

Last Update: 07/05/96