Scotia McLeod Ltd.
INTRODUCTION
This was my third job. It resulted in my move from Montreal, Quebec, to Mississauga, Ontario. This move was brought about by socio-political changes in Quebec.
Ontario Job Hunting
I had several disappointing interviews in Toronto and Mississauga. Moving from an aerospace background as a systems analyst into a field that is primarily finance focused was very difficult. I interviewed at Atlantis Aerospace, a small aerospace firm specializing in training simulators for aircraft maintenance operations. That went nowhere. I sent applications to firms that were clients of CAE, but again, silence. I met with a large job placement firm and was told that I would never find work in Toronto as a FORTRAN programmer earning the same salary I was making in Montreal. Apparently I was too well paid in Montreal! There was no demand for FORTRAN programmers in Toronto.
MCLEOD YOUNG WEIR LTD.
I interviewed at McLeod Young Weir as they had a recent vacancy. The first interview was with Human Resources. The next was with Greg Latiak, a technology manager responsible for the production and development brokerage systems. It went well. He explained that his own background was in chemical process control systems, and that coming from a different industry had not been a barrier for him to get into finance, nor should it be for me. I had a couple of other interviews: one with a software development manager and another with Bo Wisser, the technology director. The role they needed to fill was that of a systems and database administrator. This was a large DEC VAX shop with a considerable networking infrastructure. My exposure to VAX systems and networking at CAE made me a suitable candidate for the position. So, after almost a year of applications and interviews at various firms, I was hired!
McLeod Young Weir was acquired by Scotiabank in 1988. When I joined in 1992, the integration into Scotiabank was still not obvious; the firm operated pretty much with the same culture as it had in the McLeod days. It was a great working environment, fast paced but notable for great cooperation between staff and management. The benefits were good and the pay was decent. A quarterly bonus system rewarded staff for good work. Greg insisted that new hires take the Canadian Securities Course (CSC), which the firm paid for. It was a great way to become familiar with the investment industry. One fellow on the trading floor realized I was taking the CSC and tried to explain how the firm priced synthetic derivatives; I appreciated the effort but I remained totally clueless! It turned out to be a great place to work.
Interesting story: before 1997, when live trading still took place at the Toronto Stock Exchange (TSE), McLeod traders wore trademark bold yellow McLeod tartan jackets (as on the page header above) to recognize each other easily, trade efficiently and stand out from the crowd. The idea was born because, at the time, Trevor Dixon (McLeod Young Weir 1970–1989) wanted to give his traders more visibility. When his Scottish wife was flipping through fabric samples to make their son a tartan jacket, she showed Trevor the McLeod tartan, and he had an idea! The Chairman of the Board, Austin Taylor (famous for knowing each of his 2,300 employees by their first name), approved Trevor's proposal of bold tartan jackets and asked Trevor to launch them by leading his traders onto the floor with live bagpipers.
The link "Scotia McLeod 100 Years" gives a good history of the firm and its founders.
In late 1992 or 1993, Scotiabank took a more active role in McLeod. As we had just emerged from a recession, perhaps it was time to put tighter reins on the firm's expenditures. The firm's name formally changed to Scotia McLeod, further establishing it as Scotiabank's brokerage arm. The result was more of a large-bank focus on the firm's operations: more formal controls over change management, processes, audits, etc. Staffing became leaner over time. Some staff, mostly related to trading, kept the generous McLeod benefits, whereas other staff (IT, for example) were folded into the Scotiabank benefits, which were not as encompassing.
Staffing was always lean. One Unix administrator eventually quit after working for several months through weekends without any time off. Lean staffing is the norm in most banking technology departments; I heard the same complaints from my staff at another bank. Banks are generally safe employers and some folks can spend many years working at the same bank, if downsizing or restructuring does not impact them. As a system administrator, I really had no formal backup. If I vacationed out of the country, one of the Operations leads would fill in for me. But if I was in town, I carried a pager and was effectively on pager duty 24 hours a day, 365 days a year. Several times when I was out for dinner or attending a function, I would get paged and need to come into the office to investigate some production issue. One time I took a weekend away in another city, betting that all would be well at work. Unfortunately, while I was taking a bath, my wife got a call from an operator saying that there was a production problem and could I please come to the office; a two-hour drive!
For many years our data center was located on the 10th floor of one of the four Toronto Dominion Bank (TD) Towers, at the corner of Front and York streets in downtown Toronto. This was the iconic TD Centre designed by Ludwig Mies van der Rohe in the 1960s; similar buildings by the same architect stand in Montreal (Westmount Centre) and in Chicago. I'm guessing our Scotia presence in TD's tower was a legacy from the McLeod Young Weir days. We were eventually moved to the 14th floor of the Scotia Plaza tower. Likewise, a disaster recovery (DR) data center was built several miles away; the same location hosted a business continuity (BC) site. By then, financial services rules required financial institutions to have viable backup and business continuity sites.
Despite these inconveniences and the off-hour workload, the job carried much responsibility and latitude in the way work was carried out. It was also a fantastic learning environment.
VAX Production and Development Systems Support
My role at Scotia McLeod as Systems Administrator was to keep the production and development systems working and ensure all software was up to date and supported by the respective vendors. This included the Ingres database, the Financial Models (FMC) portfolio management software, FAME reporting, the FOCUS database/reporting tool, Quickpost, various compilers on the VAX, full system backups and upkeep of the VAX operating system itself. Day-to-day operations consisted of investigating system errors, managing internal user access and external PCBOND access (a subscription-based service) over DATAPAC X.25 connections. Responsibilities also included terminal server and printer configuration and integration with the VAX. Full system backups were also performed monthly on weekends.
Every year I would create an application update plan to ensure our applications always remained current and supported. In addition to software upgrades there were occasional hardware upgrades. At some larger banks I later worked at, system upgrades were done only every few years, and when they happened they impacted thousands of servers and their applications; they were also usually rushed as hardware reached its end of life and would soon be unsupported. The approach McLeod took was to review software and hardware annually for upgrades. Some applications were upgraded several times throughout the year. System capacity was measured monthly and added to a management status report for review. If capacity trends were negative, we had ample time to plan for upgrades. As a result, we never experienced a panicked rush to upgrade applications or equipment.
I owned a DEC PRO350 workstation which I had brought from Montreal. The console for the VAX 8600 was a PRO380, a more powerful version of the PRO350, so I was familiar with using it beyond its role as a console. While rummaging around the file system, trying to understand how the VAX 8600 firmware download worked, I noticed a file in which the PRO380 kept a log of the VAX's internal temperature. The file revealed that over the years the VAX's internal temperature had steadily risen. On discussing this with my manager, we concluded that as we brought more equipment into the computer room, we had not upgraded the room's cooling capacity. The only Liebert air conditioning unit, although large, was inadequate to keep the equipment cool. Subsequently, we used the temperature log to make the case for another Liebert air conditioner, placed at the opposite end of the computer room.
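As a rough illustration of the kind of trend check the temperature log made possible, here is a minimal Python sketch. The log file name and its one-reading-per-line layout are my assumptions, not the actual PRO380 format; the sketch simply fits a least-squares slope to the readings to express the rise in degrees per year.

    # temp_trend.py -- illustrative only; the real PRO380 log format is not shown here.
    # Assumes a plain-text log with one "YYYY-MM-DD temperature" reading per line.
    from datetime import date

    def read_log(path):
        """Return (days_since_first_reading, temperature) pairs from the log."""
        samples = []
        with open(path) as fh:
            for line in fh:
                day_str, temp_str = line.split()
                y, m, d = (int(x) for x in day_str.split("-"))
                samples.append((date(y, m, d), float(temp_str)))
        first = samples[0][0]
        return [((day - first).days, temp) for day, temp in samples]

    def slope_per_year(samples):
        """Ordinary least-squares slope, converted from degrees/day to degrees/year."""
        n = len(samples)
        mean_x = sum(x for x, _ in samples) / n
        mean_y = sum(y for _, y in samples) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in samples)
        den = sum((x - mean_x) ** 2 for x, _ in samples)
        return (num / den) * 365.25

    if __name__ == "__main__":
        samples = read_log("vax_internal_temp.log")   # hypothetical file name
        print(f"Average rise: {slope_per_year(samples):.2f} degrees per year")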
This awareness of the cooling problem also led us to create a list of systems that could be powered off, in order of criticality, should the air conditioning units fail. That happened a couple of times due to issues with the air conditioner water supply. As the computer room temperature increased, we now had a list of systems that could be shut down to reduce heat dissipation. The 8600, HSC50s and disks were always the last systems to be shut down, as they were the most critical production systems. Every business day we had to deliver bond data to the Financial Post newspaper for publishing; the Scotia McLeod bond indices were well known in the industry.
Our production VAX 8600 was eventually replaced with a multi-processor VAX 6200. As the number of applications increased, the databases grew larger and there were more users on the system, so an upgrade was needed. We continued to use HSC50s attached to RA92 disk systems, giving a shadowed disk capacity of 10 GBytes (enormous by early-1990s standards). The cache memory on the HSC50s was upgraded to improve the speed of data reads: if the data was already cached, a read from disk was avoided. I observed a good improvement in disk read speed (reads/sec) after the upgrade. This change alone improved program execution speed by reducing I/O wait time.
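The benefit of a larger cache is easy to see with a little arithmetic. The latencies and hit ratios in the sketch below are purely illustrative, not measurements from our HSC50/RA92 configuration: the effective read time is a weighted average of cache and disk access times, so a better hit ratio cuts the average I/O wait noticeably.

    # Illustrative effective read-time calculation; latencies and hit ratios are
    # assumptions, not measurements from the HSC50/RA92 configuration.
    def effective_read_ms(hit_ratio, cache_ms, disk_ms):
        """Weighted-average read latency for a given cache hit ratio."""
        return hit_ratio * cache_ms + (1.0 - hit_ratio) * disk_ms

    before = effective_read_ms(hit_ratio=0.30, cache_ms=1.0, disk_ms=30.0)  # smaller cache
    after = effective_read_ms(hit_ratio=0.60, cache_ms=1.0, disk_ms=30.0)   # larger cache
    print(f"average read before upgrade: {before:.1f} ms")   # 21.3 ms
    print(f"average read after upgrade:  {after:.1f} ms")    # 12.6 ms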
A year later, we upgraded the MicroVAX development environment from two disparate VAXStations to a VAX 3100/95 server with developers each having a VAXStation 4000VLC or a 4000/60 workstation. This gave the developers a working DECWindows environment and advanced software development tools through the DECSet suite. The DECSet suite featured the following development tools:
- Language-Sensitive Editor (LSE).
- Code Management System (CMS), a version control system.
- Module Management System (MMS), a build tool.
- Source Code Analyzer (SCA), a static code analyzer.
- Performance and Coverage Analyzer (PCA), a profiler.
- Digital Test Manager (DTM), as a test manager.
In addition, a number of text editors were included in the operating system itself, including EDT, EVE and TECO.
I don't believe the developers fully utilized the capabilities of the new tool suite. APL continued to be the software development language although there was more interest in C. FORTRAN was not used in our financial apps although we had software licenses for it.
As part of the upgrade we also purchased an Infoserver (a small dedicated VAX3100/10e) and several CD drives so we could make the Bookreader application (on-line DEC documentation) accessible to anyone with a VAXstation or an X-Windows terminal.
I inherited the two development MicroVAX systems which were now unused. I took them home to tinker with. These were known to developers as the RED and GREEN systems. For several months they were still covered at home under the DEC maintenance warranty. GREEN was too power hungry to keep; it had a large demanding power supply. I still own RED, a smaller MicroVAX with a 300MByte CDC SMD Disk running VMS 5.5 from the Scotia McLeod days. Since no one was home during the day, I had RED connected to the phone line over a modem.
Eventually there was a movement to Unix as a development environment; there was already widespread use of Unix systems and X-Windowing terminals throughout Scotia. After we upgraded our development system I was fortunate to get a nice 20-inch NCD X-Windowing terminal for administering the VAX. It booted off the VAX using the MOP protocol. Seeing how beneficial it was to have multiple windows to access different systems over TCP/IP, I tried to convince the Operations manager to provide the Computer Center Operators with X-Windows terminals to modernize operations management. However, his point was that it would be difficult to train new operators on the new terminals versus keeping them on the older dumb terminals. As well, he argued that should an X-Window terminal fail, it would take away the ability to connect directly to systems. That was only half right: the NCD X-terminals had a serial port to which a terminal emulation window could be attached as a backup.
At my cubicle I now had an NCD terminal and a PC. By accident I also discovered that a POTS telephone line to the outside was installed under my desk. This telephone line was going to be handy in future. The X-Windows terminal was great to have because on one screen I could have access to both production and development VAX systems. As well, I could also access any Unix system that I had an account on. It was an opportunity to play with other DECWindows applications that I could explore on our development VAX as I didn't need a VAXstation.
One of the programs on the DEC freeware CD I received was xroach. This program hides creepy cockroaches under X-Terminal application windows. When a window is closed, for example when terminating an Xterm session, the cockroaches scamper off looking for another window to hide under. It can be quite unnerving if you're not expecting to see cockroaches on your screen. I loaded it on the development VAX and found it was a great prank application. After trying it successfully on my own X-Window terminal, during lunch, when our Unix Administrator (Daryl) was out, I went over to his X-Terminal, picked up his IP address, then on the development VAX pointed the xroach application at his IP address. For weeks Daryl didn't have a clue where the roaches were coming from; it must have driven him crazy! Eventually I confessed that I was the guilty party. It was a great prank! He wasn't thrilled and tried to get back at me with his own bag of tricks.
Eventually I figured out that I could bring a modem from home, connect it to the phone line under my desk, then connect the modem to the serial port on the X-Windows terminal and start up an xterm window to dial outside. I used this access to dial up my home VAX (RED, the former McLeod MicroVAX), where I could try various DCL jobs, install software or try some of the VMS commands that would be detrimental at work. One day my manager showed up at my desk wanting to see what was on the X-Terminal screen. This had never happened before, so I suspected someone had told him about my home access. As it happens, that day I was connected to several Scotia VAX systems over xterm and to the home VAX via an xterm dial-up session. But all he could see on the screen was several VAX xterm sessions, all with the prompt "Yes Master?", which was also the command prompt for the home VAX. So he left, seemingly disappointed at not seeing anything out of the ordinary.
I'm not sure what he was expecting to see; gaming applications, perhaps. Dial-up access to my home VAX from work was a privilege I didn't abuse, and the experimentation on my home VAX benefited my work. I could try system changes that would be detrimental if they failed at work, such as SYSGEN changes to MODPARAMS, without impacting anyone. I could also evaluate several application installation options at home before running the same installations in production. Although we had a development environment, some system-level changes, backups and software installation tests could only be done after hours and at the risk of corrupting the environment.
When my cubicle was moved to a new location (now a nice window spot), the dial-up telephone line moved with me. By this time I could use the telephone line and the PC, along with my personal dial-up Internet account, to connect to the outside world and browse information. Despite Scotiabank not having widespread Internet access, no one really fussed that I could access the Internet over my own account from work.
One day, our SVP and several other visitors came down to my cubicle to see if I had Internet access. Was I in trouble? No; it turned out Scotia McLeod was considering purchasing a web-based trading platform for its clients (buy versus build) and needed a demo, but they could not get the dial-up line in her office to work. They needed to access the demo version of the vendor's trading system in New York. The visitors were reps from a US brokerage who were already using a trading system they had built in-house and were willing to license it to Scotia McLeod. So, over a 14.4K modem and my Internet dial-up account, they ran the demo. It was a bit slow but it worked. We all benefited from learning how the system worked and how it was architected (mostly SUN server based). The dial-up line saved the day. In the end, McLeod built its own trading system rather than license one.
As an aside, this was also around the time that Linux started showing up at work. Developers were passing around floppy disk distributions of early versions of Linux. There was no commercial interest in Linux at this time; it was still considered a curiosity. There was no use of Linux in the data center yet; Red Hat didn't even exist. SUN Solaris was the de facto Unix OS at McLeod.
QuickPost
Quickpost, created by Kasten Chase Applied Research Ltd., was an early real-time messaging system, similar to the text messaging we enjoy today, except that Quickpost sent and received messages on terminals and could also send messages to printers. The original system was developed on a PDP-11 series minicomputer and adapted to run on VAX/VMS. Our version ran on a VAX 3100 under OpenVMS. Quickpost was used for real-time messaging among Scotia McLeod users and was available mostly to front-office staff.
The system was based on message delivery queues. It guaranteed message delivery by waiting for receiving devices to become available to consume a message. Incoming messages from terminals were placed into a queue, which held them until the receiving terminal was available for delivery. Messages could be delivered user to user via terminals, or from a user to a printer. There were several queues, not a single one. Queues could fill up when messages were not flushed out quickly enough at the receiving end, which happened if the receiving terminal or printer was powered off; the messages stayed pending until the receiver could take them. The problem was that if a queue filled up because it could not flush its messages, the system hung; a critical flaw. A system reboot would recover it, but any pending queued messages were lost. Anyone familiar with IBM's MQSeries product will see the similarity.
As a workaround I created a batch job on the VAX that ran every 15 minutes and produced a report showing how many messages were pending in the queues. The report was e-mailed to an administrator who monitored the application queues. If a queue was approaching capacity, a hold was put on the queue before it could reach its limit. As well, if a device was holding up a queue because it was off, for example an off-line printer, someone was instructed to turn the device on. As a last-ditch effort, the admin could also re-route the messages to another queue.
It was possible to see how many messages were pending in which queue for which device. We asked staff not to turn off printers overnight, as this would cause incoming messages to queue up until someone turned the printer on in the morning. The report improved the availability of the Quickpost system because we now knew how close queues were to reaching their limits. If a queue filled to its maximum capacity, the only way to recover was a system restart. By monitoring the reports we could minimize queue-full incidents and forestall application recovery reboots.
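The original report was a batch job on the VAX; the Python sketch below only illustrates the idea, with invented queue names, capacity and alert threshold. Each queue's pending-message count is compared against its limit, and anything close to full is flagged for the administrator.

    # Illustrative queue-depth report in the spirit of the 15-minute Quickpost check.
    # The real job was a VAX batch procedure; names and numbers here are made up.
    QUEUE_LIMIT = 500          # assumed per-queue capacity
    ALERT_THRESHOLD = 0.80     # warn when a queue is 80% full

    def queue_report(queues):
        """Return report lines for every queue, flagging those near capacity."""
        lines = []
        for name, pending in sorted(queues.items()):
            usage = pending / QUEUE_LIMIT
            flag = "  <-- NEAR LIMIT: hold queue or power on its device" if usage >= ALERT_THRESHOLD else ""
            lines.append(f"{name:12s} {pending:5d}/{QUEUE_LIMIT} pending{flag}")
        return lines

    if __name__ == "__main__":
        # Hypothetical snapshot: one printer queue backing up because the printer is off.
        snapshot = {"TRADE_FLOOR": 42, "BRANCH_PRT7": 463, "BACKOFFICE": 118}
        print("\n".join(queue_report(snapshot)))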
The worst time for Quickpost was during the holidays when everyone sent greetings to each other. This always sent the message queues close to their limits, causing much stress for the administrator. Many times we instructed users not to send holiday greetings on QuickPost, but to no avail.
Scotia, realizing the limits of Quickpost, hired a developer to learn the system from Kasten Chase. The goal was to evolve the system ourselves, as this was a critical firm-wide application. The developer spent several months and over $20K on training at Kasten Chase. After the training, he insisted that the only way to improve Quickpost was a total re-write of the application. This was not management's intent, and when management declined to support a re-write, the developer left in a huff. No one else was ever trained on the internals of the system and it continued to be used for many more years, into the late 1990s.
CMTS and Pyramid Systems
The Capital Market Trading System (CMTS) was a multi-year project to build a global trading system encompassing Singapore, London, New York and Toronto. The goal was to move trading positions from one geographical location to another, so that as trading closed in one region, the positions would be passed to the next region, enabling continuous 24-hour trading. The CMTS systems were based on multi-processor MIPS R3000 machines from Pyramid. A computer could accommodate two CPU boards with two processors per board, for a four-CPU system. Storage consisted of several differential SCSI disk drives. It was one of the few large-scale multi-processing systems available at the time, which is why it was chosen.
We hired a full-time Unix system administrator to manage this complex in Toronto. I was also trained as a backup system administrator on Pyramid's operating system (DataCenter/OSx).
CMTS was developed for the bank by DSI Systems from New York. In the early days of the trade desk, CMTS would crash or otherwise malfunction, and an orderly shutdown across several cities was required to reboot the system, a process requiring many support personnel and hours of time. Surprisingly, this reboot procedure was handled from the front-line trade support help desk; in other words, Mike M. and another support analyst. They would first have to call the department and senior heads (the only time Mike ever got to address Scotiabank's Vice President!) and get consensus for the procedure; then all servers were shut down and rebooted. As server start-up took some time (different for each trade desk!), the front-line support desk would coordinate with support personnel in each city to bring each server up in sequence: starting with Toronto, then Montreal, then New York and then Vancouver. Since this procedure could take several hours, traders in the cities at the end of the sequence would often be told to clean up and go home early, since the servers would not return in time for the rest of the working day.
I also recall that there was a memory leak in the code: memory was allocated but not released by an application, and by the end of a week the application had consumed most of the memory. To prevent excessive virtual memory paging, a reboot had to be performed weekly. Initially, to circumvent the issue until the code was repaired, more physical memory was added.
Pyramid Technology was acquired in 1995 by Siemens Nixdorf Information Systems (SNI). DC/OSx was eventually deprecated and replaced by Siemens' own Unix OS. Over time, Pyramid systems maintenance costs became excessive, and a project was started to migrate the code to SUN SPARC-based Unix systems. I recall that this took over a year and that there were some C compiler compatibility differences. A large team worked on that conversion; we often joked that some of the contractors we hired learned how to code in C on this project.
My involvement with this system was minimal, but it was nevertheless a great opportunity to learn more about Pyramid systems. There were actually some similarities to the SEL 32 series systems we used at CAE: the system backplanes on one side connected to device interface cards through a set of pins, which, as with the SEL 32, made installation of any interface card very tricky. As we migrated away from the Pyramid systems, I inherited one of the office-type Pyramid servers and spare cards. It was as large and as heavy as the SEL 32/67 used at CAE. I got it out of the data center with the Operators' help and hired a truck to get it home. At home, after burning through one CPU card, I realized the basement wasn't cool enough to keep the system running; heat sinks added to the R3000 chips kept the remaining CPU cards from burning out. This Pyramid was rated for an office environment, but in reality it needed computer room conditions. Unfortunately there was not much software available for it; after bringing up the OS, there wasn't much else to do with it. It also had a huge power draw, noticeable on the electricity bill, so its use was brief.
The World Wide Web
The really exciting part of working at Scotia McLeod was the advent of the Internet. My manager sent me on a SUN Microsystems course for managing systems through SNMP agents using SunNet Manager. While there, the instructor showed us this new thing you could access on the SUN systems: the World Wide Web. It was accessible through an X-Windows application called the Mosaic browser. There were several thousand web pages in existence then, and the instructor mentioned that it would take more than a lifetime to read every page.
I was super excited seeing this and read all I could on this new World Wide Web. However, Scotia McLeod, being cautious, would not allow Internet access. As a compromise, you could request a web page by e-mail and it would be sent to your inbox. There was even a way to get binary files, encoded as ASCII characters, through e-mail from the Internet. This was miserably slow, but banks, being risk averse, wanted nothing to do with the Internet yet. There was no bank strategy on how to utilize this new medium yet; for banking it eventually became an additional client interaction and sales channel, as we know it today.
By this time my original manager, Greg, had left McLeod and I reported to a new hire; Cameron Douglas. He was a former military leader and an Order of Canada recipient for volunteer work with St. John's Ambulance. Cam was a supportive manager. He was easy going and great to work with. It took him some time to understand the McLeod culture but in due time, he fit in very well.
I learned all I could about HTML, web browsers and web servers. Since I managed the VAX development cluster, I took advantage of the spare system capacity to run the Ohio State University (OSU) web server, sourced from the DEC freeware CDROM. Word got around among the developers that we had a web server running, and before long other people were hosting their own web pages on the VAX. As we had static IP addresses for all equipment, it was easy to access the VAX. Web pages were viewed with Netscape's Navigator browser.
Our SVP of technology found out that we were running a web server with some people's personal web pages. So, I got a call from Gail! The result was that I had to remove the personal pages from the web server. On the upside, I was offered a position to start a new group focused on managing Internet and Intranet services. Cam was not happy about me leaving the McLeod VAX world to move to an internal Scotia infrastructure group (ISS), as this meant hiring and training a new VAX administrator. For me, however, it was a promising career move, as this technology was the next big thing. The Internet revolutionized content access, brought the knowledge of the world to everyone's computer, and changed how firms provisioned services and how consumers accessed those services. Its benefits are taken for granted today.
Initially no one really took interest in the Internet as a new technology. Not one internal group wanted to be responsible for it. I believe most considered it a fad. I was baffled.
INTERNET INTRANET INFRASTRUCTURE GROUP (I3G)
So the I3G group was formed in 1995 with $50K of SVP seed money for software and hardware, and I had the go-ahead to staff three additional positions. At this time we were part of Scotia's Wealth Management arm, and Scotiabank itself would not have widely accessible Internet for some time. I reported to a VP of network services. The Internet infrastructure was then managed by one of his staff, who was responsible for the Internet connection, firewall, proxy server and the rest of the piping that connected Scotia to the Internet. However, one weekend that staff member, for some unknown reason, decided to daisy-chain two web proxy servers providing Internet access. On Monday morning there was a myriad of complaints to our team about why Internet access was so slow. We finally discovered that a change had been made on the weekend without informing anyone or initiating a change record. I escalated the issue to the VP: who really should have change control over the Internet infrastructure? From then on, we had more visibility into any changes impacting service.
The I3G mandate was:
- Act as the single source for Internet/Intranet inquiries.
- Resolve any issues with I3G infrastructure, working with other teams on a timely basis.
- Install Netscape products on servers (Enterprise Server, Proxy, LDAP, Compass, Mission Control server, etc.) and the Netscape Navigator browser on desktops.
- Build an Intranet for Scotia McLeod.
- Provide Help Desk support for Netscape browser issues.
- Provide monthly web server client activity statistics on Scotia McLeod web sites.
- Manage the Netscape licensing relationship for contract renewal.
We had not yet hired a system admin for the VAX systems, so I now carried two support pagers; one for VAX and one for the Internet/Intranet. Fortunately, a qualified system administrator was hired and I was happy to pass one pager to the new admin.
As regards I3G hiring, I was very lucky in getting some excellent people on board. I hired people who were technical generalists and could take on several roles.
-
The first hire was Mike M. I had worked with Mike before, when I was managing the VAX systems. He was supporting the trading help desk but was happy to take on a new challenge. Mike, in my opinion, was a genius with a gift for foreseeing trends and staying on top of the latest tech. He had good Photoshop skills and a knack for getting any software we needed; gratis ;-) ! As we had a budget, we legally acquired the software we needed. In the early days of developing our own pages, the ability to get new development tools for free really gave us an edge over any other group in getting work done quickly. Purchasing software in banking is a time-consuming process because of approvals and then procuring it from bank-sanctioned software providers.
As Mike had great web skills, he developed the Intranet site and consulted with internal users on content for it. One user cautioned us about some dubious bank links on the Intranet; we removed the links. Mike also assisted anyone with Intranet content issues. I recall him helping the trading desk publish documentation in Adobe PDF to a web site. More content was being moved onto the Intranet as PDF, JPEG and HTML web pages; however, staff struggled with how to convert content to web-consumable formats. This was all new for them. In some cases Mike built templates for them to follow, or he would teach them how to do it. At the time MS-Word was just developing the capability to optionally save documents in HTML format, but the resulting HTML code was terrible in those early days. It was just easier to save documents in Adobe PDF format.
-
The second hire was Susan C., the daughter of a family friend. She was working as a systems administrator for a small firm and I knew she could be a valuable asset to our new team. She had a technical background and was familiar with Unix. As a young woman, she would also help balance what was a mostly male IT culture at McLeod. Her role encompassed installation of Netscape applications on the Unix servers we used; any time a team needed a web server or a proxy server installed, she installed and configured it. As Intranet usage increased in the firm, I had her provide on-site Help Desk support. Most of our users were on the trading desk. My experience with them was that they had little patience and were high strung. I was yelled at by one trader who complained that his web pages were not getting any views.
Susan was knowledgeable and had a calm demeanor, so I dispatched her to any on-site user help calls, betting that the guys would be more respectful toward a young lady. This worked well, as everyone gave good feedback and was respectful of her help. At one point she was called to the Scotiabank CEO's office to help him with a browser problem. He had complained that there was little use of documentation placed on Scotiabank's web site despite the investment made.
-
The third hire was Adrian, a developer. I assumed that we would likely be asked to do a small amount of development using the Netscape products, so it would be handy to have a developer. These would be small coding jobs or modifications to existing code, as the department already had a large development team for application development; we were not the go-to place for development tasks. He would also install Netscape products as the need arose.
With the hiring concluded, I3G was a small team that could support Netscape products from installation through ongoing support, create web sites, perform web site usage trend analysis and provide advice on using the evolving web technologies. I3G became the go-to place for any inquiries relating to web technologies.
I3G Activities
With the team in place we started to participate in several web initiatives. Beyond the regular support activities there were many interesting new initiatives underway that leveraged the Internet. As a small group we could be a resource centre for web technologies, but as interest increased and the demand for web-based applications grew, I could see that this group would be short-lived. The development teams had the tools and more resources to build applications, and our small team would not be able to keep up with the exponential demand on our time. Over time our activities would be sidelined by other teams taking a more focused approach to web technologies.
The Netscape Enterprise web server came with many great tools. It supported server-side JavaScript (LiveScript) and could operate as a Java application server. JavaScript was initially developed by Netscape under the name LiveScript; it was invented by Brendan Eich in 1995, shipped with Netscape 2, and became the ECMA-262 standard in 1997. After Netscape handed JavaScript over to ECMA International (an organization that creates standards for technologies), the Mozilla Foundation continued to develop JavaScript for the Firefox browser. Today JavaScript is used by millions of developers, yet few people know of its origin as a Netscape technology.
To demonstrate the language, several demo applications were included: a contact manager, a project task manager and an HR Job Posting system, among others. Also included with the web server was a Unix version of the Informix database server.
I had experience using Informix at the desktop level, so creating databases for the sample applications was something we could do ourselves without having to call on a DBA. Another very useful web server feature was a built-in document search engine and indexer. This provided the capability to publish PDF or text documents, organize them into collections and then provide users with a web-based search engine to search the indexed documents. All of these were native features of the Netscape Enterprise web server (release 3.6).
Project Tracking
To support our activities we instantiated two of the sample Netscape applications: the contact manager and the project manager. The project manager site was accessible by the team and provided everyone with a view of tasks underway and tasks to be completed. It was a great dashboard to keep everyone apprised of work in progress and other work in the pipeline. The contact manager was comprehensive, as it had many data fields; we used it mostly for tracking the vendor contact information we needed. This was the first web-based task management tool used at Scotia; most people were unaware that we were using these tools to organize our work. They were simple applications, but they got us away from sharing spreadsheets of activities. It was great that we were given the freedom to pretty much do what we thought was right. Sample screens from this application are in Appendix C.
Scotia Capital Intranet
Mike created the graphics for the first Scotia Capital Markets Intranet. It ran on a Solaris instance of the Netscape Enterprise server. The Scotia logo with the words around it was Mike's creation. Content was solicited from the traders, and we created the HTML pages. After an initial review, we received SVP approval to deploy it. Internal communications went out before the Intranet go-live day, although users were already aware that we were working on a site.
All Netscape servers were hosted on SUN Solaris servers. The web server response was quite good despite the whole network infrastructure being limited to 10 Mbit/s.
Although the Intranet site looks amateurish by today's standards, we had to keep the pages simple. The graphics were stripped of as much resolution as we could manage to make the JPEGs smaller, because there was only so much data you could push down a 10 Mbit/s link. As well, with many users hitting the web server simultaneously, we opted for smaller pages with shorter delivery times. The site was a hit: response time was great and site usage was high.
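A bit of back-of-the-envelope arithmetic shows why the page sizes mattered. The figures in the sketch below are illustrative, not measurements: on a shared 10 Mbit/s segment, delivery time grows with both the page size and the number of users loading pages at the same time.

    # Rough page-delivery estimate on a shared 10 Mbit/s LAN; all numbers illustrative.
    LINK_BITS_PER_SEC = 10_000_000

    def seconds_to_deliver(page_kbytes, concurrent_users):
        """Approximate time to push one page to each of N simultaneous users."""
        bits = page_kbytes * 1024 * 8
        share_of_link = LINK_BITS_PER_SEC / concurrent_users  # naive equal split
        return bits / share_of_link

    for size_kb in (30, 150):      # a lean page versus a graphics-heavy page
        t = seconds_to_deliver(size_kb, concurrent_users=50)
        print(f"{size_kb:4d} KB page, 50 users: ~{t:.1f} s each")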
In time, another team was given responsibility for web page design. They took over the Intranet design as well. For the Intranet, they built an elaborate professional looking web site. However, they failed to recognize the limited network bandwidth. The result was slow response. Complaints that it took too long to load pages followed quickly. That was the end of our involvement with the Intranet site.
Scotia Capital HR Job Posting
Another first for Scotia was a web-based HR Job Posting application. We took the sample application provided by Netscape and found that it lacked the functionality we needed, so we brought over a developer from the development team to work on the back-end code. Mike branded the application with a simple Scotia look and feel. It ran on the Informix database server that came with the Netscape instance, now running on Solaris. The job posting application was another very simple but successful application that brought job postings to the Intranet. An HR administrator would log in and manage the postings. The limitation was that users could not apply on-line for a job, but they could download a job application form from the site. The web site look and feel is captured in Appendix A.
On March 20, 1998, we had 68 job openings advertised on the site.
As a side note, the Informix salesman somehow found out we were using the database and wanted to extract licensing costs from us. We explained that it came with the Netscape distribution and needed no other licensing. Of course, we had no Informix support for the database, but we managed nevertheless. Informix was quite a stable and easy-to-use database; it was the database of choice for many Unix installations at the time. Informix was subsequently acquired by IBM in 2001.
I3G Intranet
While we were deploying Netscape Enterprise Servers and other Netscape products, we were still using the OSU web server on the VAX to host our I3G web site. Some of the I3G site's pages can be seen in Appendix B. Having the web server on the development VAX allowed me to create applications (mostly in VMS DCL and FORTRAN). Some of these are listed below:
-
Telephone Directory. The Scotia McLeod telephone list was distributed in printed form but originated from an electronic copy. I took the electronic copy, formatted it to plain text, and created a small batch script and web form to perform a string search on the directory text. Because it was a simple string search, you could search the directory by city, last name, first name, reverse telephone number lookup; really any combination. When a search was initiated, it launched a batch job on the development VAX to perform the search and return a web page to the user's browser. On the McLeod Intranet, a link was added to the telephone directory running on the VAX OSU web server. Whenever there was a telephone directory update, I would get a new file and update the list. (A small sketch of the search idea appears at the end of this item.)
This became one of the most used web applications at McLeod. I would go down to the VAX console and see a constant flow of batch completion notices for telephone searches. It was faster to do an on-line search for telephone numbers than it was to use the paper telephone handbook. For an application that we had just quickly put together ourselves, it was a huge hit!
I wanted to expand the telephone directory to encompass all of Scotiabank's telephone numbers. I was given the name of the manager who maintained Scotia's telephone directory; he was rolling out an electronic telephone directory based on a Novell product. Unfortunately, he refused to provide me with an electronic copy, as he didn't want to have to provide someone else with regular file updates, and it would have competed with his own project. I tried escalating this to my management, but to no avail. We could have had a centralized directory for the entire bank, but it never happened.
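As a rough illustration of the directory search described in this item, here is a minimal Python sketch. The original was a small batch script on the VAX behind a web form; the directory file name, record layout and HTML output below are my assumptions.

    # Illustrative version of the VAX telephone-directory search (originally a batch
    # job behind a web form); the directory file and its layout are assumed.
    import html

    def search_directory(query, path="phone_directory.txt"):
        """Return all directory lines containing the query, case-insensitively."""
        q = query.lower()
        with open(path, encoding="utf-8") as fh:
            return [line.rstrip("\n") for line in fh if q in line.lower()]

    def results_page(query, matches):
        """Build a small HTML results page like the one returned to the browser."""
        rows = "\n".join(f"<li><pre>{html.escape(m)}</pre></li>" for m in matches)
        return (f"<html><body><h2>Directory search: {html.escape(query)}</h2>"
                f"<ul>{rows or '<li>No matches.</li>'}</ul></body></html>")

    if __name__ == "__main__":
        # A plain substring match means the same search works for last name, first
        # name, city, or a reverse telephone-number lookup.
        matches = search_directory("416-555")
        print(results_page("416-555", matches))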
- Web Server Monitoring. The other application we ran on the VAX was web site monitoring. Using CURL on VMS, we would fetch a web page from several sites and measure the latency of getting that page. If a site's web page didn't respond, the program logged the failure. The monitoring program created a web page with the status of each server we monitored; if a server did not respond to the CURL request, a big red "X" would show over the server icon on the web page. The status page refreshed itself every few minutes on a dedicated PC we kept on our team desk. (A sketch of the monitor appears at the end of this item.)
We used this to monitor the availability and response time of internal web servers. Over time we started monitoring the availability of other banks' web servers as well. The reason for doing so was to measure their latency, determine if they went off-line and see when they shut down for maintenance. From this we learned that other banks generally performed their server maintenance on weekends and after midnight. By contrast, we were cycling our own public web server during the day, a practice that was soon changed.
As a side benefit, monitoring the other banks' web servers could also tell us if we had a network problem of our own. Occasionally we would observe that several bank web servers had all gone off-line at once. It would be unusual for all of them to stop responding to CURL, or for all the banks to be doing maintenance at the same time; the real issue was that our Internet connection to the outside world had failed. Because we ran the monitor every minute, we were often aware of the network failure before the network team was. When we saw this pattern, we notified the network team to repair the link.
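A minimal sketch of the monitoring idea, in Python rather than CURL on VMS; the site list, timeout, refresh interval and output file are assumptions. It times a fetch of each front page, marks failures with a red X on a self-refreshing status page, and flags the all-sites-down case that usually meant our own Internet link was out.

    # Illustrative uptime/latency monitor in the spirit of the VAX CURL-based one.
    # The site list, timeout, refresh interval and output file are assumptions.
    import time
    import urllib.request

    SITES = {"ScotiaMcLeod": "http://www.example.com/",   # placeholder URLs
             "OtherBank": "http://www.example.org/"}

    def probe(url, timeout=10):
        """Return (ok, seconds) for one fetch of the page."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                resp.read()
            return True, time.monotonic() - start
        except Exception:
            return False, time.monotonic() - start

    def status_page(results):
        """Build an auto-refreshing HTML status page; a failed site gets a red X."""
        red_x = '<b style="color:red">X</b>'
        rows = "".join(
            f"<tr><td>{name}</td><td>{'OK' if ok else red_x}</td><td>{secs:.2f}s</td></tr>"
            for name, (ok, secs) in results.items())
        return ('<html><head><meta http-equiv="refresh" content="60"></head>'
                f"<body><table>{rows}</table></body></html>")

    if __name__ == "__main__":
        results = {name: probe(url) for name, url in SITES.items()}
        if not any(ok for ok, _ in results.values()):
            # Every external site "down" at once usually meant our own link was out.
            print("WARNING: all sites unreachable -- check our Internet connection")
        with open("webstatus.html", "w") as fh:
            fh.write(status_page(results))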
-
Northern Miner. This was in the early days of Internet use, and everyone at McLeod now had the Netscape Navigator browser and could explore the World Wide Web. I don't believe we had yet developed an Internet use policy; that was eventually created to inform users that their Internet access was monitored. But before that, there really was no monitoring of what users were looking at on the Internet.
At McLeod, all user browser Internet access went through a web proxy server. The proxy server cached content; frequently used pages were fetched from the proxy instead of being fetched again from the Internet. The intent was to optimize web browser response time by getting the page locally, saving repeated Internet fetches and alleviating bandwidth use on the Internet T1 connection. The proxy also had the ability to modify URL paths, but I don't recall us using that. Another great feature of a proxy server is that it logs all access to the Internet with a date/time stamp, the URL of the page the user requested and the IP address of the requesting user's computer.
To understand how many people were using the Internet and what content they were typically looking at, I asked the network folks to provide me with the proxy server log file. What I found was that some people spent a significant amount of work time browsing non-work-related web sites; some were also spending time on obvious porn sites. I worried that this non-productive, questionable web access could lead to the firm shutting down Internet access due to abuse. So I brought the issue to my VP. I proposed that we "mine" the proxy logs to look for staff going to questionable sites, build a list of excluded sites, and mine the data for the top Internet users to understand whether their heavy Internet access was for legitimate business use.
Surprisingly, my VP was not of the opinion that we should police our users. Regardless, I suspected that one day this would be required. I started work on the development VAX to take the proxy log and parse out web pages against an exclusion list of keywords, creating an exception file for further analysis. (A sketch of this kind of screening appears at the end of this item.) As I wanted to keep it quiet that we were mining the proxies for user access, we called the project "Northern Miner", after the Canadian mining magazine of the same name; it was a fairly innocuous project name that could mean anything. The analysis revealed that over time there was increasing use of Internet access for non-work-related websites. A couple of weeks later my VP came back and said we needed to start monitoring user access; something at the senior level had changed!
Thereafter, the proxy logs were sent weekly to the Security team, which did the monitoring using their own data mining scripts, and our own mining initiative was short-lived. Security realized the importance of ensuring the bank's facilities were not being used for illegal or harmful access. In the coming months I came to know that several people were dismissed as a result of Internet policy violations. One, whom I knew personally, was dismissed for spending most of his work hours browsing porn sites. More serious violations followed. It was the Internet "Wild West" back then. It's a different world today, with many companies providing applications that block harmful content from entering the bank's networks. Today, Internet access in banking is much more controlled and monitored, for all the right reasons.
Still, we were proud to have recognized the need and to have performed some of the early analysis that led to awareness of the usage issues to come.
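For illustration, here is a minimal Python sketch of the kind of proxy-log screening described above. The real job ran on the development VAX; the log format, keyword list and file names below are my assumptions. Each request is checked against an exclusion keyword list and a per-user request count is kept, so both questionable sites and the heaviest users stand out.

    # Illustrative proxy-log screening in the spirit of "Northern Miner".
    # Assumed log format per line: "<timestamp> <client_ip> <requested_url>".
    from collections import Counter

    EXCLUDED_KEYWORDS = ["casino", "adult", "xxx"]   # hypothetical exclusion list

    def mine_proxy_log(path):
        """Return (flagged request lines, request counts per client IP)."""
        flagged, per_user = [], Counter()
        with open(path) as fh:
            for line in fh:
                parts = line.split()
                if len(parts) < 3:
                    continue
                client_ip, url = parts[1], parts[2]
                per_user[client_ip] += 1
                if any(word in url.lower() for word in EXCLUDED_KEYWORDS):
                    flagged.append(line.rstrip("\n"))
        return flagged, per_user

    if __name__ == "__main__":
        flagged, per_user = mine_proxy_log("proxy_access.log")   # hypothetical file
        with open("exceptions.txt", "w") as out:                 # file for follow-up review
            out.write("\n".join(flagged))
        print("Top 10 Internet users by request count:")
        for ip, count in per_user.most_common(10):
            print(f"  {ip:15s} {count}")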
Other projects:
Video Conferencing
We got a call one day from the New York office asking if we could look into enabling IP-based video conferencing between the New York and Toronto offices, so we ran a small project to try it. A couple of cameras were procured and we tried a conference call between the two cities over the internal wide area network (WAN). There was quite a lot of video frame freezing and speech cut-out. We concluded that this was not viable given the limited bandwidth between the two offices; the internal 10 Mbit/s network bandwidth would not have been viable either. As with other initiatives, the internal networks were just not designed for the onslaught of these new applications, be it Internet, Intranet or video conferencing. Inter-office connectivity between cities was also quite expensive. It would be many years before the networks caught up with the demand for new capabilities such as video conferencing and IP telephony.
Scotia Capital Trading Platform
When Scotia Capital decided that it needed a trading platform for its clients, an external project manager (Diane?) was hired to lead it. We were asked to develop some early look-and-feel prototypes for the site before it was presented to the SVP. An ISS developer was also assigned to the PM for this early phase of work. We discussed various options for the site and presented some ideas which we thought were good. Many meetings were held, and much of my and Mike's time was spent on this. However, the PM was not satisfied with our mock-ups, so she asked the developer to come up with some ideas. Later that week he presented his mocked-up web site. It literally had spinning-eye GIFs as icons and was so bad that it was immediately shot down by the PM.
We simply could not come to any agreement with the PM. Eventually she tired of us, and a New York design firm was hired to provide various design options. We received the design templates from the firm. I admit the designs were great and the color combinations they proposed were excellent. A sample of the look and feel selected can be seen in the Mocatta site prototype that we mocked up. ScotiaMocatta was formed by Scotiabank's acquisition of Mocatta Bullion from Standard Chartered Bank in 1997.
I personally felt that our proposed designs were good. However, when you spend money for an outside firm to come up with designs, there is always an "expert bias" toward their work. And yes, we were a bit envious that their designs were really good!
WebTrends Analysis
Before there was Google Analytics, the way to understand web site usage was either to do the analysis yourself using scripts against the web server log, or to buy an off-the-shelf analysis package such as WebTrends. We used this package to produce monthly reports, plus ad-hoc requests, for several web servers at McLeod. Having run the first set of analyses of the public McLeod web site, we became the go-to group for any web site analysis, as we had the WebTrends tool, purchased with our seed funding.
The analysis revealed which web pages were viewed most, which IP addresses and domains users were coming in from, and which cities users were located in. The reports also listed "most requested pages", "least requested pages", "top entry and exit pages", "top paths through site", "top downloaded files", "most submitted forms and scripts", "most active organizations and countries", "summary of activity by day and day of week", "activity by hours of the day", etc. As for trends, we noticed that Wednesday was consistently the top day for access to our web sites, and mornings were the busiest time, with user activity trailing off in the afternoon hours. The WebTrends reports we produced were available to management on the I3G Intranet site.
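For comparison, the do-it-yourself route mentioned earlier amounts to scripting against the web server access log. The Python sketch below assumes NCSA Common Log Format and a hypothetical log file name; it tallies just two of the report sections listed above, the most requested pages and activity by day of week (WebTrends itself did far more).

    # Illustrative do-it-yourself log analysis; assumes NCSA Common Log Format, e.g.
    #   host - - [20/Mar/1998:09:15:02 -0500] "GET /jobs/index.html HTTP/1.0" 200 5120
    import re
    from collections import Counter
    from datetime import datetime

    LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(?:GET|POST) (\S+)')

    def analyze(path):
        """Count requests per page and per day of week from an access log."""
        pages, weekdays = Counter(), Counter()
        with open(path) as fh:
            for line in fh:
                match = LINE_RE.match(line)
                if not match:
                    continue
                _host, stamp, page = match.groups()
                pages[page] += 1
                when = datetime.strptime(stamp.split()[0], "%d/%b/%Y:%H:%M:%S")
                weekdays[when.strftime("%A")] += 1
        return pages, weekdays

    if __name__ == "__main__":
        pages, weekdays = analyze("access_log")          # hypothetical log file name
        print("Most requested pages:", pages.most_common(5))
        print("Activity by day of week:", dict(weekdays))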
One of the business managers noted that a large amount of the Internet traffic hitting our public web server originated from Virginia, United States. Given the interest, he chided that perhaps Scotia should open an office in Virginia! The reason we had so much traffic coming from Virginia was that America On-Line (AOL) was based in Virginia (Fairfax County). Since many people, even in Canada, had an AOL Internet dial-up account, their traffic appeared to originate from Virginia!
Trend analysis was great for determining whether marketing campaigns resulted in public web page views. For example, if a new financial instrument was promoted in the newspaper, did it result in hits on the web page containing information on how to buy that instrument?
After I left Scotia McLeod to join GE Canada, I was contracted by Scotiabank to do web site analysis on their public web site. This went on for over a year, by which time Scotiabank took the work on themselves. Scotiabank provided the public web server log file monthly and I ran the analysis along with a broken-links report, using three tools running on my lowly 133 MHz Intel Windows PC. The analysis typically ran for 48 hours because of all the IP-to-DNS name resolutions. If there were any ISP glitches or power losses during those 48 hours, the whole thing had to be re-run; that happened several times.
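Those 48-hour runs were dominated by reverse DNS lookups, one per distinct client IP in the log. The Python sketch below illustrates only the resolve-with-cache idea; the actual tools were off-the-shelf, and the sample addresses are hypothetical.

    # Illustrative reverse-DNS resolution with a cache; the slow part of the old
    # 48-hour runs was one lookup per distinct client IP in the log.
    import socket
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def reverse_dns(ip):
        """Resolve an IP to a host name once; repeated IPs hit the cache."""
        try:
            return socket.gethostbyaddr(ip)[0]
        except OSError:
            return ip          # leave unresolvable addresses as-is

    if __name__ == "__main__":
        sample_ips = ["8.8.8.8", "8.8.8.8", "1.1.1.1"]   # hypothetical log extract
        for ip in sample_ips:
            print(ip, "->", reverse_dns(ip))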
EPILOGUE
Scotia Capital Markets was a great place to work. There was much freedom granted in how you got work done, and the work was very challenging, with many learning opportunities. For anyone looking to work in the financial sector, I would strongly advise looking at Capital Markets. Capital Markets is usually small enough to feel as if you are working in a family; everyone knows each other. Because it's generally a smaller environment, you get exposure to many roles and technologies. At McLeod the executives would occasionally host breakfasts for the staff, which I found a great way to meet the folks you normally don't see. The experience I gained at McLeod was invaluable for my future roles.
Appendix A - HR Job Posting
First splash page.
Job listings.
Job listing detail.
Appendix B - I3G Web Site
I3G Intranet splash page on the development VAX running the Ohio State University web server.
WebTrends statistics report page. This is where web site analytics pages were stored.
Sample WebTrends analysis front page. The report actually ran for over 20 pages; only the first page is shown below.
This is the web page used for monitoring internal web servers. Eventually we also monitored other bank sites to determine their uptime and page latency from our own page-monitoring server. The page below shows the status of TD's public web site. The previous four pages of status responses were stored, the idea being that if a web site went down, we could look at the previous response times for any obvious degradation.
Appendix C - I3G's Project Management Application
Project splash page.
Project status list page.
Project entry detail page. We didn't use a lot of detail, but more could have been added if needed; it was mainly for tracking work in the pipeline and underway.