


Tuxedo started life in 1983 at AT&T's Bell Laboratories as a Transaction Manager for a database on UNIX, the former being called TUX and the latter DUX. Some of the initial target hardware was the AT&T 3B4000, an early example of MPP thinking (it was actually referred to as `the LAN in the Box'). Tuxedo was heavily influenced by the design of IBM's mainframe Transaction Processing Monitor (TPM), IMS. It was eventually released as a product on a UNIX platform in 1989. Since then the entire Tuxedo system has been sold as a bundle, together with UNIX, to Novell, under whose ownership it remained until relatively recently, when it was purchased by the new, venture capital funded BEA Systems. BEA Systems have in turn been taken over by Oracle (in 2008) and Tuxedo is now officially an Oracle product. The last major improvement was the introduction of SALT, a native SOAP adaptor (SOAP is in essence an Extensible Markup Language (XML) RPC), and the last major release was 10.

General Description

Tuxedo has a concept of Domains, which are the largest unit configurable via a dispatch mechanism known as a Bulletin Board. Bulletin Boards, effectively a Name Service, run on individual computers, with one of them designated the master. Domains can talk to each other through specialized gateway Servers, or indeed to other TPMs using the ISO XAP protocols or Systems Network Architecture (SNA). The size of a Domain is limited by the number of computers that have to participate in the Bulletin Board: the more computers, the longer it takes to boot the Tuxedo system. As such a system grows it becomes more unmanageable, and the use of separate Domains linked by gateways becomes more attractive.
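The shape of a Domain is described in a single configuration file, conventionally called UBBCONFIG, which is compiled (with tmloadcf) into the binary form the Bulletin Board is booted from. A rough, much abbreviated sketch follows; the machine, group and server names are our own invention and a real file needs rather more parameters:

```
*RESOURCES
IPCKEY          51002           # key for the Bulletin Board's IPC resources
MASTER          site1           # logical machine holding the master Bulletin Board
MODEL           MP              # a multi-machine Domain

*MACHINES
"box1"  LMID=site1  TUXDIR="/opt/tuxedo"  APPDIR="/home/bank"
"box2"  LMID=site2  TUXDIR="/opt/tuxedo"  APPDIR="/home/bank"

*GROUPS
BANK1   LMID=site1  GRPNO=1
BANK2   LMID=site2  GRPNO=2

*SERVERS
debit_credit_server  SRVGRP=BANK1  SRVID=1

*SERVICES
DEBIT_CREDIT
```

Every machine listed here takes part in the Bulletin Board, which is why the file, and with it the boot time, grows with the Domain.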

Tuxedo is a client/server system, with clients making requests of servers. In Tuxedo, Servers are individual operating system processes running under the control of the Bulletin Board system. They advertise at least one, and commonly a number of, Services. Each Service encapsulates a piece of business logic, is analogous to a CICS Transaction program, and can be conversational. There can be a number of instances of a Server running at the same time; inbound requests are queued and then distributed to the Servers. Traditionally, when a Server is occupied with a Service it cannot process further requests, either for the busy Service or for any of the other Services that it advertises. We say traditionally, for Tuxedo once had a purely single threaded model; from version 7 it allows multiple threads of execution within a single Server (as in Encina's Process Agents).

Tuxedo's Transactions are managed by a separate Server, an individual operating system process, the Transaction Manager. It is permanently connected to the Resource that it is using (such as a database). In this it differs from Encina, whose equivalent to the Server, the Process Agent, maintains a separate thread of control within the same operating system process to manage its transactions.

More Information

There is more on Tuxedo in [Andrade &al], one of a number of books on OLTP dedicated to this particular TPM. BEA Systems also have a web site on Tuxedo, but much of the material there is of the form of a product brochure.

Example Code for Tuxedo

Remember the Source Code caveats.

In this example we're using the Tuxedo VIEW mechanism. This provides for a simple, flat, record type structure which is written by the user in a so-called view file. The view file is then compiled, producing a linkable object and a `C' header file. The `C' header file defines a `C' struct which provides the data interface. Tuxedo will do data marshaling if necessary (it actually uses XDR). The VIEW mechanism has several advantages not necessarily found in other TPMs. It allows data dependent routing to be set up: a VIEW could contain a bank branch identity, for example, and Tuxedo could then route the transaction to a particular branch.
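A view file for the io_buf record used below might look something like this (the field details are of course our own sketch); each line gives a field's type, its `C' name, an optional FML name, an element count, flags, a size for strings, and a null value:

```
VIEW io_buf
#type   cname           fbname  count   flag    size    null
long    delta           -       1       -       -       0
string  remote_account  -       1       -       -       16      -
END
```

Compiling this with the view compiler, viewc, produces the linkable object and a header declaring the matching struct io_buf_t.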

Found in the client ...

        tpbegin (30, 0);        /* start a Transaction, 30 second timeout */

        buf = (struct io_buf_t *) tpalloc ("VIEW", "io_buf", 0);

        if (buf == NULL) {
                /* a fatal condition that needs reporting ... */
        } /* if */
        buf->delta = delta;
        buf->remote_account = remote_account;
        tpcall ("DEBIT_CREDIT", (char *) buf, 0, (char **) &buf, &jnk, 0);

        exec sql select amount 
                into :amount
                from dosh_on_local_database
                where account = :local_account;

        if (amount - delta < 0) {
                report_to_user ("can't go overdrawn!");
                tpabort (0);
                return;
        } /* if */

        exec sql update dosh_on_local_database 
                set amount = amount - :delta 
                where account = :local_account and 
                amount = :amount;

        tpcommit (0);

The calls tpbegin and tpcommit bracket Transactions. We can prematurely mark a Transaction as a failure using tpabort; in the example we don't want to go overdrawn. The tpcall subroutine also has a vote in the two phase commit: should the Transaction be marked as a failure then the final tpcommit will fail too and the entire Transaction (client and server) will be rolled back. Note the private memory allocation, via tpalloc, which can catch the unwary `C++' coder out.

The client, which as we can see has the Transaction control, calls the DEBIT_CREDIT Service ...

        void DEBIT_CREDIT (TPSVCINFO *inbound)
        {
                struct io_buf_t *buf;

                buf = (struct io_buf_t *) inbound->data;
                exec sql select amount
                        into :amount
                        from dosh_on_remote_database
                        where account = :buf->remote_account;
                if (amount + buf->delta < 0) {
                        report_to_user ("can't go overdrawn!");
                        tpreturn (TPFAIL, CANT_GO_OVERDRAWN,
                                NULL, 0, 0);
                } /* if */
                exec sql update dosh_on_remote_database
                        set amount = amount + :buf->delta
                        where account = :buf->remote_account and 
                        amount = :amount;
                tpreturn (TPSUCCESS, 0, inbound->data, 0, 0); 
        } /* DEBIT_CREDIT */

As far as the SQL is concerned, the server is much like the client. Note how the call tpreturn is used to mark a Transaction as failed or successful. The tpcall in the client will actually return an indication of this. It will also return a user defined variable allowing a hint as to why a failure occurred to be passed back. We've deliberately left handling this variable out, er, for the sake of clarity (being lazy has nothing to do with it). On both the client and the server side the struct io_buf_t is defined in the header file generated by compiling the VIEW.

Tuxedo's Name

According to knowledgeable sources [Andrade &al], Tuxedo's name came from one of the original developers, Tom Bishop, quipping that it was "TUX Extended for Distributed Operation", TUX standing for "Transactions on UNIX". "Tuxedo" is famously the American word for a Dinner Jacket; it acquired this name in the 19th Century from the Tuxedo Park Country Club, where it was popular. The name "Tuxedo" for that location was derived from Algonquin Indian as transcribed by early Dutch settlers and means something like "place of the bear". Tuxedo Park is today a gated suburban village about 45 miles north-west of central New York.


$Date: 2012/07/18 10:07:21 $
