The Journal of American Academy of Business, Cambridge

Vol. 2 * No. 2 * March 2003

The Library of Congress, Washington, DC   *   ISSN: 1540-7780

Most Trusted.  Most Cited.  Most Read.


All submissions are subject to a double blind peer review process.


The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various business-related fields, in a global realm, to publish their papers in one source.  The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own particular disciplines.  The journal provides opportunities for publishing researchers' papers as well as for viewing others' work.  All submissions are subject to a double blind peer review process.  The Journal of American Academy of Business, Cambridge is a refereed academic journal which publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC.  The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations, to ensure that our publications provide our authors with publication venues that are recognized by their institutions for academic advancement and academically qualified status.  No manuscript will be accepted without the required format.  All manuscripts should be professionally proofread before submission.

The Journal of American Academy of Business, Cambridge is published twice a year, in March and September.  Requests for subscriptions, back issues, and changes of address, as well as advertising, can be made via the Journal's e-mail address.  Manuscripts and other materials of an editorial nature should be directed to the same address.  Address advertising inquiries to the Advertising Manager.

Copyright 2000-2017. All Rights Reserved

Development of the Accounting Profession in Taiwan

Dr. Raymond S. Chen, California State University Northridge, Northridge, CA



Over the past forty years, the economic development in Taiwan has been nothing short of a miracle.  Gross national product (GNP) increased from US$1,562 million in 1960 to US$297,657 million in 2000, which translates to an increase of over 190 times.  Per capita GNP increased from US$154 in 1960 to US$14,216 in 2000, an increase of over 92 times.  Taiwan, in short, has become the nineteenth largest economy and fifteenth largest trading economy in the world.  Along with economic development comes increased demand for the services of the accounting profession.  Although many factors contributed to the development of the accounting profession in Taiwan, this paper identifies the most significant of them.  These factors are evident in the governmental policies designed to attract foreign investment and to further the formation of the domestic capital market.  These, in turn, fostered demand for accounting services, which were clearly influenced by foreign accounting practices that further stimulated professional development.  In Taiwan, certified public accountants have established large practices to meet the growing needs of businesses.  This paper illustrates the diversity, magnitude, and growth of professional services provided by accounting firms.  These services include representing clients in registering businesses, trademarks, and patents with government agencies, as well as in corporate dissolution or bankruptcy.  The growth of the accounting profession in Taiwan over the past four decades has been extremely impressive.  Many developing nations have devoted significant effort to formulating strategies to foster the development of their accounting professions, and usually look to highly developed nations, such as the United States of America, for reference in modeling their strategies.
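The growth multiples quoted above follow directly from the reported figures; a quick arithmetic check (figures exactly as stated in the text):

```python
# Taiwan GNP figures as reported above
gnp_1960, gnp_2000 = 1_562, 297_657              # millions of US$
per_capita_1960, per_capita_2000 = 154, 14_216   # US$

gnp_multiple = gnp_2000 / gnp_1960                        # ~190.6: "over 190 times"
per_capita_multiple = per_capita_2000 / per_capita_1960   # ~92.3: "over 92 times"

print(f"GNP: {gnp_multiple:.1f}x, per capita GNP: {per_capita_multiple:.1f}x")
```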
This paper presents Taiwan as an example of strategic planning for the development of the accounting profession in developing nations.  Taiwan is a mountainous island, with the Central Mountain Range as its spine running from north to south and serving as its major watershed between east and west.  The mountain range occupies more than half of the island.  Scores of peaks rise above 10,000 feet, with the highest being 13,113 feet.  Around the mountainous area are numerous independent hills, with an average height of 5,000 feet.  Taiwan, including the offshore Penghu islands, covers an area of 35,981 square kilometers, or 13,892 square miles.  Rivers in Taiwan are wide but short.  They are mostly shallow or dry in the dry season, while they flood during the season of rain-bearing winds.  Soil of alluvial origin on the plains and in the valleys covers about one-fourth of the island and is its chief resource.  The upland soils, subject to drastic erosion, are leached, acid, and infertile.  There are limited mineral resources.  With many mountains, Taiwan has abundant timber.  However, because of its low quality, inaccessibility, and high production costs, importing lumber has become necessary.  Fifty years ago, Taiwan was basically a rural and insulated society, as the modernization that had occurred under Japanese rule had been lost during World War II.  When the Nationalist government of the Republic of China, led by Chiang Kai-shek, moved its seat to Taiwan in 1949 at the time of the Communist takeover of mainland China, economic development in Taiwan was at a virtual standstill because of the civil war.  At that time, there was no large shareholder base of publicly held corporations, foreign corporations, or investments in Taiwan.  Consequently, there was little need for the attest function and other services of accounting firms.  In Taiwan, two systems for obtaining a CPA certificate have been established.
One is through passing the formal CPA examination as required by the CPA Law, which was first enacted in 1945 by the Republic of China.  The other is through fulfilling the requirements of the Evaluation of CPA Qualification Law, enacted in 1946.  To practice, a CPA certificate holder must obtain a CPA license from the Ministry of Economic Affairs and join the local Institute of CPAs [1].  The Evaluation of CPA Qualification Law allows persons who have not completed the formal CPA examination to obtain a CPA certificate.  This Law has been amended from time to time to encourage the normal channel of obtaining a CPA certificate by taking the formal CPA examination.  This Law does, however, include a provision for reciprocity with licensed accountants of other nations.  Prior to the 1970 amendment, a Chinese CPA certificate would not be issued to foreign professional accountants unless reciprocity was established.  Since 1970, the Taiwan government has relaxed its reciprocity requirement and may now issue a CPA certificate to foreign licensed accountants upon their passing an examination on the legal regulations of accounting and related matters in Taiwan.  A candidate can apply for a CPA certificate without taking the CPA examination if he or she has one of the following three qualifications: (1) holds a Ph.D. degree in Accountancy, (2) serves as chief or assistant chief accountant or auditor of a government agency or government-owned enterprise, or (3) has been a full professor of accounting in a college or university [2].  The formal CPA examination is given once a year by the Examination Yuan of the Taiwan government, at the same time as the examination for senior government employees and other professional practitioners.
The establishment of the Examination Yuan, an independent branch of government parallel to the executive, legislative, judicial, and investigation branches, to perform the examination function is one of the unique characteristics of the constitution of the Taiwan government.  Passing the CPA examination is contingent on attaining a predetermined average score across all subjects.  This differs from the procedure in the United States, where applicants may re-take failed sections of the examination under certain circumstances.
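The pass rule described above (a predetermined average over all subjects, with no section-by-section credit) can be sketched as a one-line check; the 60-point passing average is a hypothetical illustration, not the actual threshold:

```python
def passes_cpa_exam(scores, required_average=60.0):
    """Pass/fail turns on the average over all subjects, not on
    clearing each section independently (unlike U.S. practice)."""
    return sum(scores) / len(scores) >= required_average

# A weak score on one subject can be offset by stronger ones:
print(passes_cpa_exam([55, 70, 65, 62]))  # average 63.0 -> True
print(passes_cpa_exam([55, 58, 60, 59]))  # average 58.0 -> False
```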


Complex Strategic Decision Processes and Firm Performance in a Hypercompetitive Industry

Dr. Johannes H. Snyman, Oklahoma Christian University, Oklahoma City, OK

Dr. Donald V. Drew, Oklahoma Christian University, Oklahoma City, OK



In hypercompetitive environments, where change is rapid and ambiguous, firms need more than just rational or incremental strategic decision processes.  In fact, it is more valid to think of successful firms as pursuing complex strategic decision processes in order to match the environment’s characteristics.  This study investigated the relationship between complexity as a strategic decision process and firm performance.  Consistent with the hypothesis, complex processes were found to be significantly related to higher firm performance than were simple, unitary, or impoverished strategic decision processes.  Like the computer industry, the banking industry is undergoing dramatic change.  Deregulation has caused significant consolidation since 1984 and, with the repeal of the Glass-Steagall Act of 1933, additional consolidation is likely to continue (Soper, 2001).  The Gramm-Leach-Bliley Act of 1999, which modernized the financial services industry, is increasing convergence between bank and non-bank financial institutions, accelerating the pace of mergers and acquisitions (McTaggart, 2000).  Technological developments are also transforming the industry at a revolutionary pace (Giannakoudi, 1999).  ATMs and telephone banking, introduced in the 1970s, home banking via cable television in the 1980s, and PC banking followed by Internet banking in the 1990s have reshaped the industry.  In the 2000s, new technologies such as multiapplication smartcards and cooperative agreements with mobile phone operators will continue the pace of change (“Banks have no future,” 2000).  These technological changes, coupled with deregulation and globalization, are relaxing entry and exit barriers and increasing consumer demand for better and more sophisticated services, making banking a hypercompetitive industry (Bogner and Barr, 2000; and D’Aveni, 1994).
The hypercompetitive nature of the banking industry is not limited to megabanks; even small community banks are under attack (Lanham, 2001; and Silverman and Castaldi, 1992).  To compete successfully, small community banks must continually adjust their strategies.  Mistakes are too costly to make, and a strategy of imitation leaves one continually in second place, an untenable position for many of these institutions.  Small community bankers must not only pay attention to the content of strategy; they should also focus on the processes by which strategy is crafted.  A simple, easily copied strategic decision process can prove to be as disastrous as the wrong strategy in a hypercompetitive environment.  This study captured the strategic decision processes and financial performance of community banks during the 1990s, a very turbulent decade for banking.  According to researchers, a community bank is a commercial bank with less than $500 million in assets that serves a local community (Silverman and Castaldi, 1992).  Although the focus of the study is on community banks, making generalizability difficult, it does provide insight into complex, simple, unitary, and impoverished strategic decision processes, not just rational decision processes (Brouthers, Brouthers and Werner, 2000), and their relationship to financial performance.  Strategic decision processes are the arts, crafts, and discourses used by organizations during strategy formulation and implementation (Hendry, 2000).  Their purpose is to obtain congruence among four key imperatives: the environment (including technology), organizational structure, strategic leadership, and strategy (Miller, 1987).  These imperatives interact to form a configuration which provides an organization with uniqueness and direction (Miller and Whitney, 1999).
A well-crafted configuration can produce a rare, nonimitable, nonsubstitutable, and nontransferable resource which can produce a sustainable competitive advantage (Barney, 2001).  Given the complexity of obtaining such a configuration, strategic decision processes must encompass the full spectrum of organizational activities involving corporate boards, top managers and organization members (Rindova, 1999).  Over time these strategic planning processes form patterns that can be described and identified.  Various patterns or modes of strategic decision processes have been identified during the past six decades.  Eisenhardt and Zbaracki (1992) classified the major paradigms into comprehensive and bounded rationality, politics and power, and garbage can.  In the same year, Hart (1992) also identified three paradigms: comprehensive and bounded rationality, vision, and involvement.  Rationality consists of a systematic, analytical, and formal process.  Comprehensive rationality calls for an exhaustive analysis prior to decision making, while bounded rationality emphasizes decision making limited by cognitive and political realities.  Hart treated Eisenhardt and Zbaracki’s politics and power paradigm as another form of bounded rationality.  His second paradigm stresses the importance of top managers articulating a clear strategic vision or future direction for the firm.  However, visionary leadership involves commitment and involvement of all in the organization in order for the vision to be realized (Dess and Picken, 2000).  The third paradigm, involvement, which is similar to Eisenhardt and Zbaracki’s garbage can, consists of managers developing an ad hoc strategic decision process in response to a perceived crisis, not before; the decision process is therefore random (Das and Bing-Sheng, 1999).


Speech Recognition Technology for the Medical Field

Dr. Harrison D. Green, Eastern Illinois University, Charleston, IL



This paper focuses on innovative approaches to improving the usefulness of speech recognition technology (SRT) for the medical field.  Two key factors were initially identified: the need for integration of speech technology with existing applications, and the need for continuous updating of users’ speech files on all workstations in a networked environment.  A prototype solution addressing both of these issues was installed in a large veterinary clinic in west Texas.  The author used this setup, along with similar software installed on the Internet, as a research laboratory.  Data were gathered through direct observation and interviews with current and potential users.  The results of the study indicate that there are additional factors that are equally important or more important than those originally hypothesized.  These other factors include the amount of training, the power of macros, and workflow integration.  Through my consulting practice in the area of speech recognition, I found that there is a definite need for continuous speech recognition technology in the medical field.  Doctors use many long, technical words that are more easily spoken than typed.  While in the examination or treatment room, having to focus on typing tends to divert the doctor’s attention away from the patient.  Using a tape recorder makes correction difficult and requires the services of a transcriptionist.  This study focuses on a prototype multi-user system for continuous speech recognition that was originally installed in a veterinary office and currently is available over the Internet.  Until about two years ago, there were several major barriers to the implementation of continuous SRT by professionals.  The computer reaction time was too slow; in other words, there was too much delay between when a word was spoken and when it appeared on the screen.
Obtaining a high degree of accuracy required extensive voice training, and many professionals were not willing to invest the time required.  In addition, there were very few consultants available with speech technology expertise to assist professionals with initial setup and software usage.  To achieve a suitable reaction time for normal conversation, an Intel Pentium III or AMD Athlon processor with a speed of at least 500 megahertz and 128 megabytes of RAM is required.  A computer with this much power can be purchased for less than $1,000 and is, therefore, becoming easier to cost-justify.  The latest medical version of Dragon Naturally Speaking, 6.0, has higher initial accuracy and reduced training time to achieve 95 percent or greater accuracy.  Now there are more SRT consultants and an increasing number of medical professionals who have enough training to assist colleagues in getting started.  The use of continuous speech recognition software to prepare documents through word processors is becoming more prevalent.  Many professionals use speech recognition software to open and close programs, enter e-mail messages, and navigate the Internet.  It is estimated that 30 percent of medical professionals in West Texas (including physicians, veterinarians, dentists, and psychologists) have at least tried SRT for document entry.  Initially, two factors were identified that appeared to be important to the successful adoption of SRT: integration with existing software and the need for networking speech files.  Most doctors for whom I consulted indicated that unless the SRT could actually work from within their medical software, it would not be beneficial to them.  They wanted to be able to issue voice commands to move among menu items, to enter text where required, and to correct and train words while using the software.
It was noted that most transcription actually occurs from within the existing medical program rather than from within word processing software.  With continuous speech recognition, each user must have an enunciation and vocabulary file that is specifically trained for his or her voice.  When a new word is trained or a correction occurs, the user’s file is updated on the current workstation.  In larger offices, doctors wanted to be able to switch between computers and not lose the results of any training or corrections.  To allow for multi-user access, the speech file update must occur not only on a single workstation but also on all of the workstations in the network.  A prototype speech recognition system that addressed both of the issues initially identified was set up in a large veterinary clinic in Texas.  Support for Dragon Naturally Speaking and most other speech recognition software is incorporated into Microsoft Word, Corel WordPerfect, Microsoft Excel, and Microsoft Outlook, but very few other programs.  Embedding speech recognition into medical software would require access to program source code.  Source code is either not available or too expensive to acquire.  Even if source code could be obtained, making the required software modifications would be a major programming effort in most cases.  The next best solution is to create macros through the speech software.  In the prototype solution installed in the veterinary office, a set of thirty macros was created.  These macros combine the built-in Windows keyboard commands for cutting, copying, and pasting with the Dragon commands for selecting, modifying, and training.  These commands are specific to the Patient Advisor program but can be modified to work with other software.  Based on this prototype, a set of similar macros was incorporated into demo software that can be run from the Internet.  Installed in the veterinary office is a multi-user speech file update program.
It monitors the date on the speech files for each user and workstation.  When a new speech file is saved, all of the other files on every workstation are automatically updated for that user.  This software works with any number of workstations and any number of users. 
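The update mechanism just described (compare modification dates, then propagate the newest copy of a user's speech file to every workstation) can be sketched as follows.  This is an illustrative reconstruction, not the prototype's actual code; the per-workstation file layout and the `.spk` extension are assumptions:

```python
import shutil
from pathlib import Path

def sync_speech_files(user, workstations):
    """Copy the most recently saved speech file for `user` over any
    older (or missing) copies on the other workstations."""
    # Assumed layout: each workstation directory holds <user>.spk
    copies = [Path(ws) / f"{user}.spk" for ws in workstations]
    existing = [p for p in copies if p.exists()]
    if not existing:
        return
    # The newest file by modification date wins, as in the prototype.
    newest = max(existing, key=lambda p: p.stat().st_mtime)
    for target in copies:
        if target == newest:
            continue
        if not target.exists() or target.stat().st_mtime < newest.stat().st_mtime:
            shutil.copy2(newest, target)  # copy2 preserves the timestamp
```

Because the copy preserves the modification time, a later run sees all copies as equally fresh and does nothing, which is the fixed point the monitoring loop described above would settle into.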


What Determines Dividend Policy: A Comprehensive Test

Dr. Tao Zeng, Wilfrid Laurier University, Waterloo, Ontario, Canada



This paper presents an empirical study of the determinants of corporate dividend policy in the Canadian setting.  It shows that firms pay dividends as a signal and to reduce agency costs.  It also shows that liquidity and the tax clientele effect are related to dividend policy.  The author gratefully acknowledges that financial support for this research was received from a grant funded by WLU operating funds, the SSHRC Institutional Grant awarded to WLU, and CMA Canada - Ontario.  There has been a great deal of financial, economic, and accounting literature analysing why firms pay dividends, given that the effective tax rate on capital gains is lower than the effective tax rate on dividends, i.e., the "dividend puzzle" (Holder et al 1998, Dhaliwal et al 1995, Lamoureux 1990, Chaplinsky and Seyhun 1990, Abrutyn and Turner 1990, Mann 1989, Crockett and Friend 1988, Kose and Williams 1985, Feldstein and Green 1983, Litzenberger and Ramaswamy 1982, 1979, Miller and Scholes 1982, Feldstein 1970, and so on).  To shed light on this puzzle, researchers try to identify the benefits from paying dividends that may offset the tax disadvantages.  Some survey studies find that CEOs choose to pay dividends because they believe dividends can serve as a signal to shareholders (Baker and Powell 1999, Abrutyn and Turner 1990, Baker et al 1985); because dividends can reduce agency costs and induce managers to act in the interest of shareholders (Abrutyn and Turner 1990); and because clientele effects exist (Baker et al 1985).  Some event studies on the signalling effect attempt to test whether a positive equity price response is associated with an unexpected dividend increase, or vice versa.  Several studies present evidence consistent with this argument (Dielman and Oppenheimer 1984, Aharony and Swary 1980, Charest 1978, Pettit 1977, 1972).
Other studies, however, find evidence indicating that dividend changes reflect no more information than that reflected in earnings (Gonedes 1978).  Tax clientele studies (Dhaliwal et al 1995) argue that ownership by tax-exempt or tax-deferred investors will increase when firms begin to pay dividends.  This study differs from prior research in three important ways.  First, this paper examines the relationship between firm-specific characteristics and dividend policy.  Mann (1989) argued that studies should go beyond event studies around dividend announcement days, since there may exist underlying factors other than the dividend itself that drive the change in returns around dividend announcements.  Second, this study designs the tests using corporate financial data, rather than taking a survey approach.  The arguments drawn from surveys may provide reasons justifying managers’ behaviour after the fact, rather than motives beforehand.  In addition, survey research involves non-response bias, and the bias is severe when the response rate is low.  Third, this study makes a comprehensive test of the determinants of dividend policy.  Prior research usually focuses on only one factor, e.g., signalling (Dyl and Weigand 1998, Brook et al 1998, Bernheim and Wantz 1985), agency costs (Born and Rimbey 1993, Crutchley and Hansen 1989, Easterbrook 1984), tax clienteles (Dhaliwal et al 1995, Scholz 1992), or investment opportunities (Gaver and Gaver 1993, Smith and Watts 1992).  It is argued here that dividend policy may be a decision based on a combination of many factors, both inside and outside the firm.  The remainder of this paper is organized as follows.  In section 2, I review the literature relevant to the determinants of firms’ dividend policies.  In section 3, I describe the empirical test method, data collection, and variable measurement.  In section 4, the test results are presented.  Finally, I summarize and conclude in section 5.
Given that the effective tax rate on capital gains is lower than that on dividends, research has been undertaken to identify the benefits from paying dividends that may compensate for the tax disadvantages.  The benefits from paying dividends, or the reasons justifying dividend payout, are summarized as follows.  One argument for why firms pay dividends involves the tax effect.  Shareholders receive, and are taxed on, the returns to shares either as dividends or as capital gains.  Dividends and capital gains are taxed differently among various types of investors: individual investors, corporate investors, and tax-exempt or tax-deferred investors.  The tax clientele hypothesis argues that tax clienteles prefer different dividend policies, and that investors may attach themselves to firms whose dividend policies are appropriate to their particular tax circumstances.  For example, corporate investors, whose dividends are taxed at a lower rate than capital gains, may prefer a high dividend payout ratio; on the other hand, high-income individual investors, whose dividends are taxed at a higher rate than capital gains, may prefer a low dividend payout ratio.  The tests in this study for the tax clientele hypothesis are: a positive relationship between firm dividend payout and the ownership of corporate investors and tax-exempt or tax-deferred investors, and a negative relationship between firm dividend payout and the ownership of high-income individual investors.  Another argument for why firms pay dividends is that dividends provide a mechanism for restricting managerial discretion.  Dividends reduce the agency costs of free cash flow by cutting down the cash available for spending at the discretion of management, and hence provide some protection to the firm against management that might benefit itself at the shareholders’ expense.
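The two predicted relationships can be illustrated with a toy regression on synthetic data; the variable names, data, and coefficients below are invented purely to show the form of the test, not results from the paper:

```python
import random
import statistics

random.seed(0)
n = 200

# Synthetic ownership shares (illustrative only)
corp_own = [random.uniform(0, 0.5) for _ in range(n)]   # corporate / tax-exempt holders
indiv_own = [random.uniform(0, 0.5) for _ in range(n)]  # high-income individual holders

# Simulated payout ratios consistent with the clientele hypothesis:
# higher payout with corporate ownership, lower with individual ownership.
payout = [0.3 + 0.4 * c - 0.5 * i + random.gauss(0, 0.05)
          for c, i in zip(corp_own, indiv_own)]

def slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / statistics.variance(x)

# Expected signs under the hypothesis: positive, then negative.
print(slope(corp_own, payout) > 0)
print(slope(indiv_own, payout) < 0)
```

The actual study would replace the simulated series with observed payout ratios and ownership data and use a multivariate specification, but the sign predictions being tested are exactly these.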


Theoretical Inconsistencies in Accounting: Why Don’t We Depreciate Land?

Dr. Jeffry R. Haber, Hagan School of Business, Iona College, New Rochelle, NY



Accounting is part art and part science, built upon assumptions, principles, and existing practice, and grounded in constructs that are internally consistent and well thought out.  No matter how much is art and how much is science, it is fully an artifact of the business, economic, social, and internal needs of users, both internal and external.  There exists no basis in the natural world for why things in accounting are the way they are.  We have what we have because accountants agree to apply a set of principles handed down by a group of policymakers.  Policies, procedures, rules, and regulations, developed systematically over many years, stand alone and interrelate, explain why things are done and why things are not done, and have been decreed and put into practice.  These policies, rules, and regulations are developed to handle situations large enough and broad enough in scope to merit the attention of the profession.  When developing the set of rules that define and re-define currently accepted accounting practice, it would not be reasonable to expect a rule developed for every possible transaction.  The world is too complicated for that.  It is more logical to have constructs defined in a way that allows inference and application in the handling of transactions.  However, it would be expected that where rules do exist they be followed and, further, applied to analogous situations.  As explained in ARB 43, the purpose of depreciation is the systematic and rational allocation of the cost of an asset to the period of benefit.  Plant, property, and equipment are subject to depreciation.  Historically, the land that is inseparable from the plant has not been subject to depreciation, though a literal reading of ARB 43 without the benefit of historical perspective would not indicate this treatment, since the cost of the productive facility includes the cost of the land as much as that of the building and improvements.
No precedent exists in an accounting pronouncement for this treatment.  Statement of Financial Accounting Standards No. 93 (Recognition of Depreciation by Not-for-Profit Organizations) is the first statement to explicitly state that land is not depreciated.  So when a building is acquired, the cost of the building is spread over the years of economic benefit, except for the cost that we allocate to land.  In some fashion, the total cost of the building, which usually includes in a single stated price the land, the building, and the improvements, is allocated among at least those three components.  The arbitrary amount allocated to land is recorded as land and not depreciated.  The amounts allocated to building and improvements are recorded in the respective accounts and depreciated over the period of benefit.  Following long-standing practice, land is not subject to depreciation.  For all these reasons, the same could be said for buildings.  Perhaps more importantly, the whole premise of depreciation and the basis for depreciation render these reasons moot.  Buildings, too, have exceptionally long useful lives.  In fact, we ignore how long buildings might (and probably will) last, and instead opt for an accounting convention of a maximum period of benefit.  So, regardless of how long buildings might last, will last, typically last, and can be expected to last, accounting forces a mandatory maximum useful life onto the calculation of depreciation.  The same rationale can be applied to land.  The process of depreciation is the systematic and rational allocation of the cost of an asset to the period of benefit.  The land has a cost that is quantifiable (and in fact quantified).  Accounting rules could easily include land in the same sort of forced depreciable-life convention as buildings.  Land can increase in value and, except for problematic environmental situations, might be expected to.  This is neither relevant nor unusual.
First and foremost, depreciation is an expense allocation system, not a valuation method.  So, on its face, this reasoning is wholly flawed.  Second, buildings might be expected to increase in value as well.  This does not stop the process of depreciation.  This reason is true, but irrelevant.  The only concern accounting would have for a depreciable asset is if it lost its revenue-producing capability.  All assets undergoing depreciation are expected to maintain their revenue-producing ability for as long as they are being depreciated (though some methods recognize decreasing contributions).  This is a prerequisite for the continuance of depreciation.  If the asset lost its revenue-producing ability, it would be written off.  Why does it matter whether we depreciate land?  In one sense, it matters because under current definitions of what depreciation is (a cost allocation system), land should be subject to the same cost allocation rules as the building that rests upon it.  The building is separable from the land, but the converse is not also true.  In this regard, as the building goes, so should the land (as far as depreciation).  Not subjecting the land to depreciation means that the cost of the land has to be separated from the cost of the building and improvements, since the total historical cost usually comprises a single number which must be allocated to each component.  This gives rise to a certain arbitrariness at best, and manipulation at worst.  In addition, not depreciating land causes a universal overstatement of assets and income.  And lastly, it creates a hypocritical stand against the rationale for depreciation.  Depreciation is a cost allocation method, not a valuation method, as detailed in ARB 43.  SFAS 93 legitimizes the reasons for not depreciating land in direct opposition to the cost allocation rationale.  Accounting rules have never had a problem with limiting long periods for the purpose of convention.
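The allocation mechanics at issue can be made concrete with a small straight-line sketch; the purchase price, allocation percentages, and useful lives below are hypothetical numbers chosen for illustration:

```python
def straight_line(cost, salvage, life_years):
    """Systematic allocation of cost to the periods of benefit."""
    return (cost - salvage) / life_years  # annual depreciation expense

# Hypothetical purchase: one stated price, allocated among three components.
purchase_price = 1_000_000
land = 0.25 * purchase_price          # recorded as land; never depreciated
building = 0.60 * purchase_price      # depreciated over a conventional life
improvements = 0.15 * purchase_price

annual_expense = straight_line(building, 0, 39) + straight_line(improvements, 0, 15)

# Over the assets' lives, the amount allocated to land is never expensed,
# which is the overstatement of assets and income the paper objects to.
never_depreciated = land
```

The paper's point is that nothing in the mechanics prevents feeding the land component through the same `straight_line` convention as the building that sits on it.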


Factors on Channel Integration Decisions of Taiwanese Manufacturers in the Export Market

Lanying Huang, Nova Southeastern University, FL

Chin-Chun Hsu, Research Fellow, Saint Louis University, Saint Louis, MO



The intent of this paper is to examine the nature of the determinants of export channel integration decisions of Taiwanese small- and medium-sized exporting companies; the findings shed new light on the characteristics of exporting companies in the newly industrialized countries.  It provides empirical evidence that the phenomenal international channel integration of small- and medium-sized Taiwanese exporting firms is due to a combination of several theory-based factors.  Exporting is the most common way for manufacturers to do business in foreign markets. Firms still export on a regular and permanent basis even after they have long been involved in the international business arena. At the beginning of exporting, a firm has to make two strategic decisions: where and how. Firms first choose the target country in which to market their products, and then identify the most suitable type of export distribution channel structure to use. It is important that alternative structural arrangements, which entail differing degrees of commitment and risk, be considered before initial entry into a foreign market, since distribution structures are difficult to change and the wrong decision may lead to long-lasting inefficient performance.  Several theoretical frameworks have been offered to explain the distribution mode decision. However, each alone is not sufficient to explain variation in the degree of channel integration (Anderson & Coughlan, 1987; Klein, Frazier, & Roth, 1990). The production cost perspective (Stern & El-Ansary, 1992) maintains that scale economies are the basis for deciding channel structure. The transaction cost paradigm (Williamson, 1975, 1981) states that forward integration is likely to be attractive when asset specificity is high and when there is high environmental uncertainty. 
The internationalization process theory (Johanson & Wiedersheim-Paul, 1975; Johanson & Vahlne, 1977) implies that international experience influences the degree of commitment to a foreign market. The strategic rationale perspective implies the need to incorporate strategic corporate objectives in deciding the channel integration structure, and asserts the competitive advantage of forward channel integration combined with a differentiation strategy. This study attempts to incorporate several theories into a unified framework by adopting an eclectic approach, drawing on Hill, Hwang & Kim (1990), Osborne (1996), and Aulakh & Kotabe (1997), to better explain channel integration decisions.  Most of the available research has focused on channels of distribution for firms from developed countries. Very little research has been done on the decision-making processes of firms from developing countries (Anderson & Tansuhaj, 1990; Da Rocha & Christensen, 1994) or newly industrialized economies (NIEs). It is now widely recognized that newly industrialized countries such as Taiwan are increasingly emerging as economic powerhouses.  This study examines the factors affecting export channel integration decisions of Taiwanese small- and medium-sized exporting companies over the past decade. It provides empirical evidence that the phenomenal international channel integration of small- and medium-sized Taiwanese exporting firms is due to a combination of several theory-based factors. This study also provides insights into the international competitiveness of firms based in newly industrialized countries. Due to a limited domestic market, Taiwanese companies internationalized early for scale and risk diversification, aided by government grants. Taiwan's economy is export-driven and has been a real success story. Outstanding export performance has dominated Taiwan's economic growth. Every year, Taiwanese industries produce more than their domestic demand level. 
With a population of only 21 million and healthy, well-developed industries, Taiwan must, in order to prosper, produce for and meet the demands of the world economy.  From 1986 to 2000, significant growth took place in export volume, scope of business, and export area, along with overseas offices and alliances.  One particular feature of Taiwan is the pivotal role of small and medium-sized enterprises.  Taiwan's SMEs are nimble and responsive to profit opportunities, making them competitive in the world market (Wade, 1990; Levy, 1991).  This has led some to conclude that small- and medium-sized exporting companies are one of the pillars of Taiwan's economic success (Amsden, 1991).  The role that small- and medium-sized exporting companies play in Taiwan may provide valuable experience for the developing world.  Can the approach presented in this study explain the choice of export distribution channels of Taiwanese manufacturing exporters? In addition, which variables are most strongly related to the export channel structure choices of Taiwanese manufacturing exporters? The purpose of this study is to examine empirically the factors influencing Taiwanese manufacturing firms' distribution channel decisions in exporting their products.


The Role of Advertising Played in Brand Switching

Dr. Jane Lu Hsu, National Chung Hsing University, Taiwan

Dr. Wei-Hsien Chang, National Chung Hsing University, Taiwan



Consumer satisfaction is an important subject in marketing.  Since even satisfied customers will try alternatives in search of higher satisfaction, customer satisfaction cannot be considered a sole indicator of brand loyalty.  Brand-switching behavior is critical for newly issued brands trying to survive the introduction stage in the marketplace, and for firms seeking to avoid losing existing customers.  This paper examines the influences of advertising on brand-switching behavior among young adults in Taiwan using survey data.  Two durable goods, laptop computers and mobile phones, and two consumable goods, sports shoes and carbonated drinks, are considered.  Results indicate that for durable goods, a large percentage of young adults can be classified as innovative consumers with high tendencies to switch brands.  For consumable goods, multi-loyalty is common.  Consumers with different levels of advertising perception have varying likelihoods of switching brands. Satisfying and retaining customers to sustain business are usually firms' top priorities in marketing strategy.  Customers generate more profits for companies when they stay loyal to their brands.  Loyalty is considered to provide fewer incentives for consumers to engage in extended information searching among alternatives (Uncles et al., 1998).  Hence, competitors' advertisements are less likely to be acknowledged by loyal customers.  Since the cost of retaining a loyal customer is one-fifth the cost of attracting a new one, serving repeat customers can be cost effective (Barsky, 1994).  Building brand loyalty reduces the costs of advertisements designed to draw new customers (Reichheld and Sasser, 1990; Heskett et al., 1990; Peppers and Rogers, 1993).  Therefore, satisfying and retaining customers is advantageous since overall marketing costs can be cut down (Rundle-Thiele and Bennett, 2001).  
The common belief is that satisfied customers exhibit repeat purchasing behavior, which then provides long-term profits to companies.  However, offers from competitors can attract even loyal customers to try alternatives or new brands.  As mentioned in Jones and Sasser (1995), the relationships between satisfaction and loyalty are neither simple nor linear.  Consumers switch brands not simply because they are dissatisfied with their current brands, but possibly because they want to try new brands, because they are attracted by discounts offered on other brands, or because the current brands are out of stock.  Furthermore, advertising provides incentives and stimuli for consumers to switch brands.  Dick and Basu (1994) argue that relative attitude and repeat patronage affect customer loyalty.  They classify loyalty into four categories: loyalty, latent loyalty, spurious loyalty, and no loyalty.  Bloemer et al. (2002) discuss latently dissatisfied customers as early warning signals, since these customers report overall satisfaction but have the latent characteristics of dissatisfied customers.  Mittal and Lassar (1998) measure relationships among overall satisfaction, intention to switch, technical quality, and functional quality.  They conclude that the relationship between satisfaction and loyalty is asymmetrical: dissatisfaction guarantees switching, but satisfaction does not promise loyalty. Since satisfaction cannot be considered equivalent to loyalty, both satisfied and dissatisfied customers may switch brands for certain reasons.  Measuring satisfaction or dissatisfaction may not provide a reliable basis for firms to identify the customers who intend to stay with their current brands and those who want to switch.  Rationalizing customer switching behavior provides new insights into customer loyalty.  Previous studies have revealed the existence of brand-switching behavior and the relationships between satisfaction and brand switching.  
However, the effects of advertising on brand switching have not been examined exclusively.  This paper intends to reveal the role that advertising plays in brand switching.  The results of this study not only fill a gap in the marketing literature on the relationship between advertising and brand switching, but also benefit companies that utilize advertisements to attract competitors' customers or to promote newly issued brands. Brands are more than products.  Products comprise physical attributes and dimensions, while brands reflect special relationships and bonds between products and customers.  Important contributions to the development of brands beyond products are brand positioning and advertising (Czerniawski and Maloney, 1999).  Advertisers endeavor to build strong brands and are concerned with the effectiveness of advertising in influencing consumer purchasing behavior (Franzen, 1999).  Advertising is critical for establishing a brand's absolute and relative values compared with other brands in the marketplace.  Brand positioning is a strategic version of brand development, while advertising provides guidelines and directions for the development of brands in marketing (Czerniawski and Maloney, 1999).  Brand positioning classifies brands into groups that share similar characteristics with other brands and are distinguishable from other groups of brands.  Brand positioning is based on a number of associated dimensions such as structural characteristics (visual, auditory, gustatory, and olfactory), product attributes, situational conditions, and symbolic applications (Franzen, 1999). 


Taxation of E-Commerce

Dr. S. Peter Horn, School of International Management, Ecole Nationale des Ponts et Chaussees, France


No other innovation, or way of doing business, has revolutionized the international economy faster than the Internet. It took generations for the Industrial Revolution to play out around the world, while the Internet Revolution has unfolded in less than a decade. The speed of this change has been astounding. In the Industrial Age, as change took place, governments were able to react accordingly. In the Internet Age, today's innovation is tomorrow's standard. Governments are finding that they must act on Internet time, which is a daunting challenge.  This paper examines the current state of affairs with regard to the taxation of Internet commerce.  It analyzes the historical perspectives of the United States of America, the OECD, the WTO, and the European Union, and attempts to answer the question "What happens next?" The biggest standards battle in the history of the digital revolution has again heated up, and the fight is about taxes on e-commerce. The unprecedented growth of the Internet during the "internet bubble economy" highlighted the glaring problems with current taxation laws addressing the remote purchase of goods and services.  While these problems and concerns may have been sidelined during the past couple of years by the bursting of the internet bubble, the worsening worldwide economic slowdown, and the surfacing of the global war on terror, they have not been adequately addressed. And Internet commerce is not dead.  Recent statistics released by the US Census Bureau of the Department of Commerce show that Internet commerce rose during the last quarter of 2001 and the first quarter of 2002 in comparison with the last quarter of 2000 and the first quarter of 2001. Their estimate of U.S. retail e-commerce sales for the first quarter of 2002, not adjusted for seasonal, holiday, and trading-day differences, was $9.849 billion, an increase of 19.3% from the first quarter of 2001.  
Total retail sales for the first quarter of 2002 were estimated at $743.8 billion, an increase of only 2.7% from the same period a year ago.  E-commerce sales in the first quarter of 2002 accounted for 1.3% of total sales, while in the first quarter of 2001 e-commerce sales were 1.1% of total sales.  United Kingdom statistics also show a startling increase in e-commerce sales.  The Interactive Media in Retail Group (an industry body for global retailing) is now collecting hard data on online sales to UK consumers. Their IMRG Index provides robust evidence that the UK e-retail market is significantly larger and growing faster than previously estimated.  The Index rose to 262 in April 2001, up from 100 in April 2000, giving an estimate of e-commerce retail sales for the month of April 2001 of 210 million pounds sterling. This increase of 162% in e-commerce retail sales compares with only a 5.9% increase in general retail sales. Figures from the Index similarly show an increase of 10.4% in March 2002 compared with February 2002, and the organization estimates that e-retail will continue to grow ten times faster than mainstream retail, with no indication that any sector is beginning to plateau.  In a press release dated June 2002, they stated, "Half a billion people are online at home worldwide and a third of them shop online."  They continued, "Europe now has more internet users than the US. The UK is responsible for a third of all e-retail sales in Europe, with online sales worth an estimated £507 million in May (2002) alone. 
Internet sales continue to surge, against the general retail trend, but while these direct sales are the most concrete manifestation of e-retail, and may reach 15% of all retail within a few years, they are only one element in the e-commerce equation." They also highlighted that "throughout the first half of 2002 a steady stream of positive reports have been issued by e-retailers, whose ventures are showing profits - many for the first time - and experiencing rapid growth in sales. The UK e-retail market is currently growing at over 90% year-on-year, and is expected to be worth £7 billion this year (2002), representing almost 4% of the total retail market by the end of the year".  IDC Research confirms these figures. As the world's leading provider of technology intelligence, industry analysis, market data, and strategic and tactical guidance to builders, providers, and users of information technology, IDC suggests in recent research that more than 600 million people worldwide will have access to the Internet, spending more than $1 trillion online. While the United States now accounts for 40 percent of the money spent online, IDC suggests that as residents of Asia and Western Europe increase their spending, the U.S. will account for only 38 percent by 2006. In some Asian nations, governments are lobbying to bring more citizens online, contributing to rapid Internet penetration in those markets. In Western Europe, e-commerce is expected to rise 68 percent this year as the adoption of the Euro brings greater competition, price transparency, and improved deals for online buyers.  Accordingly, governments at all levels and all types of retailers are now addressing the best way to deal with the legislative shortcomings surrounding the taxation of e-commerce, with local government groups pushing for tax assessment based on where the purchaser lives rather than the seller's location, and businesses lobbying for a neutral, fair and equitable, easily administered system.   
The EU recently acted unilaterally with its Electronic Commerce Directive.
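The Census-derived shares quoted earlier in this abstract follow from simple arithmetic; the quick check below uses only the dollar figures as given in the text:

```python
# Arithmetic check of the US Census figures quoted above (Q1 2002).
ecommerce_q1_2002 = 9.849      # e-commerce sales, $ billions
total_retail_q1_2002 = 743.8   # total retail sales, $ billions

# Share of e-commerce in total retail (reported as 1.3%).
share = ecommerce_q1_2002 / total_retail_q1_2002 * 100
print(f"{share:.1f}%")

# Q1 2001 e-commerce level implied by the reported 19.3% growth.
implied_q1_2001 = ecommerce_q1_2002 / 1.193
print(f"${implied_q1_2001:.3f} billion")
```

The computed share rounds to the 1.3% the Census release reports, confirming the figures are internally consistent.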


Accounting Practices for Interest Rate Swap Derivatives

Dr. Raymond S. Chen, California State University Northridge, Northridge, CA



There has been a tremendous proliferation in the use of derivatives by many companies in recent years.  Some companies utilize derivatives as a type of investment instrument.  Others utilize derivatives as a risk exposure management tool.  With the prodding of the SEC to improve the accounting for and disclosure of derivative financial instruments, the Financial Accounting Standards Board (FASB) issued SFAS No. 133, "Accounting for Derivative Instruments and Hedging Activities," in June 1998.  However, because of the complexity of derivatives and of the accounting rules adopted, the effective date of Statement No. 133 was delayed two fiscal years, to fiscal years beginning after June 15, 2000.  This paper illustrates the accounting principles and procedures for swap contracts.  The illustration of the accounting rules applied to swaps will give readers a better understanding of accounting rules for derivatives and of how hedge accounting contributed to the complexity of Statement No. 133. To make the accounting rules for derivatives transparent and easily understandable, the FASB should undertake a comprehensive approach that accounts for all financial instruments at fair value once the conceptual and measurement issues are resolved, regardless of whether the underlying financial instruments are hedged.  This comprehensive approach would eliminate the complex and inconsistent hedge accounting rules.  Given the lack of accounting guidance and the resulting inconsistencies in accounting practice for financial instruments and transactions, the FASB decided to add projects on financial instruments to its agenda in 1986. [1]  These financial instrument projects were intended to address accounting issues on liabilities and equities, recognition and measurement, and hedging.  Of these issues, the FASB considered identifying the most relevant measurement attribute for financial instruments a priority.  Prior to the issuance of SFAS No. 
133, fair value was required for certain investments but not for certain liabilities.  With the exception of held-to-maturity debt securities and equity-method investments, fair value was used in accounting for investments, as reflected in SFAS No. 115, Accounting for Certain Investments in Debt and Equity Securities. [2]  The FASB had rejected fair value reporting for liabilities partly because of the difficulty of obtaining a reliable fair value, as many liabilities do not trade in an established market.  However, in the Exposure Draft on accounting for derivative and similar financial instruments and for hedging activities, the FASB made the fundamental decision that fair value is the most relevant attribute for financial instruments.  That decision was incorporated into SFAS No. 133, which stated that fair value for financial assets and liabilities provides more relevant and understandable information than cost-based measures. [3]  Measuring all financial assets and liabilities at fair value is a fundamental requirement in derivative accounting.  However, in the absence of a derivative transaction, fair value is not required in accounting for liabilities.  This piecemeal approach to fair value accounting in derivative transactions has contributed to the complexity and inconsistency of the accounting principles and rules adopted by SFAS No. 133 and SFAS No. 138. [4]  The following two sections demonstrate the application of fair values in accounting for swaps and how the gains or losses are reported in the financial statements.  The principles and procedures applicable to swaps also apply to other derivative activities.  Derivatives can be used to manage the risk exposure of an investment asset or a liability.  To explain the basic issue, the accounting practice for an interest-rate swap hedging a financial instrument that is a liability is presented first.  
A swap involving a financial instrument that is an asset will be discussed later.  An interest-rate swap is an agreement in which two parties agree to exchange the interest payments on debt over a specified period.  Interest-rate swaps can serve as both fair value and cash flow hedges: a fair value hedge protects against the risk of a change in value of a fixed-rate debt instrument, while a cash flow hedge protects against the risk to the future cash flows of a variable-rate debt instrument. To illustrate the accounting practice for a swap, suppose that ABC has a $1 million, 7% fixed-rate mortgage note, due 10 years from now, payable to County Bank.  ABC contracts with City Credit, a swap dealer, for a 10-year interest-rate swap (a derivative) from its 7% fixed interest rate to a variable rate.  This swap protects ABC against paying a higher-than-market interest rate if interest rates decline.  When interest rates decline, the fair value of ABC's mortgage note increases.  The swap protects ABC against the risk from changes in the value of this fixed-rate mortgage note payable; therefore, this swap is a fair value hedge.  Further assume the variable interest rate changes from 7% to 6% during the first year after the swap was contracted.  The fair value of the mortgage note is $1,068,016 at the end of the first year of the swap contract.  The calculation of the fair value of the note is shown below:  Present Value of Principal = $1,000,000 X 0.591898 = $591,898 (periods = 9, rate = 6%); Present Value of Interest = $70,000 X 6.801692 = $476,118 (periods = 9, rate = 6%); Total Present Value of Debt = $591,898 + $476,118 = $1,068,016.  The fair value of the derivative is the present value of an annuity of interest savings of $10,000 per year for the nine years before the mortgage note is due.  That means the fair value of this derivative is $68,016.  
The calculation of the fair value of this derivative is the interest saving multiplied by the present value annuity factor (Value of Derivative = $10,000 X 6.801692).  For the aforementioned fair value hedge, SFAS No. 133 requires ABC to recognize the gain or loss from the change in fair value of both the derivative and the financial instrument being hedged in its current net income, along with the interest expense of the current period.
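The present-value figures in the example above can be reproduced mechanically. The sketch below assumes the same terms given in the text: $1,000,000 principal, a 7% fixed coupon, nine remaining annual periods, and a 6% market rate.

```python
# Reproduce the fair-value figures from the swap example above:
# $1,000,000 note, 7% fixed coupon, 9 periods remaining, 6% market rate.

principal = 1_000_000
coupon = 0.07 * principal   # $70,000 fixed annual interest
rate = 0.06
n = 9

pv_factor = 1 / (1 + rate) ** n           # single-sum factor, 0.591898...
annuity_factor = (1 - pv_factor) / rate   # ordinary annuity factor, 6.801692...

pv_principal = principal * pv_factor          # ≈ $591,898
pv_interest = coupon * annuity_factor         # ≈ $476,118
fair_value_note = pv_principal + pv_interest  # ≈ $1,068,016

# Fair value of the swap: PV of the $10,000 annual interest saving.
saving = (0.07 - 0.06) * principal
fair_value_swap = saving * annuity_factor     # ≈ $68,016, as in the text
```

Note that the article's $1,068,016 and $68,016 figures truncate the cents; the computation above matches them to the dollar.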


Behavior of Prices of Oil and Other Minerals over the Last Few Decades: A Comparative Analysis

Dr. Abdulla M. Alhemoud, Arab Open University, Kuwait

Dr. Abdulkarim S. Al-Nahas, Arab Open University, Kuwait



This paper tries to determine whether the price changes of oil over the last three decades were significantly different from those of other primary commodities. The paper examines briefly the changing conditions of the oil market and investigates the behavior of oil prices during the period 1966-2000. Johansen's maximum likelihood method was applied to test for co-integration between oil prices and the prices of non-fuel commodities. Fluctuations in oil prices were compared with those of 13 other minerals during the same period. The analysis suggests that the decline in OPEC's market share, combined with the rise in the elasticity of demand for oil, has weakened OPEC's market power as measured by the relative surplus of price over marginal cost. The statistical results also suggest that the coefficient of variation of oil prices during the period 1966-2000 was much higher than that of other mineral prices. Oil prices enjoyed a much stronger upward trend than the prices of other minerals during the last three decades. The econometric results suggest that there is evidence of co-integration between the oil price index and the non-fuel commodity price index. The prices of most, if not all, primary commodities suffer substantial fluctuations, particularly over the long run. There is little doubt that primary commodity markets exhibit considerably greater instability and uncertainty than those for manufactured commodities (Thiburn, 1977). Unfortunately, the fortunes of developing countries are tied to the prospects for primary commodity exports. This is because many developing countries depend heavily on primary commodities for up to three-quarters of their export earnings, compared with approximately one-fifth in the case of developed Western economies. 
For example, the percentage of the leading single-commodity export to total exports in oil-exporting countries in 1998 ranged from 43 per cent in the case of Ecuador to 98 per cent in the case of Saudi Arabia (IMF, IFS Yearbook, 2000).  The number of leading commodities varies from one to four in developing countries, with an average of two. As far as the non-oil exporters are concerned, single-commodity exports as a percentage of total exports range from five per cent in the case of China to 94 per cent in the case of the Bahamas. The number of leading commodities in this case ranges from one to nine, with an average of three commodities (Adams and Behrman, 1992).  The aim of this paper is to determine whether the price changes of oil over the last three decades (1966-2000) were significantly different from those of other primary commodities. This comparison is essential to highlight the special case of the oil producers.  The paper is divided into five sections. Section one examines briefly the changing conditions of the oil market over the last few years. Section two investigates the behaviour of oil prices during the period 1966-2000. Section three compares fluctuations in oil prices with those of 13 other minerals during the same period. Section four tests for a long-term relationship between oil prices and those of other minerals. Finally, section five summarizes the main conclusions of the paper.  The significance of the oil market is apparent when it is compared with other natural resources. For example, the price of phosphate rock increased in January 1974 to almost 400 per cent of its 1973 level. However, this increase passed almost unnoticed by the world at large, which stands in sharp contrast with the reaction to the relatively smaller rise in oil prices in late 1973, when the oil embargo caused major international disruption. 
The major reasons for this are: oil is one of the most important commodities in the world, accounting for a large weight in international trade; oil is the main source of energy consumption; there is no perfect substitute for oil in transport; oil accounts for a large part of the import bill of most developed and developing countries; and many developing countries ran into heavy debt as a consequence of rising oil prices. The oil market took a very serious turn in September 1960, when Iraq, Iran, Kuwait, Saudi Arabia and Venezuela joined together and formed the Organisation of Petroleum Exporting Countries (OPEC). The founding members accounted for 67 per cent of world reserves, 38 per cent of world oil production and 90 per cent of the world's internationally traded oil at that time (Schneider, 1983). The establishment of OPEC came in response to the outrage of the host governments when cuts in the price of oil were made unilaterally by the oil companies, first in February 1959 (about eight per cent) and then in August 1960 (about six per cent). Eight other developing countries, namely Qatar, Libya, Indonesia, Algeria, Nigeria, the United Arab Emirates, Ecuador and Gabon, joined OPEC after 1960. However, Ecuador withdrew its membership in January 1993.  International companies dominated the pre-1970s oil market, and oil trade tended to be characterized by long-term contracts and low prices. The picture changed in the 1970s and 1980s, when the share of the market controlled by the major oil companies declined until it vanished completely as OPEC members took full control of their oil resources. 
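The abstract's volatility comparison rests on the coefficient of variation, the standard deviation divided by the mean. A minimal sketch of the measure follows; the price series below are hypothetical and for illustration only (the paper's actual 1966-2000 indices are not reproduced here):

```python
# Coefficient of variation (std / mean): the unit-free volatility
# measure the paper uses to compare oil with other mineral prices.
# Both price series below are hypothetical, for illustration only.
from statistics import mean, pstdev

def coeff_of_variation(prices):
    """Population standard deviation scaled by the mean."""
    return pstdev(prices) / mean(prices)

oil_index = [100, 130, 340, 290, 520, 310, 240, 380]     # hypothetical
copper_index = [100, 110, 125, 118, 130, 122, 128, 135]  # hypothetical

cv_oil = coeff_of_variation(oil_index)
cv_copper = coeff_of_variation(copper_index)

# A higher CV indicates greater relative price instability.
print(f"oil CV = {cv_oil:.2f}, copper CV = {cv_copper:.2f}")
```

Because the CV is scaled by the mean, it lets series at very different price levels, such as an oil index and a mineral index, be compared on a common footing.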


Leadership in the Government of the Gambia: Traditional African Leadership Practice, Shared Vision, Accountability and Willingness and Openness to Change

Dr. Michael Ba Banutu-Gomez, Rowan University, Glassboro, NJ



Interview data were obtained from 20 experienced senior Gambian Government officials in seven departments. This study focused only on those leadership practices which can be termed meaningful practice, since the primary aim of this research was to discover how the influence of leadership practice on government organizational culture in The Gambia is revealed through success stories told by senior government officials. The findings from this study were used to develop a model to help us understand leadership practice in African government. Many scholars of leadership cite challenges as the ideal time for leaders to provide their people with the motivation for change (Schein, 1992).  The Gambia is in a challenging state now because it is an African country that depends, on a regular basis, on aid from countries outside of Africa.  The Gambia faces the challenge of self-sustainability. It is suspected that the role a government leader plays in the organizational culture of the government in The Gambia will determine, in a significant way, whether this nation is able to adapt to future changes in its internal and external environment. A "learning" type of organizational culture is considered by most organizational culture thinkers to be the most flexible type because members are open to a process of continual learning of new skills and knowledge (Kotter, 1990; Senge, 1990).  The Gambia has a long leadership history and legacy, which must be examined and understood in order to assess current leadership trends.  This traditional African leadership practice has been passed down from generation to generation despite British colonization, the slave trade and current Western influence.  In ancient West Africa, the king was the servant and shepherd of the people (T'Shaka, 1989).  His main and most important function was to serve the people.  Contrary to popular belief, dictatorship was never part of the indigenous African political tradition.  
There were few despotic chiefs in traditional Africa.  While some African chiefs ruled for life, they were appointed with the advice and consent of a Queen Mother and/or a Council of Elders.  Nobody declared himself "chief-for-life" and his village a "one-party state" in indigenous Africa (Ayittey, 1991).  Even today in Gambian villages and towns, discussions are held under a baobab tree or at a bantaba, a meeting place for discussing issues pertaining to the community. These discussions are led by the Alkalo, who is the head of the village. Sometimes the meetings are led by a council of elders. Although the Alkalo and the council of elders lead the discussion, they in no way make decisions unilaterally; the whole community makes decisions through a consensus process.  Some societies, however, did not have kings or chiefs.  It was in societies without chiefs and kings that African democracy was born.  Age groups governed these chief-less societies: the people were grouped according to age, and the elders' deliberations were held in the presence of the community.  In these age-group societies the rights of the community came before the rights of the individual.  African democracy was collective, communal and rooted in the will of the people (T'Shaka, 1989).  One important aspect of this system is that discussions took place in the presence of the community; therefore the community was well informed about the status of decisions. In summary, open communication is a critical ingredient in traditional African leadership practice.  Even today, if you visit The Gambia you will hear people talking about their age group.  Within your age group, which generally has a 3-5 year age span, you joke together, discuss and advise.  Older age groups discipline and advise you, and you must honor and respect the elders.  Likewise, with younger age groups you discipline and advise them, and they honor and respect you.  
This study focuses on the influence of traditional African leadership practices on government organizational culture, an influence that has not been investigated in Africa before. At this time in history, it is imperative that organizations around the world, and especially in Africa, discover ways to release the creative energies, intelligence, and initiative of people at all levels and to integrate individual contributions so they can work together toward common purposes (Nixon, 1992). Many people in Africa may be wondering, "How can I express leadership in a world like the one we live in?" In fact, we can influence the 'implicate order' of the universe (the logic behind unfolding events) by the choices we make (Brown, 1996). Gambian government organizational structures have remained relatively unchanged since the departure of the British at independence. For that reason, Gambian government leaders need to reassess their role in the government organizational culture they lead. The manipulation of organizational culture is the key to effective leadership and organizational development: what Gambian government workers experience as the climate and culture of their organization ultimately determines whether sustained change is accomplished in The Gambia (Schneider, Brief, & Guzzo, 1996). This study was conducted in seven departments of the Government of The Gambia. The study sample was composed of 20 experienced senior Gambian officials and leaders, chosen based on their availability in the departments. The required data were collected through an Appreciative Interview questionnaire administered by the author.


An Empirical Study on Professional Commitment, Organizational Commitment and Job Involvement in Canadian Accounting Firms

Dr. Leslie Leong, Central Connecticut State University, New Britain CT

Dr. Shaio-Yan Huang, Providence University, Taiwan

Jovan Hsu, Nova Southeastern University, FL



Professional commitment in accounting firms is the acceptance of professional norms and goals. This study investigates the relationship between professional commitment, organizational commitment, and job involvement of external auditors in professional organizations (public accounting firms) in Canada. The first purpose is to investigate the relationship between external auditors' organizational commitment and professional commitment in accounting firms; the second is to examine the relationship between external auditors' job involvement and professional commitment. The results of the regression and correlation analyses indicated a positive relationship between professional commitment, organizational commitment, and job involvement. From this study, we conclude that professional commitment is influenced by organizational commitment and job involvement in accounting firms. These results also support the view that these constructs influence the ethical behavior expected of external auditors. The effect of the work environment on professional employees' attitudes and behavior has become an important research issue in behavioral science (Lachman & Aranya, 1986; Meixner & Bline, 1989; Montagna, 1968). The study of professionals has long been concerned with the relationship between professionals and their employing organizations. Recently, however, the commitment of professionals to the norms and values of their profession has become a popular focus of research as well (Aryee, Wyatt & Min, 1990; Baugh & Roberts, 1994; Hall, 1967; Thornton, 1968). The accounting profession, like any other profession, exists only through wide public acceptance. Public acceptance of a profession means that society perceives a need that can best be met by highly trained professionals who meet some minimally acceptable standards (Roberson, 1993).
In the public accounting profession, professional commitment is the acceptance of professional norms and goals; therefore, high professional commitment should be reflected in greater sensitivity to issues involving professional ethics (Lachman & Aranya, 1986). Since professional commitment is very important to the public accounting profession, this study examines some factors that can affect professional commitment among public accounting professionals. According to Aranya, Pollock and Amernic (1981), professions carry power and prestige because professionals possess bodies of knowledge linked to the central needs and values of their social system. Therefore, society expects professionals to have a strong professional commitment (PC) to serve the public, above and beyond material incentives. Unlike other professions, the accounting profession exists primarily to serve the interests of third parties (the public) rather than a second party (the client) (Aranya et al., 1981). For example, external auditors are expected to conduct audits and report the results independently of the client, whereas an attorney is expected to be an advocate for the client in providing legal services (Konrath, 1999). According to Shaub, Finn and Munter (1993), external auditors' organizational commitment and professional commitment are two important factors affecting their ethical behavior. Professional commitment has thus become especially important in the accounting profession: job satisfaction and job performance are affected by workers' professional commitment (Baugh & Roberts, 1994; Clark & Larkin, 1992; Brierley & Turley, 1995). In addition, many studies have found that professional commitment is influenced by organizational commitment and job involvement (Blau, 1985; McElroy, Morrow, Power & Iqbal, 1993; Morrow & Wirth, 1989; Parasuraman & Nachman, 1987; Shaub et al., 1993; Wiener & Vardi, 1980).
This study discusses the relationship between professional commitment, organizational commitment, and job involvement. According to Aryee, Wyatt & Min (1990), two contradictory positions are evident in organizational commitment (OC) and professional commitment (PC) research. One holds that when organizational values and professional technical or ethical standards are incompatible, an organizational-professional conflict (OPC) occurs, and this conflict decreases job satisfaction and increases turnover intention (Aranya & Ferris, 1984; Brierley, 1998; Brierley & Turley, 1995; Gunz & Gunz, 1994; Pei & Frederick, 1989) (see Figure 1). The other position is that organizational commitment and professional commitment are not antithetical. Recently, more and more studies have found that the two commitments are related to each other when the organization is willing to reward professional behavior (Bartol, 1979; Lachman & Aranya, 1986; Wallace, 1993). Meixner and Bline (1989), citing Aranya et al. (1981), pointed out that the evidence indicates that organizational commitment is a function of professional commitment. The conflict between organizational goals and professional goals does not exist between public accounting firms and the accounting profession (Aranya et al., 1984; Lachman & Aranya, 1986). Therefore, there is a positive relationship between organizational commitment and professional commitment in the public accounting profession: professionals' commitment to professionalism and to organizational regulations should be complementary, not conflicting (Baugh & Roberts, 1994).
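The study's method (a correlation matrix and regression across the three constructs) can be illustrated with a minimal sketch. The variable names and the small synthetic data set below are entirely hypothetical, for illustration only; the study's actual survey data are not reproduced here.

```python
import numpy as np

# Hypothetical scale scores for 10 respondents (illustrative only):
# organizational commitment (OC), job involvement (JI), and
# professional commitment (PC).
oc = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
ji = np.array([2, 1, 4, 3, 6, 5, 8, 7, 10, 9], dtype=float)
pc = oc + ji  # constructed so that PC rises with both OC and JI

# Pearson correlation matrix among the three constructs; rows of the
# stacked array are treated as variables by np.corrcoef.
corr = np.corrcoef(np.vstack([oc, ji, pc]))

# A positive relationship, of the kind the study reports, appears as
# positive off-diagonal entries in the matrix.
print(corr.round(2))
```

With real survey data, each array would hold a respondent's summed Likert-scale score for that construct, and the regression step would then model PC on OC and JI.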


U.S. Earnings Inequality in the 1990s

Dr. Jongsung Kim, Bryant College, Smithfield, RI



Using weekly earnings data from the CPS, this paper attempts to identify and interpret the pattern of earnings inequality in the U.S. from 1994 to 1999, with a special emphasis on earnings inequality within and across certain ethnic groups. Young Hispanic workers with lower education were identified as the most vulnerable group, while White and Asian workers with college education in their prime age wound up at the other extreme of the earnings distribution. To reduce earnings inequality, the long-term focus should be on improving educational attainment, workers' training, and the placement of workers after graduation. U.S. earnings and income inequality widened in the 1980s, and this pattern persisted into the early 1990s. Even after the economic downturn that ended in 1991, the period of growing inequality persisted as income disparities continued through the early 1990s (Karoly, 1996). The widening disparity in the 1980s and early 1990s runs counter to the historical pattern of narrowing income disparity during periods of economic growth (Blank and Card, 1993). After the U.S. economy experienced a business-cycle trough in 1991, household income continued to drop until 1993, when median income reached its lowest level for most demographic groups. By 2000, however, the U.S. economy had recovered so remarkably that the median household income was $42,148, the highest level ever recorded in the Current Population Survey (CPS) in real terms. Further, all major ethnic groups reached new all-time highs in median household income in 2000: $45,904 for non-Hispanic Whites, $30,439 for Blacks, $33,447 for Hispanics, and $55,521 for Asian and Pacific Islander households (DeNavas-Walt, Cleveland, and Roemer, 2001). Despite the strong economic expansion in the 1990s that brought about these remarkable records, earnings and income inequality in the U.S.
have increased substantially in the last decade (Dadres and Ginther, 2001). For example, the share of aggregate income received by the lowest 20 percent of households decreased from 4.2 percent in 1968 to 3.6 percent in 2000, while the share received by the richest 5 percent increased from 16.6 percent in 1968 to 21.9 percent in 2000. During the 1980s and early 1990s, and until 1994, the increase in earnings and income inequality in the U.S. was not confined to differences between groups classified by schooling and experience; earnings and income inequality also increased within these groups. Using weekly earnings data from the CPS, this paper attempts to identify and interpret the pattern of earnings inequality in the U.S. from 1994 to 1999, with a special emphasis on earnings inequality within and across ethnic groups. The focus on racial/ethnic groups is relevant at a time when the racial/ethnic composition of the U.S. demographic landscape is undergoing dramatic change. Identifying the sources of earnings inequality will guide the direction of public policies, especially redistribution policies. Views that relate the earnings distribution to economic growth have received a great deal of attention. Kuznets (1955) hypothesized that the process of economic development would first be accompanied by rising disparities in economic well-being, followed by a period in which the distribution would be stable or eventually move toward greater equality. However, as earnings inequality has been increasing in the U.S. and a number of other developed countries for a period that exceeds short-run cyclical changes in earnings disparities, Kuznets' hypothesis has faced both theoretical and empirical challenges (Karoly, 1996). Another line of inquiry is concerned with the impact of earnings inequality on economic efficiency. Critics of Kuznets' causality argue that earnings inequality may not be followed by a more equal earnings distribution.
They argue that the mere prospect of low earnings may discourage workers to the point that they give up looking for work, thus widening the gap even further. In a similar vein, an extremely unequal earnings distribution creates economic inefficiency. The prevalence of extremely unequal income distribution may instill hopelessness in those at the lower end of the income spectrum, leading them to feel that they are not adequately rewarded for their contribution and further discouraging them from investing in human capital. This type of vicious cycle may be perpetuated, ultimately preventing the maximization of productivity at the personal level. If this prevails in society as a whole, people on the lower rungs of the income ladder may come to feel that advancing toward better economic status is not an option in their lifetime; frustration and feelings of alienation may then accumulate, leading to social unrest, which is detrimental to economic growth. Despite the strong preference for an egalitarian income distribution among those who believe that reducing income inequality is a moral obligation, no consensus has been established about the appropriate level of income inequality, so the controversial question of whether income inequality is truly undesirable remains unanswered. Arguments against unequal income distribution invite the counterargument that if income distribution becomes extremely equal, incentives to maximize productivity may also be adversely affected. The recent demise of socialist regimes, in which equal distribution was strongly emphasized, is a telling example that equal income distribution may not be the path to maximum social benefit; even if it can be attained by political intervention, it cannot be sustained for long. Issues of relative income versus absolute income are even more controversial.
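The widening the paper describes can be made concrete with the income-share figures it quotes (bottom 20 percent of households: 4.2 to 3.6 percent; top 5 percent: 16.6 to 21.9 percent). A minimal sketch of the top-to-bottom share ratio, a crude dispersion indicator:

```python
# Aggregate income shares quoted in the text (percent of total income).
shares = {
    1968: {"bottom_20": 4.2, "top_5": 16.6},
    2000: {"bottom_20": 3.6, "top_5": 21.9},
}

# Ratio of the top-5-percent share to the bottom-20-percent share:
# a rising ratio signals widening inequality between the two tails.
for year, s in sorted(shares.items()):
    ratio = s["top_5"] / s["bottom_20"]
    print(year, round(ratio, 2))
# The ratio rises from about 3.95 in 1968 to about 6.08 in 2000.
```

A full analysis would of course use a distribution-wide measure such as the Gini coefficient, but even this two-point ratio shows the pattern the text describes.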


North American Free Trade Agreement – Is It Delivering What It Promised?

Dr. Balasundram Maniam, Sam Houston State University, Huntsville, TX

Dr. Hadley Leavell, Sam Houston State University, Huntsville, TX

Dr. Richard Thaler, Sam Houston State University, Huntsville, TX



In 1994 the United States, Canada, and Mexico entered into a trade alliance known today as the North American Free Trade Agreement. The agreement was constructed to allow monumental progress in trade flow, foreign direct investment, and economic liberalization within the region. Evidence indicates that this alliance has indeed accomplished many pre-set objectives in a relatively short period of time. The accomplishments achieved by the partners in this alliance could well establish a model for the expansion of bilateral agreements throughout the hemisphere. Regional trade alliances are formed between countries in an effort to maximize trade opportunities through preferential access to members' markets. The 1994 North American Free Trade Agreement (NAFTA) joined Canada, the U.S., and Mexico in an agreement intended to increase trade and investment, eliminate tariffs, reduce non-tariff barriers, and establish provisions concerning the proper conduct of business in the free trade area. The extent to which this agreement has achieved its intended objectives is a source of great debate. This study examines the data currently available relating to the initiatives of NAFTA, reviewing the literature on this subject from 1992 to the present. It begins by identifying the main objectives of the agreement, including its economic and political motivations. The extent to which NAFTA has accomplished its intended mission is then discussed in detail, followed by arguments questioning whether the parties to the agreement are truly reaping its anticipated benefits. The study concludes by assessing the overall success and impact of the agreement on the economies of the three countries and, in doing so, identifying whether NAFTA has delivered what it promised.
In December 1992, the U.S., Canada, and Mexico signed the trade agreement we know today as NAFTA. The alliance was a new, improved, and expanded version of the Canada-U.S. Free Trade Agreement signed in 1988 (Hufbauer and Schott, 1993). After ratification by the three legislatures, the major intent of the alliance was to increase the level and flow of trade between the U.S. and its northern and southern neighbors, thereby improving resource allocation. This would be achieved by eliminating trade barriers and through other trade liberalization practices. NAFTA was specifically designed to eliminate tariffs and non-tariff barriers on regional trade within five to fifteen years, including liberalization of trade in agricultural products. Quotas and tariffs relating to the textile and apparel industry would also be phased out. NAFTA's passage created financial opportunities through the liberalization of investment rules in the region. The intention was to promote cross-territorial investment between the parties, strengthened by effective protection and enforcement of intellectual property rights in each country, which would permit and promote greater foreign direct investment among the nations. The agreement also had the intended purpose of addressing some important political and domestic issues of the time. It was seen as an opportunity to have a positive impact on the social and political concerns in Mexico at the time, and thus to provide needed economic and domestic stability to the region. NAFTA was seen as a tool to increase the standard of living of individuals residing in Mexico. The anticipated rise in the average wage for Mexican citizens would not only improve conditions in that country but would in turn reduce the influx of illegal immigration into the U.S. The agreement was also developed with a mechanism for the resolution of trade disputes between the participating countries.
This again was seen as a mechanism by which these countries would become more politically aligned through their economic and social investment in one another. With the passage of NAFTA, trade and investment barriers were reduced substantially. Trade and investment among the three countries almost tripled within the first six years of the agreement (Giffin and Pastor, 2001). In the first year of the agreement (1994) alone, trade among NAFTA partners increased by 17 percent, an increase of over $50 billion, attributed to strong economic growth in North America and to the reduced trade barriers of NAFTA (Leapard and Veramailay, 1996). Trade between the three countries in 1994 exceeded $348 billion. Total trade between the U.S. and Mexico increased from $81.5 billion in 1993 to approximately $128.1 billion in 1996 (NAFTA-The Mexico Factor, 1997). By 1998, U.S. trade with Mexico had increased to $173.4 billion, a rate of trade growth exceeding that of all other top trading partners of the U.S. (McClenahen, 2000). In 1999, Mexico surpassed Japan to become the United States' second largest trading partner. By 2000, over $650 million in products crossed the U.S.-Mexico border each day, and total trade between the countries was $261 billion, three times the 1993 pre-NAFTA average (Fact Sheet on NAFTA, 2001). The increase in trade flow can partly be attributed to the reduction of tariffs between these two countries. The average tariff on U.S. goods entering Mexico prior to NAFTA was approximately 10 percent; by 2000, it was between 2 and 3 percent. Likewise, tariffs on Mexican goods entering the U.S. have dropped to less than 1 percent, as opposed to 2 to 3 percent before the agreement (Sarkar and Park, 2001). Mexico and Canada continue to be prime export destinations for goods manufactured in the United States.
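The trade figures above are internally consistent, which a back-of-the-envelope check makes visible. The sketch below assumes the reported 17 percent first-year increase applies to the pre-NAFTA base implied by the 1994 total of $348 billion; that pairing is an inference from the text, not stated in it.

```python
# Figures quoted in the text (billions of U.S. dollars).
trade_1994 = 348.0        # total three-way trade in 1994
growth_first_year = 0.17  # reported 17 percent increase in 1994

# Implied pre-NAFTA (1993) base and the dollar increase it yields.
base_1993 = trade_1994 / (1 + growth_first_year)
increase = trade_1994 - base_1993
print(round(base_1993, 1), round(increase, 1))  # the increase exceeds $50B, as stated

# U.S.-Mexico trade grew from $81.5B (1993) to $128.1B (1996):
# compound annual growth of roughly 16 percent over the three years.
cagr = (128.1 / 81.5) ** (1 / 3) - 1
print(round(cagr, 3))
```

The implied 1993 base of roughly $297 billion and the $50-billion-plus increase match the text's two separate claims, which is a useful sanity check on extracted statistics.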


A Financial Appraisal of Florida’s Environmental Horticulture Industry

Dr. John Haydu, University of Florida, Mid-Florida REC, Apopka, FL

Dr. Alan Hodges, University of Florida, Dept. of Food and Resource Economics, Gainesville, FL

Dr. John Cisar, University of Florida, Ft. Lauderdale REC, Ft. Lauderdale, FL



Information was collected in 1999 from 37 wholesale nurseries on sales, production, operating expenses, and net returns. Nursery products represented among the sampled firms included container and field-grown woody ornamentals, tropical foliage, and flowering plants. This information was compared to data from 1990 and 1995 to examine financial changes that have occurred among nursery businesses. In 1998 the average nursery had annual plant sales of $2.71 million (M), total income of $2.89M, and net firm income of $548 thousand (K). Firms used an average production area of 55 acres, employed 49 full-time equivalent (FTE) persons, and managed total capital of $5.26M. As a share of value produced, costs were 34.7 percent for labor, 26.1 percent for materials, 5.0 percent for equipment and facilities, 10.0 percent for overhead, 3.8 percent for depreciation, 3.9 percent for interest, and 4.6 percent for management. Net profit margin averaged 18.9 percent, and the rate of return on capital investment was 7.9 percent. Compared to previous results for 1990, firms in 1998 were significantly larger: sales increased 66 percent in inflation-adjusted terms, production area increased 95 percent, employment increased 114 percent, total capital managed increased 122 percent, net worth increased 99 percent, and net income increased 45 percent. However, profitability and productivity were lower: net margin decreased 19 percent, rate of return on net worth decreased 20 percent, value produced per square foot decreased 11 percent, and inventory turnover declined 27 percent. These results confirm that profitability in the Florida nursery plant industry has continued to decline as the industry becomes increasingly competitive.
Nursery and greenhouse crops, also referred to as "floriculture and environmental horticulture" by the USDA, represent the sixth largest agricultural commodity group in the United States, with a farm gate value of $12.1 billion in 1998, up 2 percent from the previous year (Johnson, 1999). This value has grown an average of $440 million per year since 1990, making it one of the fastest growing sectors in U.S. agriculture. This expansionary trend is due to the strong demand for landscape plants and the relatively high unit value of nursery crops. Demand for landscape plants in particular has been driven by a strong national economy and an increase in new housing developments, which are large consumers of landscape plants, bedding plants, and sod. The high unit value of ornamental crops is illustrated by the fact that nursery farms account for only one-fiftieth of U.S. farms, yet generate one-sixth of total farm cash receipts. The average floriculture and environmental horticulture farm yields nearly four times the net income of the average traditional food and fiber farm, and horticulture crops routinely outperform all other farm commodities in terms of net income per farm (Johnson, 1999). Cash receipts for environmental horticulture products, comprising landscape-type plants for outdoor use such as trees, shrubs, groundcovers, and turfgrass, rose from $5.8 billion in 1991 to $7.1 billion in 1998. Floriculture products, consisting of bedding and garden plants, cut flowers, potted flowering plants, and potted foliage plants, accounted for an additional $5 billion. Domestic growers of cut flowers and cut cultivated greens realized modest gains in cash receipts in 1998, but their market share fell due to increasing competition from foreign imports. Nursery and greenhouse crops are concentrated in the West and South, mostly due to favorable climate conditions, but also driven by proximity to concentrated urban populations.
Still, many Midwestern and Northeastern states with minor production in the past are becoming increasingly important producers of ornamental crops. Ten states account for two-thirds of total production in the U.S. The top six states, ranked from highest to lowest, are California (20 percent), Florida (11 percent), Texas (9 percent), North Carolina (8 percent), and Ohio and Oregon, each with 5 percent (Johnson, 1999). As noted, Florida is the second largest producer of ornamental plants in the U.S., with over 5,000 registered wholesale growers (DPI, 1998) and wholesale cash receipts of $1.28 billion in 1998. Ornamental plants produced include woody ornamentals (landscape trees and shrubs), tropical foliage, flowering plant products, cut foliage, and turfgrass sod. Florida dominates production of tropical foliage, with about 90 percent of U.S. sales. Although growth of the state's ornamental plant industry has gone through several up and down stages since the early 1970s, overall growth has been quite strong, with the most dramatic increases from the latter 1990s through 2000 (Hodges & Haydu, 1999; Hodges & Haydu, 2002). Information Collected and Reported. Information for this report was collected from 37 wholesale ornamental nursery firms in Florida for the 1998 fiscal year. Data were also analyzed for 12 of these same firms that previously provided information for 1990 and 1995, respectively. In most cases the data represented a calendar-year period of January to December; in a few instances, however, up to six months of the data were for a prior year. Firms participated in the Nursery Business Analysis Program voluntarily, so this is not a statistically representative sample of firms; however, it is believed to represent firms with above-average management quality, by virtue of their willingness to participate in such quality-improvement programs.
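The headline ratios in the summary above can be approximately reproduced from the reported per-firm averages (total income $2.89M, net firm income $548K, total capital $5.26M, management cost 4.6 percent of value produced). The sketch below assumes that "value produced" is approximated by total income and that the management charge is deducted before computing the return on capital; both are reconstructions, not definitions taken from the study.

```python
# Reported averages per firm for fiscal 1998 (thousands of dollars).
total_income = 2890.0   # $2.89M total income
net_income = 548.0      # $548K net firm income
total_capital = 5260.0  # $5.26M total capital managed
mgmt_share = 0.046      # management cost, as a share of value produced

# Net profit margin: net income over total income.
net_margin = net_income / total_income * 100
print(round(net_margin, 1))  # ~19.0, close to the reported 18.9 percent

# Rate of return on capital, assuming value produced ~ total income and
# deducting the management charge from net income (a reconstruction).
mgmt_charge = mgmt_share * total_income
ror_capital = (net_income - mgmt_charge) / total_capital * 100
print(round(ror_capital, 1))  # ~7.9, matching the reported figure
```

That the reconstructed return matches the reported 7.9 percent suggests, but does not prove, that the study charges an imputed management cost against income before computing the capital return.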


Structural Adaptation in the Florida Ornamental Plant Nursery Industry in the 1990s

Dr. Alan W. Hodges, University of Florida, Food & Resource Economics Department, Gainesville, FL

Dr. John J. Haydu, University of Florida, Mid-Florida Research and Education Center, Apopka, FL



The state of Florida has a large industry for producing ornamental plants, and that industry continues to grow rapidly. Industry surveys were conducted in 1989, 1994 and 1999 to evaluate economic trends. Survey results suggest that the industry has undergone significant structural changes during the 1990s in response to increasing competition and industry maturation. Consolidation has resulted in larger firms, with the market share of firms having at least $1 million in annual sales increasing from 74 to 84 percent. Other economic trends include greater labor productivity, increasing diversity of ornamental plant products, less seasonality in product sales, a shift in markets from landscaper to retailer outlets (especially mass merchandise chains), wider distribution of products outside the state of Florida, increased forward contracting, increased advertising, and greater use of telephone contacts for sales. Ornamental plants are the sixth largest agricultural commodity group in the United States, with a farm-level value of $12.12 billion (Bn) in 1998 (Johnson, 1999). Ornamentals are also the fastest growing major segment of U.S. agriculture, with sales increasing by 30 percent between 1991 and 1998, representing average annual growth of 2.0 percent in inflation-adjusted terms (Figure 1). This growth was due to continued strong demand for plants, driven by a robust economy, expansion in housing, and increasing per capita consumption. Retail expenditures for plant products in the U.S. reached $54.79 Bn, or $203 per capita, in 1998, and increased 2.1 percent annually (inflation-adjusted) between 1986 and 1998 (Figure 2). Nursery and greenhouse products are classified as floriculture crops and nursery crops.
Floriculture crops, including annual and perennial flowering plants, cut flowers and cut cultivated greens, and foliage plants, were valued at $3.93 Bn in 1998, while nursery crops such as woody ornamental trees and shrubs, sod, and unfinished plant products represented $8.18 Bn in sales, or roughly two-thirds of industry value. The state of Florida is the second largest producer of ornamental plants in the United States, following California, and ornamentals are the second largest agricultural crop in Florida, following citrus. A recent study showed that in the year 2000 there were over 5,000 commercial firms in Florida's ornamental plant industry, which accounted for $2.25 Bn in sales, $3.48 Bn in total economic output impact, $2.52 Bn in total value added impact, and employment of 54,000 persons directly and in related businesses (Hodges and Haydu, 2002). Ornamental plant growers in Florida managed 126,000 acres of production area in 1997 (NASS, 1999). Florida dominates the U.S. market for tropical potted foliage and cut foliage crops with a 60 percent market share. Sales of greenhouse and nursery crops by Florida growers increased 10.7 percent (inflation-adjusted) during the period 1991-98, representing average annual growth of 1.3 percent (Figure 3). Since the mid-1980s, the ornamental plant sector in Florida has experienced the moderate growth characteristic of maturing industries, and has incurred problems common to other segments of U.S. agriculture, including increased competition, over-production, depressed prices, reduced profitability and increased business failures (Hodges and Haydu, 1992). The present paper describes changes occurring in the Florida ornamental plant industry during the past decade in response to these economic forces.
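The growth rates quoted in this section mix total-period and annual figures; compound annual growth rates (CAGR) make them comparable. A quick sketch, with the caveat that reconciling the figures exactly depends on the inflation rate over 1991-98 and on the averaging convention used, neither of which the text states:

```python
def cagr(total_growth: float, years: int) -> float:
    """Compound annual growth rate implied by a total-period increase."""
    return (1 + total_growth) ** (1 / years) - 1

# U.S. ornamental sales: 30 percent total increase over the 7
# intervals from 1991 to 1998.
us_rate = cagr(0.30, 7)    # ~3.8 percent per year
# Florida greenhouse/nursery sales: 10.7 percent total increase
# (inflation-adjusted) over the same period.
fl_rate = cagr(0.107, 7)   # ~1.5 percent per year
print(round(us_rate, 3), round(fl_rate, 3))
```

If the 30 percent U.S. increase is nominal, the gap between its roughly 3.8 percent compound annual rate and the reported 2.0 percent inflation-adjusted rate would imply average inflation of roughly 1.8 percent per year over the period; the small difference between the computed 1.5 percent and the reported 1.3 percent Florida figure may simply reflect a different averaging convention. Both readings are inferences, not claims made in the text.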
Surveys of Florida's ornamental nursery industry were undertaken as part of the National Nursery Survey sponsored by a multi-state group of land-grant university economists and horticulturists known as the S-290 Committee of the USDA Cooperative Research and Extension Education Service. Survey questionnaires were mailed to selected firms in 1989, 1994 and 1999, requesting information on business activity for fiscal years 1988/89, 1993/94 and 1998/99, respectively. Information was collected on the age and organization of the business, annual sales volume, employment, product mix, production systems, market outlets, interstate and international trade, sales and advertising methods, product pricing, and factors limiting firm expansion. Questionnaires used in all three survey periods were very similar, with minor exceptions. Two source lists were used to select Florida firms for the survey: the Florida Department of Plant Industry (DPI) registry of certified nurseries, and the membership of the Florida Nurseryman and Grower's Association (FNGA, Orlando). Survey sampling was concentrated on the largest firms in the industry, i.e., those having more than 50,000 units in plant inventory or at least eight full-time employees. A total of 104, 183 and 259 firms responded to the survey in 1989, 1994 and 1999, respectively (Table 1). The increasing numbers of firms contacted and responding in the latter two survey periods is indicative of growth in the industry as well as expansion of the survey sampling frame. Response rates were at least 25 percent for all survey years. Based on these sample sizes, the margin of error for estimation of a proportion with 95 percent confidence ranged from 5.1 to 8.0 percent (Cochran, 1953). Survey questionnaires were sent to selected firms by first class mail and included an addressed, postage-paid return envelope. Two separate mailings were done approximately two months apart.
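The margin-of-error range quoted above follows from the standard formula for estimating a proportion. A sketch using the worst case p = 0.5 at 95 percent confidence; the simple formula below omits the finite-population correction, which, applied with the sampling-frame sizes (not reported here), would shrink the values toward the 5.1-8.0 percent range given in the text.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95 percent margin of error for a proportion, infinite-population case."""
    return z * math.sqrt(p * (1 - p) / n)

# Respondent counts for the three survey periods.
for n in (104, 183, 259):
    print(n, round(margin_of_error(n) * 100, 1))
# Roughly 9.6, 7.2 and 6.1 percent without the finite-population correction.
```

Cochran's finite-population correction multiplies these values by sqrt((N - n) / (N - 1)), where N is the size of the sampling frame, which is how the published figures can fall below the uncorrected ones.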


The Legal Regulation of E-Commerce Transactions

Dr. Everett Durante Cordy, Albany State University, Albany, GA



E-commerce is firmly established as the new way to do business in the new economy. E-contracts, Internet banking, and digital signatures have become standard tools of conducting business. How has the law responded to this new way of doing business? What about torts and crimes that are committed while doing business in cyberspace? What is the appropriate forum to resolve disputes that arise when doing business electronically? It is the purpose of this article to review how the law is dealing with these and other questions that arise when engaging in e-commerce transactions. Technological innovation has spawned new ways to transact business. Motivated by concerns for improvement in profitability, efficiency, speed, competitiveness, and customer relationship building, businesses have increasingly adopted technology-based systems to transact business with other businesses, private consumers, and governmental agencies. Despite the recent shakeout, e-commerce has emerged as an entrenched part of, and in many instances the preferred way of, transacting business. It has become a fundamental part of the corporate enterprise. According to Forrester Research, the world Internet economy is predicted to reach $1 trillion by the end of 2001. On-line advertising is expected to reach $33 billion worldwide by 2004, and U.S. on-line spending will exceed $6 billion in 2001. North America will lead global e-commerce as transactions reach $6.9 trillion in 2004, while 60% of the world's on-line population and 50% of on-line sales will be outside the U.S. by 2003. U.S. business trade over the Internet skyrocketed to $251 billion in 2000, up from $109 billion in 1999. According to the Computer Industry Almanac, there will be 165 million on-line users in the U.S. by 2002, and over 490 million and 765 million users worldwide by 2002 and 2005, respectively. In 2000, over 65% of all U.S.
businesses, and over 50% of businesses worldwide, are estimated to engage in some type of e-commerce transaction, and the numbers are growing.  E-cash, smart cards, "click-on" contracts, digital signatures, web hosting, and e-commerce transactional solutions have all become synonymous with the preferred ways to transact business in today's "new economy".  However, as e-commerce continues to emerge as the preferred way to transact business, our legal system often struggles with the inability of existing laws to deal adequately with issues presented by new technology.  What is the legal framework for transacting business in Cyberspace?  When can a court exercise jurisdiction over a party who conducts business over the Internet?  What laws can be applied when businesses commit tortious, and sometimes criminal, acts in Cyberspace?  What are the legal limits on the content of business speech, Web site access, and Web-page content?  What legal protection exists for trademarks, copyrights, and other intellectual property existing in digital form?  What law should be applied to reflect business practices regarding Cyberspace contracts and agreements, such as "click-on" agreements, software licensing agreements, e-data interchange, and on-line sales?  Just as technology has fundamentally changed the ways of doing business, so has technology changed the way the law anticipates and responds to on-line business transactions.  Indeed, the viability and sustainability of e-commerce rest, in large part, on the ability of the law to establish a legal framework to accommodate this new business practice.  It is the purpose of this article to examine the legal framework that is emerging to accommodate e-commerce transactions.  To appreciate the task of developing such a legal framework, it is necessary to identify the various types of transactions that have emerged as a result of technology.  What comes to mind when you think of e-commerce? 
eBay, perhaps.  Well, e-commerce is much more than e-tailers (Internet retailers).  It is made up of exchanges among businesses, consumers, and government agencies, which can be classified according to which party initiates and controls the exchange transaction and which party is the target of the exchange.  Electronic commerce refers to the end-to-end, all-electronic performance of business activities.  E-commerce includes electronic data interchange (EDI), which refers to the on-line exchange of routine business transactions in a computer-processable format, covering such traditional applications as inquiries, purchasing, acknowledgments, and financial reporting.  Table 1 shows the possible types of e-commerce transactions, along with a Web example of each.  There are nine (9) possible types of e-commerce transactions, based upon who initiates the transaction and the target of such exchange.  In e-commerce jargon, B2C means business-to-consumer (initiated by a business, aimed at consumers) transactions, B2B means business-to-business transactions, and B2G means business-to-government transactions.  Business-to-business transactions account for most of today's e-commerce sales volume, because they generally involve higher prices and larger quantities than B2C transactions.  Although B2C transactions are a smaller part of e-commerce than B2B transactions, they are capturing an ever-larger share of all retail sales.  Government agencies on the local, state, and federal level represent a huge and lucrative market for businesses selling all kinds of goods, services, and information.  Government agencies purchase more than $77 billion annually in technology goods and services.  In the State of Georgia, as well as in most other states, vendor registration and bidding on state contracts is increasingly (almost exclusively) being done on the Internet.  It is now possible to file incorporation documents electronically in almost every state.  
It is imperative that businesses that do business with these states, or with any governmental entity, be proficient in e-commerce technology.  Consumer-to-consumer (C2C) transactions are of lower dollar amounts and account for a far smaller piece of the e-commerce pie than B2B, B2C, or B2G transactions.  However, the success of companies like eBay demonstrates that millions of people are jumping on the Internet bandwagon to buy and sell personal items and services person-to-person.  In consumer-to-business (C2B) transactions, the consumer, not the business, initiates and controls the exchange.  Benefits to consumers engaged in C2B transactions include allowing consumers to band together for volume discounts on merchandise offered on Web sites; allowing consumers to post requests for items they want to purchase, so businesses can respond with specific offers; and searching consumer advocate Web sites for free information about problems other consumers have reported with particular businesses, charities, and investment firms.  One of the fastest growing areas of e-commerce is consumer-to-government (C2G) transactions, which cover consumer-initiated transactions with governmental agencies.  Governmental Web sites allow consumers to pay governmental agencies on-line for traffic fines, real estate taxes, and government licenses and permits.  Similarly, more and more governmental agencies are establishing Web sites to provide information and services to consumers (G2C).  Governmental Web sites allow on-line transactions such as driver's license renewals, as well as business transactions such as permit applications.  Many states have begun selling seized and surplus property to consumers through Web sites such as eBay.  
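The nine transaction types described above follow mechanically from crossing the three kinds of parties (business, consumer, government) in the roles of initiator and target. A minimal sketch of that classification, with hypothetical class and method names, might look like this in Java:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not from the article): the nine e-commerce
// transaction types arise as the cross product of who initiates the
// exchange and who is its target.
public class EcommerceTypes {
    enum Party { BUSINESS, CONSUMER, GOVERNMENT }

    // Abbreviate a party to its jargon letter, e.g. BUSINESS -> "B".
    static String abbrev(Party p) {
        return p.name().substring(0, 1);
    }

    // Label a transaction in the usual jargon, e.g. (BUSINESS, CONSUMER) -> "B2C".
    static String label(Party initiator, Party target) {
        return abbrev(initiator) + "2" + abbrev(target);
    }

    // Enumerate all nine initiator/target combinations.
    static List<String> allTypes() {
        List<String> types = new ArrayList<>();
        for (Party initiator : Party.values())
            for (Party target : Party.values())
                types.add(label(initiator, target));
        return types;
    }

    public static void main(String[] args) {
        // Prints the nine labels: B2B, B2C, B2G, C2B, C2C, C2G, G2B, G2C, G2G
        System.out.println(allTypes());
    }
}
```

The abstract discusses six of these nine cells (B2B, B2C, B2G, C2B, C2C, C2G) plus G2C; the remaining government-initiated combinations complete the matrix of Table 1.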


The Legal and Regulatory Response to Predatory Lending in the Mortgage Industry: With Emphasis on the State of Georgia

Dr. Everett D. Cordy, Albany State University, Albany, GA



Throughout the 1990s, the financial deregulation of the consumer financial services industry gave rise to the rapid growth of the "predatory lending" (or fringe banking) industry, which includes check cashing outlets, payday loan companies, rent-to-own stores, high-cost first and second mortgage companies, sub-prime auto lenders, traditional pawn shops, and the growing business of auto title pawn companies.  This article examines the legal and regulatory response to predatory lending in the mortgage industry in the United States, with an emphasis on practices and legislation in the State of Georgia.  Predatory lending practices have ruined the financial lives of thousands of vulnerable people, especially the elderly, minorities, and female homeowners.  A relatively recent by-product of growth and diversification in the financial services industry, predatory lending has grown explosively in the last ten years.  "Predatory lending" in general is the practice of making loans to customers who have poor credit but who have home equity, charging high interest rates, and lending amounts that are beyond the customer's ability to repay.  Often the end result is that the customer loses the home, is forced to file bankruptcy, or must be rescued by family members.  Companies engaged in predatory lending disproportionately target minorities, low-income families, the elderly, and female homeowners, according to surveys, presumably because they are frequently less able than other homeowners to understand financial terminology and have less access to conventional financing (or at least are less aware than others of alternative methods of financing).  The exact definition of "predatory lending" is a matter of regulation and policy debate, particularly as pressure mounts to eliminate it.  Further discussion on this point appears in Sections Four and Five of this report.  
The term is commonly used to include such features as these, according to Gramlich: making unaffordable loans based on the assets of the borrower rather than on the borrower's ability to repay an obligation ("asset-based lending"); inducing a borrower to refinance a loan repeatedly in order to charge high points and fees each time the loan is refinanced ("loan flipping"); and engaging in fraud or deception to conceal the true nature of the loan obligation from an unsuspecting or unsophisticated borrower.  Other practices include requiring the customer to buy overpriced, single-premium credit insurance, which is often added to the amount financed; pressuring a customer to contract for purchases (such as unnecessary home repairs); including balloon payments the customer cannot pay; and aggressively enforcing loan terms to force early default.  During the past several years, as the sub-prime lending market has grown and predatory lending has become more common, various political and regulatory bodies have gathered evidence on the specific methods of predatory lenders and their impact on individual customers.  Edward Gramlich, Federal Reserve Governor, makes the point that the line between legitimate and predatory lending is difficult to define, in that the practices that have expanded access to credit for many thousands of previously denied borrowers are the same practices that, if misused, can be destructive.  Gramlich describes some targeted practices as follows: Most of the time, balloon payments make it possible for young homeowners to buy their first house and match payments with their rising income stream.  But sometimes balloon payments can ruin borrowers who do not have a rising income stream or who are unduly influenced by an immediate need for money. 
Most of the time, the ability to refinance mortgages permits borrowers to take advantage of lower mortgage rates, but sometimes easy refinancing invites loan flipping, resulting in high loan fees and unnecessary credit costs.  Often credit life insurance is desirable, but sometimes the insurance is unnecessary, and at times borrowers pay hefty up-front premiums as their loans are flipped.  Generally advertising enhances information, but sometimes it is deceptive.  Most of the time, disclosure of mortgage terms is desirable, but sometimes disclosures are misleading, with key points hidden in the fine print.  The most systematic data available on the prevalence of predatory lending are at this time indirect.  This is in part because there is substantial disagreement about what constitutes predatory lending, but more importantly it is due to the limited data available and the fact that current laws and regulations do not cover all predatory lenders.  What is clear is that predatory lending by any definition has increased sharply during the past ten years.  In 1994, the $35 billion in sub-prime mortgages nationally represented less than 5 percent of all mortgage originations.  By 1999, sub-prime lending had increased to $160 billion, almost 13 percent of the mortgage origination market.  Growth in selected metropolitan areas is illustrated by the following table from a HUD-sponsored study by Abt Associates, Cambridge, Mass., which examined data collected under the Home Mortgage Disclosure Act (HMDA) (6).  As sub-prime lending has grown, so has the number of foreclosures resulting from this lending activity.  The following table, also from the Abt study, shows explosive growth in foreclosures in the sub-prime market, considering the short time frame studied. 
The Abt study also appears to support the frequent allegation that sub-prime lenders often are more interested in foreclosure than in lending, in that the age of loans foreclosed in the study markets was much shorter than for conventional lenders.  In Atlanta, according to the study, the median age of loans being foreclosed by sub-prime lenders was 2 years, compared with a median of 4 years for other lenders reporting in HMDA.  Similarly, in Boston, loans by sub-prime lenders starting foreclosures had a median age of 3 years, compared with a median age of 7 years for other lenders reporting in HMDA.  Thus, in these market areas, loans by sub-prime lenders reach foreclosure much more quickly.  In fact, with most of these loans reaching foreclosure in 2 years or less from origination, it seems likely that the loans were not affordable for the mortgagors even at the time of origination.


Corporate Governance: A Theoretical Perspective

Dr. Malek Lashgari, CFA, University of Hartford, West Hartford, CT



Corporate governance mechanisms vary significantly among countries.  These differences appear to be a function of the countries' respective economic structures and cultures.  The underlying reason for these corporate governance systems, however, is the stakeholders' pursuit of preserving their respective shares of the profit earned by business enterprises.  This paper reviews corporate governance systems in Germany, Japan, and the United States.  While these countries differ in their respective corporate governance structures, the basic underlying link among them appears to be explained by existing theories developed in various branches of science.  These theories are well developed in economics, thermodynamics, and philosophy.  Common stockholders have the right to elect their representatives on the board of directors of a corporation.  Members of the board of directors assume the responsibility of monitoring, directing, and appointing the firm's managers.  In this manner dispersed shareholders are potentially empowered in setting direction, monitoring performance, and controlling the distribution of profits of the corporation.  In particular, this internal control mechanism is purported to integrate the interests of common stockholders and executive managers of a corporation by rewarding good corporate performance.  The board of directors has the right and responsibility to remove poorly performing managers.  Historically, dissatisfied shareholders have "walked away" from the corporation by selling their shares at depressed prices and thereby incurring losses.  Alternatively, major shareholders, either through hostile actions ("investor activism") or a friendly approach ("relationship investing"), have pursued their objectives in monitoring corporate managers.  Furthermore, to the extent that U.S. corporate laws permit, competing managers would remove incompetent ones and take over poorly performing firms. 
These actions collectively are purported to add value for the existing shareholders.  Cross-shareholding is common in Germany, where a large percentage of corporate shares is held by banks (which are often also the firms' creditors), large investors, and other interested companies.  The members of the "supervisory board," who are elected by common stockholders and employees, approve major corporate decisions.  The supervisory board appoints the members of the "management board," who are responsible for running the firm.  Corporate executives report to the supervisory board and major shareholders.  The corporate internal control mechanism in Japan entails the formation of the "Keiretsu," which is formed by cross-shareholdings among a network of inter-related companies and banks.  A collaborative, relationship-investing communication network known as the "Presidents' Council" provides a continuous dialogue among the firms' decision makers, lending banks, and major shareholders within the Keiretsu.  Owners, lenders, and employees of a corporation, together with the government and society at large, are governed by formal and informal contracts.  These contracts provide a basis for the distribution of income and wealth among these parties.  Society expects the corporation to act in a humane manner, typically through what are known as "socially desirable" and "economically targeted" investments.  The government takes a share of earnings, as do creditors and common stockholders, on an annual basis.  For this reason elaborate accounting systems have evolved to provide transparency and depth of information regarding the stream of cash flows to the corporation.  Such clarity in the corporate results of operations naturally reduces the monitoring costs for the parties involved.  As the proportional share of the payoffs to a particular party rises, the benefits associated with the internal control mechanism increase. 
Thereby, large shareholders and major lenders tend to closely monitor the systematic division of earnings and to verify information regarding the accuracy of their respective streams of cash flows.  Douglass North (1994), the 1993 Nobel Prize Laureate, states that human beings have learned through time, by way of trial and error, how to make economies perform better.  In effect, North declares that successful economic systems possess institutional structures that can adapt to economic changes through time.  Relationship investing, investor activism, and cross-shareholding investments have evolved through time in support of North's theory.  Relationship investing entails involved ownership that may include sharing valuable experience and expertise in a friendly, long-term approach to investing.  This is purported to motivate and direct corporate managers to pursue actions that would lead to creating wealth for various stakeholders.  Cross-shareholdings as observed in Japan and Germany are a form of relationship investing.  Activist investors are large shareholders who try to enforce changes in the firm's practices, policies, or its management for the purpose of improving the performance of the corporation.


Performance Measures and Profitability Factors of Successful African-American Entrepreneurs:  An Exploratory Study

Dr. Barbara L. Adams, South Carolina State University, Orangeburg, SC

Dr. Viceola Sykes, South Carolina State University, Orangeburg, SC



This exploratory study was conducted to determine what financial and non-financial measures executives of African-American companies perceive as important and what factors they used to measure their success.  The chief executives of some of the nation's largest Black-owned businesses are making it clear that they will not be held back by conventions or boundaries that have plagued many minority-owned companies.  In spite of the record growth of Black-owned businesses in recent years, there has been little research on such companies.  Data for this study were collected by a survey of the chief executives on Black Enterprise Magazine's 100 list, which includes the top 100 industrial/service companies and the top 100 auto dealers, ranked according to revenue.  The results indicate that both financial and non-financial performance measures and profitability factors are an integral part of the management strategies employed by "successful" African-American entrepreneurs.  African-American business ownership has expanded significantly over the last decade.  A report published by the Milken Institute (1998) indicates that from 1987 to 1992, the number of minority-owned firms (Black, Hispanic, and Asian) grew at a rate of 4.7%, which was double the rate of all U.S. firms.  In comparison, African-American-owned businesses increased at a rate of approximately 8%.  Despite this record growth, the report indicates that African-American businesses remain underserved by the capital markets because of "perceptions of being small, unprofitable and unfavorably located."  While the success or failure of a business can be attributable to many factors, the significance of each may differ among companies depending on the nature of the business, its location, and the background and/or ethnicity of the majority owners.  For example, African-Americans may face certain barriers, such as more stringent capital acquisition hurdles, not usually encountered by majority-owned companies.  
Each year, Black Enterprise Magazine (2000) publishes a list of Black-owned businesses (the B.E. 100s) that have risen above negative perceptions and are not held back by the conventions or boundaries that many Black-owned businesses face.  According to the editors of Black Enterprise (p. 108), these are companies that have become powerhouses of the New Economy, "companies that are staking their claims, rewriting the rules and embracing an economy driven by technological innovation, high productivity and soaring financial markets."  What drives these entrepreneurs to rise above certain obstacles in the capital market?  What key performance indicators (KPI) do these companies track, and how do they relate to the companies' profitability and ability to compete for capital?  How do the owners of these companies measure their success?  Currently, there is little or no research that examines the performance of African-American entrepreneurs; thus, potential answers to these and other questions related to successful Black-owned companies were the motivation for this exploratory study.  The results indicate that African-American entrepreneurs think both financial and non-financial measures are important in measuring the success of their business, but they place more emphasis on financial measures related to profitability.  They ranked personal philosophy, vision, persistence, and hard work as the most important factors related to their success.  Not surprisingly, they also include religion and family among the top 10 factors that have contributed to their personal growth and success.  The next section of this paper contains a review of the literature.  This is followed by a discussion of the methodology used, including a description of the survey instrument and subjects.  The results are discussed in the next section.  The paper ends with a discussion of the conclusions, implications, and limitations.  Performance measures are used to evaluate a company's success in achieving its goals.  
A report by The Conference Board on Strategic Performance Management (1998) found that companies using performance measurement are more likely to achieve leadership positions in their industry and are almost twice as likely to handle a major change successfully.  A review of the performance measurement literature indicates there are various approaches and viewpoints on performance measurement.  Many researchers believe that the best performance measures are those derived from and linked to a firm's strategy (see, for example, Kaplan and Norton 1992, 1993; Nanni et al. 1992; Langfield-Smith 1997).  Neely and Adams (2000), however, argue that there is a fallacy in this approach that is based on a misunderstanding of the purpose of measurement and the role of strategy.  They believe performance measures are designed to track whether a company is moving in the direction needed to reach a particular destination, whereas strategy is about the route a company takes to reach the desired destination.  Several performance measurement models, such as the balanced scorecard, the business excellence model, and the performance prism, were reviewed.  Some of the models included both financial and non-financial indicators, while others focused only on non-financial indicators.  Each of these models and relevant research using financial and non-financial performance measures were examined to identify a set of key performance indicators (KPI) to use in the current study.


Enterprise Middleware Management: Enterprise Java Beans (EJB)

Dr. Mustafa Kamal, Central Missouri State University, Warrensburg, MO
Shah Mumin, Central Missouri State University, Warrensburg, MO



Java plays a dominant role in developing the programmable web and in object-oriented design, forcing us to think about the design of inanimate objects in terms of real-life examples.  Enterprise Middleware Management (EMM) seeks to integrate current legacy systems with client/server and web-based applications; integrating these systems is a critical component in the success of EMM.  Because of the added layer of the Internet and eCommerce, the client/server model has become somewhat obsolete for web-based enterprise application services.  Enterprise services are instead constructed using distributed applications consisting of several tiers: a client requesting information, data resources on the back end, and one or more middleware applications.  Although Java client-side applications and applets were a huge success, server-side programming was introduced to overcome many of the shortcomings associated with client-side programming.  Although a number of approaches are available, the Java 2 Enterprise Edition (J2EE), with Enterprise JavaBeans (EJB) as a major component, seeks to provide state-of-the-art middleware solutions for today's complex and ever-changing 3-tier to n-tier computing architecture, delivering "highly available, secure, reliable and scalable" enterprise applications. 
The Java 2 Enterprise Edition was initiated by Sun Microsystems to provide "highly available, secure, reliable and scalable" enterprise applications (Sun Microsystems, 2001).  Because of the added layer of the Internet and eCommerce, the client/server model has become somewhat obsolete for web-based enterprise application services.  Enterprise services are constructed using distributed applications consisting of several tiers: a client requesting information, data resources on the back end, and one or more middleware applications running in between to control the flow of information and data security.  J2EE provides the aforementioned services through the following elements defined by Sun Microsystems: the J2EE Application Programming Model: the Java programming language (including Servlets and JSP), the JVM, the JavaBeans component model, and an EIS tier consisting of EJB, JDBC, JNDI, JMS, JTS, JavaIDL, etc.; the J2EE Platform: the deployment specification, Java technology standards, IETF standards for J2EE, and CORBA standards for J2EE; the J2EE Compatibility Test Suite: provided by Sun to make sure that vendors conform to the J2EE APM; and the J2EE Reference Implementation: a reference implementation for testing and development purposes.  The following diagram, taken from the J2EE APM 1.0 (Sun Microsystems, 1999), shows the typical multi-tier J2EE architecture, with clients inside and outside the firewall requesting services from the web server in the form of CGI requests (html, jsp, servlet, xml).  The web server, if necessary, gives control to the application servers (middleware such as BroadVision, ColdFusion, etc.) to fetch data from the EIS tier (either EJB, or CORBA client calls to the ORB).  
The core J2EE API is supported by numerous related Java technology-based initiatives, including JNDI (Java Naming and Directory Interface), JDBC (Java Database Connectivity), JTA (Java Transaction API), JTS (Java Transaction Services), JMS (Java Message Service), RMI (Remote Method Invocation), JavaMail, and JavaIDL.  Although it is tempting and sometimes necessary to explain the related technologies, we will keep our focus mainly on EJB, due to time and space constraints.  The Enterprise JavaBeans architecture has been defined in the "Enterprise JavaBeans Specifications" by Sun Microsystems.  This architectural overview is a detailed study of the following Java specifications: the Enterprise JavaBeans Specification v1.1; the J2EE Application Programming Model v1.0; Enterprise JavaBeans to CORBA Mapping v1.1; and the Sun BluePrints Design Guidelines for J2EE.  The J2EE application programming model promotes a multi-tier architecture for implementing scalable, accessible, and highly secure application services.  In a typical multi-tier system, the first tier is the client, the middle tier holds the business logic, and the third tier provides data services.  The middle tier is usually partitioned into two parts: business logic and presentation logic.  The J2EE programming model enables the developer to focus on solving application-specific problems and relies on Enterprise JavaBeans (EJB) technology to provide the solutions for complex system-level problems.  In the J2EE platform, the part of the middle tier that supports business logic is referred to as the EJB tier.  The following diagram shows the break-down of EJB tiers (Thomas 1998).  The EJB container contains the EJB components; it provides services such as transaction and resource management, versioning, scalability, mobility, persistence, and security to the components it contains. 
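The tier separation described above can be illustrated with a small plain-Java sketch. This is not EJB code and all names here are hypothetical; a real EJB component would instead implement the javax.ejb interfaces and rely on its container for transactions, persistence, and security. The point is only how the middle tier's business logic sits between the client and the data tier:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal plain-Java sketch of the multi-tier separation described above.
// All names are hypothetical; a real EJB component would run inside a
// container that supplies transactions, persistence, and security.
public class TierSketch {
    // Third tier: data services (here a trivial in-memory lookup).
    interface PriceStore {
        Integer priceCents(String sku);
    }

    // Middle tier, business-logic part: pricing rules live here,
    // isolated from both the client and the data source.
    static class PricingService {
        private final PriceStore store;
        PricingService(PriceStore store) { this.store = store; }

        // Apply a 10% discount for orders of 100 units or more.
        int quoteCents(String sku, int quantity) {
            Integer unit = store.priceCents(sku);
            if (unit == null) throw new IllegalArgumentException("unknown SKU: " + sku);
            int total = unit * quantity;
            return quantity >= 100 ? total * 90 / 100 : total;
        }
    }

    // First tier: the client sees only the business interface.
    public static void main(String[] args) {
        Map<String, Integer> table = new HashMap<>();
        table.put("WIDGET", 250); // $2.50 per unit
        PricingService svc = new PricingService(table::get);
        System.out.println(svc.quoteCents("WIDGET", 100)); // prints 22500 (discount applied)
    }
}
```

Because the client depends only on `PricingService` and the pricing logic depends only on the `PriceStore` interface, either outer tier can be replaced (a web front end, a real database) without touching the business rules, which is the design goal the EJB tier formalizes.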


Validating Decision Models in Operational Research

Dr. Awni Zebda, Texas A&M University-Corpus Christi, Corpus Christi, TX



Over the years, operations researchers and management scientists have suggested that decision models should be used only if their benefits exceed their costs.  However, cost-benefit analysis lacks practical applicability because of the difficulty of measuring the costs and benefits of models.  Thus, researchers have often used model validity as a surrogate for model value.  This paper examines and evaluates some of the different methods used and/or proposed for validating models.  Understanding the limitations and shortcomings of the different validation methods is essential for the appropriate use of these methods in validating decision models.  Decision analysts and researchers develop quantitative decision models to aid decision making in business organizations.  As noted by many researchers (e.g., Finlay and Wilson [1987], Hill and Blyton [1986]), establishing the value of these models is a necessary prerequisite for their use by practicing managers.  According to Gass [1983, p. 605], "the inability of the analyst [and researcher] to demonstrate to potential users ... that a model and its results have ... credibility" is one of the primary reasons that decision models are not widely used in practice.  The purpose of this paper is to provide insight into the most widely recommended method for evaluating decision models, model validity, and its limitations as a means for establishing the value of decision models.  Understanding the shortcomings of different validity tests is essential for their effective use in validating decision models.  The paper is organized as follows.  In the next section a framework for validity tests is provided.  Section three examines the use of the subjective method in validating decision models.  Sections four and five discuss and evaluate the concept of validating models by examining their predictive ability.  Section six examines assumptions validity.  
Section seven examines the use of model confidence as an alternative to model value and model validity.  Finally, a summary and concluding remarks are provided.  The discussion in the paper draws not only on MS/OR literature but also on other literature, in particular, behavioral science literature.  Decision scientists (e.g., Churchman [1970]), economists (e.g., Marschak and Radner [1972]), and behavioralists (e.g., Einhorn and Hogarth [1981], Johnson and Payne [1985]) have suggested that the choice of a decision model should be based on cost/benefit analysis.  The cost/benefit analysis, however, lacks practical applicability because of the difficulty in measuring the costs and benefits of decision models (e.g., Zebda [2002]).  As a result, researchers have often used model validity as a surrogate for model value (e.g., Davis et al. [1975]).  Such use is appealing because both "value" and "valid" indicate strength and worth.  One of the confusing aspects of the literature dealing with model validity is the large number of terms used by researchers.  A partial list of terms used by researchers includes: "event validity," "variable validity," "logical validity," "parameter validity," "retrospective validity," "solution validity," "output validity," "replicative validity," "statistical validity," "face validity," "subjective validity," "predictive validity," "passive validity," "internal validity," "external validity," "assumptions validity," "structural validity," "hypothesis validity," "purpose validity," "futuristic validity," "outcome validity," "input-output validity," "operational validity," "technical validity," "criterion validity," "mathematical validity," and "conceptual validity."  This partial list suggests that there is no consensus among researchers.  Therefore, to facilitate the evaluation of the different validation methods, the remainder of this section presents an attempt to develop a logical and easy-to-understand dichotomy of validity tests.  
As shown in Figure 1, validity tests may be classified along two dimensions: type (objective) and method.  Within the first dimension there are four validity tests: logical, prescriptive, descriptive (assumptions), and predictive validity.  According to the second dimension, validity tests are classified as either empirical or subjective.  That is, validating models (their logic, prescriptions, assumptions, predictions) can be done either subjectively or empirically.  Assumptions validity (also known as hypothesis validity) examines the correspondence between the model's assumptions and the real problem.  Some researchers (e.g., Gass [1983], Schellenberger [1974]) use the term mathematical validity to imply assumptions validity when the assumptions being validated are mathematical in nature (e.g., the linearity assumption in linear programming models, additivity and multiplicativity).  The terms "parameter validity" and "variable validity" are used in place of assumptions validity when the assumptions being examined are the assumed values of the variables (i.e., the parameters) included in the model.  Predictive validity (also known as convergent validity) tests examine the ability of models to predict behavior and can be classified into two types.  The first type is used in the behavioral literature and examines the ability of models to predict the behavior of the decision maker.  The second type (also known as event validity, output validity, input-output validity, outcome validity, and solution validity) examines how well the solution obtained by the model corresponds to the behavior of the real system.  This second type of predictive validity has been used in the management science and engineering literature (e.g., Elmaghraby [1968], Schellenberger [1974]) and, unlike the first type of predictive validity, it emphasizes the correspondence between the model's output and the output of the real system.


Gateway Hardware Case: A Computer-Based Interactive Case for an Accounting Principles II Course

Michael J. Krause, Le Moyne College, Syracuse, NY



The Gateway Hardware Case offers a challenge both to academics and to their students.  This case is an expository one (fact-based case) rather than a narrative one (story-based case).  Gobeil and Philips (2001) studied expository and narrative cases.  Among their observations, they found that the narrative-style case helped "low-knowledge" students do a better job of applying case facts.  The Gateway Hardware Case consists of five modules.  (Some academics may say that such sub-division disqualifies the presentation from even being called a "case.")  Aside from a possible semantics debate, research should be undertaken to see whether sub-dividing an expository case improves "low-knowledge" students' ability to apply case facts to the extent that they are able to do so with the narrative-style case.  As for undergraduate students, the Gateway Hardware Case gives them an opportunity to study in detail the end of the accounting cycle when an entity employs a primitive bookkeeping system.  To facilitate this endeavor, the case requires students to use Microsoft Excel as the means to organize and refine the original unadjusted data.  At the end of the case, the student should be able to appreciate the value that a CPA firm adds to general purpose financial statements when undertaking an engagement where the raw bookkeeping data lacks an accrual accounting focus.  Gateway Hardware's accounting system is primitive.  Except for a sales journal, the corporation simply keeps track of transactions using the cash basis.  At the end of the year, the outside CPA firm reviews the internal bookkeeping and prepares the required adjusting entries so that the financial statements can be put into compliance with generally accepted accounting principles.  You might find it interesting that a 1993 AICPA report found that a considerable number of the CPA firms studied had no audit clients.  In this case you will function as the outside CPA contracted to prepare the annual financial statements.
You will proceed by reading and analyzing narratives associated with fifteen "Adjusting Journal Entries."  However, technically speaking, not all narratives describe adjusting entries in the strictest use of the term.  Some entries will be correcting entries, necessary because the rudimentary accounting system employs a bookkeeper with limited accounting knowledge.  And one "adjustment" will simply be a reclassification entry that could go unrecorded without affecting the final calculation of net income.  The case consists of five modules.  In the first module you must develop a worksheet template using Microsoft Excel.  You need to establish a general framework strategy for the worksheet template.  The recommended form for your worksheet is to place the balance sheet accounts on page one.  Then use page two for the income statement accounts.  Lastly, the adjusting entries should be listed in general journal form on page three.  The adjusting entries on page three must be directed into the adjustment columns on pages one and two.  In turn, this entry placement should automatically update the adjusted trial balance and either the income statement or balance sheet account columns.  As you begin the case with Module I, you will enter into your worksheet template the unadjusted trial balance (see Appendix) developed in-house by Gateway Hardware's bookkeeper at year-end.  In this phase, twenty-four accounts will be identified only by the word "suspense" plus a numerical identifier.  These twenty-four accounts correspond to new account names covered in later lessons of the Accounting Principles II course.  The exact account name will be identified as you study the related topics in your instructor's selected textbook.  If you are developing a worksheet template for the first time, you will learn about the preliminary work necessary to provide service to a client on a routine basis.
The real time savings happen the second time you prepare the client's financial statements.  With that thought in mind, do not be afraid of making errors.  With a well-constructed worksheet in place, entering correct data will automatically redo the total calculations of earnings and financial position.  Adjusting Journal Entry #1: An analysis of the store's bank reconciliation statement shows that a deposit in transit went unrecorded in the December 2003 cash receipts journal.  The deposit's source was cash sales for the last two days of December, which amounted to $15,010.  Adjusting Journal Entry #2: Since Gateway Hardware does not use a Purchases Journal and/or an Accounts Payable system, Accounts Payable for the current year are recorded only as needed to prepare financial statements.  Following standard orders, the bookkeeper records all cash disbursements for merchandise inventory as a debit to the account "Purchases."  In addition, the bookkeeper never records reversing entries at the start of the year.  Therefore, invoices for merchandise acquired in late November and December 2003, amounting to $195,850, have yet to be recorded, while unpaid invoices from the end of 2002, amounting to $168,968, remain in the general ledger system as shown in the unadjusted trial balance.  Adjusting Journal Entry #3 (Suspense #21): Gateway Hardware uses a periodic inventory system.
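The propagation described in Module I, where journal entries listed on page three flow into the adjustment columns and automatically update the adjusted trial balance, can be sketched in a few lines.  The sketch below is illustrative only: the opening balances marked as placeholders are assumed, not taken from the case appendix, while the two entries mirror AJE #1 and #2 above; the actual assignment uses Excel formulas rather than Python.

```python
# Minimal sketch of the worksheet logic described above: adjusting
# entries in general-journal form automatically update the adjusted
# trial balance.  Credits are carried as negative numbers so that a
# balanced trial balance sums to zero-change across any entry.

unadjusted = {
    "Cash": 42000,                 # assumed placeholder balance
    "Sales": -900000,              # assumed placeholder balance (credit)
    "Purchases": 510000,           # assumed placeholder balance
    "Accounts Payable": -168968,   # unpaid 2002 invoices (credit)
}

# (debit account, credit account, amount) -- AJE #1 and #2 from the case
adjustments = [
    ("Cash", "Sales", 15010),                   # deposit in transit
    ("Purchases", "Accounts Payable", 195850),  # unrecorded invoices
]

def adjusted_trial_balance(tb, entries):
    """Apply each journal entry: add the amount to the debited
    account and subtract it from the credited account."""
    adj = dict(tb)
    for debit, credit, amount in entries:
        adj[debit] = adj.get(debit, 0) + amount
        adj[credit] = adj.get(credit, 0) - amount
    return adj

atb = adjusted_trial_balance(unadjusted, adjustments)
print(atb["Cash"])               # 57010
print(atb["Accounts Payable"])   # -364818
```

In Excel the same effect could come from SUMIF-style formulas linking the page-three journal columns into the adjustment columns of pages one and two, so that keying a corrected entry re-totals earnings and financial position without any rework.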


Digital Divide and Implications on Growth: Cross-Country Analysis

Dr. Antonina Espiritu, Hawaii Pacific University, Hawaii



The growing importance of information and communication technology (ICT) in today's new or knowledge-based economy provides many opportunities for countries to accelerate their economic growth. However, there seems to be a large and growing gap in access to and use of information and communication technology among and within developed and developing countries, otherwise known as the digital divide. Using a sample of 36 countries, this paper explores the role of ICT in economic growth and tests for the existence of a digital divide. The regression results suggest a positive and significant relationship between internet use and growth. Also, the hypothesis of no digital divide was rejected: there is significant evidence of differential growth between developed and developing countries due to differences in internet access and usage. Developments in information and communication technology (ICT) have opened up new and different possibilities of economic and social change from which developed and developing countries can potentially benefit. ICT not only accelerates the diffusion of information and technological know-how but also provides a virtual setting for instantaneous human interaction and easy access to global markets. Hence, the potential benefits from an internet-enabled transformation of business organizations into so-called global production networks are vast. The internet can help reduce transaction costs, as it can drastically reduce the time it takes to transmit, receive and process routine business communication tasks. The internet has also expanded the scope for management of information, as browsers can be used to access the information systems of suppliers and allow business transactions to be completed much more quickly.
However, there is large and growing concern about disparities between industrialized and developing countries, especially with respect to internet access and use, which have touched off a worldwide debate about the existence of a global digital divide. According to the World Economic Forum / PricewaterhouseCoopers LLP survey of 1,020 global CEOs, half believe that the internet will widen the economic gap between developed and developing countries while 38% believe it will narrow it. Undoubtedly, the question remains open: can the internet act as a powerful equalizer and bring about an even playing field in the global marketplace, or will it reinforce the existing income inequalities within and between countries? This paper addresses this issue by using a traditional growth model to examine the effect of the internet along with other factors relevant to economic growth. Specifically, two main hypotheses are tested. First, to explore the relationship between internet use and income growth, the hypothesis that increases in the number of internet users translate into higher growth is examined. The second hypothesis of interest in this study concerns the existence of the so-called global digital divide between those who have and those who do not have access to the internet, and its implications for output growth. The main conjecture is that increases in the number of people in industrialized countries with access to the worldwide network translate into faster output growth than in countries with fewer internet users. The next section provides an overview of the state of global internet access and use. Section III describes the data and regression model. Section IV presents the estimation results, and some policy implications are presented in the concluding section.
In the early 1990s, due to user-friendly innovations such as the creation of the World Wide Web (WWW) with free, easy-to-use browsers, and falling computer prices, the internet was no longer confined to the scientific community but became more widely accessible to different communities. According to the UNDP Report (1999), the number of computers connected directly to the worldwide network, known as internet hosts, rose from about 100,000 in 1988 to more than 36 million a decade later. In mid-1998, industrialized countries accounted for 88% of worldwide internet users, while developing Asia, with over half of the world's population, accounted for a mere seven percent of total internet users, projected to increase to no more than nine percent by 2003 (IDC Research). The internet continues to penetrate deeply into the industrialized, high-income countries, with almost 192 million internet users as compared to only 50 million users in the low- and middle-income countries as of 1999 (ITU).  In terms of investment in information and communication technology, according to the World Information Technology and Services Alliance (WITSA), as of 1999 just 55 countries accounted for 98% of global spending on IT. Obviously, increasing global connectivity requires having the proper telecommunication infrastructure in place. However, the cost is immense, and for developing nations, deciding how to use their limited resources poses a difficult dilemma. Nevertheless, even though the obstacles to affordable and wider access remain formidable and evidence of real benefits from this ICT revolution is still inconclusive and partly anecdotal, global connectivity and e-commerce do present real opportunities.
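The two hypothesis tests described above can be sketched with ordinary least squares.  The specification below (growth regressed on internet use plus a developed-country interaction term), the variable names and all figures are illustrative assumptions; the paper's actual 36-country dataset and its other control variables are not reproduced here.

```python
# Illustrative OLS sketch of H1 (internet use raises growth) and
# H2 (the growth effect differs between developed and developing
# countries), estimated on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 36                                     # matches the paper's sample size
internet = rng.uniform(0, 50, n)           # assumed: internet users per 100 people
developed = (rng.uniform(size=n) > 0.5).astype(float)  # country-group dummy

# Synthetic "true" relationship: base effect 0.05, extra 0.03 for
# developed countries, plus noise.
growth = 1.0 + 0.05 * internet + 0.03 * developed * internet \
         + rng.normal(0, 0.2, n)

# Regressors: intercept, internet use, and the divide interaction term.
X = np.column_stack([np.ones(n), internet, developed * internet])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
print(beta)  # estimates should be near [1.0, 0.05, 0.03]
```

A positive, significant second coefficient would support H1; a positive, significant third coefficient (the interaction) would reject the no-divide hypothesis, mirroring the paper's reported results.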


Using Problem-based Learning to Enhance Students' Key Competencies

Dr. Chen-Jung Tien, National Taiwan Normal University, Taiwan

Dr. Jui-Hung Ven, China Institute of Technology, Taiwan

Dr. Shoh-Liang Chou, National Tao-Yuan Senior Agricultural and Industrial School, Taiwan



Judging from the trends of educational reform in recent decades, many advanced countries have come to view key competencies as important assets of people. Key competencies are important features of working life and therefore essential to boost and maintain employment; the promotion of key competencies has thus become an integral part of secondary and even higher educational systems in many countries. This paper aims to compare key competencies among different countries, including the SCANS of America, the key skills of the UK, the key competencies of Australia, and the ten basic competencies of the 1-9 integral curriculum of Taiwan. This paper also explores the key competencies that can be developed through problem-based learning (PBL) and how to use PBL to enhance students' key competencies.  Because of the rapid changes in technology, the economy and the working environment over the past two decades, many advanced countries have emphasized the development of people's key competencies to maintain employability and support continuing learning. Raizen (1989) indicates that general skills are new workforce competencies that enable people to work in different workplaces. Such skills, not restricted to specific workplaces, embody the concept of key competencies. Subsequently, many studies emphasized the importance of key competencies, such as the "generic skills" of Stasz (1990), the "work force basics" of the Department of Labor (SCANS, 1991), the "new work skills" of Resnick & Wirt (1996), and the "new model worker" of Flecker & Hofbauer (1998). All the skills and competencies listed above, though under different terms, express the concept of key competencies.  Many recent studies also emphasize the importance of key competencies. The OECD (2000) suggests that a country should enhance its human capital to meet the demands of globalization and the knowledge-based economy by increasing the key competencies of its people.
Hesketh (2000) points out that communication skills, learning abilities, problem-solving skills, teamwork skills and self-management skills are ranked first by employers in England. These key competencies are considered even more important than professional competencies. Stasz (2000) suggests that people in the new economic era should have two dimensions of skills: inter-personal and intra-personal. The former includes teamwork and leadership; the latter includes motivation, attitude, continuing learning, problem solving, negotiating with colleagues and customers, analytical ability and applying technology. Scholars in Taiwan hold the same views. Du (1999) indicates that technical and vocational education should promote both professional competencies and key competencies concurrently: without professional competencies, students have no prospects; without key competencies, their futures will be restricted. Hong & Tseng (1999) conclude in their studies that employees' key competencies enable Taiwan to be as competitive in high technology as advanced countries. Traditional teaching is teacher-centered: teachers teach subject-area knowledge and students pursue unique answers. There is a lack of opportunities to solve problems related to real-life situations, a lack of teamwork training, and a lack of knowledge-sharing mechanisms. Problem-based learning, by changing the ways teachers teach and the ways students learn, provides good opportunities that may help enhance students' key competencies in the learning process. In the past two decades, many researchers have tried to integrate the terms and definitions of "key competencies." Due to their diversity, no consensus has been reached so far. But there is no doubt that key competencies are recognized as cognitive, learned and behavioral attributes (Weinert, 1999; Kearns, 2001).
Table 1 shows different terms for key competencies; some use "skills" and some "competencies." This paper uses "key competencies" throughout.  In spite of the different definitions and notions, key competencies have the following characteristics. Key competencies are multifunctional: they are needed for different facets of life such as family life, social life, professional life, and even daily life. Key competencies are transferable across different fields: they are used not only in school, society and the labor market, but also in personal life, including career development, lifelong learning, inter-personal attributes and intra-personal attributes. Key competencies are higher-order cognitive abilities: their construction involves individuals' active reflection and mental processes.


A Theoretical Model for Matching Entry Modes with Defensive Marketing Strategies

Dr. Mansour Lotayif, Plymouth University, Plymouth, Devon, UK



Given the fact that entry modes and defensive marketing strategies represent two different streams of literature, it has been assumed that there is no relationship between them. The following endeavor resembles a stone thrown into a stagnant lake, in that it tries to match the two. Four pillars could help facilitate this mission: the opportunities or risks offered by each defensive strategy and entry mode; the probability that these risks and opportunities will continue; and the resources and time needed to deploy each strategy and entry mode. Consequently, the two literatures were analyzed and, based on certain assumptions, a matching model is suggested.  Pan and Tse (2000); Jean-Pierre and Hennessey (1998); Miller and Parkhe (1998); Goodwin and Elliott (1995); Woodcock et al. (1994); Erramilli and Rao (1993); Erramilli and D'Souza (1993); Terpstra and Sarathy (1991); Dahringer and Muhlbacher (1991); Erramilli (1989); Boddewyn et al. (1986); Sapir (1982); and Shelp (1981) argue that entry modes can be wholly owned and fully controlled entry modes (e.g. branches, subsidiaries, representative and agency offices), shared-ownership and shared-control entry modes (e.g. joint ventures and partial mergers and acquisitions), contractual entry modes (e.g. licensing, franchising, and calculated alliances), and purely marketing-oriented entry modes (e.g. direct and indirect exporting entry modes), as shown in Figure (1).  Traditionally, the focus of the entry mode literature has been mainly on the factors and conditions behind the use of each entry mode. From this perspective, four main schools of thought have been put forward to explain the choice of entry modes. Gradual incremental involvement (Luo and O'Connor, 1998; Chu and Anderson, 1992; Johanson and Vahlne, 1990; Root, 1987; Davidson, 1980; Johanson and Vahlne, 1977; Dubin, 1975; and Stopford and Wells, 1972).
This school links the commitment of resources in the target market with both the risk in that market and the organization's international experience. Therefore, the higher the risk in the target market, the lower the resource commitment of the entry modes deployed in that market. Also, the greater the organization's experience, the greater the tendency to use high resource-commitment entry modes. Transaction Cost Analysis (TCA) (Kumar and Subramaniam, 1997; Ghoshal and Moran, 1996; Erramilli and Rao, 1993; Erramilli, 1989; Kogut and Singh, 1988; Beamish and Banks, 1987; Anderson and Gatignon, 1986; Williamson, 1986; Williamson, 1985; Davidson and McFetridge, 1985; and Caves, 1982). In the TCA school of thought, the entry mode decision is treated as if it were a transaction. Therefore, all costs associated with the various aspects of the value-added chain, from production to consumption (Pan and Tse, 2000), are considered. The basic premise of TCA is that organizations will internalize those activities that they can perform at a lower cost but will subcontract activities externally if other providers have a cost advantage.  Dunning's eclectic theory, also called location-specific factors or contingency theory (Woodcock et al., 1994; Zejan, 1990; Hill et al., 1990; Dunning, 1988; Caves and Mehra, 1986; and Dunning, 1981). Within this stream, Kogut and Singh (1988) found that industry, firm, and country-specific factors influence the entry mode selection decision. Dunning (1980, 1981) argued that the choice of entry mode depends upon ownership, internalization, and location advantages.  Agency theory (Carney and Gedajlovic, 1991; Brickley et al., 1991; Williamson, 1988; Horstmann and Markusen, 1987; Senbet and Taylor, 1986; and Jensen and Meckling, 1976). In agency theory, the principals (new entrants) are highly motivated to collect data about their agents (entry modes in foreign markets) in the target market.
It uses the metaphor of a contract to describe relationships in which one party delegates work to another (Jensen and Meckling, 1976).  Based upon four judgmental pillars, each entry mode and defensive strategy will be analyzed. These four pillars are the opportunities or risks offered by each entry mode; the probability that these risks and opportunities will continue; and the resources and time needed for deployment. Firstly, wholly owned and fully controlled entry modes, especially branches and subsidiaries, represent the highest level of resource commitment in the target market (Pan and Tse, 2000; Vanhonacker, 1997; Anderson and Gatignon, 1986). Typically, these two entry modes are used by organizations (parent organizations) that are globally oriented and whose competitive position in one country is significantly affected by their position in another, and vice versa (Porter, 1986). Also, the risk of business failure is spread over a much wider geographic area (Porter and Takeuchi, 1986, p. 116). Therefore, substantial marketing efforts to undermine local rivals' strategies are anticipated. Given that they have come to stay as long as they can, branches and subsidiaries could pose the maximum level of threat to local rivals as a result of constant interaction with various local parties (Hill et al., 1990; Hennart, 1988; and Contractor, 1984). These threats will last for a long time, as they necessitate a major resource commitment in the overseas location (Vanhonacker, 1997; Anderson and Gatignon, 1986) and call for actual investment to set up an independent operation. However, representative and agency offices occupy the other end of the wholly owned category. Secondly, shared-control entry modes, e.g. joint ventures and partial mergers and acquisitions, represent the second highest resource commitment, and consequently the second riskiest category, as there are local partners with whom risks are shared (Pan and Tse, 2000).


What has happened in the Business World of On-Line Distance Learning?

Dr. Richard Gendreau, Bemidji State University, Bemidji, MN



On-line distance education has been around long enough to establish a track record all over the world. This paper looks at what has happened in the business world of on-line distance education. There are proposed changes in federal regulations affecting financial aid as more universities offer on-line distance education. Both on-line and traditional classroom education are moving towards assessing the outcomes of their students. The U.S. Department of Education is becoming involved in the accreditation process. Several institutions have dropped out of the on-line distance education market. Educational institutions and the U.S. military are heavily involved in developing and offering on-line distance education all over the world.  The United States Distance Learning Association defines distance learning as "the acquisition of knowledge and skills through mediated information and instruction, encompassing all technologies and other forms of learning at a distance" (Roblyer and Edwards, page 192). Mediated information is the information processed between the professor and the student; it is the exchange of insights (Heerema and Rogers, pages 14 and 16). The text Instructional Technology for Teaching and Learning (Newby, Stepich, Lehman, and Russel, page 210) defines distance learning as an "organized instructional program in which teacher and learner are physically separated." On-line distance learning courses are offered to students anytime and anywhere, utilizing computers to access the Internet, computerized presentations, and e-mail (Nasseh, Marklein, and Kauffman).  The government needs to take a realistic view of financial aid and on-line distance education. At present there are two rules that directly affect on-line distance learners' ability to qualify for financial aid. The first is the 12-hour rule, which requires the learner to be in a physical classroom for at least 12 hours a week.
The second is the 50-percent rule, which forbids an institution from offering federal financial aid if more than 50 percent of its courses or students are involved with distance education (Carnevale, page A51).  The U.S. House of Representatives passed H.R. 1992, which would diminish the effects of the above rules. The Senate has a companion bill, S. 1445, which has yet to be passed. With Congress rushing to work on appropriation bills and the homeland security bill, political observers think the bill will have to wait until 2003 (Carnevale, page 1). The bills would suspend the 50-percent rule for institutions with a loan-default rate below 10 percent for the last three years. The 12-hour rule could be ignored if learners spend at least one day a week interacting with the professor. E-mail or a discussion room could qualify as interaction, although some definitions, such as "one day a week," are not clear (Carnevale, page A51 and Carnevale, page A43).  The Learning Anytime Anywhere Partnership, the Education Department grant program that helped spur experimentation with distance education, has had its budget for new awards cut. The House and Senate have passed close versions of a bill that "...effectively kills the distance education grant program" (Carnevale, pages 1 & 2). The Education Department, higher education experts, and distance education officials have called for changes in the rules. Their testimony will be used in 2003 to persuade members of the Senate to change the rules when Congress reauthorizes the Higher Education Act (Carnevale, page 1 and American Council on Education/Educause). The congressional initiative, or lack of initiative, is an indication of the mistrust of on-line distance learning. Classroom time used to be the only method of measuring the quality of programs offered by institutions and of protecting the learner against fraudulent institutions. Now, assessment is the appropriate method of measuring quality, honesty, and integrity.
With thousands of on-line courses being delivered around the world, several studies assessing the quality of on-line learning have been conducted. On-line educators are still developing assessment policies to demonstrate the accuracy of their assessments. Because on-line distance education is relatively new, many critics set a higher standard for it than for traditional classroom education when judging quality.  A study prepared by the Institute for Higher Education Policy has identified the most essential benchmarks for quality Internet-based distance education. The major categories of benchmarks to ensure the highest quality of Internet-based distance education include the following: institutional support, course development, teaching, learning, course structure, course support, faculty support, and evaluation and assessment (Institute for Higher Education Policy, pages 25 & 26). A recent report, written by the Council for Higher Education Accreditation, shows that accrediting agencies can effectively evaluate distance education programs. The accrediting agencies have developed new standards for on-line institutions to measure learning outcomes (Carnevale, page 1).  "(A)ssessment is taking center stage as on-line education experiments with new ways of teaching and proving that they're teaching effectively" (Carnevale, page A43). Both on-line and traditional classroom education are moving towards assessing the outcomes of their students. It is a cultural change influenced by education, students, parents, accreditation agencies, politicians, employers, and higher education administrators (Carnevale, page A43).  According to Heerema and Rogers (pages 14-21), higher education is confronted with a choice between quality and quantity in on-line education. The problem faced by higher education is whether a trade-off between quality and quantity must take place or whether quality and quantity can be obtained simultaneously.
Making the appropriate investment in designing a course with information technology can eliminate the quality-versus-quantity trade-off. "By investing in substantial course development, we convert the cost structure from a variable expense to a fixed expense" (Heerema and Rogers, page 20). The fixed expense per student will decrease as enrollment increases, without a decrease in quality.  The information technology will allow student-faculty interaction to be designed into the course. This will customize the content of an on-line course to respond to complex student questions. Institutions will have to invest resources up front to build the information technology into the course design as new courses are developed.  Alley and Jansak (pages 1-21) identified the basic levels of guidance for designing quality on-line courses: principles, practices, and applications. They then developed ten basic principles integrating the three levels of guidance for the instructional design of quality courses. The end result is a set of practical applications for the design of quality courses.
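The fixed- versus variable-expense point above can be illustrated with a toy calculation; the dollar figures are hypothetical, not drawn from Heerema and Rogers or any other cited study.

```python
# Toy illustration: a one-time course-development investment (fixed
# cost) is spread across enrollment, so per-student cost falls as
# enrollment grows; the variable delivery cost per student does not.

fixed_development_cost = 60000   # hypothetical one-time design investment
variable_cost_per_student = 50   # hypothetical marginal delivery cost

def cost_per_student(enrollment):
    # fixed cost amortized over enrollment, plus the variable cost
    return fixed_development_cost / enrollment + variable_cost_per_student

print(cost_per_student(100))    # 650.0
print(cost_per_student(1000))   # 110.0
```

At 100 students the development investment dominates; at 1,000 it is nearly amortized away, which is the economic argument for front-loading the investment in course design.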


Product Architecture and Product Design: A Complexity Perspective

Dr. Tony Brabazon, University College Dublin, Ireland

Dr. Robin Matthews, Kingston Business School, London



The objective of this paper is to develop a conceptual framework which can be employed to provide insight into the impact of product architecture on the process of product design of assembled products.  The key argument of this paper is that product design can be considered as a search process which takes place on a design landscape, the dimension and topology of which are determined by the choice of physical components and the choice of architecture of the interconnections between these components. Not all design landscapes offer equal opportunity, nor are all landscapes equally difficult to search. Designers may trade off these two items. A representation of both the design landscape and the related search process is constructed in this paper. Kauffman’s NK model is utilised to examine the impact of interconnection density and structure on the topology of the design landscape.  The Genetic Algorithm is introduced as a means of modelling the learning process implicit in product design. It is argued that the algorithm can incorporate a variety of relevant search heuristics. The combination of the NK model and the genetic algorithm provides a framework which, through a simulation methodology, can be used to investigate the impact of different modular structures on the process of product design.  This paper examines a component of the product design problem and has as its objective the development of a conceptual framework capable of providing insight into the impact of product architecture on the process of product design of assembled products, and the evolution of these designs over time. A complex systems perspective is adopted as it is considered that products are systems of components (Tushman and Nelson, 1990) which can exhibit emergent properties (Holland, 1995). The functionality of a product depends not just on the behaviour of the individual components but also on the ‘architecture’ of the interconnections between these components.
Individual modules in a product’s design may contain varying numbers of components (or other sub-modules). They may have differing internal connection structures between their components and differing external connection structures with other modules and/or components within the product. Product design represents the creation ‘of solutions … that satisfy perceived needs through the mapping between functional elements and the physical elements of a product’ (Loch, Terwiesch and Thomke, 2001, p. 664).  This definition draws a clear distinction between a product’s functional elements, which represent the individual operations or traits that comprise the overall performance of a product, and the physical elements, which represent the parts, components and sub-assemblies that implement the product’s functions (Loch, Terwiesch and Thomke, 2001). Recognizing the distinction between physical components and their architecture highlights that product innovation can occur either as a result of component innovation or through architectural innovation (Henderson and Clark, 1990). Architectural innovations may facilitate increasingly complex combinations of components: ‘…the more complex arise out of a combinatoric play upon the simpler. The larger and richer the collection of building blocks that is available for construction, the more elaborate are the structures that can be generated.’ (Simon, 1996, p. 165)  Both the physical components and their connection architecture are selected, either explicitly or implicitly, by a designer from a set of possibilities. Hence, product design can be considered to consist of a search process (Winter, 1984; Balakrishnan and Jacob, 1996; Fleming and Sorenson, 2001) which commences with the selection of a set of physical elements and possible architectures, defining a ‘search space’ within which designers focus their efforts.
Common procedures for optimising or satisficing in search problems include mathematical optimisation methods and heuristic algorithms. It is argued that, given the bounded rationality (Simon, 1955) of designers, practical product design is likely to employ heuristic algorithms which may include reuse of existing components, directed imitation and trial and error. The observed utilisation of modular components in product design is consistent with this perspective, as is the long-standing view that product innovation primarily consists of a process of recombination of pre-existing components (Schumpeter, 1934).  The remainder of this paper is organised as follows. In the next section, the NK model is introduced to examine the impact of interconnection density and structure on the topology of the design landscape. The key implication is that the choice of architecture by the product designer impacts on the topology of, and constrains, the design space which is then subject to a search process. Not all spaces offer equal opportunity, nor are all spaces equally difficult to search. Conceptualising design as high-dimensional search intuitively suggests that designers utilise heuristics to guide their design efforts. It is posited that these heuristics can be considered as a process of distributed, social learning (Birchenhall, 1995), wherein designers may directly imitate existing ideas, alter existing ideas incrementally or engage in trial and error learning. These forms of learning (search) can be modelled using a suitably defined evolutionary algorithm, such as the Genetic Algorithm (Holland, 1992).  The origins of the NK model lie in studies of adaptive evolution (Kauffman and Levin, 1987; Kauffman, 1993) but application of the model has expanded greatly beyond this domain to include technological change (Kauffman, Lobo and Macready, 1998), organisation design (Levinthal, 1997; Rivkin, 2000) and product innovation (Frenken, 2001).
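The three forms of learning named above (direct imitation, incremental alteration, and trial and error) map naturally onto the operators of a genetic algorithm. The following is a minimal illustrative sketch, not the paper's model: designs are encoded as bit strings, the function names and parameter values are assumptions, and `fitness` stands in for any design-evaluation function.

```python
import random

def genetic_search(fitness, n_bits=10, pop_size=20, generations=50,
                   mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    # trial and error: start from randomly generated designs
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]          # imitation: retain the better designs
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]             # recombination of existing components
            # incremental alteration: occasionally flip a bit
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search(fitness=sum)  # toy objective: maximise the number of 1-bits
```

Because the better half of each generation survives unchanged, the best design found never deteriorates; the crossover and mutation operators supply the variety that lets the search climb.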
The NK model attempts to describe general properties of systems of interconnected components. In its basic form, the model describes a system of N components, each of which can assume a number of states or ‘alleles’. In the case of product design, if the number of versions (states) for each component is denoted by Sn, the related N-dimensional product design space consists of S1 × S2 × … × SN discrete design possibilities. As N increases linearly, the number of design possibilities increases exponentially. The product design problem can be represented as a combinatorial problem in that the designer is searching for the best, or at least a satisfactory, combination of components in order to attain the required design objectives.
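The combinatorial structure described above can be sketched in code. The following toy is an illustration under stated assumptions, not the paper's implementation: N binary components (Sn = 2 for all n), each component's fitness contribution drawn at random as a function of its own state and K others, so that raising K increases interconnection density and the ruggedness of the landscape.

```python
import itertools
import random

def make_nk_landscape(N=4, K=1, seed=0):
    rng = random.Random(seed)
    # each component i depends on itself plus K randomly chosen other components
    deps = [tuple([i] + rng.sample([j for j in range(N) if j != i], K))
            for i in range(N)]
    tables = [{} for _ in range(N)]  # lazily filled fitness-contribution tables

    def fitness(design):
        total = 0.0
        for i in range(N):
            key = tuple(design[j] for j in deps[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()  # random contribution per local configuration
            total += tables[i][key]
        return total / N  # mean contribution, so fitness lies in [0, 1)
    return fitness

f = make_nk_landscape()
designs = list(itertools.product([0, 1], repeat=4))  # the 2^N = 16 possible designs
best = max(designs, key=f)
```

With binary components the design space has 2^N points, matching the S1 × … × SN count above; enumerating it exhaustively, as here, is only feasible for tiny N, which is why the search heuristics discussed in the paper matter.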


Country of Origin:  A Critical Measure in Work Commitment Studies within Multi-Cultural Contexts

Dr. Adela J. McMurray, Swinburne University of Technology, Melbourne, Australia



This multi-method study examines Country of Origin (COO) as a measure at the individual level in the work commitment literature. An international literature review uncovered that many of the existing empirical studies appear to share a common oversight in their approach to uncovering work commitment within multi-cultural contexts. These studies omit the COO variable and, in doing so, create difficulties in drawing work commitment comparisons in multi-cultural contexts, such as Australia and the USA. This study provides evidence that the inclusion of COO as a measure in multi-method studies undertaken in multi-cultural contexts has the potential to yield a more accurate picture of work commitment. Measurement may be either quantitative or qualitative, yet it is a central issue in research because it defines the links between theories and the data used to test them. Quantitative measurement focuses on the systematic allocation of values to variables that may signify events, objects, or a person’s characteristics, whereas qualitative measurement focuses on labels, qualities and names. This multi-method study, using both quantitative and qualitative measurement, addresses measurement equivalence within a multi-cultural population. The creation of measurement equivalence across groups is a reasonable precondition to carrying out cross-group comparisons but is seldom tested in organizational research (Vandenberg and Lance, 2000).  The theoretical underpinnings of the methodology in this study departed from the singular use of rational and scientific models grounded in the Lewinian field theory dominant in the work commitment literature. Instead, the study offered a synergy between Lewinian field theory and social construction whilst specifically examining Job Involvement (JI) and Protestant Work Ethic (PWE). No research methodology is free of bias.
Each has weaknesses, and these may only be corrected by ‘cross-checking’ with other methodologies within the single study (Webb et al., 1966:175). Although the rational elements in all three companies dictated that the main theoretical thrust in the design of the study be that of empiricism, these findings were cross-checked, expanded and explained by the fieldwork data, which was gathered and analysed from the symbolic interactionist approach. Sieber (1980), cited in Lawler III, Nadler and Cammann (1980), asserts that integrating the unique qualities of each methodology within a single study results in ‘enormous opportunities for mutual advantages’ within the research design, data collection, and the data analysis. Sieber (1980:4444) believes this integrated approach to researching results in a ‘new style of social research’. The JI and PWE constructs are psychological in nature, yet the fieldwork data is sociological. By fusing the two disciplines, the study became multi-disciplinary.  Another departure from the dominant research designs found in the work commitment literature was that this study was conducted simultaneously in three different organizations. This enabled the empirical and qualitative results to be compared between the three studies. This multi-method study explored the relationship between Country of Origin (COO) and work commitment, in particular JI and PWE. The study was based on an entirely new theoretical approach, using four different research techniques (interview, observation, survey and documentary analysis) simultaneously in three organizations where almost the entire population of each participated in the study. The study proposed the theory that, because Australia rapidly evolved from a homogeneous to a heterogeneous (multi-cultural) society, work commitment researchers needed to consider adapting their measures to include COO as an independent variable and an antecedent to work commitment.
Little (1997) conducted an investigation into psychological construct comparability. He found that strong factorial invariance showed that psychological constructs were essentially similar in each socio-cultural setting and therefore comparable. This evidence lent support to operationalizing the JI and PWE psychological constructs in cross-cultural comparisons within one cultural setting.  Globalisation is modifying national cultures so that they are becoming increasingly complex and diverse. Many Western countries no longer have a mono-cultural context and, with globalisation, most national cultures and workplaces are becoming multi-cultural. According to Mische (2001:74), to be multi-cultural means that, in addition to gender, ethnic and cultural diversity, one has to be pluralistic, which also embraces such things as different life experiences, religion, age, income, customs, sexual preference, physical and intellectual capabilities, and personal choices that are reflected in society as a whole. These qualities are in turn influenced by a number of factors such as geographical origins, personal background, social affinities, education, economic status and the personal commitment of the individual to his/her culture and customs. For the purposes of this study, only geographical origins (COO) will be addressed.  Researching in such a rich and challenging multi-cultural context, one would have to search, through scientific inquiry, for ways in which to conceptualise cultural identity. It is difficult to make generalisations within any national context or workplace if studies ignore the respondent’s COO at the individual level. This paper suggests that research studies addressing various forms of work commitment could benefit from taking a micro focus, investigating the respondent’s COO as it exists within national contexts and workplaces; this is applicable to any study conducted within a multi-cultural society.
Hofstede (1980) set the pace in cross-cultural studies by examining nationalities within IBM worldwide. Unfortunately, his Value Survey Model (VSM) is yet to be entirely validated, as many of the international management studies have been unable to be generalised to the bulk of the international literature because they may have overlooked a critical variable such as COO within multi-cultural settings.


Measuring Predictive Accuracy in Agribusiness Forecasting

Dr. Carlos W. Robledo, Louisiana State University, Baton Rouge, LA

Dr. Hector O. Zapata, Louisiana State University, Baton Rouge, LA

Dr. Michael W. McCracken, University of Missouri, Columbia, MO



A recurrent need in business forecasting is that of choosing a best forecasting model. A model choice is often made by the minimization of a criterion such as the mean squared error (MSE). The model with the lower MSE is considered better for forecasting; however, the statistical significance of such nominal differences is rarely questioned, an important concern to a decision maker who may be undecided about updating an existing model or adopting a new one. Recent developments in the forecasting literature (Diebold and Mariano, 1995; Stock and Watson, 1999) introduce out-of-sample tests of the equality of MSEs.  This paper provides an empirical evaluation of these tests using quarterly data, 1981:3-1999:4, for the U.S. wheat market. Models are updated using fixed, rolling, and recursive schemes. It is found that for consumption, inventories, exports, prices and production, nominal MSE comparisons would favor one model over another. However, when testing the significance of MSE improvements, the results suggest that improvements of even 15% may not justify choosing a new model. A Monte Carlo experiment warns that nominal differences in MSE comparisons, when the post-sample size is small, may generate only seemingly better forecasting models. Recent changes in U.S. agricultural policy have encouraged agricultural producers and agribusiness managers to become more conscious of the need for market intelligence in agricultural decision-making. This trend gained momentum in the 1996 farm legislation (the Fair Act) through the elimination of farm subsidies in the form of target pricing and the introduction of transitory payments.  Similar provisions have been enacted in the Farm Security and Rural Investment Act of 2002 (the FSRI Act), which moves agriculture towards a freer market environment.
For forecasting researchers, this new agribusiness decision environment creates an opportunity for revisiting and/or developing structural models that reflect market dynamics and forecast market trends accurately. Farmers and agribusiness firms that trade in grain markets, for instance, use outlook information when deciding when and how to sell or buy. Policy makers usually need information such as projections of acreage, production and prices while formulating policies.  But how is the choice of a best forecasting model made? Typical forecasting exercises measure accuracy via point estimates.  In a two-model comparison, for example, mean squared errors (MSEs) are estimated and the model with the smaller MSE is considered better for forecasting.  Forecast users, however, rarely know the extent to which they may rely on such relative improvements in MSEs. In other words, a decision maker may benefit more from knowing that a 10% reduction in MSE, for instance, is significant while a 15% reduction may not be. Recent developments in forecasting methods are designed to measure the significance of forecast improvements (Diebold and Mariano, 1995; West, 1996; West and McCracken, 1998; Corradi et al., 1999). Diebold and Mariano (DM) developed a test of the difference in MSEs, while Stock and Watson (SW) suggest testing the same hypothesis using a ratio of MSEs.  In agriculture, most commodity models are built using annual and, in some instances such as the U.S. wheat market, quarterly data. Such data series are short compared to the hundreds or thousands of observations available in finance and other fields, which also renders the post-sample size small in forecasting evaluations. The DM and SW tests have an asymptotic justification; thus, it is empirically relevant to assess, given the data length, how well these tests perform in small samples.
The main objective of this paper is to examine the performance of the DM and SW tests in choosing forecasting models and to assess how well these asymptotic tests perform when small samples are used in out-of-sample forecasting. The wheat market was chosen because structural models for wheat are readily available and provide an economic foundation for building multiple time series models. Also, wheat supply and demand data have been collected for many decades, facilitating the study of dynamics using modern time series methods such as unit-root and cointegration analysis.  This paper is organized as follows. The next section introduces a structural model for the U.S. wheat market. A dynamic reduced-form specification and related time series methods are discussed in the second section. The third section presents the tests of predictive accuracy. The fourth section describes the Monte Carlo experiment that quantifies test performance in small samples. The results are analyzed in section five, followed by conclusions.  Chambers and Just (1981) developed an econometric model of the wheat market that examined the dynamic effects of exchange rate fluctuations on this and other markets. There are five endogenous variables, namely production, disappearance, inventories, exports, and prices. This is a simple specification that models the dynamics of the system and links the U.S. macro economy with the U.S. wheat market by means of exchange rates. Exchange rate dynamics and their impact on commodity markets continue to be an issue of research interest in agriculture.
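The Diebold and Mariano (1995) test discussed above can be sketched as follows for one-step-ahead forecasts, where no autocovariance correction of the loss differential is needed. This is an illustrative implementation with simulated forecast errors, not the authors' code or data; the sample size of 74 simply matches the number of quarters in 1981:3-1999:4.

```python
import math
import random
import statistics

def diebold_mariano(errors1, errors2):
    """DM test of equal MSE for two competing one-step-ahead forecasts."""
    d = [e1**2 - e2**2 for e1, e2 in zip(errors1, errors2)]  # loss differential
    T = len(d)
    dbar = statistics.fmean(d)
    var_dbar = statistics.pvariance(d) / T   # variance of the mean differential
    stat = dbar / math.sqrt(var_dbar)        # asymptotically N(0,1) under H0
    # two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(stat) / math.sqrt(2))))
    return stat, p_value

rng = random.Random(1)
e1 = [rng.gauss(0, 1.0) for _ in range(74)]  # errors of the incumbent model
e2 = [rng.gauss(0, 1.1) for _ in range(74)]  # errors of a nominally worse model
stat, p = diebold_mariano(e1, e2)
```

With only 74 post-sample observations the asymptotic normal approximation can be loose, which is exactly the small-sample concern the paper's Monte Carlo experiment addresses: a nominal MSE gap need not translate into a significant DM statistic.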


Teaching the Quality Service Consulting Project to Business School Students

Dr. Gene Milbourn, Jr., University of Baltimore, Baltimore, MD



This paper will provide an outline of how to structure a consulting project for business school students on the topic of quality service.  It will suggest a step-by-step program that a team of students can follow during a 3-4 week semester segment to develop a set of recommendations to improve the quality service of an enterprise.  Three models are featured: the five-factor quality service model of Parasuraman, Zeithaml, and Berry; the customer-driven company model of Whiteley and the Forum Corporation; and the customer value analysis model of Gale.  While not intended to be a literature review, some research is reviewed as appropriate for pedagogical purposes.  Complete surveys, scoring keys, and templates for each segment of the consulting project are provided in the appendices. The cover of Time magazine asked the question “Pul-eeze! Will someone Help Me?”  The accompanying article reported the deterioration of customer service to be due to general economic upheavals such as high inflation, labor shortages, and low-cost business strategies.  Prices increased 87% during the 1970s, and to keep prices from further skyrocketing, customer-service training was slashed and computers and self-service schemes were introduced in wholesale fashion [Time, 1987].  Businesses were seen to have developed the same habits and inattention to quality service that had plagued American manufacturers on the quality issue in years past.  Today, e-commerce retailers are forcing bricks-and-mortar businesses to upgrade their customer focus to be competitive.  Data now exist suggesting that the economic well-being of companies fluctuates with the quality of service.  The Strategic Planning Institute found, in studying the confidential data provided by thousands of business units (the PIMS database), that quality service leads to very positive financial and strategic outcomes.
Grouping businesses into those providing low and those providing high quality service, it was found that, in addition to maintaining a price differential of 11%, return on sales was 11% higher and annual sales growth was 9% higher in the group providing high quality service.  In addition, the high service quality group experienced a 4% positive change in market share while the low service quality group registered a -2% change [Gale, 1994].  An earlier report concluded that many companies “overinvest in cost reduction and capacity-expansion projects because they believe they can ‘run the numbers’ to ‘justify’ a project.  They underinvest in quality service improvement because they have not learned how to calibrate its strategic or financial payoff.”  The research on the behavior of dissatisfied customers is consistent and expected.  A dissatisfied customer typically does not complain and simply purchases from another store.  Research across eight industries found that 25% of dissatisfied customers do not return to the offending store; 41% of customers experience a problem in shopping; 94% of customers do not complain about a problem; 63% of customers are not pleased with a business’ responses to their complaints; and customers are five times more likely to switch stores because of service problems than for price or product quality issues.  However, when customers do complain and when their problems are resolved quickly, an impressive 82% will buy again from the business [TARP, 1995].  High quality service is found to be a dominant cause of repeat customers across industries.  More noteworthy, however, is that a customer generates an increasing amount of profit each year the “customer is a customer.”  A business becomes skilled in dealing with the customer while the customer is found buying more as well as referring others to the business [Reichheld et al., 1990].
The quality service consulting project starts by dividing a class into teams of three to four students.  Each team selects a business that will allow it to survey its customers and to conduct a focus group exercise.  The project will take about two and a half to four weeks depending on students’ ability and on the demands of the instructor.  Team papers of about fifteen pages and a slide show using presentation software can be requested from each team.  The cognitive and process outcomes include learning the role of quality service and customer value analysis, and the development of team and consulting skills from conducting a project with others in an on-going enterprise.  The project is divided into the following five sections: (1) Personal Customer Service Inventory, (2) Quality Service Inventory, (3) Internal Service Quality Climate Survey, (4) Customer Value Analysis, and (5) Recommendations.  These sections represent the steps I have followed in teaching this topic in my business management courses.  The Personal Customer Service Inventory is a short personality test that measures a person’s service orientation.  Many stores use this type of instrument to select sales clerks.  Such instruments generally measure personal characteristics such as friendliness, tactfulness and open-mindedness.  Of the five sections, this is the one that can be deleted if time is not sufficient for the entire program.  The main survey tool is the Quality Service Instrument.  This is a 22-item questionnaire that is completed by customers of a business.  It measures five factors of quality service: tangibles, empathy, reliability, assurance, and responsiveness.  These terms will be defined elsewhere in the paper.  Here, it should be understood that these five factors account for much of the variation in customers’ perceptions of quality service across industries.
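Scoring the 22-item instrument by factor can be sketched as follows. The item-to-factor mapping below is an assumption following the usual SERVQUAL layout (tangibles 4 items, reliability 5, responsiveness 4, assurance 4, empathy 5); it is not taken from this paper's appendices, and a team should substitute the mapping from the instrument it actually administers.

```python
# Hypothetical item-to-factor mapping (0-based item indices); an assumption,
# not the paper's scoring key.
FACTOR_ITEMS = {
    "tangibles":      range(0, 4),    # items 1-4
    "reliability":    range(4, 9),    # items 5-9
    "responsiveness": range(9, 13),   # items 10-13
    "assurance":      range(13, 17),  # items 14-17
    "empathy":        range(17, 22),  # items 18-22
}

def factor_scores(responses):
    """responses: list of 22 ratings from one customer; returns mean per factor."""
    if len(responses) != 22:
        raise ValueError("expected 22 item ratings")
    return {factor: sum(responses[i] for i in items) / len(items)
            for factor, items in FACTOR_ITEMS.items()}

scores = factor_scores([5] * 22)  # uniform ratings give every factor a mean of 5.0
```

Averaging factor scores over all surveyed customers gives the team a per-factor profile of the business, which feeds directly into the Customer Value Analysis and Recommendations sections.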


Personalization and Customization in Financial Portals

Dr. Altan Coner, Yeditepe University, Istanbul, Turkey



In today’s Internet economy, customers have more choices than ever. Their expectations of individualized and special treatment from corporations have never been more demanding. As an important segment of the Internet economy, financial institutions, in order to meet their customers’ expectations, present personalized and customized solutions to build one-to-one relationships with their customers. Therefore, transforming customers’ raw data into meaningful recommendations or suggestions becomes an important process for financial portals. Moreover, the concept of personalization and customization has expanded in scope to emphasize a much broader notion of customer relationship management (CRM). CRM takes personalization and customization from a portal technology to a corporate philosophy where the understanding of customer experience and behavior in a portal becomes a critical issue for enterprises. This research paper examines the preferences of customers regarding personalization and customization in financial portals.  In marketing history, we can see a number of different philosophies that have guided marketing efforts. From the mid-nineteenth to the early twentieth century, during the industrial revolution, the production concept was widely accepted. According to the production concept, demand for a product was greater than supply. Henry Ford’s remark, "Doesn't matter what color car you want, as long as it is black," was a typical quote of the “production concept” era. From the 1920's to the mid-1930's and from World War II to the early 1950's, businesses understood that emphasis was needed on selling the product to increase profits. In this era selling was the main purpose of a business. Demand for a product was thought to be equal to supply, so businesses focused on advertising. From the 1930's to World War II and from the 1950's to the present, supply of a product has been greater than demand. This created intensive competition among suppliers.
Companies first began to determine what the consumer wanted, then produced what the consumer wanted, and then sold the consumer what he/she wanted. That was the beginning of the marketing era.  Actually, signals of the importance of customer satisfaction had already appeared before the 1960s. In 1912, L.L. Bean founded his business on the marketing concept, writing in his first circular: "I do not consider a sale complete until goods are worn out and the customer still satisfied. We will thank anyone to return goods that are not perfectly satisfactory. Above all things we wish to avoid having a dissatisfied customer." To illustrate the marketing concept, Peter Drucker (1954) said: "If we want to know what business is, we must first start with its purpose... There is only one valid definition of business purpose: to create a customer. What business thinks it produces is not of first importance – especially not to the future of the business or to its success. What the customer thinks he/she is buying, what he/she considers 'value', is decisive – it determines what a business is, what it produces, and whether it will prosper." For the last several decades, the four Ps of marketing – product, price, place and promotion – have been widely discussed in detail (McCarthy, 1960; Kotler, 1971; Kotler and Armstrong, 1999). In fact, each aspect was proposed in the formulation by Borden (Grönroos, 1994) long before Kotler, and a final model was already structured. By using this finalized model, Kotler discussed modern marketing issues extensively and made some additions to the final model. In the 1990s, after Kotler’s additions, this model was widely accepted and applied, and formed the basis of modern marketing.  Thinking of the factors affecting modern marketing, globalization is probably the most important one to be considered. Since the 1980s, technological advances such as global telephone and computer networks have reduced geographic and even cultural distances.
As a result, companies can now buy supplies and produce and sell goods in countries far from their home offices. Products conceived in one country are now being manufactured and then sold in many others. In the 1980s, as large commercial companies began to build private Internets, ARPA investigated transmission of multimedia—audio, video, and graphics—across the Internet. Other groups investigated hypertext and created tools such as Gopher that allowed users to browse menus, which are lists of possible options. In 1989, many of these technologies were combined to create the World Wide Web. Initially designed to aid communication among physicists who worked in widely separated locations, the Web became greatly popular and eventually replaced other tools. Also during the late 1980s, the U.S. government began to lift restrictions on who could use the Internet, and commercialization of the Internet began. In the early 1990s, with users no longer restricted to the scientific or military communities, the Internet quickly expanded to include universities, companies of all sizes, libraries, public and private schools, local and state governments, individuals, and families. Short, easy-to-remember domain names were once in short supply (Encarta, 2002). Most domain names that used the simple format http://www.[word].com, where [word] is a common noun or verb and .com refers to a for-profit business, were taken by 2001. Until 2001, only a few extensions were allowed, such as .com, .org, and .net. By 2002, however, additional extensions began to be used, such as .biz for businesses and .info for informational sites. This greatly expanded the number of possible URLs.  Improvements in technology have enabled marketers to become more consumer oriented, helping to develop “relationship marketing”. Relationship marketing has attracted considerable recent interest from marketing academics and practitioners.
Practitioners have seen the potential advantages of reducing levels of customer "mix" by improving the retention rates of profitable customers (Reichheld and Sasser, 1990; Webster, 1992).


Work Values Ethic: A New Construct for Measuring Work Commitment

Dr. Adela J. McMurray, Swinburne University of Technology, Melbourne, Australia

Dr. Don Scott, Southern Cross University, Lismore, Australia



This study reviews recent research on work ethic constructs and the resultant development of a construct for measuring a work values ethic in an Australian manufacturing environment. The construct was tested for validity using confirmatory factor analysis and was found to represent a valid measure. It was also tested for reliability using Cronbach’s alpha and showed a satisfactory level of reliability in line with other previously developed measures for this construct. The findings suggest that, in an Australian manufacturing context, the Protestant Work Ethic (PWE) is not a valid construct and should be replaced by the Work Values Ethic (WVE) construct. There are six forms of work commitment regarded as relevant to an employed individual: five are universal and one is non-universal. The universal forms of work commitment are: work ethic endorsement, which encompasses the importance of work itself, including the 'Protestant Work Ethic'; Job Involvement, which relates to the extent to which one can identify with and is absorbed in one's job; affective (attitudinal) Organizational Commitment, which refers to an individual's emotional attachment to their organization; Calculative Commitment, which deals with an individual's perceived costs of leaving the organization; and Career or Professional Commitment, which relates to the importance an individual places on their occupation. Union Commitment is regarded as the non-universal form of work commitment because its applicability is declining in various countries, notably the United States (Morrow, 1993). This paper examines the work commitment component called Protestant Work Ethic (PWE), both as an existing measure and in its transition to a new measure, the ‘Work Values Ethic (WVE)’, for use in an Australian manufacturing environment.
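The reliability check mentioned above can be illustrated with a short Cronbach's alpha computation: alpha = k/(k-1) × (1 - sum of item variances / variance of the total score), with values around 0.7 or above conventionally regarded as satisfactory. The ratings below are hypothetical, not the study's data.

```python
import statistics

def cronbach_alpha(items):
    """items: list of k lists, one per scale item, each holding one rating per respondent."""
    k = len(items)
    totals = [sum(ratings) for ratings in zip(*items)]  # each respondent's total score
    item_vars = sum(statistics.pvariance(ratings) for ratings in items)
    return k / (k - 1) * (1 - item_vars / statistics.pvariance(totals))

# four items answered by five respondents (hypothetical 1-5 ratings)
ratings = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 3],
    [5, 5, 2, 5, 4],
    [3, 4, 3, 4, 3],
]
alpha = cronbach_alpha(ratings)
```

Because the four hypothetical items rise and fall together across respondents, the total-score variance dominates the summed item variances and alpha comes out high; items that did not covary would drag alpha down, flagging an unreliable scale.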
PWE is considered to be a psychological construct and has been studied in a wide range of contexts and in various disciplines such as psychology, sociology, economics, anthropology and the social sciences (Wentworth & Chell, 1997). Today, it is predominantly found within the social psychology literature and is viewed as holding a significant position in the sociology of knowledge (Kebede, 2002). PWE is also the oldest concept of work commitment, and its origins are found in the publication entitled "The Protestant Ethic and the Spirit of Capitalism" (Weber, 1905). Hamilton (1996) states that there are a number of flaws in Weber's original work that cast doubt on its authenticity and relevance. For example, an obvious error relates to the data. Weber attempted to show that Protestants were more prevalent than Catholics, but his data showed the overall percentage of student enrolments adding up to 109%. When the figures were recalculated, the Protestant/Catholic ratio was equal. Weber had used this original data as proof that Protestants, rather than Catholics, were educating themselves for business careers. However, in spite of this, Hamilton (1996) asserts that although economists have debunked the PWE theory, sociologists and psychologists should retain their support for the construct and for its use in research.  There appears to be a general understanding of Weber's idea of the PWE, but this is not true of its exact nature and relevance. Weber did not provide a definition that was exact enough to be measured easily. Consequently, many characteristics have been attributed to the PWE and there is an inconsistency in the measurement of the construct (Robertson, 1985). One reason for this could be the significant overlap between different variables (Furnham, 1982).  Kelvin & Jarrett (1984), who contend that work is a norm with no moral significance attached to it, offer the proposition that there is no such thing as a PWE, but rather a 'wealth' ethic.
However, Bellah (1963), in his Japanese study, found evidence for PWE beliefs, although he concluded by questioning whether the term 'Protestant' was appropriate. Weber's original thesis was entitled 'Protestant ethic', yet researchers in the 1970s included the term 'work' and started referring to the construct as the PWE. While one might question the justification for this, it is suggested that perhaps PWE was used as a situational variable in these studies, being renamed and applied within a particular point in time and context. According to Morrow (1993, p.1), the PWE has established a reputation in the psychology and organisational behaviour literature and may be defined as 'the extent to which one intrinsically values work as an end in itself.' A key word in Morrow's definition of PWE is 'values'. Interestingly, a review of the psychological values literature reveals that the majority of values studies conducted between 1974 and 1986 dealt with the PWE and work ethic constructs. PWE is therefore an abstract, general aspect of work commitment with a focus that is seen to be long term and indirectly related to the workplace (Blau & St. John, 1993). However, although PWE is the oldest work commitment construct, its research history has been sporadic. This has placed limitations on the generalisability of many findings and highlights the need to further explore the measurement of the construct with a view to developing a valid and reliable measure.
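Although the paper does not show its computation, the reliability statistic it relies on, Cronbach's alpha, is straightforward to calculate. The sketch below is a generic illustration with invented item scores, not the study's data; the function name is our own.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows (one score per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = len(items[0])                                  # number of items in the scale
    cols = list(zip(*items))                           # per-item score columns
    item_vars = sum(variance(col) for col in cols)     # sum of per-item sample variances
    total_var = variance([sum(row) for row in items])  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented 5-respondent, 3-item example: perfectly consistent items give alpha = 1.0
scores = [[r, r, r] for r in (1, 2, 3, 4, 5)]
print(cronbach_alpha(scores))  # → 1.0
```

Values above roughly 0.7 are conventionally read as satisfactory reliability, which is the kind of threshold the abstract's claim of "a satisfactory level of reliability" presumably refers to.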


Human Resource Management Practices, Strategic Orientations, and Company Performance: A Correlation Study of Publicly Listed Companies

Dr. Simon K. M. Mak, City University of Hong Kong, Hong Kong

Dr. Syed Akhtar, City University of Hong Kong, Hong Kong



This study examined the relationships of human resource management practices with strategic orientations of organizations and their financial performance. The human resource management practices included job description, internal career opportunity, job security, profit sharing, training, performance appraisal and voice mechanisms. Strategic orientations comprised cost, quality and innovation. Data were collected from 63 publicly listed companies through a questionnaire that contained objective measures of human resource management practices and subjective measures of strategic orientations. Company performance was measured in terms of return on equity. Correlation analysis indicated that only job description and profit sharing correlated positively and significantly with company performance across both managerial and non-managerial employees. Except for internal career opportunity, all other human resource management practices were associated with one or more strategic orientations. The results favour a mixed approach to the adoption of human resource management practices, based on both strategic orientations and company performance. In recent years, researchers have focused on how a firm’s employees can collectively be a unique source of competitive advantage that cannot be imitated by competitors (Barney, 1991). Human resources are not as readily imitated as equipment or facilities. Thus, investments in firm-specific human capital can further decrease the probability of cross-company imitation (Jones & Wright, 1992; Wright & McMahan, 1992). Bailey (1993) observed that human resource management (HRM) practices could enhance the return from employees’ discretionary efforts, which would lead to payoffs greater than the relative increase in costs incurred.
Based on a survey of 495 firms, Delaney, Lewin, and Ichniowski (1989) observed that specific practices in the following areas represented "sophistication" in human resource management: human resource planning, job design and job analysis, employee selection and staffing, training and development, performance appraisal, compensation, grievance and complaint procedures, employee involvement and participation plans, information sharing programmes and attitude surveys. Huselid (1995) added three practices commonly held relevant to a firm's performance: the intensity of its recruiting efforts (selection ratio), its promotion criteria (seniority versus merit) and the average number of hours of training per employee per year. Researchers have argued that the use of these HRM practices can improve the relevant skills, job-related knowledge and abilities of a firm’s employees, thus improving its performance (Huselid, 1995). The extant literature utilises two approaches that have been dominant in describing the link between HRM practices and firm performance: the universalistic and contextual approaches. Using these approaches, the present study examines the possible associations of HRM practices with strategic orientations of organizations and their financial performance in a sample of publicly listed companies. The universalistic view of HRM assumes that there are ‘best’ HRM practices that have positive and additive effects on company performance across different organizational and environmental situations (Huselid, 1995). Building on this assumption, several research studies have reported positive associations between firm-level measures of HRM practices and organizational performance (Arthur, 1994; Delaney, 1997; Huselid, 1995; Ichniowski, Shaw, & Prennushi, 1994; MacDuffie, 1995; U.S. Department of Labour, 1993). The U.S.
Department of Labour (1993), for instance, compiled a survey report documenting studies on the effectiveness of ‘high performance work practices’, i.e., HRM practices designed to provide employees with skills, incentives, information and decision-making responsibility that improved business performance and facilitated innovation. In one survey included in the report, data from 700 firms across all major industries showed that companies utilising a greater number of innovative human resource practices had a higher annual shareholder return from 1986-91 and a higher gross return on capital. The ‘innovative best practices’ included personnel selection, performance appraisal, incentive systems, job design, promotion systems, grievance procedures, information sharing, attitude assessment and labour-management participation. In a second study focusing on the Forbes 500 companies, data indicated that firms with more progressive management styles, organisational structures and reward systems had higher rates of growth in profits, sales and earnings per share over the five-year period from 1978-83. A third detailed study of over 6,000 work groups in 34 firms concluded that an emphasis on workplace cooperation and involvement of employees in decision-making resulted in higher productivity and was positively correlated with future profitability. Considering these studies together, the survey report of the U.S. Department of Labour (1993, p. 15) concluded that ‘specific practices such as training, alternative pay systems, and employee involvement are often associated with higher productivity’. In a major study, Huselid (1995) specifically studied the effects of HRM practices on turnover, productivity and corporate financial performance. The HRM practices comprised the high performance work practices mentioned previously.
Based on a survey of nearly 1,000 firms, Huselid (1995) reported that these practices had statistically significant effects on both intermediate employee outcomes (turnover and productivity) and short- and long-term measures of corporate financial performance.
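The correlation analysis this abstract reports is a standard Pearson calculation. As a generic sketch (the figures below are invented, not the study's 63-company data), relating a firm-level HRM practice score to return on equity might look like:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # un-normalised covariance
    sx = sqrt(sum((a - mx) ** 2 for a in x))              # sqrt of sum of squares, x
    sy = sqrt(sum((b - my) ** 2 for b in y))              # sqrt of sum of squares, y
    return cov / (sx * sy)

# Hypothetical firm-level data: profit-sharing score vs. return on equity (%)
profit_sharing = [1, 2, 2, 3, 4, 5]
roe = [4.0, 6.5, 5.0, 8.0, 9.5, 12.0]
print(round(pearson_r(profit_sharing, roe), 3))
```

A study like this one would then test each coefficient for statistical significance against the sample size (here, 63 companies) before interpreting it.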


Usable Web Based Simulated Annealing and Tabu Search for the Facility Layout Problem

Dr. Hong-In Cheng, Iowa State University, Ames, Iowa

Dr. Patrick E. Patterson, Iowa State University, Ames, Iowa



Facility layout problems arise commonly in manufacturing systems, where it is desired to minimize material handling cost. Typically, heuristic methods are developed to solve these problems because of the inefficiencies and restrictions of optimal methods. To solve facility layout problems on the World Wide Web (WWW), we chose the quadratic assignment problem (QAP) formulation and two intelligent search heuristics. Java was used to program several applets. We first solved an unknown QAP using simulated annealing (SA) and tabu search (TS) methods programmed in C/C++. Through a comparison of results, SA and TS were verified as useful heuristic methods for this problem type. Efficient values of the search algorithm variables that provide good solutions while minimizing CPU time are recommended. Web based simulated annealing (WBSA) applets that present layout solutions and the total layout costs were then developed. The WBSA applet was tested on a well-studied QAP, yielding configurations competitive with other heuristics. A TS applet developed to perform web based tabu search (WBTS) was also shown to be a good method for providing useful layouts. Finally, practical rules for the interactive interface design of web-based programs, derived from usability testing, are presented. A facility layout represents the spatial arrangement of all facilities in a manufacturing system: the spatial assignment of workstations, machines, material handling facilities, storage spaces and offices. The facility layout is very important in a manufacturing system because material handling, storage levels and operator productivity are directly affected by it. The facility layout problem has been studied for decades. However, it still remains an interesting problem for researchers working on combinatorial optimization problems. The objective of the facility layout problem is to design a block layout that minimizes the total layout cost.
Total layout cost usually refers to material handling cost, and is expressed as the product of workflow and travel distance. After a block layout is determined, further work converts the block layout into a detailed layout that assigns specific facility locations, aisle locations, input/output points, and so forth (Figure 1). The illustrations below demonstrate two different forms of a facility layout. There are two traditional approaches used to solve the facility layout problem: the quadratic assignment problem (QAP) approach and the graph-theoretic approach (Meller and Gau 1996). The QAP was first introduced by Koopmans and Beckmann (1957) to solve the equal-size facility layout problem. The QAP assumes that all facilities are of equal size and all sites are fixed and known. The QAP objective is to find the optimal assignment of n candidate facilities (departments, machines, or workstations) to n candidate sites that minimizes the total layout cost (Chiang and Kouvelis 1996). The QAP is NP-complete (Sahni and Gonzalez 1976). Some optimal methods have been implemented for the layout problem (Chiang and Chiang 1998). However, these optimal procedures cannot solve the problem when there are more than 15 departments (Burkard 1984). Because of this computational inefficiency and the restrictions of optimal algorithms, many heuristics have been developed. For further study of heuristics, refer to Armour and Buffa (1963), Nugent, Vollmann, and Ruml (1968), Scriabin and Vergin (1985), Heragu and Kusiak (1988), and Skorin-Kapov (1994). QAP applications include backboard wiring, campus planning, typewriter keyboard design, hospital layout, the control panel problem, job assignment, and the suburban land-use problem (Burkard 1984). Although other formulations have been developed, the QAP is still frequently used. More recently, intelligent heuristic approaches have been applied to the QAP.
Simulated annealing (SA) and tabu search (TS) are representative of these intelligent heuristic methods (Chiang and Chiang 1996). Annealing is a process in which metal or glass is heated to a high temperature and then cooled slowly to prevent brittleness. Simulated annealing is an artificial intelligence search method that employs this annealing concept. Kirkpatrick, Gelatt and Vecchi (1983) were the first to introduce SA, while Burkard and Rendl (1984) were the first to apply SA to the QAP formulation. SA has been reported to give near-optimal solutions for combinatorial problems. For QAPs, SA generated competitive solutions that deviated only 1-2% from the best-known solutions (Chiang and Chiang 1998). Since SA uses the Metropolis algorithm (Metropolis et al., 1953), it can accept non-improving moves and so escape local minima. A move is the swapping of two departments, which changes the facility layout and the total layout cost. A non-improving move is a move that makes the system less efficient. Tabu search (TS) was introduced by Glover (1986) and first applied to the QAP by Skorin-Kapov (1990). TS is an intelligent search method designed to avoid becoming trapped at a local minimum. An important characteristic of TS is the addition of flexible memories to the search procedure (Chiang and Chiang 1998).


Measurement of Intangible Success Factors in Four Case Organizations

Dr. Antti Lönnqvist, Institute of Industrial Management, Tampere University of Technology, Finland



This paper focuses on the measurement of organizations’ intangible success factors. These consist of the amount and quality of intangible assets, the use of intangible assets and the actions related to intangible assets. Their management and measurement are considered important because of the critical role of intangible assets in many organizations. Although several methods have been developed for measuring intangible success factors, there are problems in applying them in practice. The topic is analyzed both theoretically and empirically. The research questions are how the intangible success factors to be measured are chosen, what they are like, and how they are measured. Four case studies are presented and analyzed in order to provide concrete examples and to discover answers to the research questions. Intangible assets consist of, e.g., employees’ competencies, an organization’s relationships with customers and other stakeholders, culture, values, image and management processes (see e.g. Edvinsson and Malone 1997 or Sveiby 1997). They are critical for most organizations. Thus, the management of intangible assets has emerged as an important practice and research area (Petty and Guthrie 2000, p. 161). Performance measurement of intangible assets is also an active research area. There are several methods available for measuring intangible assets. However, organizations have not widely adopted them (Lönnqvist 2002; Nordika 2000) and none of the newly developed measurement tools has been commonly accepted. This paper focuses on measurement methods that can be used as managerial tools for a business unit or even a smaller organizational unit. The measurement is carried out by first identifying the measurement objects and then designing their measures. Measurement objects are usually called success factors. The measurement of intangible assets should take into account the dynamic nature of these assets.
This means that measurement focuses not only on the assets themselves but also on the activities affecting them (Johanson et al. 1999, p. 8). Therefore, the term intangible success factors is used here to refer to managerially relevant intangible assets and the activities related to improving or utilizing those assets. There has been much research aiming to identify the different types of intangible assets (see Petty and Guthrie 2000, p. 161). However, intangible activities have not received as much attention. It seems that there is still confusion regarding how the intangible success factors to be measured are chosen and what the intangible success factors are like (see Figure 1). These questions are approached by analyzing different measurement methods. In addition, four case studies are analyzed to provide further evidence for the theoretical considerations and to provide concrete examples of intangible success factors. After the intangible success factors have been chosen, the next step is to design measures (see Figure 1). This seems to be a difficult phase in practice (Lönnqvist and Mettänen 2002). There are hundreds of measures proposed in the literature (Danish Agency for Trade and Industry 2000; Edvinsson and Malone 1997; Liebowitz and Suen 2000; Sveiby 1997; Van Buren 1999). However, they may not be easily applied to a specific situation. In addition, some of the measures are designed to be used together with other measures and are thus not very usable by themselves. The second research question in this paper is how the intangible success factors are measured. This question is answered through a literature review and by analyzing four case studies in which measures of intangible success factors were constructed. Practical issues considered include what the measures are like, how they are designed and how the data is collected for them. The case studies are exploratory, since there is no clear idea regarding how the measures should be chosen or designed.
The cases are explored in order to discover common challenges and viable solutions. There are several methods available for measuring the performance of an organization. Many of them include or focus on measures of intangible success factors. In this section, five common measurement methods are described and analyzed in order to determine how the measurement objects and their measures are supposed to be chosen. Currently, the most commonly known performance measurement framework is the Balanced Scorecard (PMA 2001). According to the framework, measurement objects (i.e. success factors) are chosen based on an organization’s vision and strategy. They are chosen from several perspectives in order to provide a balanced view of the organization. Performance measures are designed after the critical success factors have been chosen. Kaplan and Norton discuss measuring both financial and non-financial factors. Although not identified as such, intangible success factors, e.g. customer satisfaction and competencies, are also considered an important part of balanced performance measurement (Kaplan and Norton 1996). It should be noted that similar features are also included in other balanced performance measurement frameworks (Tuomela 2000, pp. 97-102). The first actual intangible assets measurement framework presented was Sveiby’s Intangible Assets Monitor (Sveiby 1997). It focuses on measuring only intangible assets, which are classified into employees’ competencies, internal structure and external structure. According to the framework, each of the three groups of intangible assets should be measured from three different perspectives: growth and renewal (e.g. number of years in the profession), efficiency (e.g. value added per professional) and stability (e.g. average age of employees). Each of the three perspectives in the three groups of intangible assets should be measured by one or two measures.
However, there is no indication regarding how the intangible factors and their measures are to be chosen. Edvinsson and Malone (1997) have presented the Navigator measurement framework. The framework seems quite similar to the Balanced Scorecard. However, there are some differences. The underlying idea in the Navigator is the organization’s intellectual capital.


Subjective Productivity Measurement

Dr. Sari Kemppilä, Tampere University of Technology, Finland

Dr. Antti Lönnqvist, Tampere University of Technology, Finland



Productivity is an important success factor for all organizations and, thus, it should be managed. Productivity measurement is a traditional tool for managing productivity, and there are several different methods for it. In certain situations, these traditional methods may not be applicable, suggesting a need for other kinds of measures. An alternative to the traditional methods is subjective productivity measurement. Subjective productivity measures are not based on quantitative operational information; instead, they are based on personnel’s subjective assessments. The data is collected, e.g., using questionnaires. The objective of the paper is to present subjective productivity measurement as a new and promising managerial tool for productivity measurement. In addition, some evidence regarding the practicality and usefulness of the method is presented. The paper is based on a review of studies in which subjective productivity measurement has been used. At the moment, there is only limited experience regarding the use of subjective productivity measures, and there are several problems regarding the validity, reliability and practical principles of use of the measures. Despite these problems, subjective productivity measurement appears to be a very promising method for measuring productivity in situations where objective methods fail. Productivity is an important success factor for all organizations. Improvements in productivity have been recognized to have a major impact on many economic and social phenomena, e.g. economic growth and a higher standard of living. Companies must continuously improve productivity in order to stay profitable. Therefore, productivity should be managed. Productivity measurement is one traditional and practical tool for managing productivity (see e.g. Uusi-Rauva and Hannula 1996; Sink 1983). There are several different methods for productivity measurement.
Most of the methods are based on quantitative data on operations. In many cases, it is quite difficult, and sometimes even impossible, to collect the data needed for productivity measurement. An example of this situation is the work of professionals and experts. Their work is knowledge-intensive and its inputs and outputs are not easily quantifiable. Therefore, the traditional productivity measures are not applicable, and in these types of situations there is a need for other kinds of measures. An old but scarcely used approach to productivity measurement is subjective productivity measurement. Subjective productivity measures are not based on quantitative operational information; instead, they are based on personnel’s subjective assessments. The data is collected, e.g., using survey questionnaires. The objective of the paper is to present subjective productivity measurement as a new and promising managerial tool for productivity measurement. In addition, some evidence regarding the practicality and usefulness of the method is presented. The paper is based on a review of studies in which subjective productivity measurement has been used. According to Sink (1983), the overall performance of a company comprises at least seven criteria: effectiveness, efficiency, quality, productivity, quality of work life, innovations, and profitability. Productivity is thus a key success factor for all companies. Hannula (1999) has stated that organizations must be able to continuously increase their productivity in order to stay profitable. Therefore, productivity should be managed. Productivity can be defined simply as output divided by the input that is used to generate the output. Output consists of products or services, and input consists of materials, labor, capital, energy, etc. Productivity is affected only by the quantities of inputs and outputs.
The main difference from the closely related concept of profitability is that profitability is also affected by changes in the prices of inputs and outputs (see Hannula 1999; Uusi-Rauva 1996). Productivity measurement is one traditional and practical tool for managing productivity. Ideally, total productivity would be measured. Total productivity is the total output divided by the sum of all inputs. As a concept, total productivity is fairly simple. However, the measurement of total productivity is very difficult in practice. The main problem is that different outputs (products and services) and inputs (e.g. labour, material, energy) cannot be summed up. An obvious solution would be to use monetary values, but the measurement would then concern profitability rather than productivity (see e.g. Uusi-Rauva 1996). There are several more practical methods available for productivity measurement. Perhaps the most common of them is to use partial productivity measures. Partial productivity ratios can be calculated by dividing total output by some input factor. For example, labour productivity is the ratio between total output and labour input. If partial productivity ratios cannot be calculated because the total output cannot be determined, an even simpler method is to use physical productivity measures. These are obtained by dividing some typical output (e.g. number of serviced customers or production amount of the main product) by an essential input (e.g. machine hours or labour hours) (see e.g. Uusi-Rauva 1996). Indirect (or surrogate) productivity measurement can be used in cases where it is impossible to obtain the data needed for partial and physical productivity measures. According to Sink (1983), surrogate productivity measures include factors and managerial ratios that are not included in the concept of productivity but are known to correlate with it. In other words, the idea behind indirect productivity measures is that certain symptoms or phenomena are related to problems in productivity.
They include, e.g., high defect rates, machine defects, unused capacity, high material scrap, unnecessary transports, poor atmosphere, and long waiting times. Indirect productivity measures focus on these factors related to productivity. The factors are identified and measures are designed for them case by case. The following list presents further examples of indirect factors affecting productivity. Work habits: absenteeism, tardiness, safety rule violations. Work climate: number of grievances, employee turnover, job satisfaction. Feelings or attitudes: attitude changes, favorable reactions, perceived changes in performance. New skills: decisions made, conflicts avoided, listening skills, reading speed, frequency of use of new skills. Development or advancement: increases in job effectiveness, number of promotions and pay increases, requests for transfer. Initiative: number of suggestions submitted/implemented, successful completion of projects.
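The partial and physical productivity ratios defined earlier are simple quotients of output over a single input factor. A minimal sketch, with invented figures:

```python
def partial_productivities(total_output, inputs):
    """Partial productivity ratios: total output divided by each input factor.

    inputs: mapping of input-factor name to quantity used (e.g. labour hours).
    """
    return {name: total_output / qty for name, qty in inputs.items()}

# Hypothetical month: 1,000 units produced using 200 labour hours and 50 machine hours
ratios = partial_productivities(1000.0, {"labour_hours": 200.0, "machine_hours": 50.0})
print(ratios)  # → {'labour_hours': 5.0, 'machine_hours': 20.0}
```

Total productivity, by contrast, would require summing the heterogeneous inputs in the denominator, which is exactly the difficulty the text describes.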


United States Versus International Financial Statements

Dr. Gurdeep K. Chawla, National University, San Jose, CA



The Securities and Exchange Commission (SEC) requires foreign companies offering their securities in United States (US) exchange markets to restate or reconcile their financial statements according to US Generally Accepted Accounting Principles (GAAP).  The requirement is designed to provide domestic investors with standardized financial information that can be used in making appropriate investment decisions.  However, the requirement makes it more expensive for foreign companies to do business in the US.  This study will evaluate the quality of financial information provided by US GAAP and compare it with the quality of financial statements prepared according to International Accounting Standards (IAS).  A sample of standards issued by the International Accounting Standards Board (IASB) will be compared against US GAAP.  The standards selected for comparison are based upon major differences noted in Form 20-F (the reconciliation or restatement of financial statements prepared according to foreign accounting standards to US GAAP, filed by foreign companies with the SEC) and outlined by other authors and experts, such as the "Doing Business" series by PricewaterhouseCoopers.  The study will be helpful in determining whether the SEC should continue to require restatement or reconciliation of foreign statements according to US GAAP.  It will also be beneficial to the IASB in evaluating its accounting standards and the need, if any, for developing additional standards in its efforts to harmonize accounting standards across national boundaries.  A foreign company's (Gulf International Bank) financial statements, prepared according to IAS, were compared against a domestic company's (Microsoft) financial statements, prepared according to US GAAP, to note the apparent similarities between IAS and US GAAP financial statements.
In addition, foreign companies' SEC filings (Form 20-F) were reviewed for major differences that materially affected financial statements, as noted in the reconciliation or restatement of foreign financial statements according to US GAAP.  These filings are a good source for studying differences between foreign accounting standards and US GAAP.  The reports (Form 20-F) have been professionally prepared by foreign companies listing their securities in US markets and have been audited by renowned Certified Public Accountants (CPAs) with a great deal of experience and expertise in the field.  The SEC's review and acceptance adds further credibility to the information provided in the reports.  Some major publications, the "Doing Business" series by PricewaterhouseCoopers being a notable one, were also reviewed to ascertain major differences between foreign accounting standards and US GAAP.  The financial statements appear to be very similar.  However, there are a number of differences between IAS and US GAAP.  The following are the major differences which materially affect the financial statements.  Valuation of assets/properties:  US GAAP require that property, plant, and equipment be recorded in the financial statements based upon historical cost and allow a permanent decrease in value to be expensed.  IAS require the initial recording based upon actual costs but allow revaluation of assets using their fair values in future years.  Valuation of assets based upon their fair values has been debated in the US and foreign countries for a long time.  The proponents argue that fair values provide updated information which can be used for analyzing financial statements and making decisions, whereas historical values are relatively less relevant.  However, critics argue that it is difficult to determine fair values unless assets are actually sold in the market.
The valuation process, critics further argue, is the result of subjective judgments made by appraisers and does not provide the objective "hard numbers" that historical values do.  Moreover, there is room for manipulation of fair values because of these subjective judgments, whereas historical values are not open to such manipulation.  Overall, historical values appear to be more reliable because they reflect the actual costs incurred to acquire assets.  Valuation of inventories:  US GAAP require that inventories be recorded at the lower of cost or market value.  US GAAP allow use of the specific identification, average-cost, first in first out (FIFO), and last in first out (LIFO) methods to record cost of goods sold and value inventory.  IAS require valuation of inventories based upon the lower of cost or net realizable value.  In practice, market value and net realizable value are different terms for the same value (market value is what can actually be realized or received from the sale of inventories).  IAS also allow the use of the weighted average, FIFO, and LIFO methods to determine cost of goods sold and value inventory.  US GAAP and IAS are thus very similar with regard to the valuation of inventories.  Goodwill and intangible assets:  US GAAP required, under Accounting Principles Board (APB) Opinion No. 17, that the excess of acquisition price over net assets acquired (goodwill) be amortized over its useful life, not exceeding forty years.  US GAAP required similar treatment for other intangible assets such as brand names, licenses, trademarks, patents, etc.  However, Financial Accounting Standards Board (FASB) Statement No. 142 superseded the APB Opinion.  According to the Statement, goodwill and intangible assets are not amortized but tested for impairment, and appropriate adjustments are made.  IAS allow revaluation of intangible assets based upon fair values.  In addition, IAS contain a rebuttable presumption that the life of an intangible asset is limited to twenty years.
IAS allow intangible assets to be amortized over twenty years, but such amortization should be justified and duly disclosed.  IAS also require that R&D expenditures be expensed as incurred.  US GAAP and IAS are similar in that both take into consideration the fair value of goodwill and intangible assets. 
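The inventory methods both standards allow can be made concrete with a small sketch. All quantities, prices, and the market value below are hypothetical, chosen only to show how FIFO and LIFO yield different cost-of-goods-sold figures from the same purchase history, and how the lower-of-cost-or-market rule then caps the reported inventory value:

```python
def cogs_fifo(purchases, units_sold):
    """Cost of goods sold under FIFO: consume the oldest cost layers first."""
    cost, remaining = 0.0, units_sold
    for qty, price in purchases:          # purchases in chronological order
        take = min(qty, remaining)
        cost += take * price
        remaining -= take
        if remaining == 0:
            break
    return cost

def cogs_lifo(purchases, units_sold):
    """Cost of goods sold under LIFO: consume the newest cost layers first."""
    return cogs_fifo(list(reversed(purchases)), units_sold)

purchases = [(100, 10.0), (100, 12.0)]    # (units, unit cost), oldest first
print(cogs_fifo(purchases, 150))          # 100*10 + 50*12 = 1600.0
print(cogs_lifo(purchases, 150))          # 100*12 + 50*10 = 1700.0

# Lower of cost or market for the 50 units left on hand (cost basis: FIFO)
ending_cost = sum(q * p for q, p in purchases) - cogs_fifo(purchases, 150)
market_value = 50 * 9.0                   # hypothetical current market value
print(min(ending_cost, market_value))     # inventory reported at 450.0
```

In a period of rising costs, as here, LIFO charges the newer, higher costs to cost of goods sold, which is precisely why the choice of method materially affects reported income.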


Exchange Rate Crises and Firm Values: A Case Study of Mexico’s Tequila Crisis

Dr. Cathy S. Goldberg, University of San Francisco, San Francisco, CA

Dr. John M. Veitch, University of San Francisco, San Francisco, CA



Exchange rate crises that lead to large devaluations in a country’s currency generally result in significant economic disruptions for that country. One group of firms, however, is expected to benefit from this event – export-oriented firms. A significant devaluation should raise an exporter’s profits as the increase in the value of foreign currency revenues brings higher expected future profits. In an efficient market, the value of export firms should be less affected by a currency crisis than that of firms primarily focused on the domestic economy. We conduct an event study of the effects of the 1994 Tequila crisis, and the consequent peso devaluation, on a cross-section of Mexican firms. We find that export firms as a whole outperform non-export firms in the months after this crisis. In corporate finance, the value of a firm should equal the present discounted value of its future free cash flows. A firm’s Free Cash Flows (FCF) depend crucially on the interplay between its revenues and expenses. To the extent that a firm’s revenues and expenses are in different currencies, exchange rate changes will change the firm’s future FCFs in its home currency and therefore the market value of its stock.  A firm whose expenses and revenues are primarily generated within the local economy may be negatively impacted by a currency crisis that produces a significant devaluation of the domestic currency. To the extent the currency crisis results in a sustained contraction of the domestic economy, non-export firms may experience declining unit sales, revenues, and profits. Thus a currency devaluation should generally have negative impacts on the value of non-export firms in the affected economy. Note that firms that rely on imports of raw materials are likely to suffer the greatest falls in value, due to the increase in their expenses as well as a decline in revenues.  In contrast, a firm that is primarily an exporter may be positively affected by the same currency crisis. 
The firm’s expenses in local currency are not directly affected by the devaluation, but its revenues in foreign currency now translate into larger amounts of local currency. Export firms are thus likely to experience increases in their profits and future free cash flows denominated in local currency. As a result, the local currency value of an export firm’s shares may increase despite any negative effects of the crisis on the local economy as a whole. In fact, a significant currency devaluation may also increase export firms’ unit sales and revenues abroad by lowering the foreign currency cost of their products in the rest of the world.  Thus while exchange rate crises that result in large devaluations of a country’s currency can involve significant economic disruptions for that country, one group of firms is expected to benefit from this event – export-oriented firms.  This paper examines the effects of a currency crisis that produced a large exchange rate devaluation on a cross-section of export and non-export firms.  Mexico’s Tequila crisis of December 1994, in which the peso fell in value by 50% in less than a month, provides an opportunity to study how this currency crisis affected the share returns of publicly traded firms on the Mexican stock market. Mexico’s economy has suffered through two major currency crises in the past two decades.  Lipsey (1999, 2001) documents the reaction of two groups of firms – US affiliates in Mexico and Mexican firms – to the 1982 Latin American crisis and the 1994 Tequila crisis. Lipsey’s results involve the aggregate export versus domestic sales responses of each group of firms before and after each crisis.  The Latin American crisis of 1982 affected all the nations of Latin America, including Mexico. The Mexican peso fell in value from 56.40 pesos/US$ in 1982 to 120.09 pesos/US$ in 1983. 
Lipsey (2001) reports that the share of exports in total sales grew rapidly in the years after the crisis for both US affiliates in Mexico and Mexican firms. Both types of firms decisively switched their sales from domestic demand to exports in light of the peso’s large decline in value. U.S. affiliate firms increased exports and decreased domestic sales by more than Mexican firms did.  The Tequila crisis of December 1994 resulted in the collapse of the Mexican peso and spread to the financial markets of other developing countries.
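The valuation mechanism the paper describes can be sketched with a stylized perpetuity model. All figures here are invented for illustration (the exchange rates only loosely echo the peso's roughly 50% fall): an exporter with local-currency expenses and dollar revenues sees its free cash flow, and hence its discounted value, jump when the local currency depreciates:

```python
def firm_value(fcf, discount_rate, growth=0.0):
    """Firm value as a growing perpetuity of free cash flow (Gordon formula)."""
    return fcf / (discount_rate - growth)

def exporter_fcf(foreign_revenue, fx_rate, local_expenses):
    """FCF in local currency: foreign-currency revenue translated at fx_rate,
    minus expenses that are incurred locally and unaffected by the rate."""
    return foreign_revenue * fx_rate - local_expenses

# Hypothetical exporter: $100 of foreign revenue, 200 pesos of local expenses
before = exporter_fcf(100.0, 3.5, 200.0)   # pre-devaluation rate (pesos/US$)
after  = exporter_fcf(100.0, 7.0, 200.0)   # post-devaluation rate
print(firm_value(before, 0.10))            # 1500.0 pesos
print(firm_value(after, 0.10))             # 5000.0 pesos
```

Note the leverage effect: the exchange rate doubles but firm value more than triples, because expenses stay fixed in local currency while revenues reprice.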


Improving Quarterly Earnings Estimates: The Predictive Ability of the Cash Reinvestment Ratio

Dr. Wray E. Bradley, The University of Tulsa, Tulsa, OK



Quarterly earnings per share forecasts are widely used by securities analysts, investors, management, and auditors.  One of the best mathematical models for forecasting quarterly earnings per share is the Brown-Rozeff univariate ARIMA model (Brown and Rozeff, 1979).  This model has been shown to outperform other univariate models and is considered a benchmark generally identified parsimonious model (Lobo and Nair, 1990; Lorek et al., 1992).  In this study, the Cash Reinvestment Ratio (CRR) is used as a transfer function in a firm-specific bivariate ARIMA model (the enhanced model).  The enhanced model outperforms the benchmark Brown-Rozeff model in terms of Mean Absolute Percentage Error (MAPE), Mean Square Error (MSE), and Mean Absolute Error (MAE).  The implication of these findings is that the inclusion of appropriate cash flow information, serving as a proxy for economic events not yet represented in the accounting earnings stream, can result in better time-series forecasting models.  This is an important finding, since time-series models are often a preferred tool for forecasting quarterly earnings per share.  Forecasts made by management and financial analysts are often more accurate than those of mathematical models.  It has been suggested that this is because management and analysts incorporate proprietary information not available to the public into their models (Collins and Hopwood, 1980; Chatfield, Moyer and Sisneros, 1989).  However, some researchers have challenged the contention that information outside the general public purview is a significant source of firm valuation.  Rather, these researchers suggest that there is fundamental accounting information in public financial statements that is not being fully utilized (Ou and Penman, 1989; Bernard and Thomas, 1990; Lev and Thiagarajan, 1993; Olson and McCann, 1994).  
The premise of this article is that the accuracy of an ARIMA forecasting model can be improved by capturing additional unused or underutilized fundamental accounting information in a frugal manner.  It has been shown that financial analysts routinely use fundamental (publicly available) accounting information to revise their proprietary forecasts (Abarbanell and Bushee, 1995).  It follows that mathematical forecasting models can be enhanced by incorporating more information from publicly available financial statements.  The question is: what specific information?  This question is important, since an ARIMA model can easily become overspecified and redundant.  This means that any variables must be carefully considered before they are added to the model; in other words, the model builder needs to be stingy, or frugal, with additional variables.  Having said that, this study demonstrates that greater information content can be captured by a firm-specific time-series model that uses the cash reinvestment ratio (CRR) as a transfer function.  This bivariate model produces one-quarter-ahead forecasts that are more accurate than those of the premier benchmark Brown-Rozeff univariate model.  The CRR model is uncomplicated (frugal), and it uses readily available fundamental accounting information.  Many different earnings forecast models have been proposed and tested, with mixed results.  Some models have been strictly mathematical (Foster, 1977; Brown and Rozeff, 1979; Kodde and Schreuder, 1984).  Other models have been strictly econometric (Elliott and Uphoff, 1972; Chant, 1980; Eckel, 1982).  Some forecasting models have combined accounting, industry, and econometric variables (Westcott and Kumawala, 1990).  Yet other models have combined judgmental and statistical forecasts (Lobo and Nair, 1990).  To date, no specific type of model has proven superior for all time horizons.  
However, simple univariate ARIMA models using reported accounting earnings as their primary input have proven to be quite accurate for forecasting quarterly earnings per share over short time horizons.  A primary attraction of these models is their simplicity.  Three premier ARIMA models have been suggested by researchers: 1) the Watts-Griffin (WG) model, which includes seasonally differenced moving averages (Watts, 1975; Griffin, 1977); 2) the Foster (F) model, an autoregressive model with a drift term (Foster, 1977); and 3) the Brown-Rozeff (BR) model, which incorporates seasonal differences and a seasonal moving average (Brown and Rozeff, 1979).  Of the three, the BR model has been shown to be superior and is considered a benchmark generally identified parsimonious model (Collins and Hopwood, 1980).  Parsimonious models that are generally identified (such as the BR model) are based on composite results.  That is, given a specific set of firms, the model predicts quite well for some firms, indicating that it has captured the earnings flow pattern of those firms with a high degree of accuracy.  For firms whose forecasts based on a parsimonious model are accurate, little if any additional information is necessary for an accurate forecast.
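The three accuracy criteria used to compare the models are simple to state precisely. A minimal sketch, with hypothetical quarterly EPS figures (this is not the paper's code or data):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    """Mean Square Error: penalizes large misses more heavily."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mae(actual, forecast):
    """Mean Absolute Error, in the units of the series."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

eps_actual   = [1.00, 1.10, 0.90, 1.20]   # hypothetical quarterly EPS
eps_forecast = [0.95, 1.15, 1.00, 1.10]   # hypothetical one-quarter-ahead forecasts
print(round(mape(eps_actual, eps_forecast), 2))
print(round(mse(eps_actual, eps_forecast), 5))
print(round(mae(eps_actual, eps_forecast), 4))
```

Because MAPE scales each error by the actual value, it behaves badly near zero or negative earnings quarters, which is one reason studies in this literature typically report all three measures.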


The Effects of the EC Banking Directives on Greek Financial Institutions

Dr. Cathy S. Goldberg, University of San Francisco, San Francisco, CA
Dr. Themis D. Pantos, University of San Francisco, San Francisco, CA

This paper analyzes changes in systematic risk and in shareholder wealth for banks and investment firms in Greece resulting from the introduction of the European Community Banking Directives. We examine the changes in the return structure of these institutions as the Banking Directives unfold. Our results indicate that systematic risk increased because of these particular Directives for both banks and investment firms. The effect on shareholder wealth, however, was neutral for both groups.  Trading blocs played an increasingly important role in the liberalization of trade flows during the 1990s. In the realm of financial services, these liberalization efforts within trade blocs have taken the form of the harmonization of regulatory regimes across member countries. These harmonization initiatives generally entail the opening of previously closed markets in banking and investment services to increased competition. As a result, financial institutions have been forced to abandon their previous focus on domestic capital markets and turn increasingly to the regional or international arena to survive.  This paper examines the economic implications of moving from a segmented financial system to a universal one, with special reference to how the EC’s “Banking 1992” initiatives impacted the Greek banking system.  The literature on regulatory regime changes is ambiguous as to the effect on shareholder wealth and systematic risk. Competition may increase profit volatility but also increase mean expected returns as a result of cost savings. Decreased regulation may allow institutions to diversify more effectively, reducing their systematic risk, or to enter riskier areas, increasing their systematic risk. It is clear that the net effect of various regulatory reforms on Greek banks and investment firms is an empirical question.  
In this study, we address two specific questions: (i) how was the value of Greek financial institutions affected by the introduction of the European Community Banking Directives, and (ii) how did these regulatory reforms affect the systematic risk of these institutions?  Little empirical work exists on how the systematic risk of the financial system of a small open economy like Greece is impacted by regulatory changes within a larger trading bloc. The studies of Stigler (1971) and Peltzman (1976) constitute the framework for analyzing the impact on financial institutions after various regulatory changes are introduced.  Stigler (1971) develops an economic theory of regulation and explains that the economic interests of various participants are affected by the regulatory framework and the political power that each lobby group possesses.  The chief regulator sets the rules in such a way as to benefit the party with the greatest political power at the expense of everybody else in the system. Regulation essentially imposes a tax on the wealth of economic agents, and the per capita gains accrue to the party with the closest association with the regulators.  Peltzman (1976) suggests that the introduction of various regulatory reforms may affect the systematic risk of banks.  He argues that a reduction of economic regulation and movement from segmented markets to universal ones will increase the risk of equity ownership.  This is due to the increase in competition and the resultant increased variability of banking earnings.  In the last two decades, various researchers have examined changes produced by the introduction of the Glass-Steagall Act, which separated banking from the underwriting/investment business in the US.  Litan (1985) discusses how systematic risk may rise when banks diversify into riskier non-banking ventures because of the moral hazard associated with government deposit insurance.  
Joskow and MacAvoy (1975), on the other hand, suggest that the introduction of various regulatory reforms and barriers results in lower risk.  Brewer (1990) claims that regulatory reforms leading to geographical diversification also decrease systematic risk. Fraser and Kannan (1990) find that the introduction of regulatory reforms increases the risk of equity for banks. Similar results were obtained by Pettway, Tapley and Yamada (1988), who examine Japanese and American financial institutions that underwrote and managed Eurobond offerings and find that the systematic risk for these firms increased. 
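The systematic risk at issue throughout this literature is the market-model beta: the slope from regressing a firm's stock returns on market returns. A minimal sketch of the estimate (the return series below are invented, not data from any of the cited studies):

```python
def beta(stock_returns, market_returns):
    """Market-model beta: Cov(r_stock, r_market) / Var(r_market)."""
    n = len(market_returns)
    mean_s = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var_m

market = [0.01, -0.02, 0.03, 0.00]    # hypothetical market returns
stock  = [0.02, -0.03, 0.05, -0.01]   # amplifies market moves, so beta > 1
print(round(beta(stock, market), 3))
```

Event studies of regulatory change then ask whether this slope shifts between the pre- and post-Directive windows, which is exactly the comparison the paper runs for Greek banks and investment firms.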


Cointegration and the Causality Between the Real Sector and the Financial Sector of the Malaysian Economy

Dr. Mazhar M. Islam, Sultan Qaboos University, Al-Khod, Oman



This paper investigates the long-run equilibrium relationship and the causality between the financial and the real sectors of the Malaysian economy using monthly observations from March 1990 through May 2001.  The financial variables are the interest rate, inflation rate, exchange rate, and stock returns, and the real sector is proxied by industrial productivity.  Augmented Dickey-Fuller and Phillips-Perron unit root tests are applied to check for stationarity in each series.  The unit root tests show that all variables are non-stationary in levels but stationary in their first differences.  The Johansen multivariate cointegration test supports a long-run equilibrium relationship between the financial sector and the real sector.  The Granger test shows unidirectional "Granger causality" between the financial sector and the real sector of the economy.  Studies on the short- and long-run relationships between economic variables are abundant, especially with respect to developed countries.  Among others, most recently Maysami and Hui (2001) examined the short-run and long-run relationships between stock returns and the interest rate, inflation, money supply, exchange rate, and real economic activities of Japan and South Korea.  Using Hendry's (1986) general-to-specific approach to error correction modeling over the period Q1 1986 to Q4 1998, their results suggest the existence of cointegrating relationships between macroeconomic variables and the stock returns of the two countries.  However, they argued that the type and extent of the relationships differ depending on each country's macroeconomic setting.  Their finding of a positive relationship between industrial production and Korean stock returns is similar to those of Kwon et al. (1997), Kaneko and Lee (1995), and Mukherjee and Naka (1995). 
Fama (1990) argued that the stock price reflects expectations of earnings, dividends, and interest rates, and that information about future real economic activity may be reflected in the stock price before it occurs.  Moreover, stock returns affect the wealth of investors, which in turn affects the level of demand for consumption and investment goods.  Schwert's (1981) study shows that growth of industrial production is a major determinant of long-run stock returns.  Geske and Roll (1983) showed that the real interest rate effect on stock returns was significant but often small in most of the countries they studied.  The findings of Asprem (1989), Fama (1990), and Bulmash and Trivoli (1991) show that there is a negative relationship between interest rates and stock returns in Korea.  A significant positive long-run relationship between industrial production and Japanese stock returns is observed by Gjerde and Sattem (1999), Fama (1990), and Asprem (1989).  The Asian economic crisis of the late 1990s highlights the crucial role that the financial sector plays in a free economy.  Financial markets do not just oil the wheels of economic growth; they are the main drivers of the economy.  Malaysia, one of the high-performing economies in East Asia, achieved impressive growth over the past three decades.  Its strong economic performance continued during the 1990s prior to the financial turmoil of 1997-98: GDP growth averaged about 8.5 per cent a year; unemployment was below 3 per cent; prices and the exchange rate remained stable; and international reserves were robust.  From 1970 to the mid-1990s, the country’s high investment ratio facilitated a dramatic shift in the structure of the economy from agriculture and mining to a growing manufacturing sector.  The strong performance of the manufacturing sector contributed to the rapid growth of the country’s GDP. 
Given the manufacturing sector's strong linkages with other sectors in the economy, this had spillover effects that continued to support strong growth in several sub-sectors of the economy. In the mid-1990s exports declined and a large current account deficit developed in the context of a gradual appreciation of the effective exchange rate. While the investment-led growth strategy was successful in raising output and income, investment quality had deteriorated. This eventually led to weakness in the banking and corporate sectors, exposing the economy to the contagion of the Asian crisis. As market confidence increasingly diminished, large portfolio outflows took place, and equity and property values declined substantially. The ringgit came under tremendous pressure. As currency traders took speculative positions in the offshore ringgit market in anticipation of a large devaluation, offshore ringgit interest rates increased markedly relative to domestic rates. This heightened upward pressure on domestic interest rates, intensified outflows of ringgit funds, and exacerbated banks’ liquidity problems and overall financial distress. The initial response of the authorities was to increase interest rates and curb the fiscal deficit in an attempt to anchor market confidence in the financial system. Anticipation of further devaluation of the ringgit increased. By the summer of 1998, the stock market had fallen to its lowest level in recent history.  To limit currency volatility, selective capital controls were introduced, but subsequently modified. Structural reforms were implemented in the corporate and financial sectors. The real economy recorded strong performance in 2000, with real GDP growth of 8.8%. 
Moving forward, Malaysia proposes to develop instruments for managing "boom and bust" cycles more effectively, including strengthening the risk management capacity of the public sector, enhancing the early warning system to anticipate the vulnerability of the economy to contagion and shocks, and adopting flexible macroeconomic policies to absorb shocks to the economy.
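The unit-root logic behind the ADF and Phillips-Perron tests can be illustrated with a toy series rather than the paper's data: a simulated random walk is non-stationary in levels (its variance grows with the sample) but its first differences are just the underlying shocks, which are stationary. This sketch only motivates the differencing step; it does not implement the formal tests:

```python
import random

random.seed(0)
shocks = [random.gauss(0, 1) for _ in range(2000)]

# Build a random walk: y_t = y_{t-1} + e_t  (non-stationary in levels)
levels = [0.0]
for e in shocks:
    levels.append(levels[-1] + e)

# First differencing recovers the stationary shock series
diffs = [levels[t] - levels[t - 1] for t in range(1, len(levels))]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# The level series wanders far from its mean, so its sample variance
# dwarfs that of the differenced series (which stays near the shocks' 1.0)
print(variance(levels))
print(variance(diffs))
```

In the paper, this same property (I(1) levels, I(0) differences) is the precondition for the Johansen cointegration test, which asks whether some linear combination of the non-stationary levels is itself stationary.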


An Examination of Cross-Cultural Negotiation: Using Hofstede Framework

Dr. Lieh-Ching Chang, Shih-Hsin University, Taiwan



A successful cross-cultural negotiation requires an understanding of others and using that understanding to realize what each party wants from the negotiation. International negotiation experts understand the national negotiation style of those on the other side of the table, accept and respect their cultural beliefs and norms, and are conscious of personal mannerisms and how they may be viewed by the other side. In the past decade, much significant cross-cultural research has explored differences between Chinese and North Americans in both interpersonal and organizational contexts. Citizens of both countries acknowledge that they do not know much about each other, though they hold definite stereotypes of each other (Gudykunst 1993). However, the growth in international trade between Chinese and North Americans in recent years necessitates a better understanding of customs and expectations in cross-cultural negotiations. Therefore, many researchers have sought to examine and detail the similarities and differences between Chinese and North Americans.  This study explores cultural variations between Chinese and North Americans using Hofstede’s (1980, 1991) framework. Because culture is so important in the negotiation process, this paper also reviews the five cultural dimensions of Hofstede (1991) and places both societies within them.  With the globalization of product markets and the expansion of economic activities across national borders, cross-cultural differences are emerging as a significant factor in the management of organizations (Redpath and Nielsen 1997). In recent years, China has emerged as an economic power. While much of the world economy has slipped into recession, China looks like an awakening lion. China has made economic progress and has opened itself to foreign investment during the last 25 years. 
By June 1999, China had approved more than 332,700 foreign-funded enterprises in industries such as automobiles, chemicals, computers, electronics, food, beverages, retailing, banking, and insurance, with foreign investments of more than $286 billion (Che 1999). China has become a new market-driven economy in the global marketplace. Numerous U.S. and other Western companies, large and small, believe in first-mover advantage and are eager to develop China's big market. But these foreign companies may encounter problems when negotiating business ventures with their counterparts in China. U.S. business managers have complained that they have been intimidated by U.S. specialists on Chinese culture when attending workshops on how to do business with the Chinese (Zhao 2000). Many of these specialists were alert only to what might annoy the Chinese and gave little thought to ways of getting ahead of them (Pye 1982). Many people understand culture in terms of geography; however, culture is not synonymous with nations and countries. Rather, culture is the unique characteristic of a social group; the values and norms shared by its members set it apart from other social groups (Lytle, Brett, and Shapiro 1999). Culture concerns economics, politics, social structure, religion, education, and language. Ruben (1983) makes the following observation in his definition of culture: “To the extent that members of a social system share particular symbols, meanings, images, rule structures, habits, values, and information processing and transformational patterns they can be said to share a common culture” (p. 139). Hofstede (1980) reinforces this image of the group by stating that “the essence of culture is the collective programming of the mind” (p. 25). This dynamic of sharing as a central element of culture is well supported by many experts (Kroeber & Kluckhohn, 1963; Munter, 1993; Porter & Samovar, 1994; Ronen, 1986).  
The definition of culture can be broad, yet its operative words provide a robust framework for understanding differences among cultural groups in organizations and societies. People have both real and symbolic things, such as tools, weapons, other physical objects, languages, laws, music, art, material resources, technologies, and systems (Holt 1998). In the anthropological sense, culture relates to a shared system of beliefs, attitudes, possessions, attributes, customs, and values that defines group behavior. Values are defined by Hofstede (1980) as assumptions about “how things ought to be” in the group. Additionally, Hofstede (1980) indicated that culture includes values, which raises the question of what else is included. Culture is influenced by conscious beliefs (Mead 1998). A significant feature of culture is that its patterns of behavior are learned: individuals are born into a culture, and they must subsequently learn how to behave within their society (Holt 1998).  Although culture undoubtedly plays an important role in determining whether or not negotiators will employ questionable or unethical tactics, culture alone does not determine attitudes, intentions, and actions. Economic factors can also influence behavior. Survival is a basic human instinct that can motivate individuals to take actions contrary to social customs and habits (Maslow 1970; Nelson 1994). These actions can include the use of questionable or unethical behaviors that promote self-interest or personal advantage at the expense of others or the collective good in times of economic hardship (Beeman and Sharkey 1987). As Thompson (2001) noted, culture encompasses many dimensions, such as nations, occupational groups, social classes, genders, races, tribes, corporations, clubs, and social movements, and should neither be read in only one direction nor reduced to a simple explanation. 
Geert Hofstede (1991), a famous cross-cultural researcher, created a global model to help people distinguish the cultural differences among individual countries. This model is commonly called the four-dimension model of culture (Holt, 1998). The four dimensions are power distance, uncertainty avoidance, individualism-collectivism, and masculinity. Moreover, some researchers (Redpath and Nielsen, 1997) have added one more dimension to the Hofstede model, Confucian dynamism, specifically to differentiate Chinese from Western cultural values.


Aligning Training and Organizational Performance Goals Via Simulation

Dr. J. D. Selby-Lucas and Dr. William Swart, East Carolina University, Greenville, NC

Dr. Charles S. Duncan, Army Training Support Center, Ft. Eustis, VA



Simulation has been used for some time to forecast labor requirements, redesign facility layouts, and examine employee and customer traffic flow.  However, when industry and government extend the use of simulation to study the alignment of training and organizational performance goals, and the impact of implementing those goals at the frontline level of activity, organizations will realize substantial benefits.  Alignment of organizational performance goals and training has not always concerned us.  It seems easier to busy ourselves getting better and better at doing things that may no longer need to be done.  Improving a process can become all-consuming, regardless of whether anyone is really benefiting from the products.  Changing product lines or reorganizing can consume large amounts of energy and "may" prove fruitful, but one does not really know until all the data are in; then and only then can we tell whether the new line is selling and whether people know how to produce the new products with consistent quality.  Many a company touts a corporate goal or vision as an expression of its guarantee to the consumer.  Phrases including words such as quality, service, and excellence are commonly communicated guarantees from corporate advertisers.  However, the same people who approve the slogans have little way of knowing whether corporate changes announced from headquarters will be implemented in such a way as to guarantee that the company’s quality goes unaltered.  How, then, does one go about aligning the training programs of an organization with its organizational performance goals?  A primary key to organizational growth is the proper training and alignment of personnel in relation to agency goals and, ultimately, corporate goals.  At the center of any successful company are successful employees: people who know their jobs, do them willingly, and are committed to the goals of the corporation. 
However, if the goals of the organization are not aligned with programs supporting front-line employees, those employees may be frustrated or confused from time to time.  Proactive organizations can leverage available simulation tools and technology to examine the alignment of their organizational goals with individual and organizational performance.  Simulation can be applied to evaluate the effect of goals established at one level of an organization on the ability to achieve goals at each subsequent level.  Simulation can also be used to modify any misaligned goals so that satisfaction of goals at each level supports satisfaction of goals at the next level.  This presents, for the first time, an approach that allows organizations to better predict whether appropriate training is likely to lead to expected organizational performance.  A key element in aligning individual and team performance with organizational performance is the establishment of individual and team time standards for accomplishing each given task.  Knowing how long it will take to perform a task helps predict how much labor will be required to meet service standards, as well as the resulting labor costs.  These time standards must be included in training programs so that each employee or team can follow the established procedures and accomplish them within a given time.  The incorporation of time standards facilitates linking organizational goals to divisional goals, those goals to each unit’s goals, and those in turn to team and individual employee task time standards.  This alignment helps corporations achieve predicted organizational performance.  For example, the Labor Management System (LMS) presented by Heuter and Swart (1998) utilized a set of three integrated models developed to help schedule the labor required for Taco Bell.  The first was a forecasting model designed to project the number of customers that could be expected at the store at any time of day. 
The second was a simulation model developed to determine the minimum number of employees needed in the store, and their assignments, to provide the desired levels of service. The third, an optimization model, scheduled employee shifts (Godward and Swart, 1994).  The LMS model is depicted in Figure 1.  The modeling and simulation community anticipates that the future of the discipline will lie in the ability to evaluate organizations at each level, allow for tweaking, and thereby create a tool for management to examine decisions pre- and post-implementation.  For example, the development of the Labor Management System was driven at the organizational level to meet the organization's objective of scheduling labor more efficiently and effectively.  These models did not consider how the changes would impact the frontline worker, thus leaving out the effects at the process and performer levels.
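A stylized fragment of the forecast-to-staffing chain an LMS embodies can make the time-standard idea concrete. Every number below is invented (the real system used far richer forecasting and simulation models): a per-transaction time standard converts an hourly customer forecast into a minimum staffing level for each hour:

```python
import math

SERVICE_MINUTES_PER_CUSTOMER = 3.0   # hypothetical time standard per transaction
UTILIZATION_TARGET = 0.85            # keep staff below full load for service quality

def min_staff(customers_per_hour):
    """Minimum employees needed in one hour to cover the forecast workload."""
    workload_minutes = customers_per_hour * SERVICE_MINUTES_PER_CUSTOMER
    capacity_minutes = 60.0 * UTILIZATION_TARGET   # productive minutes per employee
    return max(1, math.ceil(workload_minutes / capacity_minutes))

hourly_forecast = {11: 40, 12: 110, 13: 90, 14: 35}   # lunch-peak demand forecast
schedule = {hour: min_staff(c) for hour, c in hourly_forecast.items()}
print(schedule)   # {11: 3, 12: 7, 13: 6, 14: 3}
```

The point of the paper is that the time standard at the heart of this calculation is also a training artifact: if frontline training does not actually produce employees who hit the standard, the staffing levels derived from it will miss the organizational service goal.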


Databases, Data Mining and Beyond

Dr. Shamsul Chowdhury, Roosevelt University, Schaumburg, IL



The paper presents the use of data mining techniques on databases and explores the possibilities of using CBR methodology to represent the knowledge mined (gained) from those databases for developing useful decision support systems. A database on injuries was used as a front end to develop a case-based reasoning (CBR) application for decision support. The preliminary results obtained and the experiences with data mining on databases/data warehouses are satisfactory with respect to knowledge extraction and to case representation, retrieval, refinement, reuse, and retention in CBR. These emerging technologies (data warehousing, data mining, and CBR) can be combined to implement application-oriented information systems in different service sectors, for example in health care, for improved, efficient, and cost-effective services.  Database applications can be found in many areas, such as business, medicine, and other scientific and socio-economic fields. The information content of an operational database can broadly be classified into two main classes (Sundgren, 1981): operative information, which is absolutely necessary for a certain operation to be performed in the object system; and directive information, which is not absolutely necessary but can be extremely valuable in improving the quality of an operation. Data, an asset captured in different operational databases (for example, clinical databases, demographic databases, etc.) over time, can further be extracted, transported, and integrated into data warehouses (DWs)/data marts for building decision support facilities/systems. Data marts are subsets of DWs (Sperley, 1999).  The pool of data in a data warehouse is usually explored with data mining tools to extract relevant information/knowledge that would otherwise remain hidden and could never be taken into account in decision-making processes.  
Irrespective of the information context, the purpose of a database can be to derive or infer facts about the universe of discourse in general and thereby improve the quality of decision-making processes. Databases can, over time, become a resource for building a knowledge base for decision support.  Different types of statistical packages are usually employed for the analysis and interpretation of data stored in databases. Analysis and interpretation serve to transform data into information and provide us with an understanding of a reality, hence increasing our knowledge about that reality.  The process is not simple, and it grows in complexity at each step. The process of gaining information/knowledge from data can be explained with the help of the following infological equation (Langefors, 1987): I = i(D, S, t), where I is the information/knowledge gained from the data; D is the data used; S is the previous knowledge available and utilized for interpretation; i( ) is the interpretation process; and t is the time available for interpretation.  From the equation it can be seen that both the data itself and how the data is analyzed and interpreted determine the understanding gained and the increase in our knowledge. The data context utilizes some general knowledge of the application area to transform or structure the data into some kind of model (statistical or analytical). The domain context, in addition to the above, requires deeper and more specific knowledge of the application area in order to interpret the results appropriately and draw conclusions from the statistical analysis. Otherwise, the results produced by applying statistical analysis or any other method of analysis to a data set can only be interpreted as a methodological artifact, a consequence of the method applied. To provide a substantive interpretation, i.e. 
to represent the actual phenomena underlying an analysis, deep knowledge of the domain is necessary. One important aspect of research is to make substantive interpretations more probable than methodological interpretations (Chowdhury, 1990).  The objective of the paper is to explore the possibility that data in databases/data warehouses can be mined with appropriate tools for knowledge extraction. The extracted knowledge can further be represented in CBR applications for future retrieval, refinement, reuse, and retention as an ongoing process for decision-making purposes.  Data mining is a subset of knowledge discovery in databases/data warehouses/data marts. Knowledge discovery in databases (KDD) is the overall process of preparing data, selecting data mining (also known as knowledge extraction) methods, extracting patterns (models), and evaluating the extracted decision knowledge. KDD is the non-trivial extraction of implicit, previously unknown, and potentially useful knowledge from data (Fayyad, Piatetsky-Shapiro, Smyth, and Uthurusamy, 1996). The identified knowledge is used to make predictions or classifications about new data, explain existing data, and summarize the contents of a large database to facilitate decision making.  KDD is a multi-disciplinary field of research and encompasses machine learning, statistics, database technology, expert systems, and data visualization (Figure 1).  Traditional statistical methods as well as AI-based knowledge-based methods can be employed for performing database analyses or data mining. 
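The retrieve-and-reuse steps of the CBR cycle described above can be sketched with simple weighted nearest-neighbor matching. The case base, attributes, and weights below are invented for illustration and do not reflect the actual injury database or system described in the paper.

```python
def similarity(stored, query, weights):
    """Weighted share of attributes on which the two cases agree (0.0 to 1.0)."""
    total = sum(weights.values())
    matched = sum(w for attr, w in weights.items()
                  if stored.get(attr) == query.get(attr))
    return matched / total

def retrieve(case_base, query, weights):
    """Retrieve the stored case most similar to the query case."""
    return max(case_base, key=lambda case: similarity(case, query, weights))

# A toy injury case base (invented records)
case_base = [
    {"age_group": "child", "activity": "cycling", "injury": "arm fracture"},
    {"age_group": "adult", "activity": "work", "injury": "back strain"},
]
weights = {"age_group": 1.0, "activity": 2.0}  # assumed attribute importance

query = {"age_group": "child", "activity": "cycling"}
best = retrieve(case_base, query, weights)
# best["injury"] is the solution proposed for reuse; after revision it
# would be retained in the case base, closing the CBR cycle.
```

In a full application the retrieved solution would be refined against the new situation and then retained, so the case base grows as decisions are made.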


Macro-Finance: Application of Financial Economic Theory for Implementing Macroeconomic Policy

Dr. Ronald W. Spahr, University of Illinois at Springfield, Springfield, IL

Dr. Mohammad Ashraf, The University of North Carolina at Pembroke, Pembroke, NC

Dr. Nancy Scannell, University of Illinois at Springfield, Springfield, IL

Dr. Yuri I. Korobov, Saratov State Socio-Economics University, Saratov, Russia



We define macrofinance as the application of traditional financial economic theory to the macro-economy, postulating that macroeconomic activity results from the aggregate effects of all domestic private and public saving, investment, net international trade, and consumption decisions. We suggest that a single economic policy objective should be the maximization of the composite wealth of the country's stakeholders, the country's total population. This national welfare objective is analogous to the financial economic objective of maximizing shareholders' wealth in the case of a single firm. Maximizing owners' wealth for a single firm involves discounting the future cash flows (usually dividends) accruing to the firm's shareholders. For a nation's economic welfare, a parallel concept may be operationalized by maximizing the present value of a country's long-run, sustainable, real standard of living, i.e., maximizing discounted future cash flows associated with the consumption component of GDP. We apply the macrofinance methodology to identify characteristics of macroeconomic policy that may be less transparent given current objectives of economic policy. The multiple objectives of traditional monetary policy were articulated in the United States Employment Act of 1946 and have since become the cornerstone of macroeconomic policy for many countries.  The objectives of traditional monetary policy have been interpreted to include shorter-term stabilization of price levels, control of employment and growth levels, stabilization of money and capital markets, and balancing of trade.  These objectives, and their implementation by controlling money supplies, have been adopted by most central banks. However, because of multiple and sometimes conflicting objectives and a short-term emphasis, monetary policy is difficult to delineate for various economic conditions. 
The thesis of this paper is that the traditional financial economic paradigm for valuation and financial decision-making within the individual firm (corporate finance theory) may, with modifications, be applied to determine policy objectives for a nation's macro-economy. "Macrofinance" is defined as the application of financial economic theory and practice to the macro-economy, assuming that economic activity results from the aggregated effects of private and public decisions regarding all domestic economic savings, investment, net international trade, and consumption.  Analogous to the single-firm objective of maximizing stock prices, macrofinance proposes that the primary national economic policy objective should be maximizing the composite wealth of a country's stakeholders. We further interpret maximizing the composite wealth of a country's stakeholders as the maximization of a population's long-run sustainable standard of living.  Differences between standards of living resulting from differences in economic systems and levels of capital formation may be illustrated by comparing the standards of living in Russia and the United States.  It is apparent from general observations and from economic statistics such as Gross Domestic Product (GDP) per capita that the standard of living in the Russian Federation is lower than in the United States (U.S.) and other developed economies. Cursory explanations for living-standard discrepancies between emerging and more developed economies often rely on the relative development stages of their banking and capital market systems and their legal and social infrastructures.  Developed economies are characterized by developed banking systems, efficient capital and financial markets, and a rule of law with strong individual property rights. The legal system must be reliable and consistent, always enforcing legitimate contracts.  
The legal system must also enforce due diligence and provide oversight for regulations prohibiting fraud and manipulation of capital and financial markets.  However, the mere existence of efficient capital markets, a healthy banking system, and a stable political and legal system does not necessarily result in the highest possible standard of living.  Other more basic economic concepts and the formulation of an efficient national economic policy may improve both the level of the standard of living and the speed at which underdeveloped economies can approach the living standards of developed economies. As an example, a basic economic construct facilitating higher standards of living is that developed economies possess higher degrees of cumulative capital investment. Capital investment allows for the application of modern technology, which facilitates higher labor productivity and reasonably explains the higher standards of living in developed countries. 
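The parallel between discounting a firm's dividends and discounting a nation's consumption stream can be made concrete with a small sketch. The growth rate, discount rate, and horizon below are illustrative assumptions of this sketch, not estimates from the paper.

```python
def present_value(stream, discount_rate):
    """Present value of a stream of future per-period consumption."""
    return sum(c / (1 + discount_rate) ** t
               for t, c in enumerate(stream, start=1))

base = 100.0    # current consumption per capita, in index units (assumed)
growth = 0.02   # assumed long-run sustainable real growth rate
rate = 0.05     # assumed real discount rate
horizon = 50    # years considered

# Consumption grows at the sustainable rate; national welfare is
# operationalized as the present value of that stream, by analogy
# with discounting a firm's dividends to value its equity.
stream = [base * (1 + growth) ** t for t in range(1, horizon + 1)]
welfare = present_value(stream, rate)
```

Under this objective, a policy is preferred when it raises the discounted consumption stream, for example by raising the sustainable growth rate through capital formation.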


Business Intelligence: Empirical Study on the top 50 Finnish Companies

Dr. Mika Hannula, Tampere University of Technology, Finland

Virpi Pirttimäki, Tampere University of Technology, Finland



Comprehensive and timely information and knowledge are crucial in generating new products and improving business operations. Business Intelligence (BI) plays a central role in producing up-to-date information for operative and strategic decision-making. This study was carried out to find out what BI represented for large Finnish companies in the year 2002, and it is the first comprehensive study of its kind in Finland.  Telephone surveying was used as the primary research method: individuals responsible for BI activities in the top 50 Finnish companies were interviewed by telephone, with the questionnaire sent to the interviewees before the actual interview. The response rate reached 92 percent, forming a sound basis for the study.  The objective of this study is to find out how common BI activities are and how BI is currently applied in large Finnish companies. The study examines the initiation and organization of BI activities as well as their future prospects in the companies interviewed. The research also examines the key areas of improvement in BI activities, the benefits gained from BI, and the future outlook for the field. The companies researched in the study are categorized into three groups by industry: manufacturing; trade and services; and information and communication technology (ICT). In recent years, Business Intelligence activities have increased significantly in Finnish companies. It is obvious that the effective use of the concepts and processes of Business Intelligence is necessitated by the global business environment in which Finnish companies operate. Business Intelligence and related Management Information Systems play a central role in producing up-to-date information for operative and strategic decision-making.   
There are currently several firms offering BI consultancy or BI systems development in Finland. Business Intelligence terms and practices in companies have not yet become well established, however, and several different terms are employed for the concept. Most firms think of BI activities as a process focusing on monitoring the competitive environment around them. Current literature on BI has proved to be fairly sketchy and theoretical. There is no generally agreed conception of BI; rather, each author has promoted his or her own idea of its connotations. In this study, the Business Intelligence (BI) concept is defined as the organized and systematic processes by which companies acquire, analyze, and disseminate information significant to their business activities. With the help of BI, companies learn to anticipate the actions of their customers and competitors as well as the various phenomena and trends of their market areas and fields of activity. Companies then use the information and knowledge generated to support their operative and strategic decision-making. In planning their strategy, companies need to consider the pressures and challenges posed by the business environment in order to thrive in the global digital economy. A rapidly changing business environment brings about a growing need for timely, first-rate business information and knowledge. In addition, the amount of information available is increasing along with advances in information and communication technologies, and it may be very difficult to sift what is relevant from such an overload of information. 
Yet a competitive edge is gained only through the ability to anticipate information, turn it into knowledge, craft it into intelligence relevant to the business environment, and actually utilize the knowledge gained. Lahtela et al. (1998, p. 4) define competitiveness as a company's ability to operate successfully in changing circumstances. Up-to-date knowledge is thus the basis for competitiveness in an ever-changing business environment.  The management and decision-making of a company are, after all, most often a collective effort involving several people with distinct expertise from different organizational levels.


New Directions for Human Resources in 2002 and Beyond

Dr. Sandra Casey Buford, Lesley University, Cambridge, MA

Dr. Maria Mackavey, Lesley University, Cambridge, MA



This paper seeks to understand the current HR environment and proposes some ways to work effectively within it. We examine the HR role against Ulrich's model, which defines the roles of successful HR professionals as Strategic Business Partner, Administrative Expert, Employee Champion, and Change Agent.  Ulrich and others (Gubman (1998), Kanter, Drucker, Hammer, and Becker) have made a compelling case that the competencies defined by each of these four roles contribute to successful HR practice.  The Master of Science in Management Program in Human Resources Management at Lesley University has articulated a curriculum designed to develop competencies in each of these four areas of HR practice.  It is our contention that the post-September 11th environment, marked by uncertainty about the future, the failure of corporate boards to exert ethical oversight and the ensuing CEO scandals, together with the downturn of the economy, is challenging HR's role as employee champion, business partner, and change agent.  Instead, in an increasingly reactive mode, HR professionals find themselves focusing on the fire-fighting and administrative aspects of their role: firing, hiring, and acting as legal custodians of their organizations.  Our paper briefly describes the major changes the Human Resources profession has undergone since 1970, when it was known as "Personnel."   The role of Personnel professionals was primarily reactive in nature; managers used them to get hiring, firing, payroll, and benefits accomplished. These roles were traditionally defined as HR Generalist, Employee Relations Specialist, Compensation Analyst, and Recruiter.  The paper outlines the progress made in the 1980s and 1990s, when thought leaders positioned HR professionals as business partners and value-added members of the organization.  
The corollary changes in the workplace wrought by technological advancements, mergers and acquisitions, globalization, restructuring, downsizing, and a record number of IPOs with subsequent increased employee stock distribution facilitated the transformation of HR's role from a bureaucratic one into a more strategic one.  Many HR professionals had the opportunity to become contributors to bottom-line profits through cost savings and the execution of growth strategies.  Buzzwords of the time included "employee advocate" and "employer of choice."  This paper is the first stage of a more in-depth study of the effects of recent political and economic events on organizations, and more specifically on HR's role, in a wide variety of organizations across the country.  For this paper, we sent a five-page questionnaire via e-mail and post to HR professionals holding middle- to senior-level positions in a wide variety of businesses. The returned questionnaires were then sifted for trends, as well as for responses that stood out in some way.  We also looked at differences in the responses between mid-level and senior-level professionals.  These findings will constitute the basis for the next stage of our study, which is not included in this paper. 


Do Malaysian Investors Overreact?

Dr. Ming-Ming Lai, Multimedia University, Malaysia

Dr. Balachandher Krishnan Guru, Multimedia University, Malaysia

Dr. Fauzias Mat Nor, University Kebangsaan Malaysia, Malaysia



This paper provides a comprehensive examination of investors' long-run overreaction by integrating firm size, time-varying risk, and sources of profits on the monthly returns of all stocks listed on the Malaysian stock market over the period from January 1987 to December 1999. The results indicate evidence in favor of long-run overreaction in models both with and without a control for firm size, and tend to support a one- to two-year contrarian strategy of buying loser stocks and selling winner stocks. The integrated results indicate that the contrarian profits gained are mainly due to the overreaction factor rather than to the firm size effect or time-varying risk. Notwithstanding the evidence that overreaction is not a manifestation of the small-firm effect, the overreaction of the loser portfolios was more apparent in the smaller firms than in the larger firms after controlling for firm size. The experience of investing has always been dynamic and uncertain. In theory, financial models assume that investors are rational. Rational investors have access to all information and use all the available information in making their financial decisions; they maximize their expected utility and behave rationally. In reality, Bensman (1997) argued that truly rational people are not only rare but may not exist at all. Investors, who are merely normal human beings with sentiment, regret, and fear, tend to look for familiar patterns when predicting future stock prices in a volatile financial market. Investors think that a trend will continue simply because such trends have appeared in recent financial markets, as studied by Tvede (1999). One could question the degree of representativeness of past price patterns and the durations over which these patterns continue to exist. It thus makes one wonder whether investors often tend to overreact. 
Hence, this paper is a comprehensive study of investors' overreaction that integrates the associated issues of firm size, time-varying risk, and sources of profits. The findings contribute empirical evidence of overreaction behavior in the Malaysian context. Section 2 presents a review of past empirical research in this area. Section 3 discusses the data and the methodology employed in this study.  The results are discussed in Section 4, and the conclusion is presented in Section 5. The discovery of the overreaction hypothesis in 1985, also known as the winner-loser anomaly, which involved the application of the representativeness heuristic of Tversky and Kahneman (1974), was mainly attributed to the work of De Bondt and Thaler (1985). In stock investment, investors tend to overreact to information in the stock market by overweighting the most recent information and underweighting earlier information. In subsequent periods, the prices of winner (loser) stocks are corrected down (up) to their fundamental values as investors realize that they have overreacted to recent information. This overreaction provides profitable opportunities to portfolio managers and investors through the contrarian strategy of buying past loser stocks and selling past winner stocks to gain above-average returns. The results of De Bondt and Thaler (1985) indicated that the loser portfolios tended to outperform the past winner portfolios after thirty-six months of portfolio formation. Interestingly, the loser portfolios gained returns 25% higher than the winner portfolios. This provided evidence of an overreaction phenomenon in the stock market in which price-reversal patterns were found. De Bondt and Thaler (1987) documented further support for the investors' overreaction hypothesis by examining firm size and risk.  
However, Chan (1998) and Ball and Kothari (1989) argued that the overreaction effect found by De Bondt and Thaler (1985) would disappear if time-varying risk were properly controlled for. They argued that the abnormal returns earned by investors merely compensated for the additional risk undertaken in adopting the contrarian strategy.  In addition, the well-known small-firm effect has been identified as one possible reason for the existence of contrarian profits. However, the literature surveyed indicates mixed results. Zarowin (1990) and Clare and Thomas (1995) argued that the reversal patterns of loser and winner portfolios were actually manifestations of the firm-size effect. In contrast, Chopra, Lakonishok, and Ritter (1992) controlled for firm size and still found evidence in favor of the overreaction hypothesis.  Contrarian profits and their sources have been of considerable interest to portfolio managers and investors. Lo and MacKinlay (1990) decomposed the contrarian profits in the US stock market into three main sources, using weekly returns on individual stocks traded on the New York Stock Exchange (NYSE) and American Stock Exchange (AMEX) for the period from July 6, 1962 to December 31, 1987. The results indicated that less than 50 percent of the contrarian profits could be attributed to overreaction; the majority were found to be due to cross effects among the securities examined. In sharp contrast, Jegadeesh and Titman (1995), who examined 260 weekly returns on all firms traded on the NYSE and AMEX from 1963 to 1990, showed that most of the contrarian profits were attributable to investor overreaction rather than to the lead-lag effect found by Lo and MacKinlay (1990).
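The winner-loser portfolio construction behind the contrarian strategy can be sketched as follows. The ticker names and returns are invented and the ranking fraction is an assumption of this sketch; an actual study in the spirit of De Bondt and Thaler (1985) would use monthly returns over multi-year formation and holding periods.

```python
def form_portfolios(formation_returns, fraction=0.3):
    """Rank stocks on formation-period returns; return (losers, winners)."""
    ranked = sorted(formation_returns, key=formation_returns.get)
    n = max(1, int(len(ranked) * fraction))
    return ranked[:n], ranked[-n:]

def contrarian_profit(losers, winners, holding_returns):
    """Equal-weighted return of buying losers and selling winners."""
    long_leg = sum(holding_returns[s] for s in losers) / len(losers)
    short_leg = sum(holding_returns[s] for s in winners) / len(winners)
    return long_leg - short_leg

# Invented formation-period and holding-period returns for five stocks
formation = {"A": -0.40, "B": -0.10, "C": 0.05, "D": 0.30, "E": 0.60}
holding = {"A": 0.20, "B": 0.08, "C": 0.02, "D": -0.05, "E": -0.12}

losers, winners = form_portfolios(formation)
profit = contrarian_profit(losers, winners, holding)  # positive if losers rebound
```

A positive spread here is consistent with the price-reversal pattern; the empirical debate above concerns whether such a spread survives controls for firm size and time-varying risk.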


Competitive Strategies, Strategic Alliances, and Performance in International High-Tech Industries: A Cross-Cultural Study

Dr. Minoo Tehrani, Western New England College, Springfield, MA



This research empirically investigates the type of association between the utilization of distinct patterns of competition and superior performance across sixteen segments of high-tech industries in the U.S. and the European Union.  In addition, the link between strategic alliances and performance differentials is explored.  The results provide insight into similarities and differences in gaining competitive advantage for high-tech firms in a cross-cultural setting.  Identification of the factors that give rise to competitive advantage has been a central theme of strategic management research.  Among the salient factors contributing to the superior performance of firms, two constructs, competitive strategies and strategic alliances, have been the focus of several studies.  The utilization of distinct patterns of competition along differentiation, low cost, and focus strategies, and its positive association with firm performance, has empirical support in the strategic management literature (e.g., Baum, Locke, & Smith, 2001; Dess & Davis, 1984; Dowling & McGee, 1994; Porter, 1980, 1985).  However, the findings of research on the relationship between engagement in alliances and superior performance are contradictory.  Several studies report general support for a positive and significant association between strategic alliances and performance (e.g., Astley & Fombrun, 1983; Bresser & Harl, 1986; Dunford, 1987; Powell, Koput, & Smith-Doerr, 1996).  Other researchers report the existence of mediating and moderating factors governing this relationship (e.g., McGee, Dowling, & Megginson, 1995; Doz, 1996; Steensma & Corley, 2000).  There are also studies that have found negative or no association between engagement in strategic alliances and firm performance (e.g., Shrader, 2001).  
The primary objectives of this article are threefold: first, to find out whether the utilization of distinct patterns of competition has a similar impact on firm performance across different industries and geographical borders; second, to explore whether there is a direct positive association between engagement in strategic alliances and performance; third, to highlight similarities and differences regarding these linkages among high-tech firms in the U.S. and the European Union (EU), providing insight into successful competition in international high-tech industries.  The large cross-cultural sample, drawn from sixteen segments of high-tech industries and used to test for the presence and type of association between competitive strategies, strategic alliances, and performance, enhances the predictive power and generalizability of the findings of this research. An organization's competitive strategy and its relation to performance have been a focus of attention in the strategic management literature for the past two decades.  According to research (e.g., Baum, Locke, & Smith, 2001; Dess & Davis, 1984; Dowling & McGee, 1994; Kim & Lim, 1988; Porter, 1980, 1985), firms that engage in distinct patterns of competition (e.g., differentiation, low cost leadership, focus) outperform those that do not employ any distinct form of competitive strategy.  The distinct patterns of competition have been identified along several strategic dimensions.  Product differentiation is characterized by strategies to create brand-name recognition through either perceived or real quality.  Marketing differentiation strategies are based on the creation of a recognized image through unique, diversified marketing and advertising campaigns.  Low cost leadership is represented by cost reduction strategies across the activity cost chain of the organization.  
Focus strategies refer to a product differentiation or low cost competitive position based on a limited scope of product line, geographic market, and/or consumer market.  These types of competitive orientation and their positive impact on firm performance have found theoretical and empirical support in different studies (e.g., Baum, Locke, & Smith, 2001; Dess & Davis, 1984; Kim & Lim, 1988; McGee, et al., 1995; Porter, 1980).   Dess and Davis (1984) argued that the underlying dimensions of Porter's (1980) generic strategies (product differentiation, low cost, and focus) were related to industry performance and would provide a broad and comprehensive base for identifying competitive strategies.  They proposed 21 variables (e.g., new product development, customer services, competitive pricing, advertising) to test for distinct patterns of competition.  According to the results of their study, firms with distinct patterns of competition along the hypothesized dimensions outperformed other firms. Kim and Lim (1988) also adopted the key strategic dimensions of Porter's (1980) generic strategies, cost leadership, focus, and differentiation, and selected fifteen related variables (e.g., new product development, product differentiation, operating efficiency, advertising, image building).  The results of their study of the electronics industry in Korea indicated three distinct patterns of competition: product differentiation, low cost, and marketing differentiation strategies.  They also found a positive association between engagement in these strategies and superior performance.  McGee et al. (1995) studied the performance of new high-tech ventures and the choice of competitive strategies using Porter's (1980) strategic typology with some modifications.  The selected competitive strategies were characterized as technical differentiation, marketing differentiation, and low cost production.  Baum et al. 
(2001) empirically tested Porter's (1980) competitive strategies, focus, low cost, and differentiation, and their direct effects on the performance of firms in the architectural woodworking industry.  They defined the competitive position of differentiation as the creation of high-quality products, services, and marketing innovations.  The low cost companies were defined as firms following cost-cutting, efficiency, and price reduction strategies. 


An Evaluation of Consumer and Business Segmentation Approaches

Dr. Turan Senguder, Nova Southeastern University, Ft. Lauderdale, FL



The word "market" refers to a specific location where products are bought and sold.  Markets can be divided into several segments, the most common being consumer and business markets.  Consumer markets include individuals and households who intend to consume or benefit from the purchased products.  Business markets consist of individuals, groups, or organizations that purchase specific kinds of products in order to use them to produce other products, to resell them, or to facilitate the organization's operations.  Marketers use two general approaches to identify their target markets: the total market approach and the market segmentation approach.  The total market approach: an organization sometimes defines the total market for particular products as its target market.  When a company designs a single marketing mix and directs it at an entire market for a particular product, it is using a total market approach.  The total market approach can be effective under two conditions.  First, a large proportion of customers in the total market must have similar needs for the product; for example, one size of shoe would have to fit everyone.  Second, the organization must be able to develop and maintain a single mix that satisfies customers' needs.  When a company takes the total market approach, it sometimes employs a product differentiation strategy.  Product differentiation is a strategy by which a firm aims one type of product at the total market and attempts to establish in customers' minds the superiority and preferability of this product relative to competing brands.  When product differentiation is used, the firm does not actually alter the physical characteristics of the product relative to competing products; mainly through promotion, the firm attempts to differentiate its products from competitors' products in consumers' minds. Not everyone wants the same type of car, house, furniture, or clothes.  
If we were to ask fifty people what type of home each would like to have, we would probably receive fifty different answers, many of them quite distinctive.  Markets made up of individuals with diverse product needs are called heterogeneous markets.  In such markets, a marketer should use the market segmentation approach.  Segmentation is the process of dividing a market into segments of customers with similar purchase behavior.  Segmentation has emerged as a primary market planning tool and the foundation for effective overall strategy formulation in a variety of companies throughout the U.S.  The objective of segmentation research is to analyze markets, find a niche, and develop and capitalize on a superior competitive position.  This can be accomplished by selecting one or more groups of consumers or users as targets for marketing activity and developing a unique marketing program to reach those market segments.  Examples include New Coke, Coca-Cola Classic, Cherry Coke, Diet Coke, caffeine-free Coke, caffeine-free Diet Coke, and so on.  There are many ways of segmenting markets, and many of these approaches are derived from the consumer behavior field.  Consumer decision making is an objective yet emotional process in which various factors influence the purchase decision.  Motivations and needs, perceptions, demographics, product awareness levels, and purchasing habits are all components of an individual's total lifestyle.  Suppose that a major oil company wants to segment the gasoline market.  There are many alternative methods for doing so.  A geographic sales analysis of its dealers might be conducted.  Demographic and socioeconomic measures (age, sex, income, etc.) could be studied.  Product consumption (regular vs. unleaded vs. premium vs. diesel grades) could be evaluated.  Additionally, customers' credit card utilization, loyalty, and price sensitivity are among the other bases that can be used in segmenting this market.
Markets can be segmented in a variety of ways; there is no single best method.  A segmentation base is a dimension for segmenting a market, and in most cases several bases will be considered simultaneously to provide the best possible customer profile.  Some markets are composed of people or businesses who have very similar needs, preferences, and desires.  These homogeneous markets are easy for marketers to serve: a single marketing mix can be developed to satisfy everyone in the market.  Most markets, however, are heterogeneous.  The consumers in those markets differ from one another in buying characteristics, in preferences, and in the ways they respond to any given marketing mix.  Meeting the needs of a heterogeneous market is a considerable challenge to marketers.  One common way this challenge is met is through market segmentation.  The goal of market segmentation is to identify market segments composed of people or businesses with similar characteristics and therefore similar needs.  Both physical and behavioral bases can be used to explore and exploit market niches.
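The mechanics of segmenting by one or more bases, as described above, can be illustrated with a short, hypothetical sketch: customers are partitioned into segments keyed by the chosen bases (here, an invented income band and gasoline grade; all data and field names are assumptions made for the example, not drawn from any actual study).

```python
# A minimal sketch of segmentation by multiple bases.
# The customer records and the bases ("income", "grade") are invented.
from collections import defaultdict

customers = [
    {"name": "A", "income": "high", "grade": "premium"},
    {"name": "B", "income": "low",  "grade": "regular"},
    {"name": "C", "income": "high", "grade": "premium"},
    {"name": "D", "income": "mid",  "grade": "unleaded"},
    {"name": "E", "income": "low",  "grade": "regular"},
]

def segment(customers, *bases):
    """Partition customers into segments keyed by the chosen bases."""
    segments = defaultdict(list)
    for c in customers:
        key = tuple(c[b] for b in bases)   # one key per combination of bases
        segments[key].append(c["name"])
    return dict(segments)

# Each key identifies one segment; a marketer would then develop a
# distinct marketing mix for the segments worth targeting.
by_income_and_grade = segment(customers, "income", "grade")
for key, members in by_income_and_grade.items():
    print(key, members)
```

Adding or removing bases in the call to `segment` corresponds to the point made above that several bases are usually considered simultaneously to sharpen the customer profile.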

