The Journal of American Academy of Business, Cambridge

Vol. 7 * Num. 1 * September 2005

The Library of Congress, Washington, DC   *   ISSN: 1540-7780



All submissions are subject to a double blind peer review process.





The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various fields, worldwide, to publish their papers in one source. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. All submissions are subject to a double-blind peer review process. The Journal of American Academy of Business, Cambridge is a refereed academic journal which publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal will meet the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide our authors with venues recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread before submission.

The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the journal's e-mail address. Manuscripts and other materials of an editorial nature should also be directed to the journal's e-mail address. Address advertising inquiries to the Advertising Manager.

Copyright 2000-2017. All Rights Reserved

Earnings Predictability: Do Analysts Make Coverage Choices Based on Ease of Forecasts?

Dr. Bruce Branson, NC State University, Raleigh, NC

Dr. Donald Pagach, NC State University, Raleigh, NC



This paper investigates determinants of security analyst following.  It builds on prior research in this area by investigating two important earnings-related variables that should be of interest to the analyst community—earnings persistence and earnings predictability.  We find that earnings persistence is significantly positively associated with the level of security analyst coverage while the predictability of earnings is found to be negatively associated with analyst following.  We also include control variables for firm size (market capitalization) and include a group of firms that have escaped research attention in prior studies—those that have an analyst following of zero.  The use of the tobit model allows for inclusion of these firms. Bhushan (1989), O’Brien and Bhushan (1990) and Brennan and Hughes (1991) have identified specific firm characteristics that they find to be associated with either the level of security analyst following or year-to-year changes in analyst following or both.  This paper extends this research by examining additional earnings-related firm-specific variables that are found to be significantly associated with aggregate analyst coverage decisions.  Drawing upon findings in the so-called “earnings response coefficient” literature, we argue that analysts have an incentive to identify and follow those firms whose time-series properties reveal historically persistent earnings innovations and less predictable earnings patterns.  Controls for firm size are included based on prior research. Security analysts provide a variety of services to a broad base of clients, both individual and institutional investors.  Specifically, the "sell-side" analysts investigated in this study are employed by brokerage/investment firms to provide recommendations to clients pertaining to the acquisition or liquidation of ownership positions in the companies they cover.  
Prior research has provided evidence that such recommendations conveyed to the market by these analysts have information content (Stickel, 1991).  In this paper we investigate whether firm-specific factors are associated with the extent of security analyst coverage.  This study extends research by Bhushan (1989), O'Brien and Bhushan (1990) and Brennan and Hughes (1991) that provide evidence linking levels of and changes in analyst coverage to factors such as firm size, institutional ownership, market-adjusted returns, return variability and industry affiliation.  The extension relies heavily on 1) the "earnings response coefficient" literature that describes the price-informativeness of earnings in terms of earnings persistence and earnings predictability proxies (Kormendi and Lipe,  1987; Easton and Zmijewski, 1989; Collins and Kothari, 1989; and Lipe, 1990) and 2) the time-series literature that derives measures of persistence and predictability independent of the price-setting process. In this research, we develop a model expressing the extent of security analyst coverage of a given firm as a function of two earnings-related variables: persistence and predictability.  These concepts, while related, are distinct.  Persistence involves the degree to which an earnings innovation is permanently impounded in future income.  Predictability, on the other hand, involves the variability of the earnings stream.  Analyst following may be viewed as one proxy for the information environment facing a firm.  The degree to which a company is closely scrutinized by the investment community is directly related to the amount of earnings "surprise" that can occur at an earnings announcement date.  The persistence of the earnings series provides information concerning the value of private search activity and thus, should be a focus of analyst attention.  
Likewise, earnings predictability has been shown to be associated with differential price response and it too should be of interest.  Documenting a direct link between these earnings variables and analyst coverage will increase our understanding of this important pathway by which accounting information affects firm value.  Schipper (1991) remarks that: Given their importance as intermediaries who receive and process financial information for investors, it makes sense to view analysts--sophisticated users--as representative of the group to whom financial reporting is and should be addressed.  Under this perspective, accountants have a policy-based stake in understanding how analysts actually use financial information. Several design issues are addressed in this research.  For example, samples in prior work include only those firms for which at least one security analyst provides research reports, buy/hold/sell recommendations and/or earnings forecasts.  By systematically excluding noncovered firms, information is lost that may potentially improve the efficiency of the coefficient estimates.  In other words, if characteristics of covered firms have explanatory power, it seems unreasonable to treat characteristics of noncovered firms as uninformative.  This study includes noncovered firms and features an appropriately specified statistical estimation procedure--the tobit model. The censored regression model developed by Tobin (1958) will accommodate the cluster of zero-observations of the dependent variable--analyst coverage.  Ordinary least squares regression applied to this sample data leads to biased estimates of the model parameters.  Prior studies include several "independent" variables such as institutional ownership, price appreciation and return variability to explain variation in analyst following.  
Strong arguments can be made that these variables are not exogenous to the analyst coverage decision; that is, the degree of analyst participation in the dissemination of information concerning a firm is likely to have an impact upon such market-derived variables as current returns and return variability.  Exogenous proxies for some of these variables are developed in this study based on the earnings response coefficient and time-series literatures.  In prior work, proxies for persistence and predictability have been developed from the time-series properties of the annual earnings series.  This study develops these proxies from quarterly earnings realizations to avoid the well-documented problem of structural change associated with time-series work on annual data (Watts and Leftwich, 1977). This research endeavors to establish an association between analyst coverage and certain firm-specific variables.  Specifically, the number of analysts providing research attention to a given firm is expressed as a function of four firm characteristics as follows:
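The censored-regression (tobit) estimation described above can be sketched in a few lines. The data below are synthetic and the regressors are stand-ins for the paper's actual proxies (firm size, earnings persistence); this is an illustrative maximum-likelihood sketch, not the authors' estimation code.

```python
import numpy as np
from scipy import optimize, stats

# Synthetic data: analyst following is a latent linear index,
# left-censored at zero (firms with no coverage report 0).
rng = np.random.default_rng(0)
n = 500
size = rng.normal(0.0, 1.0, n)      # stand-in for log market capitalization
persist = rng.normal(0.0, 1.0, n)   # stand-in for an earnings-persistence proxy
latent = 1.0 + 0.8 * size + 0.5 * persist + rng.normal(0.0, 1.0, n)
y = np.maximum(latent, 0.0)         # observed coverage, censored at zero

X = np.column_stack([np.ones(n), size, persist])

def neg_loglik(params):
    """Tobit log-likelihood: normal density for uncensored observations,
    normal CDF mass at the censoring point for observations at zero."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)       # parameterize so sigma > 0
    xb = X @ beta
    ll = np.where(
        y > 0,
        stats.norm.logpdf(y, loc=xb, scale=sigma),
        stats.norm.logcdf(-xb / sigma),
    )
    return -ll.sum()

res = optimize.minimize(neg_loglik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
beta_hat = res.x[:-1]               # recovers roughly (1.0, 0.8, 0.5)
```

An ordinary least squares fit on the same censored sample would bias the slope estimates toward zero, which is the motivation the paper gives for the tobit specification.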


Nations’ Socio-Economic Potentials: Development, Growth, Public Policies, and Welfare

Dr. Ioannis N. Kallianiotis, University of Scranton, Scranton, PA



In this mostly general-equilibrium and philosophical work, the paper tries to point out some of the existing problems in our world today. It considers the social and economic potentials of our governments and our people, together with their conflicting objectives, and suggests a few long-term remedies that would contribute to nations’ development and growth and to persons’ well-being. The countries’ growth objective, their long-term development and growth, and society’s welfare and individuals’ utility functions are emphasized, subject to the endowments of factors, technology, tastes, risk, and moral, ethical, and just social constraints. At the end, considerable discussion is devoted to governments, public policies, control and regulation, and the value system, and a few suggestions on new humanitarian, social, political, economic, financial, and philosophical frontiers are given. The problems of our world today are not strictly economic, as many try to present them, but are mostly social, political, philosophical, moral, and ethical ones. The magnitude of these problems is so enormous and their solution so hard because we do not depend on our leaders to solve them dynamically, but on some “invisible powers” (we do not even have a name for them) who ignore humans, nations, values, virtues, justice, and the purpose of our existence; independently of these social constraints (values), they try with all their inhumane means to maximize some “social values” and minimize some “social costs”.
Of course, together with the above problems follow some narrower economic problems: low development and growth (even lower net economic welfare); inequality in the distribution of earnings, income, and wealth; high uncertainty and risk; low confidence and expectations among individuals; high unemployment; high inflation; low money market rates for small investors (a negative real risk-free rate of interest); high borrowing and credit card rates (an unfairly high risk premium); high liquidity (huge growth of the money supply and of money creation in the banking industry); high government taxes and even higher spending; low savings and low capital inflows, due to the low money market rates and the devalued dollar; huge imports and low exports; very high oil prices; too many refugees and illegal immigrants; and corruption everywhere. Of course, many factors have affected countries’ development and growth and, consequently, the financial markets and social welfare. Some of them are: new inventions, which have created enormous profits and exaggerated hopes (radio, talking pictures, and passenger aircraft in the 1920s; lately, computers and the internet, etc.). Then, wars: in particular, the reconstruction of destroyed cities, infrastructure, plants, etc. increases production, employment, and profits and expands markets, as do the hopes following a peace agreement. Also, periods of prolonged prosperity, when the future seems bright, the past and history have been forgotten, investors are overoptimistic, easy access to money and credit exists, and everyone benefits and hopes that the economy will continue to boom and accounting wealth will accumulate, until the first new market crash and its consequent recession come back, and bankruptcy lawyers and psychologists start thriving again. Furthermore, the innovative internet stock market boom of the last decade (high tech, e-commerce, etc.)
was led by speculators and inexperienced investors to the artificial growth of the 1990s and to the bust at the end of that decade (2000-2002). In 1990, we experienced the end of the Cold War, but unfortunately, there have been many Hot Wars since that year. Some people became overoptimistic that peace on earth would last forever (some still expect this earthly peace, but it is a utopia for our current civilization). Also, stock markets around the world became very popular, owing to huge privatizations, private pensions, the high return on stocks, and the very low return on bank savings (disintermediation). People were investing in them, and a new “stock market subculture” had been created. The information era, with all these media, publications, and the internet, and the encouragement from politicians and experts, led even the simple layperson to get on board this “information superhighway” without the necessary experience. Erroneous business reporting, misleading accounting techniques, and the intentionally false research of investment bankers contributed to this irrational bubble of the 1990s, too. Certainly, the mood of the U.S. (and of the world) changed sharply (January 2000) without an explanation. Real fixed investment in Information Processing Equipment and Software (IPES) was very high in 2000 ($75.2 billion), but real spending on the purchase of hardware and software in preparation for the century date change (Y2K) had declined to $6.8 billion. For the year 2001, real investment in IPES became negative (-$34.8 billion) and real Y2K spending very small ($0.8 billion), which shows their pro-cyclical trend. Also, dozens of day-trading firms closed down, millions of investors lost their savings (some even committed suicide), and millions of workers became unemployed, while online firms such as E*Trade and Charles Schwab suffered big falls in turnover.
Many lined up for unemployment compensation, and others gave up their dreams of early retirement. In Washington, instead of increasing government spending on public investment and reducing taxes to create jobs and help the economy recover, they started tremendous spending on war, which is having a huge and unpredictable negative impact on the domestic and global economy and on stability, while anti-Americanism increases around the globe. These prospective budget deficits are lowering the expected level of future national savings and putting upward pressure on expected short-term interest rates and, consequently, on long-term interest rates, which can dampen investment and lead to lower real GDP and higher unemployment in the future. Also, peace is in trouble and uncertainty is growing. Is capitalism really in trouble? How are we going to bear this overall cost? The price level of our goods and services shows us that we have become very poor and are becoming poorer every day. This paper contains six sections. Section 1 introduces the problems our economy has had lately and discusses their effect on our development and growth. Section 2 gives nations’ growth objective. Section 3 analyzes the long-term development and growth of our economy. Section 4 develops society’s welfare and individuals’ utility functions. Section 5 offers some philosophical ideas on the new economic potentials and social frontiers. Lastly, section 6 concludes.


Decolonization and International Trade: The Cote d’Ivoire Case

Dr. Albert J. Milhomme, Texas State University– San Marcos, TX



Many countries, former colonies of colonial powers, have acceded in the past century to their political independence. What about their economic independence? A measure of this independence could be reflected in the evolution of their international trade, exports and imports. This study is centered on the evolution of the international trade of Cote d’Ivoire, a former colony of France. For roughly half a century, many countries, former colonies of colonial powers such as Great Britain or France, have been politically independent. A measure of their economic independence could lie in the present-day pattern of their international trade, exports as well as imports. This study, centered on Cote d’Ivoire (known unofficially as Ivory Coast to some English-speaking people), might shed some light on the rate of evolution and the achievement, or non-achievement, of this economic independence. In 1960, as a colony of France, Cote d’Ivoire took 65% of its imports from France and sent 67% of its exports to France. France then held a dominant position, the result of a century of effort to create and protect trade. Cote d’Ivoire was a main customer of France in terms of imports and a main supplier in terms of exports. Has France kept a dominant position in Cote d’Ivoire today, 43 years after independence? This is the type of question some people have definitely answered with “yes”. French companies are still very active in many formerly colonized countries and do a majority of their “international business” in their old colonies. The reasons are basically to be found in the cultural ties and traditions established during colonial rule.
The colonial language used for business and daily life, the educational system of the country, the financial connections with the outside world, the newspapers read, and the numerous expatriates staying in the country after independence are all acculturation factors which contribute to a paradoxical degree of dependence upon the previous colonizers on the part of many newly independent countries. Other people have different feelings. Because of historical events preceding independence, they believe that many formerly colonized countries would spurn companies from the colonial powers: if dependence may have existed for a short while, it did not last, a former colonizer very quickly losing its historically acquired economic advantages. France, a former colonial power, and Cote d’Ivoire, a former colony, have been selected as an interesting pair of trade partners, Cote d’Ivoire’s independence having been realized as a surprising political decision by France in mid-1960, leaving no apparent reason for resentment on the part of the colony’s people. One cannot deny some changes in their relationship. There has been change, and the Cote d’Ivoire of today is no longer the Cote d’Ivoire of 1960! This paper intends to quantify the evolution of this relationship, looking at the evolution of the international trade between a former colonial power and a former colony as a means to measure whether the weaning period is over and adulthood has arrived. Study of the evolution of Cote d’Ivoire’s international trade with its former master, with individual industrial countries, with the industrial countries as a whole, with the African continent, and with the world over the past 43 years might provide some interesting information on the decolonization process and the economic independence achieved, if any. Independence will be indicated by a more diversified portfolio of customers (exports).
A crisis in one client country will then not have a large impact on the economy of the supplier. This is the basic concept of the geographical concentration index. Cote d’Ivoire is a sub-Saharan country located on the Gulf of Guinea, a tropical country of 322,460 square kilometers with a population of 17,000,000 inhabitants. France made its initial contact with Cote d’Ivoire in 1637, when missionaries landed at Assinie near the Gold Coast (now Ghana) border. In 1843, Admiral Bouet-Willaumez signed treaties with the kings of the Grand Bassam and Assinie regions, placing their territories under a French protectorate. Cote d’Ivoire officially became a French colony in 1893. From 1904 to 1958, Cote d’Ivoire was a constituent unit of the Federation of French West Africa. French policy in West Africa was reflected mainly in its philosophy of “association”, meaning that all Africans in Cote d’Ivoire were officially French “subjects”. In 1946, after the Second World War, French citizenship was granted to all African “subjects”, with the right to political organization. The 1956 Overseas Reform Act transferred a number of powers from Paris to elected territorial governments in West Africa. In 1958, Cote d’Ivoire became an autonomous republic within the French community, and it became an independent republic on August 7, 1960. Cote d’Ivoire has kept close political ties with France since independence, but what about its economic independence? The study of the exchanges between Cote d’Ivoire and its former master, the former master’s foe (Great Britain), the African continent, and the world might give some measurement of the independence achieved, if any. The evolution of the value of exports from Cote d’Ivoire from 1952 to 2001 (Chart 1) and from independence in 1960 to 2001 (Chart 2) is shown below.
1952 was retained as the initial date of the study for two reasons: to measure the evolution over a half-century period, and to check whether any trend was already perceivable some years before independence. We plot the amount of exports from Cote d’Ivoire in current dollars and the adjusted value of exports in constant dollars. From the two charts it appears that Cote d’Ivoire has been exporting more and more products over the 50-year span (Chart 1) as well as over the 42-year span (Chart 2). Total exports have increased at a rate of 46.5 million constant dollars per year (dollar value base 1996) since 1960, leading to a total increase in constant dollars of more than 200% since 1952, with, however, a sharp fall during the first years following independence, a period of adaptation. The evolution of the value of imports by Cote d’Ivoire from 1952 to 2001 (Chart 3) and from independence in 1960 to 2001 (Chart 4) is shown below. We plot the amount of imports by Cote d’Ivoire in current dollars and the adjusted value of imports in constant dollars. From the two charts it appears that the constant value of total imports has increased more moderately, at a rate of 35 million constant dollars per year (dollar value base 1996) since independence, after a sharp fall during the first years following independence.
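The geographical concentration index invoked above can take several forms; a common one is the Hirschman-style square root of the sum of squared partner shares. The paper does not spell out its exact formula, so this sketch, with made-up export figures, is only illustrative.

```python
import math

def concentration_index(partner_exports):
    """Hirschman-style geographical concentration index: the square
    root of the sum of squared partner shares.  Equals 1.0 when all
    exports go to a single partner and falls toward 1/sqrt(n) as
    exports spread evenly over n partners."""
    total = sum(partner_exports)
    shares = [v / total for v in partner_exports]
    return math.sqrt(sum(s * s for s in shares))

# Made-up figures: two-thirds of exports to one partner,
# echoing the 1960 France-dominated pattern described above.
concentrated = concentration_index([67.0, 11.0, 11.0, 11.0])

# Made-up diversified pattern: four equal partners.
diversified = concentration_index([25.0, 25.0, 25.0, 25.0])  # = 0.5
```

A falling index over 1960-2001 would indicate the more diversified customer portfolio that the study treats as the mark of economic independence.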


How Corporate Sport Sponsorship Impacts Consumer Behavior

Dr. Kevin Mason, Arkansas Tech University, Russellville, AR



Corporate sport sponsorship is one of the many tools marketers have at their disposal to reach consumers and influence them to buy their products, and yet it is one of the least discussed forms of marketing communications in the marketing literature. A key to effective sponsorship is understanding how consumer attitudes are formed and changed. The purpose of this conceptual piece is to examine the relationship between sponsorship and attitudes. Attitudes are comprised of enduring cognitive (beliefs), affective (evaluative emotional attachments), and behavioral tendencies toward an object. As such, attitudes have a strong impact on consumer behavior. Attitudes can be changed by altering one or more of the three components. Sponsorship seems to affect the affective component of an attitude by creating a positive association between the consumer’s sport team and the company’s product. However, sponsorship can also affect the cognitive component by altering brand beliefs and perceptions. It should be noted, though, that leveraging activities are helpful when dealing with cognitive changes. Regardless, the ultimate goal of corporate sponsorship is to change the entire attitude, resulting in positive behaviors (e.g., shopping and purchases). Marketers strive to make positive connections with consumers via numerous “tools” such as advertising, public relations, promotional tie-ins, and sponsorship. At present, corporate sport sponsorship is becoming a very prominent marketing vehicle. Sponsorship occurs when a corporation funds a program (e.g., television or radio) or event whereby the sponsoring corporation has promotional material included in the program or event. Originally, advertising for radio and TV programs occurred in the form of corporate sponsorship (Harvey, 2001). Over the years, corporate sponsorship has grown to become a huge promotional tool.
For example, in the United Kingdom, sponsorship expenditures increased from 4 million dollars in 1970 to 107.5 million dollars in 1997. Likewise, sponsorship expenditures in the United States increased from 850 million dollars in 1985 to 8.7 billion dollars in 2000. In 1994, 4,500 companies spent around 4.2 billion dollars on sponsorship rights in North America, and 67 percent of the rights purchased were sport related (McDaniel, 1999). Anheuser-Busch and Phillip Morris are among the more active companies involved with corporate sponsorship, each spending in excess of 135 million dollars on sponsorship in 1998. In particular, corporate sporting event sponsorship has become increasingly popular. For example, Coca-Cola spent at least 650 million dollars on the Atlanta Olympic Games. MasterCard spent around 100 million dollars on the World Cup. North American corporations in 1999 invested 7.6 billion dollars in sponsorship, with 67 percent of the money going to sports (Meenaghan, 2001; Madrigal, 2000). Corporate sport sponsorship is becoming increasingly attractive in the United States and Europe because of the value these cultures place upon entertainment, competition, and accomplishment (McCook, 2004). As a result, the sports industry is worth roughly 320 billion dollars in the United States. In addition, Westerners are highly motivated to achieve healthy lives. For example, forty percent of Americans participate in an athletic activity at least once a week (Douvis, 2004). Problems facing marketers include how to assess the effects of sport sponsorship on consumer behaviors and how to determine its business value (Harvey, 2001; Meenaghan, 2001). In the past, researchers have relied upon theories from various social science disciplines (Douvis, 2004) to explain possible effects of sponsorship on consumer behavior. However, the research findings are anecdotal, not empirical.
For example, in California, a Federal Bank offered team-themed checking accounts as part of sponsoring the NHL’s San Jose Sharks (Madrigal, 2000). The bank reported 2,000 new checking accounts, 4 million dollars in deposits, and a 300 percent return on its investment. It is unclear how much of the increase in new accounts can be attributed to the promotional sponsorship. In short, while it appears sport sponsorship might be effective, the mechanics by which it works need to be better understood to determine its value as a marketing tool and to enhance its effectiveness. It is clear that sponsorship has some impact on consumer behavior, but how much, and why, is not well understood. The purpose of this conceptual research is to explore the effects of corporate sport sponsorship on consumer behavior. Specifically, this paper explores how corporate sport sponsorship impacts consumer behavior via its effect on their attitudes. To understand how sponsorship affects a consumer’s attitude, it is first necessary to understand what an attitude is and how it functions. An attitude may be defined as an idea charged with emotion which predisposes a class of actions to a particular class of social situations (Triandis, 1971). An attitude can also be described as an enduring evaluative disposition toward an object or class of objects (Chisman, 1976). All attitudes include affective, cognitive, and behavioral components. According to Chisman (1976), the cognitive component is merely the knowledge, belief, or idea one has about the object of the attitude (e.g., beliefs about a given brand). Triandis (1971) describes the affective component as the emotional attachment one has toward the object of the attitude (e.g., the degree to which one likes or dislikes a given brand). The behavioral component refers to how one reacts toward the object (Triandis, 1971). For example, does the person purchase the brand?
While the attitude components are consistent with each other, they have separate measures (Madrigal, 2000). When people form attitudes, stimuli are generalized and many different objects are placed into the same category of associations in their minds (Triandis, 1971). Once a category is formed through cognition, it can be associated with a pleasant or unpleasant affective state (Lardinoit and Derbaix, 2001). When assigning an attitude, a prediction is being made from previous observations of how a person acts at certain times toward an object. Attitudes are not perfect in this, since it is possible for people to have beliefs that are inconsistent with their feelings, but people will usually “select” consistent beliefs (Chisman, 1976). For instance, if a person changes their attitude toward one related thing, others will fall in line. Generally, attitudes are consistent if a person’s beliefs and actions toward an object reflect their feelings about it in some way, which leads to attitudes being assigned according to the affective component.


An Artificial Intelligence Stock Classification Model

Dr. Probir Roy, University of Missouri, Kansas City, MO



Using MATLAB's Perceptron model, this paper presents an attempt to train a neural network to distinguish between acceptable and unacceptable purchases of publicly traded stock. In the past, Perceptron models have been used quite successfully in similar classification exercises. The input vectors used in training the network and in making the classifications in our model involve readily available financial data such as the current ratio, quick ratio, gross margin as a percentage, sales/asset turnover, and earnings per share. The initial results of our analysis were quite encouraging insofar as the model had ninety percent prediction accuracy on held-back test data. On the basis of this initial success, we are currently trying to extend the model to a "forward-looking" investment decision process model. Neural networks are constructed from a large number of simple processing units (analogous to neurons in the brain) that are interconnected at various levels. The behavior of these networks emerges, iteratively, using parallel processing. In some more complicated networks this development of “intelligence” may involve massive parallel processing. A single-input neuron is shown in Figure 1. The scalar input p is multiplied by the scalar weight w to form wp. This term is sent to the “summer”, Σ, where it is coupled with a “bias” input b. The output from the summer, n, is referred to as the net input into a transfer or activation function f, which produces a scalar output a. Construction of a neural network involves the following tasks (LiMin Fu, 1994): determine network properties by defining the topology; determine node properties; and determine system dynamics. Network Properties: The topology of a neural network refers to its framework as well as its interconnection scheme. The framework is specified by the number of layers and the number of nodes per layer. Typically, neural networks consist of two to three layers.
All neural networks must have an input and an output layer. Some have an intermediate or hidden layer. The input layer consists of nodes called input units. These units encode simple or basic attribute values, for example attributes such as the P/E ratio and asset turnover of the various publicly traded stocks in our study. The output layer consists of nodes called output units that encode basic output values. For example, in our study the output units were encoded 0-1 to represent unacceptable vs. acceptable purchases. The hidden layer contains nodes called hidden units. These units are neither directly observable nor can they be described in meaningful behavioral terms. The Perceptron model used in this study was a two-layer neural network and thus did not use a hidden layer. Hidden layers are extremely useful in modeling complex nonlinearities. Network properties also address the nature of the interconnection scheme. Such schemes are delineated on the basis of whether the interconnections are feedforward or recurrent, and also on the basis of whether the connections are symmetrical or asymmetrical. In feedforward networks, all the connections point in one direction, from the input layer to the output layer. If all the input units are connected individually to each and every output unit, the network is called a fully feedforward network. Recurrent networks, unlike feedforward networks, contain feedback connections or loops between units in different layers. A multilayer feedforward neural net is shown in Figure 2. Symmetrical connections are those in which, if there is a connection from node i to node j, there is a corresponding connection from node j to node i, and the weights associated with the two connections are equal. Connections that are not symmetrical as defined above are considered asymmetrical. 
Connections between nodes may be further classified as to whether they are interlayer connections (between nodes in different layers) or intralayer connections (between nodes in the same layer). Node properties pertain to whether a node is activated or disabled. The node activation level can be either discrete (0, 1) or continuous (either across a range such as 0-1 or unrestricted). Activation characteristics depend on a transfer function
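To make the single-neuron model and the two-layer Perceptron concrete, the following sketch trains a perceptron with a hard-limit transfer function, a = hardlim(wp + b), using plain NumPy rather than MATLAB's toolbox. The financial-ratio features and 0/1 labels below are invented for illustration and are not the paper's dataset.

```python
import numpy as np

def hardlim(n):
    # Hard-limit transfer function: output 1 if net input n >= 0, else 0
    return (np.asarray(n) >= 0).astype(int)

def train_perceptron(X, y, epochs=100):
    # Perceptron learning rule: on an error e = t - a,
    # update w <- w + e * p and b <- b + e
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for p, t in zip(X, y):
            a = hardlim(np.dot(w, p) + b)
            e = t - a
            if e != 0:
                w += e * p
                b += e
                mistakes += 1
        if mistakes == 0:   # converged: every training point classified
            break
    return w, b

# Hypothetical feature rows: [current ratio, earnings per share]
X = np.array([[2.0, 3.1], [1.8, 2.5], [0.5, -0.2], [0.7, 0.1]])
y = np.array([1, 1, 0, 0])   # 1 = acceptable purchase, 0 = unacceptable

w, b = train_perceptron(X, y)
preds = hardlim(X @ w + b)   # on this separable toy data, reproduces y
```

Because the data are linearly separable, the learning rule converges in a few epochs; with real financial ratios a hidden layer (which the Perceptron lacks) would be needed for nonlinear class boundaries.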


Managing Stakeholders Interests in Turbulent Times: A Study of Corporate Governance of Canadian Firms

Dr. Peter A. Stanwick, Auburn University, Auburn, AL

Dr. Sarah D. Stanwick, Auburn University, Auburn, AL



The focus of this paper is to examine whether Canadian firms are following strong corporate governance programs during these turbulent times. A sample of 32 firms was taken from the largest publicly traded firms in Canada. The corporate governance disclosures of these firms were compared with the 14 guidelines required by the Toronto Stock Exchange. The results showed that the vast majority of firms followed the 14 guidelines. However, some industry groupings had higher compliance rates than others. In addition, fewer firms, both overall and within each industry, went beyond the required standard of corporate governance compliance. Due to the turbulent nature of the global economic marketplace, the role of corporate governance has changed significantly over the past two decades. Although originally established as a legal requirement for incorporation, corporate governance has become a valuable connection between firms and various stakeholders (Vinten, 1998). Corporate governance is required to guarantee that the interests of both public and private sector organizations that have a vested interest in the firm are satisfied. Corporate governance helps enhance the confidence level of all relevant stakeholders, including stockholders, customers, suppliers, employees, and the government. The major focal point for corporate governance has been and will continue to be the Board of Directors. If a company has implemented a strong corporate governance framework, the firm is able to enhance its competitive advantage. In addition, it allows the firm to formulate and implement more effective strategic decisions based on accurate and objective corporate information. The execution of a comprehensive corporate governance framework also gives shareholders a higher level of confidence about their investment decisions. 
A comprehensive corporate governance framework can also be a useful management tool for supervising the overall check-and-balance system used to evaluate the operations of the firm. The responsibilities and duties of the Board of Directors are directly linked to the long-term survival of the firm. The Board is responsible for ensuring that its decisions not only enhance the overall value of the firm but also properly serve the needs of the interested stakeholders. Previous research on the effectiveness of the Board of Directors has yielded mixed results. Some studies have shown Boards to be ineffective, regarding them as mere "rubber stamps" for the self-interests of the firm's managers (Vance, 1983; Wolfson, 1984). However, more recent studies, including Stanwick and Stanwick (2002), have shown a direct positive relationship between a strong Board of Directors and the financial performance of the firm. The responsibilities of the Board of Directors have evolved over time into three major categories: (1) legal responsibilities, (2) resource dependence responsibilities, and (3) agency theory responsibilities. The original duty of the Board of Directors is to be legally bound to represent the vested interests of the stockholders. This fiduciary duty of the Board ensures that stockholders' interests are represented within the company (Molz, 1988; Bainbridge, 1993; Cieri, Sullivan & Lenox, 1994). Examples of the legal responsibilities of the Board include the evaluation of the financial performance of the firm and the selection and evaluation of the Chief Executive Officer and other Board members. The responsibility of the Board of Directors to guarantee that the relevant interests of the stockholders are protected is based on the agency theory concept (Baysinger and Butler, 1985; Kosnik, 1987; Eisenhardt, 1989; Gilson and Kraakman, 1991). 
Under the constructs of agency theory, the Board of Directors is considered an agent of the interests of the stockholders, who have invested financial capital in the firm. Therefore, monitoring and control systems need to be adopted by the firm and the Board of Directors to ensure that the interests of the stockholders are protected (Alchian and Demsetz, 1972; Fama and Jensen, 1983). The members of the board, and the board collectively, are thus responsible for guarding the interests of the stockholders and stockholders' wealth (Mizruchi, 1983). The third major area of responsibility is the Board's duty to effectively control and allocate the firm's resources, which is the basis of resource dependence theory (Pfeffer, 1973; Pfeffer and Salancik, 1978). Resource dependence theory is based on the belief that one of the benefits the Board can provide to the firm is the set of formal and informal relationships the Board members have with other organizations (Zald, 1967; Pfeffer, 1972; Provan, 1980). By acquiring not only capital resources but also knowledge-based resources, the members of the board can play a role in reducing the level of environmental turbulence which the firm must constantly address (Thompson, 1967; Pfeffer, 1972; Burt, 1983). By extending their boundary-spanning capabilities to external organizations, Board members are able to help stabilize the flow of resources into the firm (Dooley, 1969; Pennings, 1980). Furthermore, embedded within resource dependence theory is the ability of the Board members to aid the firm by presenting objective and "alternative" viewpoints when the firm considers its overall strategic decisions (Westphal and Fredrickson, 2001). Based on these three critical responsibilities, the Board of Directors can enhance the overall value of the firm while serving the needs of all the relevant stakeholders.


The Self-Fulfilling Prophesy of the Tenure/Promotion Policies at Business Colleges and Schools

Dr. Nessim Hanna, Roosevelt University, Schaumburg, Illinois

Dr. Ralph Haug, Roosevelt University, Schaumburg, Illinois

Dr. Alan Krabbenhoft, Roosevelt University, Schaumburg, Illinois



This paper is an empirical study of tenure and promotion policies at institutions of higher education that offer postsecondary and graduate degrees in business. The sample was drawn from business professors attending the Midwest Business Administration Association (MBAA) conference in Chicago in 2003. The findings suggest that the presence or absence of faculty research support systems at business schools and colleges is the cornerstone in determining a school's prominence. The research also suggests that the degree of satisfaction or dissatisfaction that a faculty member has towards the promotion/tenure policy at his/her institution, as well as towards the institution itself, is highly correlated with the extent of the research support systems present at a school. This paper investigates the role of the tenure/promotion philosophies maintained by administrators in institutions of higher education, and the long-term impact of such philosophies on a school's reputation as well as on the satisfaction of its faculty members. This research was undertaken with two objectives in mind. The first was to empirically demonstrate that the presence or absence of faculty research support systems that administrators may implement at an institution of higher education is the cornerstone in determining a school's prominence. Such administrative actions, or the lack of them, create one of two scenarios: either a team of satisfied faculty members who put forward a respectable publication record, leading the school to be perceived as a "research" institution, or, conversely, a group of dissatisfied faculty members who feel helpless and blame the system for their lack of progress. In this latter case, the scarcity of published work results mainly in a "teaching" school status.  
These outcomes seem to suggest that the principle of the self-fulfilling prophecy is at work, where a school's distinction and reputation are based on its administrative philosophy regarding the extent of faculty support provided. The second objective of this paper was to investigate how dissatisfaction, as an emotion, can affect faculty members who happen to work under minimal or no support conditions. The expected negative emotions will impact not only a school's academic environment, but often translate into a number of negative actions initiated by dissatisfied faculty. The criteria for tenure/promotion at most schools of higher education center around an evaluation of faculty members on the basis of performance in three key areas of activity: teaching effectiveness; scholarly activity; and service to the university, their academic discipline, or the broader community. Of special importance in the evaluation of a faculty member for tenure/promotion is his/her research and publication record. In judging such research, emphasis is heavily placed on quality as well as quantity of publications. A faculty member seeking tenure/promotion is required to demonstrate continued growth as an established scholar, evidenced by the development of a significant program of research and scholarship. The pressure for publishing comes from administrators who attempt to raise the status of their institutions through professional recognition, usually attained through research and publishing. Therefore, research has now become a key factor for attaining tenure/promotion at many institutions that previously focused mainly on teaching and service. Now, faculty must publish early and publish frequently.  
While such publication requirements vary somewhat from one school to another, these set standards present a challenge to faculty members, many of whom are young with excellent teaching and service records but somewhat lack the expertise and confidence needed for undertaking the many research and writing challenges. In view of these rising demands placed on faculty to publish, administrators at various institutions of higher education normally take one of two stands. The first group feels that it is necessary to assist faculty in their drive towards tenure/promotion goals by establishing various faculty research support systems. This group of schools adopts a number of strategies, among them release time or leaves for research, mentors to help faculty publish, access to substantial libraries, research assistants, research funds, summer research grants, sending faculty to academic conferences, and encouraging and rewarding various forms of research activity. The second group of school administrators, due either to a lack of funds at their institutions or to a view that it is a faculty member's sole responsibility to pursue research, provides minimal or no support for faculty research efforts. The depressed job market in some disciplines has made it even easier for this latter type of school to demand more publishing and provide less support. This situation has resulted in two types of schools: the first, where research help and support exist, with a group of satisfied faculty members who excel in writing as well as in career advancement; and the second, where a group of dissatisfied faculty feels the pressure to publish but receives no helping hand towards accomplishing their academic goals. 
Since the purpose of this paper was to investigate the relationship between faculty productivity and schools’ research support policies, a number of hypotheses were developed to test the antecedents of faculty productivity against the presence of faculty support systems. 


The Trade-Off Between R&D and Marketing Spending for High-Technology Companies

Dr. Kenneth Ko, Pepperdine University, CA



Significant work has been done to show the importance of R&D spending to sales. In this paper, I further this discussion by focusing on how the trade-off spending decision between R&D and marketing should be made. I focus on high-technology companies, where R&D is of critical importance. I introduce a simple mathematical model which analyzes the impact that R&D and marketing spending have on sales. I present the strategy that, relatively speaking, a company should spend more than its competitors on R&D (as opposed to marketing). The model and the effectiveness of the strategy are demonstrated and verified through three case studies involving six high-technology companies: (1) Intel and AMD, (2) Cisco Systems and Nortel Networks, and (3) Xilinx and Altera. R&D managers can use the mathematical model, strategy, and case studies to show the need for, and thus motivate, increased R&D spending within their companies. Every year, companies need to make important budgeting decisions that will affect future sales, and thus the future success of the company. These decisions are never easy to make and always involve trade-offs, because a dollar spent somewhere is a dollar not spent somewhere else. For high-technology companies, which are highly dependent on R&D, perhaps the key budgeting trade-off is between R&D and marketing. Of course, another key question is how much a company should allocate in total to R&D and marketing. In this paper, I will examine the trade-off spending decision between R&D and marketing. I focus on high-technology companies where R&D is of critical importance. Significant research has been done that demonstrates the link between R&D and sales. Morbey (1) conducted a study showing that, in general, successful companies have a higher R&D intensity than their competitors.  
(Bean and Russo (2) define R&D intensity as "the ratio of R&D expenditures to sales revenue over the same period expressed as a percent.") Brenner and Rushton (3) showed a positive correlation between R&D spending and sales growth. Gilman and Miller (4) performed a research study which showed a positive correlation between R&D spending and a firm's sales, and also between R&D spending and a firm's price/earnings (P/E) ratio (which reflects the opinions of analysts on the future prospects of companies). Branch (5) discusses the importance of R&D in increasing total profits. Leonard (6) wrote about how R&D spending relates significantly to the growth rates of sales, assets, and net income. From the literature, it is clear that R&D has a positive influence on sales. Of course, marketing has a positive influence also. So, the question remains as to how high-technology companies should make the spending trade-off decision between R&D and marketing. In order to help answer this question, I have developed a simple mathematical model, the R&D-Marketing Spending Model. I will also present a fundamental strategy, the R&D-Marketing Spending Strategy, that high-technology companies should use in regard to the R&D and marketing trade-off decision. Finally, I will present three case studies involving six high-technology companies, (1) Intel and AMD, (2) Cisco Systems and Nortel Networks, and (3) Xilinx and Altera, that will both illustrate the use of the model and demonstrate the effectiveness of the strategy. All else being equal, we would expect the sales level of a company to be proportional to the amount that it spends on R&D and marketing. In other words, if we let S = sales, R = R&D spending, and M = marketing spending, we would expect S = k(R + M), where k is a constant (equation 1). Things get more complicated when we take competition into account.  
If two (or more) firms are competing in the same market, then the relative R&D and marketing spending levels are more important than the absolute levels. For example, suppose I am a company spending $1,000,000 on R&D. The impact this $1,000,000 will have will vary greatly depending on how much my competitor is spending on R&D. If my competitor is spending only $100,000 on R&D, then my $1,000,000 should have a great impact. However, if my competitor is spending $10,000,000 on R&D, then my $1,000,000 should have much less impact. So, the relative, not absolute, spending on R&D and marketing is what really matters. To keep the model simple, we will consider two companies. Based on the above definitions, we can modify equation 1 to incorporate relative spending. We would expect S1%, the sales share of company 1, to be proportional to the amount that company 1 spends on R&D and marketing relative to company 2: S1% = S1/(S1 + S2) (equation 2); RM1% = (R1 + M1)/(R1 + M1 + R2 + M2) (equation 3); and S1% = RM1% + A%, where A% is an adjustment factor (equation 4). The R&D-Marketing Spending Model is expressed in equation 4. If companies 1 and 2 were identical, then A% would be 0. The adjustment factor depends on several things, including market position and big events (layoffs, new product introductions). We will consider the company with the higher market share as company 1. In general, the higher the market share of company 1 relative to company 2, the higher we would expect A% to be. The higher A% is, the higher the sales of company 1 relative to the sales of company 2. In general, consumers prefer to buy from companies with higher market share, as the following table indicates:
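The spending model described above reduces to a couple of arithmetic expressions. The sketch below combines the relative-spending share with the adjustment factor; the dollar figures and the adjustment value are hypothetical, chosen only to show how the pieces fit together.

```python
def rm_share(r1, m1, r2, m2):
    # Company 1's share of combined R&D + marketing spending:
    # RM1% = (R1 + M1) / (R1 + M1 + R2 + M2)
    return (r1 + m1) / (r1 + m1 + r2 + m2)

def predicted_sales_share(r1, m1, r2, m2, adjustment=0.0):
    # Predicted sales share of company 1: S1% = RM1% + A%,
    # where A% captures market position and big events
    return rm_share(r1, m1, r2, m2) + adjustment

# Hypothetical spending in $M: company 1 outspends its rival on R&D
share = predicted_sales_share(r1=1000, m1=500, r2=600, m2=500,
                              adjustment=0.05)   # about 0.627
```

With A% = 0.05 reflecting company 1's stronger market position, the model predicts it captures roughly 63% of the two firms' combined sales.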


The Grey Relational Evaluation of the Manufacturing Value Chain

Dr. Yuan-Jye Tseng, Yuan Ze University, Taiwan

Yu-Hua Lin, Yuan Ze University & Hsiuping Institute of Technology, Taiwan



As the market changes more rapidly, a corporation is required to speed the rollout of new products within the shortest time to capture market share. To meet the demand for diverse, small-quantity production, a corporation needs to spend much effort in creating a collaborative commerce model of the manufacturing value chain for cutting product design time as well as enhancing production capability and cost competitiveness. This study, which is designed to address the decision-making problems of production chain value and the integrated deployment of manufacturing resources, can be divided into three stages. In the first stage, the corporation joins with suppliers for a preliminary screening of production capacity and technology. In the second, it has several samples test-produced and sets inspection items for the retrieved samples to acquire evaluation data. In the third stage, the corporation selects beneficial suppliers, based on the results of a grey relational analysis of the evaluation data, to build an efficient manufacturing value chain. In the final section of the article, we present a case study to explain the operating procedures of the evaluation model. The results show that the model is effective for supplier management economic analysis and can be used to create a collaborative commerce model based on the manufacturing value chain. Under the pressure of shortened product life cycles and a constantly changing market, a corporation has to speed the rollout of new products within limited time. For this reason, a corporation may take every possible measure to meet customer needs. In the past, business competition evolved through a productivity-driven approach, underscoring process improvement and lower production cost; product competence was developed based on an operating philosophy of limited resources, and suppliers highlighted their own manufacturing resource management. 
As times change, so does this concept. A new concept of 'self-advancement' replaces the previous one. The conception of 'resource limits' emphasizes that R&D capacity, production capacity, and sales channels should be created on one's own. This approach is supposed to be free from external control and constraint; however, it lacks flexibility. 'Self-advancement' puts great stress on market orientation, customer orientation, and value orientation. It prioritizes customer requirement characteristics and builds up productivity through the fulfillment of customers' genuine value needs. In this case, vendors' value management has become a primary focus of today's production capability. As operating concepts and theories based on 'supplier value' are found everywhere, a number of scholars indicate that supply management will be the source of competitive advantage (Cavinato, 1992). In recent years, the production model has been transformed from standard mass production into diverse, small-quantity, customized production to meet various customer demands. To rectify the defects of conventional production resource deployment, many specialists and scholars propose the collaborative production chain. This concept advocates that all staff associated with product development procedures, such as designers, producers, suppliers, and salespersons, can participate in production and the final assembly process simultaneously. By virtue of mutual communication among these people, the production schedule can be shortened, time to market accelerated, and production costs reduced. Without the limits of geographic location, the concept, known as 'simultaneous engineering', allows relevant staff situated in different places to perform the adjustment of tolerable error and fitted value via networks (Kao & Lin, 1996; Emmel, 2000; Isenhour et al., 2001). 
Moreover, as products become more sophisticated, vendors may cut down their investment in manufacturing equipment and share surplus capacity mutually to acquire competitiveness through the collaborative production chain. This study is thus driven to discuss how to adjust the tolerable error and fitted value, and to make the adjustment a reference for administrators in properly planning supplier design and manufacturing resources. In this perspective, the study proposes a conceptual infrastructure of supplier manufacturing production value for the evaluation of supplier performance on production fitted value. The study approach and procedures can be summarized as follows. (1) Regulate the original data to enable comparison: because the data in the original sequences may be presented in different units and types, the various sequences need to be brought to the same status for comparison. Here x0(k) denotes the standard sequence after initializing, and xi(k) denotes a comparability sequence after initializing. (2) Find the difference sequences: take the absolute value of the difference between each comparability sequence value xi(k) and the corresponding standard sequence value x0(k). (3) Find the minimum and maximum differences: identify the minimum and maximum values in the difference sequences and evaluate them by means of the distinguishing coefficient (ζ), which mainly functions as a contrast between the background value and the object to be measured; its value can be adjusted according to practical needs. (4) Use the minimum and maximum differences to compute the grey relational coefficients. (5) Calculate the grey relational grade: take the mean of the relational coefficients of each comparability sequence. (6) Sort the grey relational ordering: rank the relational grades between each comparability sequence and the standard sequence; the maximum grade indicates the comparability sequence most strongly influenced by the reference sequence. The final section of this article presents a case study explaining the evaluation procedures for supplier value under the grey relational analysis model. With this supplier management economic analysis, we create a collaborative commerce model based on the manufacturing value chain. This study is focused on the effective operation of production resources.
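The six steps of the grey relational analysis described above can be sketched as follows. The abstract does not give the coefficient formula, so the sketch assumes the standard grey relational coefficient, (Δmin + ζΔmax)/(Δ + ζΔmax), and the supplier inspection data below are invented for illustration.

```python
import numpy as np

def grey_relational_grades(reference, candidates, zeta=0.5):
    # `reference` is the standard sequence x0, `candidates` holds one
    # comparability sequence xi per row, `zeta` is the distinguishing
    # coefficient (commonly 0.5). First values must be nonzero.
    data = np.vstack([reference, candidates]).astype(float)
    data = data / data[:, [0]]                 # (1) initialize: divide by first value
    x0, xi = data[0], data[1:]
    delta = np.abs(xi - x0)                    # (2) difference sequences
    d_min, d_max = delta.min(), delta.max()    # (3) global min and max differences
    # (4) grey relational coefficients using the min/max differences
    coeff = (d_min + zeta * d_max) / (delta + zeta * d_max)
    grades = coeff.mean(axis=1)                # (5) grade = mean coefficient per row
    ranking = np.argsort(-grades)              # (6) rank, largest grade first
    return grades, ranking

# Invented inspection data: reference = ideal supplier profile
reference = np.array([10.0, 20.0, 30.0])
candidates = np.array([[10.0, 20.0, 30.0],    # matches the ideal exactly
                       [10.0, 10.0, 10.0]])   # deviates on later items
grades, ranking = grey_relational_grades(reference, candidates)
```

The supplier whose sequence matches the reference receives a grade of 1 and ranks first, which is how the corporation would pick beneficial suppliers in the third stage.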


Negotiation Process Improvement Between Two Parties: A Dynamic Conflict Analysis

Ching-Chow Yang, Chung Yuan Christian University, Taiwan, R.O.C.

Mo-Chung Tien, Chung Yuan Christian University, Taiwan, R.O.C.



The conflict process is a dynamic phenomenon. Negotiation is used to solve conflicts between parties. It is a "solution process", and reaching an optimal agreement is the ultimate goal of negotiation between two parties. A rational conflict resolution can be achieved if the decision maker understands the probable evolution of the conflict situation under a given option. In this paper, a dynamic conflict analysis (DCA) approach for two parties was developed and used to analyze two cases. The analysis reveals that the players can grasp the trajectory of the option changes between the two parties by observing the negotiation process, can quickly learn the possible outcome of different options before or during the negotiation, and can adjust the options in time if an optimal agreement cannot be reached, because the process can be simulated repeatedly. Negotiation is used to solve disputes or conflicts between parties; it is a solution process. Conflicts take many forms: armed conflict between two countries, political disputes between two parties, disputes between labor and capital, conflicts between natural resource exploitation and environmental protection, and such business problems as patent conflicts and trade agreements between two companies or two countries are all important domains of conflict in the global competitive environment. Any problem involving competition, antagonism, or differing opinions is likely to precipitate a conflict. Therefore, reconciling the conflict between the two parties is essential in order to achieve an agreement. Global interdependence has increased the number of necessary transactions between governments (as in the WTO), enterprises (as in international joint ventures), and international organizations. Negotiations form a substantial part of organizational activities. 
Negotiation is a solution process for two parties without fixed rules, conventions, or rational methods. Negotiation is a process, not an event. Conflicts are to be resisted and avoided if possible, because the negotiation process is a potential source of conflicts. Negotiation is used to resolve conflicts between parties in a dispute (Lewicki, 1985). Negotiation is an interactive, competitive decision-making process in which it is necessary to try to achieve a balance between the two parties in any type of negotiation. Although the parties are opposed to each other, because each hopes to maximize its own benefit (whether the relationship is antagonistic or cooperative), their objective is to reach an agreement (an agreement arrived at through compromise) that will be implemented rather than aborted or avoided. Negotiation research has a long history of identifying factors determining the negotiated outcome (Pruitt, 1993; Thompson, 1998). Researchers and practitioners alike want to explain these factors and predict and influence the negotiated result. Thompson (1990) pointed to the multiplicity of negotiation contexts and the plurality of outcomes as hurdles in synthesizing the research results. However, it is hardly known to what extent one can predict the result of a specific negotiation and what factors must be taken into account. From the point of view of decision-making, conflict provides particular challenges. The "Markov process" can be used to obtain the final outcome of a conflict event using a "State Transition Matrix" (Fraser, 1984). The negotiator can know the outcome and adjust the options in time if the negotiator can grasp the trajectory of the option changes between the two parties. 
This paper draws on the matrix convergence results of the mathematical model of Wang (1991) and develops a dynamic conflict analysis mathematical model for two parties, using it to analyze the Cuban missile crisis of 1962 and the Silicon Wafer Materials Joint Venture case of 1989 and to demonstrate a perceivable negotiating evolution process. There are many options for beneficially resolving conflict between two parties. The negotiation activity should be an activity of rational control; if rationality is lost, the basis for negotiation is also lost. The negotiator is faced with rational control issues involving conflicting characteristics (Yi, 1995). Two methods (static and dynamic conflict analysis) have been used to deal with conflict problems in the development of conflict analysis. The conflict analysis method is based on game theory. Game theory makes an analogy between interpersonal interactions in situations whose outcome is determined by several parties and the interactions that occur in games (Bryant, 1997). Game theory has been used for the analysis of real-world conflicts. Brams and Muzzio (1997) used classical game theory to analyze an aspect of the Watergate tapes conflict. Zagare (1977) used game theory to analyze the Vietnam negotiations. Game-theoretic approaches to conflict resolution have been introduced in ecosystem management (Shields, 1999). Because it is not easy to evaluate a numerical reward matrix in game theory, Howard (1971) avoided estimating these numerals in the reward matrix and used "0" and "1" to analyze competitive strategy. Drama theory was advanced to provide an integrating and liberating framework while still retaining the logical core of game theory as an analytical engine (Howard, 1992). Drama theory has already been used in conflict analysis (Howard, 1996). The metagame analysis approach was developed by Schlange in 1995. In a practical application of the metagame analysis model, the conflict must be formulated as a game. 
Each player in the game (the players may be individuals or groups engaged in some specified issue) has options.
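The state-transition idea behind the approach (Fraser, 1984) can be sketched as follows: represent the conflict as states, iterate the transition matrix on the current state distribution, and read off the eventual outcome. The three conflict states and the transition probabilities below are invented for illustration and are not taken from the paper's two cases.

```python
import numpy as np

def eventual_outcome(P, start, tol=1e-10, max_iter=10000):
    # Repeatedly apply the state transition matrix P to the state
    # distribution until it stops changing (within tol).
    # P[i, j] = probability of moving from state i to state j.
    v = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        nxt = v @ P
        if np.abs(nxt - v).max() < tol:
            return nxt
        v = nxt
    return v

# Hypothetical 3-state conflict: 0 = stalemate, 1 = escalation, 2 = agreement.
# State 2 is absorbing (row [0, 0, 1]), so the process settles there.
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.4, 0.3],
              [0.0, 0.0, 1.0]])
final = eventual_outcome(P, start=[1.0, 0.0, 0.0])   # converges to ~[0, 0, 1]
```

Running such a simulation under different options (different transition matrices) is what lets a negotiator compare probable outcomes before or during the negotiation and adjust in time.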


Spirituality in the Workplace: Developing an Integral Model and a Comprehensive Definition

Dr. Joan Marques, Woodbury University, Burbank, CA

Dr. Satinder Dhiman, Woodbury University, Burbank, CA

Dr. Richard King, Woodbury University, Burbank, CA



A new awareness has been stirring in workers’ souls for at least 10 years now: a longing for a more humanistic work environment, increased simplicity, more meaning, and a connection to something higher.  There are many reasons for this mounting call, ranging from the escalating downsizing and layoffs, reengineering, and corporate greed of the 1980s to an enhanced curiosity about Eastern philosophies, the aging of the baby boomers, the greater influx of women into the workplace, and the shrinking global work village.  Across the varying opinions about what spirituality at work really entails, there appears to be a set of common themes that almost all sources agree upon.  This paper presents a brief exploration of this new paradigm emerging in business, Spirituality in the Workplace.  After surveying the current literature in search of these common themes, it presents a list of those themes; a comprehensive definition and an integral model of spirituality in the workplace, for the consideration of future researchers in this field; and some practical strategies for corporate leaders interested in nurturing the spiritual mindset.  Too many people feel unappreciated and insecure in their jobs.  According to Morris (1997, p.
7), “Overall job satisfaction and corporate morale in most places is at an all-time low.”  Many re-engineering gurus have come to realize that, in their bid to make processes more efficient, they forgot the most essential element of the equation: the people.  According to a recent survey of more than 800 mid-career executives, unhappiness and dissatisfaction with work is at a 40-year high; four out of ten of those interviewed hated what they do, double the proportion surveyed four decades ago (cited in Barrett, 2004, p. 1).  Indeed, the emerging paradigm called “spirituality in the workplace” is conveyed in multiple ways.  Schrage (2000) finds that “A fundamental tension between rational goals and spiritual fulfillment now haunts workplaces around the world” (p. 306), and that “Survey after management survey affirms that a majority want to find ‘meaning’ in their work” (p. 306), while Oldenburg and Bandsuch (1997) state that for quite some time now, something has been stirring in people’s souls: a longing for deeper meaning, deeper connection, greater simplicity, a connection to something higher.  Bruce Jentner, president of Jentner Financial Group in Bath, Ohio, underscores these observations: “I have a deep conviction that everybody has a need for something bigger in life than just making money and going to work” (Goforth, 2001, p. k-2).  Kahnweiler and Otte (1997) affirm that “work is a spiritual journey for many of us, although we talk about it in different ways” (p. 171).  Ashmos and Duchon (2000) contribute to the awareness of this paradigm by claiming that there is increasing evidence that a major transformation is occurring in many organizations, and Stewart (2002) notes that a survey conducted last spring by the Torrance, California-based human resource strategists Act-1 found that “55% of the 1,000 workers polled consider spirituality to play a significant role in the workplace.
In addition, more than a third of that number (34%) said that the role had increased since the September 11, 2001 terrorist acts” (p. 92).  A 1999 issue of U.S. News & World Report reveals that “In the past decade, more than 300 titles on workplace spirituality – from Jesus CEO to The Tao of Leadership – have flooded the bookstores…. Indeed, 30 MBA programs now offer courses on this issue.”  It is also the focus of the current issue of the “Harvard School Bulletin.”  Signs of this sudden concern for the corporate soul are showing up everywhere: from boardrooms to company lunchrooms, from business conferences to management newsletters, from management consulting firms to business schools.  Echoing André Malraux, who said that the next century’s task will be to rediscover its gods, some management thinkers are prophesying that the effective leaders of the next century will be spiritual leaders (Bolman & Deal, 1999).  Organizations are increasingly realizing the futility of achieving financial success at the cost of humanistic values.  At the beginning of the new millennium, organizations have been reflecting on ways to help employees balance work and family, and to create conditions wherein each person can realize his or her potential while fulfilling the requirements of the job.  One writer has called such enlightened organizations "incubators of the spirit."  Cash, Gray, and Rood (2000) confirm the emergence of this organizational transformation by asserting that there is little doubt that American society and its political and legal institutions are moving toward a more open, value-expressive environment that will put even greater pressure on companies to honor employees' requests for religious and spiritual accommodation.  Cavanagh (1999) stresses this issue further by stating that there has been a dramatic upsurge of interest in spirituality among those who study, teach, and write about business management.
This new interest is also apparent among practicing managers. Work has ceased to be just the "nine-to-five thing" and is increasingly seen as an important element in fulfilling one's destiny.  As James Autry has observed, "Work can provide the opportunity for spiritual and personal, as well as financial, growth.  If it doesn't, we are wasting far too much of our lives on it."  "Leading others" is being seen as an extension of "managing oneself."  The implications of these changes are clear.  On one hand, they present opportunities to work collectively, reflectively, and spiritually smarter.  On the other hand, they invite organizations to structure work in ways that are mind-enriching, heart-fulfilling, soul-satisfying, and financially rewarding.


The Case Analysis on Failures of Enterprise Internal Control in Mainland China

Ta-Ming Liu, Hsing Wu College, Taiwan



The establishment of a well-designed internal control mechanism has become a legislative requirement for enterprises and has won widespread support and participation from enterprises around the world.  A proper and complete internal control mechanism can not only ensure the preservation and growth of asset value and boost economic efficiency, but also help an enterprise achieve its strategic goals.  In Mainland China, however, the reality of enterprise internal control does not quite measure up to this standard.  Most enterprises in China have not yet built an effective internal control mechanism, and some do not have one at all.  The lack of internal control eventually leads to the collapse of some of these enterprises.  Case analysis of enterprises in Mainland China reveals a worsening situation in which fiscal misconduct, fraudulent financial reporting, and other illegal and law-breaking behavior are far too common in Mainland China's enterprises and business sectors.  Thus, how to establish and implement internal control in enterprises and boost the efficiency and effectiveness of its operation are subjects that require further, detailed study.  This paper uses Asia Enterprise as an example to discuss the reasons for internal control failures and proposes suggestions for remedying them.  The concept of enterprise internal control in Mainland China is still at an embryonic stage.  Although the government has established regulations for enterprise internal control, feigned compliance remains all too common inside Chinese enterprises.  Thus, knowing how to carry out enterprise internal control demands immediate attention, since establishing effective and exact standards of internal control improves the quality of economic activity and management efficiency.
This is particularly urgent as China faces the challenge of becoming a member of the World Trade Organization.  The author of this paper has had the honor of becoming acquainted with Chairman Hu of the Finance and Account Information Technical Research Institute in Shanghai and some related colleagues.  In addition to the author's personal interest in the subject of enterprise internal control in Mainland China, support and encouragement from these Chinese experts and scholars have also contributed to this study.  The objective of this research is to encourage the sharing of experience on fiscal internal control from both sides of the Taiwan Strait.  There are great differences in fiscal internal control and managerial requirements between the business sectors of Mainland China and Taiwan.  With increasingly intense business collaboration across the strait, probing how to avoid conflicts over the concept of internal control presents an important facet for future studies analyzing enterprise internal control for both China and Taiwan.  Due to limitations of both time and space, this paper presents only an analysis of the literature and data for the proposed case, Asia Enterprise.  Secondary data are also provided to support the analysis.  A cross-examination of Taiwan's enterprise internal control is included to expand the scope of the study.  Furthermore, proposals for solving the problems of internal control in Mainland China are also constrained by these limitations; thus, only cases with major abuses of internal control are discussed in the paper.
During the case interviews, the researcher conducted individual in-depth interviews with Asia Commerce Group's financial department manager (Mr. A) and employee representatives (Mr. B and Mr. C), then used descriptive methods to analyze and review the data and draw the research conclusions.  Internal control was proposed by the American Institute of Certified Public Accountants (AICPA) in 1949.  Its four main objectives are defined as follows: A. safeguarding of assets (security objectives); B. reliability and completeness of accounting/financial and management information (information objectives); C. efficiency and effectiveness of operations (operational objectives); and D. compliance with organizational policies and procedures as well as applicable laws and regulations (compliance objectives).  Through years of evolution and development, an expanded view of internal control was issued in the AICPA's 1988 Accounting Standards Communiqué No. 55.  Under the system of financial report auditing, an enterprise's internal control structure has three elements: 1. the control environment; 2. the accounting system; 3. control procedures.  In 1992, the Committee of Sponsoring Organizations of the Treadway Commission (COSO) (note 2) issued a research report on internal control, Internal Control – Integrated Framework.  In 1994, COSO issued an addendum to supplement the original framework.  These two reports are the so-called COSO Report.  They established a wider definition of internal control that includes not only control of the accounting system but also control of management.  According to the report, the objectives of internal control are: 1. effectiveness and efficiency of operations; 2. reliability of financial reporting; 3. compliance with applicable laws and regulations.


Five Competitive Forces in China’s Automobile Industry

Zhao Min, University of Paris I Panthéon-Sorbonne, France



China’s automobile market has posted very rapid growth in recent years, and it was the third biggest automobile market in the world in 2003. Because China’s large market draws many foreign automobile actors, how to succeed in the competition in China is an essential question for multinational enterprises (MNEs). This paper attempts to define the conditions of competition for MNEs in China through Porter's industrial competitive framework, and to demonstrate how those conditions influence MNE strategy and competitive position. In particular, the paper compares the competitive positions of American, European, and Japanese automobile multinationals in China. In the past ten years, the production of motor vehicles in China has seen an average annual growth rate of 15 percent, compared to a world average of 1.5 percent over the same period. China produced 4.4 million vehicles in 2003 (OIAC 2004), a growth of 35 percent from 2002, becoming the fourth biggest vehicle manufacturer in the world, just after the United States, Japan, and Germany. Rising consumer wealth has been a major contributor to the sudden explosion of the car market. According to the World Markets Research Centre, Chinese consumers' purchasing power has risen to $5,500, which has historically been the level at which car consumption takes off in other markets. With 4.5 million vehicles sold in 2003, China is now the fourth biggest automobile market in the world (WMRC 2004). Several institutes argue that China’s vehicle market is set to almost double by 2008, challenging Japan for its position as the world's second-largest auto market.  China’s big market draws many foreign automobile actors: almost all of the world’s top automobile assemblers and suppliers have invested in China, with Volkswagen, PSA, General Motors, Delphi, Visteon, Valeo, and MAN as early entrants, and Honda, Toyota, Nissan, Hyundai, and Denso coming in later. The competition is becoming increasingly fierce.
With all the world’s leading global automakers ramping up production in a bid to dominate the local market, tensions have begun to mount among foreign automobile enterprises. In this context, it is important to understand the environment of the Chinese automobile industry with a view to establishing an appropriate strategy for automobile MNEs to achieve success in China. We analyze China’s automobile industry through Porter’s industrial competitive framework because it not only offers insights into the environment of the automobile industry but also illuminates MNE strategy and competitive position. We start by presenting this theoretical framework (section 2). We then apply it to China’s automobile industry (section 3) before a tentative conclusion comparing the competitive positions of American, European, and Japanese automobile multinationals in China (section 4). Strategy is the creation of a unique and valuable position, involving a different set of activities (Porter, 1998). The success of a competitive strategy is a function of the attractiveness of the industries in which the firm competes and of the firm’s relative position in those industries (Porter, 1980). According to Porter (1982), the competitive game in an industry entails five forces: (1) the context for firm strategy and rivalry; (2) the threat of potential entrants; (3) the threat of substitute products; (4) the bargaining power of customers; and (5) the bargaining power of suppliers. The strategy of automobile MNEs in China is to create a valuable position in the automobile industry; above all, it depends on the nature of China’s automobile industry, in which international competition exists. We analyze the environment of the Chinese automobile industry through Porter's five forces of industrial competition. The context for strategy and rivalry relates first to automotive industry policy.
The Chinese government declared the automotive industry a “pillar” industry in 1985, targeted for financial and developmental assistance. The automobile industry was the first among Chinese industries to be backed by a formal state industrial policy. This policy was first formulated in 1987 and modified in 1994, with emphasis on three points: to shift the product mix of the industry from commercial vehicles to passenger cars; to boost economies of scale by restructuring the industry from fragmentation and miniaturization toward concentration; and to seek technology transfer by inviting the participation of foreign companies. However, operational practice involves a set of limiting measures for foreign investment. The most important obstacles are high tariff and non-tariff barriers, foreign investment limits, and local content requirements.  China’s WTO membership (December 2001) favors the liberalization of trade and the establishment of foreign corporations in terms of customs rights, property rights, distribution, finance, etc. Tariffs on automobiles will be reduced from about 50 percent currently to 25 percent by 2006. With China’s WTO accession, foreign-invested companies may currently distribute all products manufactured in China; within one year, they will be able to distribute both domestic and foreign products. However, the form of investment remains the major limitation for automakers. Automobile assembly firms have to enter China in cooperation with local partners, and foreigners are limited to a maximum 50 percent shareholding, although foreigners may constitute a majority in joint ventures (JVs) for engine construction. Local content is encouraged in China’s automobile industry in order to promote integrated industrial development and self-reliance.
Tariff rates vary with the local content of assembled vehicles; for example, the tariff on Completely Knocked Down (CKD) kits is reduced as local content increases. For the first three years, the tariff on imported CKD parts is 50 percent; from the fourth year on, a local content rate of 60–80 percent corresponds to a 48 percent tariff, 40–60 percent to a 68 percent tariff, and under 40 percent to an 80 percent tariff (Lee et al. 1996).
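The tariff schedule cited from Lee et al. (1996) can be restated as a simple lookup. The function below merely encodes the figures quoted above, treating "after the fourth year" as year four onward, an assumption where the text is ambiguous:

```python
def ckd_tariff(local_content_pct, years_since_entry):
    """Tariff rate (%) on imported CKD kits, per the schedule cited above.

    A flat 50% applies for the first three years; afterward the rate
    falls as the local content share of the assembled vehicle rises.
    """
    if years_since_entry <= 3:
        return 50
    if local_content_pct >= 60:   # 60-80% local content band
        return 48
    if local_content_pct >= 40:   # 40-60% band
        return 68
    return 80                     # under 40% local content

print(ckd_tariff(70, 5))  # later years, high local content -> 48
```

The schedule makes the policy intent explicit: once the introductory period ends, each step up in local content buys a cheaper kit tariff.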


Trademark Value and Accounting Performance: Analysis from Corporate Life Cycle

Dr. C. L. Chin, National Chengchi University, Taiwan

Dr. S. M. Tsao, National Central University, Taiwan

H. Y. Chi, National Central University, Taiwan



The objective of this study is to examine whether the association between trademarks and a firm's performance is a function of its lifecycle stage. Following Anthony and Ramesh (1992), the paper classifies firm-years into lifecycle portfolios using the dividend payout ratio, sales growth, capital expenditure, and firm age. Consistent with our prediction, the results document a monotonic decline in the response coefficients of trademarks from the growth to the stagnant stage. The paper also uses the Seethamraju (2000) model to estimate trademark value and finds that estimated trademark values likewise decrease monotonically from the early to the later lifecycle stages. Understanding the value of intangible assets has become increasingly important in the “new economy”. The accounting literature on intangible assets focuses primarily on the valuation, value relevance, and recognition of intangible assets in financial statements. Under generally accepted accounting principles (hereafter GAAP), most intangible assets, though substantial economic assets, are typically not recognized in the financial statements as accounting assets. However, a growing body of literature documents that non-financial information about intangible assets, such as R&D (Lev and Sougiannis, 1996; Chan and Lakonishok, 2001), advertising expenses (Landes and Rosenfield, 1994), patents (Griliches, Pakes and Hall, 1987), customer satisfaction (Ittner and Larcker, 1998), and brand (Barth et al., 1998; Kallapur and Kwan, 2004), plays a significant role in determining a firm's performance or value.  This paper examines a previously little-explored source of intangible assets: trademarks. To strengthen their competitive force and increase market share, enterprises make every effort to leave an irreplaceable image in the minds of consumers. Trademarks are therefore not only an important factor affecting corporate value, but also a key factor in whether a business succeeds.
This study examines whether the association between trademarks and firm performance is a function of corporate lifecycle stage. Specifically, we expect a monotonic decline in the effect of trademarks on a firm's performance from the growth to the stagnant stage. Conceptually, a trademark is used to identify the source of a product (or service) and distinguish that product from those deriving from other sources. The Intellectual Property Office (TIPO) governs the registration of trademarks in Taiwan, and applications for registration must be filed with the TIPO under the Taiwan Trademark Law. As defined in the Taiwan Trademark Law: “A trademark may be composed of a word, figure, symbol, color, sound, three-dimensional shape, or a combination thereof.” A trademark so defined can be distinctive enough for relevant consumers of goods or services to recognize it as an identification of those goods or services and to differentiate them from others.  The value of a trademark is created in two ways. First, trademarks protect from piracy and duplication the innovation that goes into creating a new product or service. Second, trademarks reduce consumer search costs by conveying a snapshot of a product's features and level of quality (Landes and Posner, 1987; Seethamraju, 2000). Accordingly, articles in both the business press and academic papers document trademarks' value relevance. For example, a Chinatimes News report (2004/10/01) offers anecdotal evidence: Tom Blackett, board chairman of the Interbrand Company, remarked that brands are businesses' commitments to product quality and customer satisfaction. Taiwan has a powerful IT industry and excellently designed products, so it has the conditions necessary to create powerful brands.
However, brand management must be reinforced before local brands can be turned into famous international brands. According to the company's evaluation, Taiwan's top three brands are TrendMicro, valued at NT$3,083.5 billion; ASUS, valued at NT$2,778.8 billion; and ACER, valued at NT$2,167.3 billion.  Recently, academic studies have also begun to examine trademark value relevance. For example, Barth et al. (1998) found that brand values as estimated by FinancialWorld are value-relevant. Kallapur and Kwan (2004), using a sample of 30 UK firms, found that brand values are also value-relevant to share price. In these studies, brand or trademark value information can be obtained from publicly distributed magazines or financial statements.  In this paper, we explore two issues related to trademarks that are highly relevant to emerging markets. The first is whether trademarks are related to firm performance or market value in Taiwan, where the legal protection of trademarks is relatively weak. The answer to this question is not obvious, as poor legal protection of trademarks introduces several factors that can weaken their value. For example, weak legal protection of trademark rights gives owners little incentive to invest in innovative activities that increase the firm's operating performance. Whether these institutional differences render the relation between trademarks and firm performance insignificant is unclear, and needs to be tested in an empirical context. The second issue this paper examines is whether the association between trademarks and firm performance is more pronounced in the early lifecycle stages than in the later ones. Our argument is motivated by strategic prescriptions prevalent in the management literature: firms can create permanent cost (or revenue) or demand advantages over competitors if they acquire trademarks in the early stages.
By contrast, investment in trademarks is less rewarding in the mature or stagnant stages of the lifecycle.
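An Anthony and Ramesh (1992) style classification of firm-years can be sketched as a composite ranking over the four variables named above. The direction of each ranking follows the usual intuition (growth firms pay low dividends, grow sales fast, invest heavily, and are young); the tercile cutoffs and sample figures below are illustrative assumptions, not the paper's actual procedure or data:

```python
def lifecycle_stage(firms):
    """Assign each firm-year a lifecycle stage from a composite rank score.

    Each variable is ranked so that a growth-like value earns a LOW rank;
    terciles of the summed ranks give the stage labels (illustrative cutoffs).
    """
    n = len(firms)

    def ranks(key, reverse):
        order = sorted(range(n), key=lambda i: firms[i][key], reverse=reverse)
        r = [0] * n
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    score = [sum(t) for t in zip(
        ranks("payout", reverse=False),        # low payout  -> growth-like
        ranks("sales_growth", reverse=True),   # high growth -> growth-like
        ranks("capex", reverse=True),          # high capex  -> growth-like
        ranks("age", reverse=False),           # young firm  -> growth-like
    )]
    labels = []
    for s in score:
        frac = s / (4 * (n - 1))               # normalize score to [0, 1]
        labels.append("growth" if frac < 1 / 3 else
                      "mature" if frac < 2 / 3 else "stagnant")
    return labels

# Hypothetical firm-years (payout ratio, sales growth, capex ratio, age).
firms = [
    {"payout": 0.05, "sales_growth": 0.40, "capex": 0.20, "age": 4},
    {"payout": 0.30, "sales_growth": 0.10, "capex": 0.10, "age": 20},
    {"payout": 0.60, "sales_growth": 0.01, "capex": 0.03, "age": 45},
]
print(lifecycle_stage(firms))
```

The composite score matters more than any single variable: a firm-year only lands in the stagnant portfolio when all four indicators jointly point that way.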


The Impact of Floating Thai Baht on Export Volumes: A Case Study of Major Industries in Thailand

Dr. Lugkana Worasinchai, Bangkok University, Thailand



The purpose of this study is to examine the impact of exchange rate movements on the export volumes of Thailand's major industries (jewelry, textile, automotive, food, and software) under the new exchange rate regime. The 1997 financial crisis in the Asian region, and the near-halt of the export sector, were partially caused by the change in the exchange rate system, which adversely affected Thailand's competitiveness during the transition period. An econometric time series model is used to analyze monthly time series data. The sample covers July 1997, the beginning of the managed float regime in Thailand, through April 2004. The results suggest that exchange rate movements affect the export volume of the software industry in the same direction: depreciation (appreciation) of the Thai baht increases (decreases) software export volume. However, changes in the exchange rate do not affect the export volumes of the jewelry, textile, automotive, and food industries. The face of today's international trade environment differs greatly from that of the recent past. Globalization has brought with it fierce competition from international rivals. The world's aggregate export volume has grown enormously, from US$20,000 million in 1913 to US$154,000 million in 1963, US$6,473,000 million in 1997, and US$8,000,000 million in 2003. Comparing the world's total trade volume to the growth of the world's total productivity, the world's total trade volume grew at 1.5 times the rate of the world's total productivity. This indicates that each country's increased productivity was used to serve increased market demand, which varied greatly from country to country, or even within the same country, in terms of the value attached to product quality and the production process.
Consequently, another vital competitive factor determining the business sector's success is the ability to respond to these varying consumer needs in each country (Arize, 1995). An emerging trend among several of the world's major economic regions is the consolidation of regional economies. One example is the establishment of the European Union, which has evolved into a final stage of development by adopting a single currency, the euro. Regional economic consolidation generates huge benefits for member countries in terms of trade and investment, as is evident from the explosion of export volume among the three major economic regions: the European region (EU), the North American region (NAFTA), and the Asian region (ASEAN). Professor Michael E. Porter conducted a cluster analysis aimed at enhancing Thailand's competitiveness, based on the Competitive Advantage of Nations theory. Porter described a cluster as a geographic concentration of companies, suppliers, firms in related industries, and associated institutions which, when located close together, yields higher value than a dispersed geographic arrangement. On the basis of this cluster analysis, Porter also pointed out several potentially competitive industries in Thailand, such as the automotive, textile, jewelry, software, and food industries (Porter, 2003). Prior to the 1997 financial crisis, Thailand had experienced a decade of continued growth, with the export sector as the major economic driving force. The country enjoyed comparative advantages over its neighboring countries in many respects, such as physical location, developed infrastructure, and a lower cost of skilled labor.
The structure of Thailand's export products in 2000, as forecast by the Ministry of Commerce, comprised several industries, such as electronics, electrical appliances, textiles, jewelry, automobile accessories and parts, travel accessories, and leather goods and shoes, which accounted for 66.37 percent of total exports. Agricultural produce and processed agricultural products accounted for 16.61 percent, and mineral products and fuels for the remaining 17.02 percent (Porter, 2003). The 1997 financial crisis in the Asian region, and the near-halt of the export sector, were partially caused by the change in the exchange rate system, which adversely affected Thailand's competitiveness during the transition period. There is limited research investigating the relationship between the exchange rate regime and trade volume, with the exception of two studies, Brada and Mendez (1988) and Pozo (1992).  The theoretical link between the exchange rate regime and trade flows is ambiguous.  One strand that favours fixed rate regimes argues that flexible exchange rates influence trade adversely on two grounds. First, flexible exchange rates lead to higher exchange rate volatility which, in turn, depresses trade. Second, flexible exchange rate regimes reduce the volume of trade by inducing governments to impose trade barriers in order to prevent the destabilizing effects of volatile exchange rates. This discussion indicates that the effect of a change in the exchange rate on trade volume is ambiguous, and an empirical investigation is needed to resolve it. For this research, we apply econometric time series methods to examine the impact of exchange rate movements on export volume under the managed float regime.
Much research has been conducted on exports, comprehensively covering issues such as the importance of exports to a country's economy and various analyses of factors affecting exports, including the impact of exchange rate fluctuation on export volume. The international empirical evidence on the influence of volatility on exports is also mixed.  IMF (1994), Cote (1994), and McKenzie (1999) provide comprehensive reviews of the empirical literature. However, all existing studies, with the exception of Pozo (1992), do not consider the impact of the exchange rate regime on trade flows. Pozo's (1992) approach is unsatisfactory for a number of reasons. First, by choosing the Gold Standard (1900-1914) as the reference period, she does not take into account other periods of fixed exchange rate regimes included in her sample in her comparison with the managed float period.  Second, she concentrates on the early part of the century, thus excluding from her analysis two very interesting periods associated with the Bretton Woods system and the more recent managed float regime. Third, she does not consider the potential nonstationarity of the time-series variables involved when performing the econometric analysis.  Brada and Mendez (1988) also analyzed empirically the impact of the exchange rate regime on bilateral trade flows among 30 countries using cross-sectional data from the mid-1970s. They found that bilateral trade flows among countries with floating exchange rates are higher than those among countries with fixed rates. Aristotelous and Fountas (2000) investigate the impact of the different exchange rate regimes that spanned the twentieth century on bilateral exports between the United Kingdom and the United States over the last 98 years.  They found that fixed exchange rate regimes and managed float exchange rate regimes are equally conducive to trade.
The research also found that freely floating exchange rate regimes are more conducive to trade than fixed exchange rate regimes.
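The kind of time-series exercise described above can be illustrated with a minimal sketch: an ordinary least squares fit of a log-log specification, ln(X_t) = a + b·ln(E_t) + u_t, where b is the elasticity of export volume with respect to the exchange rate. The data below are hypothetical, and a real study would first test the series for stationarity and cointegration, as the literature review notes.

```python
# Minimal sketch (hypothetical data): OLS estimate of the elasticity of
# export volume with respect to the exchange rate, ln(X) = a + b*ln(E) + u.
import math

exchange_rate = [25.3, 25.6, 31.4, 41.4, 37.8, 37.6, 40.1, 43.0]   # THB/USD, hypothetical
export_volume = [112.0, 118.0, 109.0, 102.0, 116.0, 121.0, 119.0, 125.0]  # index, hypothetical

x = [math.log(e) for e in exchange_rate]
y = [math.log(v) for v in export_volume]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
# slope = cov(x, y) / var(x); intercept from the point of means
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
print(f"estimated elasticity b = {b:.3f}, intercept a = {a:.3f}")
```

In practice one would add control variables (foreign income, relative prices) and use cointegration or error-correction techniques rather than a bare OLS regression on levels.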


The Dynamic Relationship and Pricing of Stocks and Exchange Rates: Empirical Evidence from Asian Emerging Markets

Dr. Shuh-Chyi Doong, National Chung Hsing University, Taiwan

Dr. Sheng-Yung Yang, National Chung Hsing University, Taiwan

Dr. Alan T. Wang, National Cheng Kung University, Taiwan



This paper examines the dynamic relationship and pricing between stocks and exchange rates for six Asian emerging financial markets. We find that stock prices and exchange rates are not cointegrated. Using Granger causality tests, bi-directional causality is detected in Indonesia, Korea, Malaysia, and Thailand. Except for Thailand, stock returns exhibit a significantly negative relation with contemporaneous changes in the exchange rate, implying that currency depreciation is accompanied by a fall in stock prices. The conditional variance-covariance process of the changes in stock prices and exchange rates is time-varying. The results are crucial for international portfolio management in the Asian emerging markets and are also of particular importance for testing international pricing theories, as misspecifications could lead to false conclusions. Given the increasing trend toward globalization in financial markets, a substantial amount of research has been devoted to investigating the correlation of stock returns across international markets. Eun and Shim (1989), Hamao et al. (1990), and Bekaert and Harvey (1996), among others, investigate the dynamics of international stock movements and find significant cross-market interactions. These empirical findings are of interest for two reasons. First, portfolio theory suggests that if stock returns between markets are negatively correlated, investors should be able to reduce their risk through international diversification; conversely, if countries’ stock returns covary positively, it is possible to use the information in one market to predict movements in the other. Second, the Asian emerging markets have undergone policy and regulatory changes in recent years to facilitate cross-border investing. The expected returns from investment in foreign stocks are determined by changes in the local stock price and in the currency value.
If the effect of exchange risk does not vanish in well-diversified portfolios, exposure to this risk should command a risk premium. Therefore, the interaction between currency value and stock price is an important determinant of global investment returns. This paper focuses on the dynamic relationship and pricing between stocks and exchange rates for Asian emerging markets. We first test for cointegration and causality between these two variables. Then we apply a bivariate GARCH-M model to investigate such relationships with a time-varying covariance matrix. We are motivated by recent market advances and research developments in the global economy. Asian emerging markets have experienced rapid growth in national income, which has become a main source of the expansion of capital markets. Global fund managers consider these markets major destinations for their emerging-market investments. Since the deregulation of the financial industry in most Asian countries in recent years, the surge of cross-border capital movements has increased volatility in these markets. The volatile movements of exchange rates create uncertainty in international trade as well as in capital flows. Risk management and country risk evaluation are therefore crucial for investing in Asian countries. Compared with the numerous studies of industrialized countries, relatively little comparable research has been devoted to the Asian emerging stock markets. Ma and Kao (1990) suggest that, for the industrialized countries, currency appreciation has a negative effect on the stock market of an export-dominant economy. Focusing on Asia-Pacific countries, Tai (1999) shows that foreign exchange risk is not diversifiable and hence should be priced in both stock and currency markets. Chiang, Yang, and Wang (2000) show that stock returns and currency values are positively related in nine Asian markets.
They conclude that, when evaluating and managing investment risk in the Asian emerging financial markets, it is crucial to investigate the interactions between exchange rates and stock prices. The paper proceeds as follows. In the next section, we provide econometric properties of stock returns and changes in exchange rates. Then, we apply the Engle (1982) and Bollerslev (1986) GARCH modelling framework to investigate the dynamic behavior of, and relationship between, stock returns and changes in the exchange rates. Finally, we offer a summary and concluding remarks. We adopt the International Finance Corporation (IFC) classification of emerging markets and include six Asian financial markets in our study (Indonesia, Malaysia, the Philippines, South Korea, Thailand, and Taiwan). Weekly (Friday) exchange rates, expressed in domestic currency units per US dollar, and stock indices from 1989/1/6 to 2003/1/3 are gathered from Datastream International. Returns are calculated as logarithmic differences. The justification for using weekly data is that it has fewer problems of non-synchronous trading and of short-term correlations due to noise, while utilizing more information. Table 1 presents descriptive statistics. The Philippines has the highest mean return among the six Asian stock markets, followed by Malaysia, while Korea has the worst performance. As expected, the Asian stock markets are characterized by higher volatility than that of the developed markets. The Philippines and Taiwan have the highest levels of stock return volatility (as measured by standard deviation), while Malaysia and Korea are the least volatile emerging markets during this period. Another important characteristic of the stock return series is the high value of kurtosis, suggesting that big shocks of either sign are more likely to be observed and that the stock return series may not be normally distributed.
Autocorrelations are detected at short lags, and the Ljung-Box Q(6) statistics, a joint test of autocorrelation at 6 lags, show that the null hypothesis of independence can be rejected. The Q²(6) statistics suggest that temporal dependency in the higher moments is pronounced and significant. For the exchange rates, Indonesia suffered the highest currency depreciation, with the largest volatility, during the sample period. On the other hand, Taiwan had the lowest currency depreciation with the smallest volatility. The series of changes in the exchange rates also exhibit high kurtosis, even more pronounced than that of the stock return series. As with the stock return series, linear and non-linear dependencies are found in the exchange rate changes. Therefore, GARCH specifications are appropriate for empirical modelling.
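The Ljung-Box diagnostic used above can be sketched in a few lines. Q(m) = n(n+2)·Σ_{k=1..m} r_k²/(n−k), where r_k is the lag-k sample autocorrelation; under the null of no autocorrelation it is asymptotically chi-squared with m degrees of freedom. The return series below is hypothetical, standing in for the weekly log returns used in the paper; applying the same statistic to squared returns gives the Q²(m) test of higher-moment dependency.

```python
# Illustrative sketch (hypothetical return series): the Ljung-Box Q(m) statistic.
def ljung_box_q(returns, m=6):
    n = len(returns)
    mean = sum(returns) / n
    dev = [r - mean for r in returns]
    c0 = sum(d * d for d in dev) / n          # lag-0 autocovariance
    q = 0.0
    for k in range(1, m + 1):
        ck = sum(dev[t] * dev[t - k] for t in range(k, n)) / n
        rk = ck / c0                          # lag-k autocorrelation
        q += rk * rk / (n - k)
    return n * (n + 2) * q

# hypothetical weekly log returns
rets = [0.012, -0.034, 0.005, 0.021, -0.011, 0.008, -0.027, 0.015,
        0.003, -0.019, 0.024, -0.006, 0.010, -0.002, 0.017, -0.013]
print(f"Q(6) = {ljung_box_q(rets):.3f}")  # compare to the chi-squared(6) 5% critical value, 12.59
```

In practice one would use a much longer sample and a statistics library, but the computation itself is no more than this.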


The Effects of Repatriates’ Overseas Assignment Experiences on Turnover Intentions

Dr. Ching-Hsiang Liu, National Formosa University, Taiwan



In today’s global marketplace, it is critical for Taiwanese multinational corporations to remain competitive in the area of international human resource management. This study was undertaken in an effort to determine whether repatriates’ overseas assignment experiences affect Taiwanese repatriates’ intentions to leave their organizations. Overseas assignment experiences include the number of overseas assignments, the length of the most recent overseas assignment, the time since returning from the overseas assignment, the host country of the most recent assignment, and family accompaniment during the overseas assignment. By building on repatriation adjustment and turnover theories and research, this study extended these recent findings to Taiwanese repatriates. The results indicated that the number of overseas assignments, the time since returning from the overseas assignment, and family accompaniment during the overseas assignment were significantly related to repatriates’ intent to leave the organization. The length of the most recent overseas assignment and the host country of the most recent assignment were not related to intent to leave. The implications of these results for international corporations are discussed in detail. This study may help multinational organizations in Taiwan enhance the international assignment process of their employees and keep valuable human capital within the organization. Improvements in information and communications technology have facilitated the globalization of markets and industries. Moreover, the impact of regional free trade agreements among nations has contributed to an increase in foreign direct investment in member countries.
To ensure that expatriates can support a company’s expansion through both technical expertise and cultural understanding, it is not surprising to find that many organizations attempt to provide expatriates with the support, programs, and skills to help them be productive and effective in their foreign assignments (Joinson, 1998). While much attention is given to the process of expatriation, much less attention is given to repatriation, the final link in the completion of the international assignment (Bonache, Brewster, & Suutari, 2001; Brewster & Scullion, 1997; Riusala & Suutari, 2000). After the expatriate experiences an extended foreign assignment in a host country, that country becomes the more familiar environment and the home country becomes the more foreign setting. This is perceived as a shock for repatriates and their families and is a difficult set of circumstances with which to cope (Baruch, Steele, & Quantrill, 2002; Paik, Segaud, & Malinowski, 2002). Upon returning to the home country corporate environment, if newly repatriated employees are not given the opportunity to utilize the knowledge and skills they developed abroad, they may become frustrated and seek more professionally rewarding opportunities with other firms (Stroh, 1995; Stroh, Gregersen, & Black, 1998). Unfortunately, once repatriates leave their employer, their expert knowledge, critical business contacts, and relationships are gone as well. Repatriation failure and turnover are costly to organizations. Understanding the role of overseas assignment experiences in the repatriation process would give organizations an edge in using these factors for the successful adjustment and retention of repatriates. Moreover, repatriation management is an issue that is often overlooked. This study contributes to the existing knowledge of international human resource management by specifically focusing on the effects of repatriates’ overseas assignment experiences on turnover intentions.
Hypotheses were formulated to examine the relationships between each independent variable and the dependent variable (intent to leave). Hypothesis 1: The number of repatriates’ overseas assignments has no relationship to their intent to leave the organization. Hypothesis 2: The length of repatriates’ most recent overseas assignment has no relationship to their intent to leave the organization. Hypothesis 3: Repatriates’ time since returning from the overseas assignment has no relationship to their intent to leave the organization. Hypothesis 4: The host country of repatriates’ most recent assignment has no relationship to their intent to leave the organization. Hypothesis 5: Family accompaniment during repatriates’ overseas assignment has no relationship to their intent to leave the organization. The literature review begins by discussing prominent theories of cultural adjustment and repatriation adjustment; intent to leave, turnover, and repatriate turnover are then examined. U-Curve theory graphically represents the emotional stages of the adjustment process and shows how it develops over time. The U-Curve can be interpreted as phases of adjustment in three general stages. First, the honeymoon phase is “a period of initial enthusiasm in which the sojourner is essentially a spectator, absorbing the sights and forming impressions with little interaction with host nationals” (Heyward, 2002). Second, the culture shock phase is distinguished by confusion, depression, and negative attitudes. As individuals must seriously cope with living in the new culture on a day-to-day basis, they may encounter genuine difficulties in the different culture (Black & Mendenhall, 1991). This phase represents the bottom of the “U.” The third phase is adjustment, in which expatriates become integrated into the new environment and reach a new level of understanding regarding their role in that environment (Martin, 1984).
The central premise of the theory is that people entering a new country and culture at first feel excitement, then experience a problematic period of cultural shock (reverse cultural shock for repatriates), and finally adjust (readjust) over time to their new social-cultural environment.  Gullahorn and Gullahorn (1963) argue that the cross-cultural adjustment processes that lead to a U-Curve pattern can be applied to the repatriation process. They suggest extending the U-Curve to form a W-Curve in order to represent the reentry process (see Figure 1). The expatriate adjustment is indicated by the first U-shaped curve in the letter “W” and repatriation adjustment is the second U-shaped curve. Harvey (1982) argues that the longer one spends away from home, the more home can change, and this leads to greater uncertainty upon return. When repatriates have been away from their home for a long time, this can lead to feelings of alienation even when they are interacting with old friends or acquaintances.


Prepare for E-Generation: The Fundamental Computer Skills of Accountants in Taiwan

Dr. Yu-fen Chen, National Changhua University of Education, Changhua, Taiwan



The study aimed to explore the fundamental computer skills required of accountants in the E-generation in Taiwan and to examine the proficiency levels of these skills that accountants currently possess, so as to serve as references and suggestions for education authorities, schools, faculties, and curriculum planners. Literature review, expert meetings, and questionnaire surveys were used to gather data concerning the items of fundamental computer skills of accountants and the proficiency levels of these skills possessed by accountants in Taiwan. Data collected from the questionnaire surveys were analyzed with statistical methods including frequency distribution, t-tests, one-way ANOVA, and Scheffe’s method. The study developed 3 major categories and 17 subcategories as a research framework consisting of the items of fundamental computer skills of accountants for the E-generation in Taiwan. The progress of information technology has not only triggered changes in modes of business management; new graduates entering the job market are also invariably required to possess rudimentary computer skills. The Ministry of Education has directed that training intermediate professional technicians should be a key objective of all vocational junior colleges [4]. In 2000, the Chinese Computer Skills Foundation indicated that junior college graduates are required not only to be fluent in the domain of conventional accounting knowledge but also to be proficient in computer skills, such as computer operating systems, system software, word processing software, spreadsheet software, packaged commercial accounting software, database software, graphics software, presentation software, multimedia software, Internet software, and so forth.
A 2002 survey also found that the top four technology skills for new accounting hires to possess, in order of importance, were spreadsheet software (e.g., Excel), Windows, word-processing software (e.g., Word), and the World Wide Web [8]. In an effort to strengthen the vocational education system, the education authorities have introduced a series of educational reforms, including the upgrade of excellent vocational junior colleges to the full-fledged status of institutes of technology and the establishment and implementation of a certification system. A survey of business owners conducted by the China Times on June 9, 1998 indicated that 52% of respondents would require interviewees to possess some kind of computer skill, and as many as 70% of businesses would require that interviewees possess computer skills particularly when hiring for administrative and accounting positions. Hence, for educators, a compelling question has emerged as to how best to design a preferred curriculum for accounting students and how best to enhance their computer skills. This study focused on the crucial task of exploring which items of fundamental computer skills are required of accountants in Taiwan. The study includes the objectives below: to explore the items of fundamental computer skills required of accountants in Taiwan; to examine the proficiency levels of these computer skills possessed by accountants in Taiwan; and to propose tangible suggestions as references for planning the computer curricula of accountancy departments in vocational colleges and schools in Taiwan. The methods of the current study consisted of literature review, expert meetings, and questionnaire surveys. Respondents to the questionnaire surveys were requested to rate each competency on a five-point Likert-type scale from unimportant (1) to important (5).
Literature review: The literature review examined publications pertaining to computer skills required of accountants as references for drafting the contents of an initial pilot-test questionnaire. Expert meetings: Two expert meetings were held. The first, on October 21, 2001, examined the contents of the initial draft of the pilot-test questionnaire; the second, on February 24, 2002, examined the results of the first pilot-test round and developed the contents of the formal questionnaire. Questionnaire surveys: The first pilot-test round was conducted from December 5 to 15, 2001 as telephone interviews by the marketing research center of the Department of Statistics at Fu Jen Catholic University in Taiwan. Analysis of the responses to the first pilot-test round led to the conclusion, based on internal consistency analysis and correlation coefficient analysis, that preliminary question 3, pertaining to “launch a web site”, should be deleted or modified. The reliability analysis also showed that the Cronbach’s alpha (α) of the category of other computer-related skills was below 0.7, indicating insufficient reliability. As a result, some of the questions in this category were discussed and modified in the second expert meeting, and a second pilot-test round of telephone interviews by the same marketing research center followed. Formal questionnaire survey: The formal questionnaire was finalized from the second pilot-test round, and the formal survey was conducted from April 11 to 30, 2002.
The samples for the two pilot-test rounds and the formal survey were drawn from 1,800 businesses randomly selected from the industry and commerce registry published by China Credit Rating Corporation in April 2001 [5][6][7]. Of these, a valid sample of 100 businesses was obtained from respondents in the two pilot-test rounds, and a valid sample of 400 businesses was obtained for the formal survey; the sample distribution is depicted in Table 1.
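The 0.7 reliability threshold mentioned above refers to Cronbach's alpha, which can be computed directly from the Likert-scale responses. A minimal sketch, using hypothetical ratings (rows are respondents, columns are questionnaire items on the five-point scale):

```python
# Minimal sketch (hypothetical ratings): Cronbach's alpha,
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items
    def var(xs):                          # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in rows]) for j in range(k)]
    total_var = var([sum(row) for row in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

ratings = [
    [4, 5, 4, 4], [3, 4, 3, 3], [5, 5, 4, 5],
    [2, 3, 2, 2], [4, 4, 4, 3], [3, 3, 3, 4],
]
alpha = cronbach_alpha(ratings)
print(f"Cronbach's alpha = {alpha:.3f}")  # a value below 0.7 would flag insufficient reliability
```

A category whose alpha falls below 0.7, as happened in the first pilot-test round, signals that its items do not hang together well enough and should be revised.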


The Comparative Study of Information Competencies Using Bloom’s Taxonomy

Jui-Hung Ven, China Institute of Technology, Taiwan, R.O.C.

Chien-Pen Chuang, National Taiwan Normal University, Taiwan, R.O.C.



After collecting the professional competency requirements related to information occupations from America, Australia, and Taiwan, we manually extract the action verbs that describe the competencies from the competency statements. All extracted action verbs are then classified into six categories based on Bloom’s cognitive taxonomy. Next, we compare the competency requirements for information occupations in the three countries. We also create a classification lexicon for action verbs based on Bloom’s six cognitive categories. The competency requirements have similar distributions across the six cognitive categories, reflecting the fact that the same information competencies are required in the three countries. The most needed information competencies belong to the synthesis category, with an average share of 45%. The second most needed are application and analysis, at around 20% each. The knowledge and comprehension levels together account for only about 5%. Competencies, synonymous with abilities, are the state or quality of being able to perform tasks. Therefore, competencies are observable or measurable knowledge, skills, and attitudes (KSA) (Rychen & Salganik, 2003). Knowledge and skills give a person the ability to perform tasks, while attitudes give a person the desire to perform tasks. Many countries have developed their own occupational information systems so that people can understand the competency requirements needed for each occupation, such as O*NET OnLine, developed by the U.S. Department of Labor; the National Training Information Systems (NTIS), developed by the Australia National Training Authority (ANTA); and the Occupation Information Systems (OIS), developed by the Bureau of Employment and Vocational Training of Taiwan.
Since competencies are the abilities to perform tasks, they can usually be grouped into two categories: (1) generic competencies and (2) professional competencies (European Training Foundation, 2000; Kearns, 2001). The former are more general, domain-independent competencies, such as listening, speaking, reading, writing, and problem solving, which are needed in every workplace. The latter are more specific in terms of knowledge and skills, and are domain dependent. Competency statements are used not only to describe competency requirements in skill standards systems (Aurora University, 2003; Mansfield & Mitchell, 1996), but also to describe basic academic skills, teaching objectives, assessment criteria, and learning outcomes in school systems (Ruhland & Brewer, 2001), as well as to describe personal profiles, curricula vitae, career plans, and job recruitment advertisements in job market systems (Michelin Career Center, 2004). A competency statement must describe the result of activity in performance terms. This is achieved first by using an action verb which describes the action or behavior that will produce the outcome. Second, the competency statement must describe the focus of the action – grammatically, the object of the action – to what or to whom the action is directed. If necessary, for clarity, the competency statement may also describe any conditions that qualify the action – the context in which performance takes place. To sum up, the competency statement has the form “action verb + action object + condition”, and should be written in natural language as a grammatical sentence (Norton, 2004; Mansfield & Mitchell, 1996).
For example, in the competency statement “monitor, adjust and check variables to meet product specifications”, the action verbs are “monitor, adjust, and check”, the object is “variables”, and the condition is “to meet product specifications.” The action verbs, which indicate the explicit meaning of a competency, are also called competency verbs. Action verbs must reflect the level of competency outcomes. These levels can usually be classified according to Bloom’s cognitive taxonomy, namely: knowledge, comprehension, application, analysis, synthesis, and evaluation (Bloom, 1956), from the lowest level, the simple recall or recognition of facts (knowledge), to the highest level, critical thinking (evaluation). Different occupations may need different competencies within a country, while the same occupation may need different competencies across countries because of differences in technological level. In this paper, we collect the professional competencies related to information occupations from America, Australia, and Taiwan, and then we manually extract the action verbs from the competency statements. We also build an action verb lexicon, divided into six classes based on Bloom’s cognitive taxonomy. All extracted action verbs are then classified into six categories according to the action verb lexicon, after which we compare the competency requirements for information occupations in the three countries. In the next section, we describe data collection procedures and data processing; in section 3, we present the results; and in section 4, we give conclusions. In this section, we describe the procedures for collecting the competency requirements related to information occupations from America, Australia, and Taiwan. As the basis of comparison, the competency requirements that we collect are national-level data published by each government.
First, we search for the occupational classification codes related to information occupations in each country. Then we gather the professional competency requirements corresponding to those occupational classification codes. For America, we search the Standard Occupational Classification (SOC) published by the US Department of Labor and find 17 occupations related to information occupations. Then, we log on to the website of the Occupational Information Network (O*NET OnLine) and discover that some of the information occupations have similar competency requirements. In fact, only 10 occupations provide the competency requirements, as shown in Table 1. For each SOC code in Table 1, we enter the code to search for the tasks, which are the professional competencies to be performed, and collect all the competency statements for further processing.
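The lexicon-based classification described above can be sketched as follows. The lexicon here is a small hypothetical fragment, not the study's actual lexicon; the example statement is the one quoted in the text.

```python
# Illustrative sketch: classifying competency-statement action verbs into
# Bloom's six cognitive categories with a small lexicon.
# This lexicon is a hypothetical fragment for demonstration only.
BLOOM_LEXICON = {
    "knowledge":     {"define", "list", "identify", "recall"},
    "comprehension": {"describe", "explain", "summarize"},
    "application":   {"apply", "operate", "use", "install"},
    "analysis":      {"analyze", "compare", "test", "debug"},
    "synthesis":     {"design", "develop", "create", "configure", "monitor", "adjust"},
    "evaluation":    {"evaluate", "assess", "check", "review"},
}

def classify_statement(statement):
    """Return the Bloom categories of the action verbs found in a statement."""
    words = statement.lower().replace(",", " ").split()
    hits = {}
    for cat, verbs in BLOOM_LEXICON.items():
        found = [w for w in words if w in verbs]
        if found:
            hits[cat] = found
    return hits

# The example statement from the text:
print(classify_statement("monitor, adjust and check variables to meet product specifications"))
```

Tallying these category hits over all collected statements, and dividing by the total verb count, yields the percentage distributions that the paper compares across the three countries.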


Expected Default Probability, Credit Spreads and Distance-to-Default

Dr. Heng-Chih Chou, Ming-Chuan University, Taiwan



This article analyzes the information content of the distance-to-default regarding a firm’s default risk. Under Merton’s (1974) option pricing model, both the relation between a firm’s expected default probability and its distance-to-default and the relation between credit spreads and distance-to-default are examined. We demonstrate that both the expected default probability and the credit spread can be expressed as analytical functions of the distance-to-default. This means that one can easily infer a firm’s expected default probability, and also its credit spread, from the value of its distance-to-default. The distance-to-default metric, proposed by KMV Corporation, is new but one of the most promising tools for modeling default risk. The distance-to-default measures how many standard deviations a firm’s asset value is away from its debt obligations. A higher distance-to-default means that the firm's asset value is further from its expected default point, and thus its expected default probability is lower; a lower distance-to-default implies that the firm's asset value is close to its default point, and thus its expected default probability is higher. However, like a credit rating, the distance-to-default does not directly tell us what the expected default probability and credit spread are. In order to extend this risk metric to a cardinal or probability measure, one alternative is to use historical default experience to determine an expected default frequency as a non-linear function of distance-to-default. According to KMV’s approach, based on its huge default database, a firm whose asset value is 7 standard deviations away from its debt obligations has a 0.05% chance of defaulting over the next year (Crosbie, 2003). Thus, the predictive power of distance-to-default rests on the assumption that past default experience is a good predictor of future default rates.
However, it is obvious that the distance-to-default contains information about a firm’s expected default probability. Unlike KMV’s regression approach, and without its database of default experience, in this article we connect the distance-to-default with the firm’s expected default probability under Merton’s (1974) model. With the same approach we also derive the relation between the distance-to-default and the credit spread. The credit spread is the difference between the yield of a risky debt and that of a risk-free debt of similar maturity. It tells us the risk premium of the risky debt, and thus it is also regarded as a credit risk metric. In its simplest form, the distance-to-default can be written as DD = (V − default point) / (σV · V), where V is the firm’s asset value and σV its volatility, and where the default point depends on alternative default-triggering conditions; the simplest choice would be the amount of debt. The distance-to-default is a normalized measure and thus may be used for comparing one company with another. A key assumption behind Merton’s model is that all the relevant information for determining relative default risk is contained in the stochastic value of the firm, the level of debt obligations, and the volatility of the firm’s asset value. The distance-to-default utilizes two primary types of information: the firm’s financial leverage and the volatility of the firm’s asset value. The relationship between the firm’s asset value and its debt obligations is a leverage relationship, and the leverage is judged against the volatility of the firm’s asset value. It is interesting to note that Zeta’s credit risk model also considers leverage information, conveyed by book leverage and coverage ratios, as well as volatility information, conveyed by earnings stability and size (Altman, Haldeman, and Narayanan, 1977). Although distance-to-default uses the same information as the traditional Zeta model, we apply a different model to that information and derive distinct insights regarding a firm’s default risk.
In Merton’s option pricing model, the equity of a firm is viewed as a call option on the firm’s assets, because equity holders are residual claimants on the firm’s assets after all other obligations have been met. The exercise price of the call option is the book value of the firm’s debt. At maturity, when the firm’s asset value is less than the exercise price, the value of equity is zero. Similar to Crosbie (2003) and Vassalou and Xing (2004), we follow Merton's model in assuming that the capital structure of the firm includes both equity and debt, and that the market value V of the firm’s assets follows a geometric Brownian motion of the form dV = (α − λ)V dt + σV V dW, where α, λ, and σV are constants and W is a standard Wiener process. We denote by α the instantaneous growth rate of the firm's asset value, by λ the payout ratio, and by σV the instantaneous volatility of the firm’s asset value. Since the value of equity can be regarded as a call option on the firm’s assets, with B the book value of the debt and τ the time-to-maturity of the debt, we can write the value of equity as E = V e^(−λτ) N(d1) − B e^(−rτ) N(d2), where d1 = [ln(V/B) + (r − λ + σV²/2)τ] / (σV√τ), d2 = d1 − σV√τ, r is the risk-free rate, and N(·) is the standard normal distribution function.
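The relations described above can be sketched numerically. The inputs below are hypothetical, payouts are ignored (λ = 0) for simplicity, and the credit spread follows from the Merton value of the risky debt (the asset value minus the equity call, equivalently B·e^(−rτ)·N(d2) + V·N(−d1)); this is a sketch under those assumptions, not the paper's exact derivation.

```python
# Minimal sketch (hypothetical inputs, no payouts): distance-to-default,
# the expected default probability N(-DD), and the Merton credit spread.
# V = asset value, B = face value of debt, mu = asset drift,
# sigma = asset volatility, r = risk-free rate, tau = time to maturity.
from math import log, sqrt, exp, erf

def N(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def distance_to_default(V, B, mu, sigma, tau):
    return (log(V / B) + (mu - 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))

def expected_default_probability(V, B, mu, sigma, tau):
    return N(-distance_to_default(V, B, mu, sigma, tau))

def credit_spread(V, B, r, sigma, tau):
    d1 = (log(V / B) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    debt_value = B * exp(-r * tau) * N(d2) + V * N(-d1)  # risky debt price
    y = -log(debt_value / B) / tau                       # yield to maturity
    return y - r

V, B, mu, sigma, r, tau = 140.0, 100.0, 0.08, 0.25, 0.04, 1.0  # hypothetical firm
dd = distance_to_default(V, B, mu, sigma, tau)
print(f"DD = {dd:.2f}, EDP = {expected_default_probability(V, B, mu, sigma, tau):.4%}, "
      f"spread = {credit_spread(V, B, r, sigma, tau):.4%}")
```

Note the monotone mapping the article emphasizes: raising the distance-to-default (for example, by lowering sigma) lowers both the expected default probability and the credit spread.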


Yearning for a More Spiritual Workplace

Dr. Joan Marques, Woodbury University, CA



Spirituality in the workplace is a term that, for some, has merely meant yet another buzzword in the business environment but that, fortunately, for an increasing number of business executives and workers at various levels, is emerging into a serious trend that can no longer be pushed aside with an annoyed shrug or rejected with the cry that it is just another disguise for bringing religious practices into work environments. This paper reviews three main insights that have arisen since spirituality in the workplace became such an extensively discussed topic, and subsequently elaborates on some major advantages of applying this mindset versus some major disadvantages of refraining from doing so. The paper finally examines one of the main reasons why today’s corporate workplaces remain unspiritual. 'Treat people as if they were what they ought to be and you help them to become what they are capable of being.' Johann Wolfgang von Goethe (1749-1832). The multiple publications and presentations on this topic have, by now, educated readers enough about the difference between spirituality and religion, and at the same time have brought about some significant realizations within the minds of members of Corporate America.
The first and least complicated insight that the American workforce has picked up on is the acknowledgement that something is wrong with the majority of our work environments: more and more people want to feel comfortable and important in their workplace, and don't want to be considered yet another name tag with yet another set of functions to fulfill. Workers want to be recognized for who they are: people, with families, ups and downs, skills and talents, and diverging, and oftentimes very useful, perspectives on matters. The second and slightly more comprehensive realization is that the implementation of spirituality in the workplace is not happening as smoothly and rapidly as may have initially been expected. This unfortunate setback has a number of obstinate reasons at its core: cultural values and social trends that have been in place for almost a century, and that are therefore very hard to correct. The individualistic mindset of the average U.S. corporate worker, and its encouragement from childhood on, immediately comes to mind. While spirituality in the workplace calls for an interconnected approach and an enhanced level of trust among workers at various levels, the bare-bones reality is that we learn not to trust anyone but ourselves, certainly in the workplace where, as we learn, "everyone may be out for your position." The third realization, although more subtle in nature, may be the hardest to overcome on our way toward comprehensive implementation of spirituality in the workplace: it pertains to the human tendency to surround ourselves with similar-looking and similar-reasoning associates, because this guarantees faster decision-making, less time invested in learning about each other's perspectives, and a higher level of reflectivity. In other words, it is the human inclination toward homogeneity as opposed to diversity.
There has been much upheaval with regard to the implementation of this phenomenon, which can now consequently be traced as a particular point of attention on practically all contemporary corporate websites. Unfortunately, the comfort zone of remaining surrounded by kindred individuals, predominantly based on backgrounds and looks rather than on mindsets, has turned out to be more persistent and insurmountable than one could initially have estimated. Reviewing the trends since the 1940s and 1950s, running from company picnics to anti-smoking programs and on-site yoga sessions, Raizel (2003) asserts that, in contemporary times, "People have made the link between mental health and productivity and absenteeism--and the whole notion that people who are happy at home and happy at work are more productive in the workplace" (p. n/a). However, it is also Raizel's (2003) opinion that "the well-being of knowledge workers is suffering" (p. n/a). This author categorizes modern-day knowledge workers, unlike Peter Drucker's reference to information technology workers when he first introduced the term, as all those who put their college degree to work in any work environment. Raizel (2003) underscores that work, for these employees, can be "physically strenuous, contributing to illnesses such as heart disease and diabetes, and nervous disorders like anxiety and depression" (p. n/a). Raizel calls for business corporations not only to introduce wellness initiatives in the workplace, but particularly to ensure that these initiatives are implemented regularly. This author admits that in tight budget times, wellness programs are the first to be discontinued. Yet, Raizel warns, "a worn-out, unhealthy workforce is a costly one" (p. n/a).
This author further stresses that in many workplaces, although there are multiple wellness programs and employee facilitation projects in place, top management maintains a culture that seems to prevent workers from making appropriate use of these accommodations. And indeed: how many of us have not faced the stress of wanting to spend some more time with our loved ones versus participating in an immense project at work that just has to be finished before the deadline?

Disadvantages of refraining from implementing spirit at work

Gul and Doh (2004) alert us that "Despite an extensive set of critiques and criticisms offered by scholars and practitioners, most modern organizations remain devoid of a spiritual foundation and deny their employees the opportunity for spiritual expression through their work" (p. 128). These authors stress that this attitude of neglect and ignorance by today's corporations brings about higher costs and damages than they are willing to acknowledge. The most obvious costs and damages that come to mind are those of hiring and training new entrants to the workplace at a higher pace, because of the high turnover rates in these workplaces. The high level of absenteeism that is also the norm in these work environments is another factor to consider: because people endure higher levels of stress and resentment in these unpleasant, high-pressure workplaces, they use every opportunity to stay away. Worse, their indignation at having to perform in such an environment creates various psychosomatic symptoms in them, providing them with genuine reasons to stay at home. The increasing number of absentees places excessive pressure on the employees who are at work, so that they, in turn, get overworked and discontented with their workplace, and the downward spiral is established.
One should not underestimate the levels of aggravation that emerge among employees who continuously have to fill in for absent colleagues, and the reasons that this regular double duty for the same pay ultimately gives these workers to look for another job.


The International Harmonization of Accounting Standards: Making Progress in Accounting Practice or an Endless Struggle?

Kellie McCombie, University of Wollongong, Australia

Dr. Hemant Deo, University of Wollongong, Australia



This research paper aims to use a Foucauldian theoretical framework to explain Australia's attempts to develop a set of global accounting standards. The involvement of the current Australian Federal Government (AFG) in the standard setting process is crucial to this understanding. The argument put forward by the AFG is seen as one constructed according to the "totalizing" discourse of globalization. The power and knowledge of the Australian Accounting Profession (AAP) and the AFG are highlighted using the Foucauldian framework, providing a means by which this process of harmonization can be appreciated. Australia's international harmonization (IH) project will be explored, with particular emphasis on the AFG's involvement. This research paper provides an understanding of the Foucauldian theoretical framework, highlighting the significance of discourse, and of power and knowledge relationships studied in an archaeological and genealogical context. This allows a discussion and analysis of the IH project in Australia, with special attention given to the roles of the AAP and the AFG in creating and sustaining a globalization discourse for the accounting and business community, through the interplay of power and knowledge. The paper reveals how proposed benefits of IH have been promulgated in order to promote the project. These benefits are shown to be based on the neo-liberal version of globalization. In a final section, this research paper critiques the proposed benefits of the globalized accounting discourse that business in Australia now faces. It is concluded that the IH project in Australia can best be described as an endless struggle to dominate accounting discourse, rather than as representing progress in accounting practice. Accounting standards provide the accountant with a guideline for reporting economic transactions and events for an organization.
Accounting standards in Australia have legal backing through the Corporations Act (CLERP, 1997) and are binding on members of both professional bodies (ASCPA & ICAA, 2004). The accounting standards are also described "as a piece of delegated legislation…parliament has given the power of making accounting standards to a body that has experts on it rather than developing the documents itself as a body of legislators" (Ravlic, 2003). The number of companies that have to apply the standards in preparing financial reports is therefore quite significant, of which listed companies are only a part. How economic events and transactions should be recorded can vary from country to country. Australia embarked on a project of internationalizing its national accounting standards in the 1970s. This project was initiated by the Australian Accounting Profession (AAP), which consisted of the Institute of Chartered Accountants in Australia (ICAA) and the Australian Certified Practising Accountants (CPA). By the 1990s the Australian Federal Government (AFG) was taking an active role in standard setting, which resulted in the "full adoption of International Financial Reporting Standards (IFRS) of the International Accounting Standards Board (IASB)" (Parker, 2002, p. 36). The overall implications that this project poses for the accounting profession and the business community are important both for the Australian economy and for international financial markets. The stated benefits of IH for the Australian economy are couched in the "totalizing" discourse of globalization (Boyce, 2002). This is also in line with the argument that the AFG applies a neo-liberal discourse to all policy-making, including the IH project for Australia. The issue of globalization has itself generated intense debate (see Hardt and Negri, 2001; Held and McGrew, 2003; Held, 2004; Quiggin, 2001; Sheil, 2001). Quiggin (2001, p. 11) states, "the term 'globalisation' obscures as much as it clarifies".
As opposed to earlier versions of globalization, the 1970s version has a tendency to obscure the hidden agenda of neo-liberalist discourse. It is difficult to separate the debate on globalization from a neo-liberalist discourse; the confusion over the term itself adds another dimension to the debate. Given its current usage, Sheil (2001, p. 7) concludes that "globalisation has, nonetheless, remained largely elite, top-down term". This paper supports the idea that defining terms matters; the meanings adopted by authoritative speakers, and presented in public policy, have a large impact on how an issue is viewed. The issue of discourse as a significant source of power has been discussed by other researchers in accounting (Clegg, 1997). Arrington (1990, p. 6) argues that "political factors can become so dominant that the discourse within a discipline can have virtually no resemblance to the discourse of the broader academy". This is true of the discourse surrounding globalization: because the discourse is heavily influenced by the neo-liberal movement, it becomes dominant, and it ceases to resemble the discourse of the broader academy interested in globalization. Arrington (1990), although not discussing globalization or IH, implies that rhetoric is used to hide the political agenda of a particular group (he discusses modernist accountants using positive theory to hide the neo-liberal politics, or "libertarian economic voodoo", underlying the dominant discourse in reporting standards). This research paper firstly introduces the framework offered by Foucault. A Foucauldian theoretical framework is applicable to explaining the IH project in Australia, given its emphasis on the notions of discourse, and of power and knowledge. The influence of the AAP and the AFG highlights the interplay of power and knowledge that ultimately creates a discourse.
Foucault argues that "discourse is not a translation of domination into language but is itself a power to be seized" (Hooper & Pratt, 1995, p. 14). Secondly, this research paper will trace the development of the IH project from an Australian perspective, given the importance of archaeology and genealogy in Foucault's theoretical framework. Specifically, the paper will address two phases: the AAP attempt, and the AFG attempt, at globalizing Australian Accounting Standards (AAS). In the final sections of this paper, evidence of the creation of a totalizing discourse is revealed to be both empowering (for the AFG, the ASX, and multinational companies) and disempowering (for some reporting entities and the business world in general). Final conclusions will be offered.


The Impact of Environmental Characteristics on Manufacturing Strategy under the Guidance of Cleaner Production Principles

Dr. Jui-Hsiang Chiang and Dr. Ming-Lang Tseng, Toko University, Taiwan



The intense competition and environmental protection pressures in the current Taiwan marketplace have forced firms to re-examine their current manufacturing strategies. All manufacturers are struggling with market competition and with growing awareness of green production principles. This research draws on various previous studies to organize a questionnaire and to develop a research design for examining causality. The study surveys 25 ready-food manufacturing companies in the Taipei area to determine the model under these circumstances, and employs statistical tools such as the independent-sample t test, reliability scales, and path analysis to determine the causal effects on manufacturing strategy. Ultimately, the results demonstrate the total direct effect and total indirect effect of the exogenous environment and cleaner production principles on manufacturing strategy, and indicate that the exogenous environment and cleaner production principles are vital to manufacturing strategy. The intensified competition in a number of global manufacturing industries has triggered renewed interest in the manufacturing function and in the contribution that manufacturing strategy can make to a company's competitiveness. The exogenous variables of the model are three dimensions of the environment: dynamism, munificence, and complexity (Fluent, 2004). These attributes represent the main characteristics of the exogenous environment, considering the resource dependence of manufacturing strategy under the guidance of cleaner production principles. Such attributes have frequently appeared in the literature for analyzing the effects of the environment on organizations (Scutcliffe, 1998; Boyd, 1993). The proper management of manufacturing requires that appropriate strategies be formulated to govern its operations.
Such strategies should consist of coordinated objectives and a strategic plan, with the purpose of securing medium- and long-term sustainable competitive advantages over the firm's competitors (Tseng & Chiu, 2004). Sustainable development calls for the preservation of resources for future generations while the present generation continues its growth and development. The manufacturing industry, however, must not only integrate manufacturing strategy into its business system but also take environmental protection as the guidance for its future operation (Azzone, 1998; Ward, 1996). This research focuses on cleaner production principles as the guidance of manufacturing strategy (Chiu & Tseng, 2004) and on the impact of the exogenous environment. The manufacturing strategy process variables proposed in this study find support in a previous study (Ho, 1996). They are the role of the manufacturing function, manufacturing planning activities, environmental interaction, and worker participation. Manufacturing strengths and weaknesses are currently critical factors in determining the most effective bases on which a company should attempt to compete in the marketplace, given external information (Hill, 2000). As such, it is essential that efforts to build on strengths and minimize weaknesses are driven by, and integrated with, the organization's overall competitive strategy (Porter, 1980). In addition, the increasing cost and sophistication of modern technology, changes in the nature and mix of human skills required, and the emergence of new and radical conceptual developments in manufacturing management philosophies and techniques have drastically increased the importance of adopting a long-term strategic view of the development of the manufacturing function, and of the ways in which it can be integrated with, and can support, overall business strategy.
Traditionally, under Taylor's scientific management, manufacturing decisions have been taken in an operational framework defined by internal performance standards such as machine downtime, scrap rates, and work-in-process inventories. Hence, manufacturing managers are often driven to formulate various programs of productivity improvement in order to be globally competitive (Bolisani, 1996). Swamidass (1987, 1988, 1990, 1993, 1999) defines manufacturing strategy as the development and deployment of manufacturing capabilities in total alignment with the firm's goals and strategies. Wheelwright (1984) states that an effective manufacturing strategy is not necessarily one that promises maximum efficiency or engineering perfection, but rather one "that fits the business; that is, one that strives for consistency between its capabilities and policies and the business's competitive advantage", though all these efforts are targeted at becoming a world-class manufacturer. As for manufacturing planning activities, manufacturing strategy research needs to move away from studying only the relationships of manufacturing structures to performance and toward studying the core capabilities that certain structural and infrastructural forms encourage. Strategic management research indicates the existence of diverse product differentiation strategies (Young et al., 1996). The concept of cleaner production was founded by UNEP in 1989, along with other terms similar in meaning such as eco-efficiency, green productivity, and pollution prevention. UNEP (2002) (Kjaerheim, 2004) provides a clear definition of cleaner production, which states that cleaner production refers to two series of controls: one over the whole production process and one over the provision of services.
For production processes, cleaner production means using energy and resources efficiently in order to eliminate toxic raw materials, and to reduce both the amount and the toxicity of all emissions and wastes before they leave the production process; thus, production technology improvement or the adoption of new environmental technology helps to increase environmental protection. For products themselves, cleaner production means reducing the products' environmental impact throughout their entire life cycle, from raw material extraction to ultimate disposal. Furthermore, cleaner production has become the fundamental pattern for many industries in the 21st century (Zhou, 2001). Prior to all the theories and implementation, the awareness level of cleaner production (CP) is significant for manufacturers forming an initial view of CP, and the program helps both environmentally and economically (Andrew et al., 2002). Production basically covers everything from input to process to output. Zilahy (2004) discussed the critical decision of technology application with regard to environmental issues. There are many benefits that a company can gain from using cleaner production methods. When applying CP, many improvements can be made in the production process at zero or very little cost. This improves both a company's profitability and its environmental compliance. The benefits are: 1) improved efficiency of the firm; 2) lower costs; 3) conservation of raw materials and energy; 4) improved compliance with market requirements; 5) an improved environment; 6) better compliance with environmental regulations; 7) a more cohesive working environment for laborers; and 8) a better public image for the company (Halme, 2002).
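The direct/indirect-effect decomposition that path analysis produces can be illustrated with a small simulation. This Python sketch is not the study's analysis: the data, path coefficients, and variable names are invented, and it simply shows how a total effect splits into a direct path and an indirect path through a mediator (here, cleaner production principles mediating between the exogenous environment and manufacturing strategy).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical standardized scores: exogenous environment (X),
# cleaner production principles (M), manufacturing strategy (Y)
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.5, size=n)            # path X -> M
Y = 0.3 * X + 0.5 * M + rng.normal(scale=0.5, size=n)  # paths X -> Y and M -> Y

def ols_slopes(y, *regressors):
    """OLS slope coefficients of y on the given regressors (intercept dropped)."""
    Z = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[1:]

a, = ols_slopes(M, X)               # path coefficient X -> M
c_prime, b = ols_slopes(Y, X, M)    # direct path X -> Y, and path M -> Y

direct = c_prime
indirect = a * b                    # effect transmitted through the mediator
total = direct + indirect           # total effect of X on Y
```

In linear path models estimated by OLS, the total effect recovered this way equals the slope from simply regressing Y on X, which is why path analysis can report direct and indirect effects that add up consistently.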


The Relationships Between Explicit- and Tacit-Oriented KM Strategies and Firm Performance

Dr. Halit Keskin, Gebze Institute of Technology, Turkey



In this study knowledge is considered as explicit and tacit; in line with this, knowledge management (KM) strategies are classified into two categories, explicit-oriented and tacit-oriented, and the relationships between these variables and firm performance are investigated. Environmental factors are used as moderators between KM strategies and firm performance. According to the regression analyses, explicit and tacit KM strategies have positive effects on firm performance, and the impact (magnitude) of the explicit-oriented KM strategy on firm performance is higher than that of the tacit-oriented one. It was also found that the greater the environmental hostility, the stronger the relationship between explicit- and tacit-oriented KM strategies and firm performance. Disappearing boundaries, globalizing competition, and rapidly changing technology and business life lead the economy in a knowledge-based direction (Clarke, 2001). While factors of production such as labor and capital were the central assets in the traditional economic structure, knowledge has come on the scene as a factor in itself and has become the most important one (Cliffe, 1998; Hansen, Nohria & Tierney, 1999; Davenport, 1997). In this sense, firms have become much more interested in stimulating knowledge, which is considered the greatest asset for their decision making and strategy formulation. For example, Drucker (1993) stated that "We are moving to a society in which the basic resource of economy is knowledge, instead of capital, labor and natural resources". As a result, it is necessary to manage knowledge effectively in the new economy, because the achievement of a sustained competitive advantage depends on a firm's capacity to develop and deploy its knowledge-based resources (Perez & Pablos, 2003). To this end, firms have to adopt an appropriate KM strategy with regard to their knowledge entity.
For instance, Choi and Lee (2002) indicate that applying tacit- and explicit-oriented strategies is imperative for the performance of large firms in western countries. However, research on the impact of tacit- and explicit-oriented KM strategies on firm performance in SMEs in developing countries is interestingly scant. Although the basic concepts and principles of KM are similar for small and large organizations, there is a difference in the value placed on systematic KM practices like formalized environmental scanning and computer-based knowledge-sharing systems (Lim & Klobas, 2000; Kailer & Scheff, 1999). In general SMEs depend on collaboration with external know-how experts. Although there is no systematic environmental scanning, SMEs are found to be very reliant on the external environment for information; they depend on the experiences of others to serve as benchmarks against which they measure their performance. Knowledge acquisition in SMEs usually occurs through close contact between senior staff and their clients and suppliers; there are no systematic methods for environmental scanning. There are no formal methods or systems for internal knowledge processing and no tools for storing knowledge, because these are not appropriate, or are too expensive, for SMEs. Due to the lack of knowledge-sharing systems, knowledge related to an organization's core competencies is held as tacit knowledge in the minds of key employees, so SMEs are very sensitive to the loss of employees (Lim & Klobas, 2000; Kailer & Scheff, 1999). Also, since SMEs are very vulnerable to environmental factors, environmental contingencies on the relationship between tacit- and explicit-oriented KM strategies and firm performance should be investigated in an SME context in developing countries.
Accordingly, in this paper the effects of KM strategies on firm performance in Turkish SMEs are investigated, with environmental turbulence and intensity of market competition taken as moderators. Knowledge is an organized combination of data, assimilated with a set of rules, procedures, and operations learnt through experience and practice. According to Nonaka (2000), knowledge is a justified true belief and skill. Knowledge, which can be identified as meaningful information, is embodied and put in progress in the minds of the people who have it. It can also be found in routines, processes, applications and norms, as much as in documents and stores, in an organization (Davenport & Prusak, 1985; Bhatt, 2001). Knowledge is classified into two types, tacit and explicit, by Polanyi (1966, pp. 135-146). Explicit knowledge is the type of knowledge that can be easily documented and shaped (Choi & Lee, 2003). It can be created, written down, transferred or followed among organizational units verbally or through computer programs, patents, diagrams and information technologies (Choi & Lee, 2003; Perez & Pablos, 2003). Polanyi expresses tacit knowledge as "we can know more than we can tell" (Polanyi, 1966, p. 136). This form of knowledge: (i) is embodied in mental processes; (ii) has its origins in practices and experiences; (iii) is expressed through the application of abilities; and (iv) is transferred in the form of learning by doing and learning by watching (Choi & Lee, 2003). For instance, Howells (1996) identified tacit knowledge as "uncodified and unembodied knowledge (know-how) that is obtained from learned behaviors and procedures through informal ways". Tacit knowledge and explicit knowledge complement each other, and both are important components of KM approaches in organizations (Beijerse, 1999).
Serban and Luan (2002) note that KM can be identified as a systematic and organized approach that ultimately leads organizations to create new knowledge, one that can manipulate both tacit and explicit knowledge and use their advantages.
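The moderation finding reported above, that greater environmental hostility strengthens the KM strategy-performance link, is conventionally tested with a regression that includes an interaction term. The sketch below is illustrative only: the data are simulated, and the variable names and coefficients are hypothetical, not the study's actual estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical standardized scores: explicit-oriented KM strategy (km),
# environmental hostility (host), and firm performance (perf)
km = rng.normal(size=n)
host = rng.normal(size=n)
# Performance depends on KM strategy, and more strongly so under hostility
perf = 0.4 * km + 0.1 * host + 0.3 * km * host + rng.normal(scale=0.5, size=n)

# Moderated regression: intercept, main effects, and the km * host interaction
Z = np.column_stack([np.ones(n), km, host, km * host])
beta, *_ = np.linalg.lstsq(Z, perf, rcond=None)
b0, b_km, b_host, b_inter = beta
# A positive b_inter means the KM-performance slope grows with hostility
```

The moderation hypothesis is supported when the interaction coefficient is positive and significant; in applied work one would also report its standard error rather than the point estimate alone.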


The Cultural Dimension of Technology Readiness on Customer Value Chain in Technology-Based Service Encounters

Chien-Huang Lin, National Central University, Taiwan 

Ching-Huai Peng, National Central University, Taiwan 



Most of the extant literature discusses how technology can be successfully infused into the newly emergent service setting. Only a few articles, positing that customers might be reluctant to interact with technology, discuss the issue from the customer's standpoint. Investigating how country-level variables, such as cultural dimensions, influence the chain is also important for international marketing strategy planning. A conceptual model of the customer technology-based service value chain that integrates micro- and macro-level perspectives is developed, in which possible influences from customer technology readiness and from national culture dimensions are also discussed. As technology advances daily, many traditional human-to-human service encounters are now replaced by human-to-machine service encounters, where technology plays an important role in delivering service to the customer. Hence, it is valuable to distinguish, within the conventional service value chain, how various customers' beliefs toward technology affect their service perceptions and consequent behaviors. Furthermore, in a global environment, service firms compete with one another beyond country boundaries. This research reviews papers in related domains and constructs a conceptual framework for better explaining consumer behavior in technology-based service encounters from a cross-cultural point of view. From a marketing viewpoint, the quality-value-loyalty chain (Parasuraman and Grewal 2000) is an important antecedent of firm profits. Firms with higher performance in the chain are thought to gain higher profits than firms with lower performance. Service quality, then, is the most important root of firm profits for transactions that are not purely physical products.
Measuring service quality was not easy, due to its intangibility, inseparability, variability and perishability, until the well-known SERVQUAL, a multiple-item scale, was developed to measure consumers' perceived service quality (Parasuraman, Zeithaml and Berry 1988; Parasuraman, Berry and Zeithaml 1991). SERVQUAL captures how customers perceive service quality along five distinct dimensions: reliability, responsiveness, assurance, empathy and tangibles, where reliability is the most important factor to customers and tangibles the least important (Berry, Parasuraman, Zeithaml and Adsit 1994). SERVQUAL has been cited and replicated widely in marketing studies, for interpersonal service settings in particular; many researchers have attempted to test and adapt the SERVQUAL instrument in various settings (e.g., Fick and Ritchie 1991; Lewis 1991; Young, Cunningham and Lee 1994; Boshoff and Tait 1996; Espinoza 1999; Garland, Tweed and David 1999). However, there are still some criticisms of this instrument (see, e.g., the detailed reviews of SERVQUAL studies by Asubonteng, McCleary and Swan [1996] and by Llosa, Chandon and Orsingher [1998]). Recently it has not been unusual to see researchers, in order to overcome the situational limitations of SERVQUAL, trying to identify critical service quality dimensions from the very beginning. They often undertake qualitative research for certain industries (e.g., Johnston 1995), and some even develop scales with quantitative methods afterwards (e.g., Dabholkar, Thorpe and Rentz 1996; Shemwell and Yavas 1999; Burgers, de Ruyter, Keen and Streukens 2000; Mentzer, Flint and Hult 2001; van Riel, Liljander and Jurriëns 2001; Janda, Trocchia and Gwinner 2002). Generally, the original five-factor structure does not seem to fit well in all settings.
This is especially true when customers interact with technology instead of with service personnel (Parasuraman and Grewal 2000); modifying SERVQUAL, or even developing a new scale based on SERVQUAL to fit a certain context, is inevitable. In the long list of studies that build their own instruments, two are noteworthy. One is a study investigating consumer evaluations of e-services from a portal site (van Riel, Liljander and Jurriëns 2001), where the researchers find three dimensions, core service, supporting services, and user interface, that consumers use to evaluate e-services provided in a medical portal site. In the other study, a five-factor internet retail service quality (IRSQ) scale is developed (Janda, Trocchia and Gwinner 2002). The five dimensions are performance, access, security, sensation and information. The instruments in these two studies are somewhat different, but their items do indeed partially overlap. It is obvious that the technology service quality domain is still in its preparadigmatic phase, in which no single instrument dominates across all settings, although the studies are converging on the same orientation. Another topic related to service quality attracting scholars' attention in this discipline is customer perceived value (e.g., Parasuraman 1997; Slater 1997; Woodruff 1997; Parasuraman and Grewal 2000; Slater and Narver 2000). The generally agreed concept of customer value is the "give-versus-get" trade-off that customers evaluate for a transaction. The "get" component is simply the concept of "quality", where service quality is a much more important competitive advantage for firms than product quality in the framework of Parasuraman and Grewal (2000). Ideally, the measurement might be replicated or adapted from the above-mentioned five-dimension SERVQUAL.
The "give" component is the concept of "customers' perceived sacrifice." The sacrifice includes not only the substantive "monetary price" customers pay for a transaction, but also the intangible "nonmonetary price" of "time cost," "search cost" and "psychic cost" related to a transaction (Babin and Darden 1995; Zeithaml 1988). Another, much broader definition of customer value, by Woodruff (1997), is: customer value is a customer's perceived preference for and evaluation of those product attributes, attribute performances, and consequences arising from use that facilitate (or block) achieving the customer's goals and purposes in use situations.
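The SERVQUAL measurement logic discussed above is commonly operationalized as per-dimension gap scores: mean perception rating minus mean expectation rating, with negative gaps flagging where service falls short. The following Python sketch uses invented 1-7 Likert ratings purely for illustration; the dimension names are SERVQUAL's five, but the items and numbers are hypothetical.

```python
# Hypothetical 1-7 Likert ratings for one respondent, three items per
# SERVQUAL dimension; all values are invented for illustration.
expectations = {
    "reliability":    [7, 7, 6],
    "responsiveness": [6, 6, 5],
    "assurance":      [6, 5, 6],
    "empathy":        [5, 5, 5],
    "tangibles":      [4, 5, 4],
}
perceptions = {
    "reliability":    [5, 6, 5],
    "responsiveness": [5, 5, 5],
    "assurance":      [6, 5, 5],
    "empathy":        [5, 4, 5],
    "tangibles":      [4, 4, 4],
}

def gap_scores(percept, expect):
    """Per-dimension SERVQUAL gap: mean(perception) - mean(expectation).

    Negative gaps mark dimensions where perceived service falls short
    of what the customer expected."""
    return {dim: sum(percept[dim]) / len(percept[dim])
                 - sum(expect[dim]) / len(expect[dim])
            for dim in expect}

gaps = gap_scores(perceptions, expectations)
worst = min(gaps, key=gaps.get)   # dimension with the largest shortfall
```

With these invented ratings, reliability shows the largest negative gap, which mirrors the literature's point that reliability is the dimension customers care about most and therefore the costliest place to fall short.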


Applying Knowledge Management System with Agent Technology to Support Decision Making in Collaborative Learning Environment

Rusli Abdullah, Universiti Putra Malaysia, Malaysia

Shamsul Sahibudin, Universiti Teknologi Malaysia, Malaysia

Rose Alinda Alias, Universiti Teknologi Malaysia, Malaysia

Mohd Hasan Selamat, Universiti Teknologi Malaysia, Malaysia



A knowledge management system (KMS) with agent technology plays a major role in managing knowledge to support decision making among communities of practice (CoP) in a collaborative learning environment. The service ensures that knowledge, as a corporate asset, can be acquired and disseminated anytime and anywhere, so that it can be reached and shared among CoP members. Agent technology is also used to speed up and raise the quality of service in the KM processes of a collaborative learning environment: creating, gathering, accessing, organizing and disseminating knowledge. This paper describes the concept of a KMS and its relationship with agent technology, and demonstrates how it can be used to support community members in making decisions for their learning purposes. As a case study, the KM system is implemented using groupware, namely the Lotus Notes product. Emphasis is given to the algorithmic process specifications of the agents that help members make decisions and work collaboratively. The critical success factors (CSF) that ensure KMS initiatives succeed in a collaborative learning environment are also identified and discussed.  Knowledge is information that is contextual and relevant to an event, and actionable by an actor such as a human or an agent; it can also be seen as “information in action,” as proposed by O’Dell (1998). The relationship between data, information and knowledge is shown in Figure 1. According to Nonaka and Takeuchi (1995), knowledge can be categorized into two types, explicit and tacit; the differences between them are shown in Table 1. Knowledge is therefore an asset that should be managed well to become more valuable and more meaningful. 
Understanding knowledge is essential before defining what knowledge management really means. Knowledge management systems (KMS) have become a common medium for acquiring and disseminating knowledge, using IT as an enabling tool so that everyone can reach, share and use knowledge from any workplace in the world at any time (Alavi and Leidner, 2001; Andriessen, 2002). To speed up and raise the quality of service for communities of practice in an organization, agent technology (AT) can be used to help with search and retrieval in the KMS, as well as to assist in combining knowledge, thus leading to the creation of new knowledge (Barthes and Tacla, 2002; Barbuceanu and Fox, 1994).  What is an agent? Webster's dictionary defines an agent as “one who acts for, or in the place of, another, by authority from him; one intrusted with the business of another; a substitute; a deputy ...”. This definition is largely not applicable to software agents, except possibly for personal-assistance agents.  A more suitable definition is given by Russell and Norvig (2002): “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.”  
Similarly, in their article on a taxonomy of agents, Franklin and Graesser (1997) define an autonomous agent as “a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future.” In information technology, an agent can be described as a computer program that possesses certain properties and, importantly, communicates for the purposes of co-operation and negotiation, learns to improve its performance over time, and acts autonomously, meaning that it can act pro-actively on its environment rather than passively awaiting commands. The role of agents in knowledge management (KM), especially in a collaborative context, can be viewed in terms of service agents and personal assistants. Agent technology allows and facilitates team members working together through capabilities such as notification of certain messages or events. Service agents implement specific features such as handling documents, managing white or yellow pages, and performing web searches. Personal assistants have the responsibility of coupling human users to the knowledge management system (KMS); they also mediate communication among people, which is of prime importance. Agent technology operation in KM is shown in Figure 2: an agent acts as a communicator for the user, following the direction given and producing results when required to do so. Agents can also be categorized by their roles, such as knowledge searching, knowledge monitoring, and others (Hendriks and Vriens, 1999).
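The perceive-act cycle in the Russell and Norvig definition, applied to a KMS service agent that notifies community members of new documents, can be sketched minimally as follows. Class and method names are hypothetical illustrations, not the paper's Lotus Notes implementation.

```python
class NotifierAgent:
    """Minimal perceive-act sketch: the agent 'senses' documents newly
    added to a shared repository and 'acts' by notifying CoP members."""

    def __init__(self):
        self.seen = set()          # documents already processed
        self.notifications = []    # (member, document) pairs sent

    def perceive(self, repository):
        # Sensor: detect documents not yet seen.
        return [doc for doc in repository if doc not in self.seen]

    def act(self, new_docs, members):
        # Effector: notify every community member of each new document.
        for doc in new_docs:
            self.seen.add(doc)
            for m in members:
                self.notifications.append((m, doc))

    def step(self, repository, members):
        # One perceive-act cycle.
        self.act(self.perceive(repository), members)
```

Repeated calls to `step` act only on documents that are new since the last cycle, which is the pro-active (rather than command-driven) behavior described in the text.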


Using Balanced Scorecard and Fuzzy Data Envelopment Analysis for Multinational R & D Project Performance Assessment

Dr. Kuang-Hua Hsu, Chaoyang University of Technology, Taiwan, R.O.C.



Performance indicators are important factors in business goal setting and management performance assessment. Traditionally, companies tend to measure business performance in terms of financial data such as ROI, ROA, etc. However, those indicators are not enough for management to evaluate business performance as a whole. The Balanced Scorecard can overcome this problem by integrating all aspects of business operation: the customer aspect, the organizational process aspect, the business creation and learning aspect, and the financial aspect. In proposing the Balanced Scorecard method, we must at the same time evaluate the performance of the organization. While business performance can sometimes be expressed in clear numbers or words, most of the time it can only be described in vague language. Thus, Data Envelopment Analysis (DEA) combined with fuzzy set theory can be applied to generate objective performance indicators. This paper uses fuzzy DEA to evaluate Balanced Scorecard performance for a multinational research and development project.  A business organization’s goal is to pursue operational performance, and assessment determines how well that goal is achieved; the management performance indicator is therefore an important factor in business goal setting and performance assessment.  Operational performance must be clearly defined and explained with concrete events in order to be understood.  Marsh and Mannari (1976) defined operational performance in two parts: top-goal and sub-goal performance.  The former includes product and service output level, sales volume, and profitability.  The latter is set to help achieve the top goal: the organization reduces the work-absence rate, promotes work morale, and implements a project-proposal system in order to reach the business’s top goal.  
Hitt (1988) classified organizational performance into two categories. One is the executive policy research approach, usually used by company management and business policy researchers, in which financial data such as ROI and ROA serve as performance indicators. The other is the organization research approach, used by researchers, in which non-financial indicators such as total production output and work morale measure business operational performance. Venkatraman and Ramanujam (1986) considered operational performance to be only one part of organizational performance; they viewed performance assessment along three dimensions: financial, financial-and-operational integration, and organizational. Quinn and Rohrbaugh (1996) also proposed a three-dimension analysis model, dividing organizational performance indicators into 30 items grouped into three categories.  From the above, we find that some researchers use financial indicators for performance assessment but neglect the non-financial aspects. Focusing only on financial aspects is not enough for management to cope with the changing business environment.  Traditional financial indicators have three major drawbacks. First, they emphasize only operational results, not processes.  In today’s competitive business environment, management needs to focus not only on operational results but also on operational processes, in order to facilitate quality control and cost reduction.  Second, traditional performance assessment lacks a forecasting function.  A proper performance assessment method should cover three functions: recording operational results, explaining the cause and effect of those results, and forecasting the possible future.  The traditional method offers only the first two.  
Third, although the traditional performance assessment method is very cost-effective, it can be harmful to a company’s long-term interests.   Using only financial performance indicators may be easy, simple and cost-effective, but management may focus too much on short-term financial effects and turn down projects that would bring long-term profits to the company. Maisel (1992) argued that traditional financial assessment cannot fully align with business strategy; it may thus become an obstacle to improving a company’s competitive advantage and profitability, and to implementing its policy.  A strategic performance assessment system should integrate both financial and non-financial indicators, assess both operational results and processes, and measure organizational performance from top to bottom. Campi (1992) held that the main purpose of a performance assessment system is to help the business achieve its strategy.  A strategy-oriented performance assessment helps management evaluate the managerial system, the operation process, the operation system, the management processes, and the complexity of those processes.  Feedback from the performance assessment system provides management with timely information about whether the business operation is on the right track.  In designing a strategic performance assessment system, one should consider not only financial and non-financial indicators but also organizational hierarchy effects.  Different levels of personnel or divisions in the organization have their own unique goals to accomplish, and thus need different standards to measure their performance.  Frecka (1988) used four levels in the performance assessment process: workplace, plant, business, and market.  At the workplace level, management should focus on the performance of the operational process, such as quality, cost, and delivery.  
At the plant level, management should focus more on the performance of the whole plant; at the business level, on the performance of business divisions or units; and at the market level, on the competitive environment, the macro economy, and industry performance.  Cross and Lynch (1992) presented the concept of the “performance pyramid,” which separates the performance hierarchy into organization, business unit, operation system, and divisional work team levels.  At the organizational level, customer satisfaction, flexibility, and productivity are assessed: customer satisfaction concerns quality and delivery time, and productivity concerns the operational cycle and defect rate.  Flexibility is an indicator of performance inside and outside the organization; it means the speed of responding to market demand, including delivery time and operation cycle.  At the business unit level, short-term financial performance is important, and so is marketing performance such as market share, product renovation, and estimated market share.  At the divisional work team level, management should focus on quality, delivery time, cycle time, defects, and so on.
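The idea of combining DEA with fuzzy numbers can be illustrated in miniature. The sketch below assumes triangular fuzzy numbers defuzzified by a simple centroid and a single-input, single-output ratio model; the paper's actual fuzzy DEA formulation is richer than this, so treat the names and the model as illustrative assumptions.

```python
def centroid(tfn):
    """Defuzzify a triangular fuzzy number (low, mid, high) to a crisp value."""
    low, mid, high = tfn
    return (low + mid + high) / 3.0

def relative_efficiency(units):
    """units: {name: (fuzzy_output, fuzzy_input)} with triangular fuzzy numbers.
    Each unit's crisp output/input ratio is scaled by the best ratio, in the
    spirit of a single-input, single-output DEA model (best unit scores 1.0)."""
    ratios = {name: centroid(out) / centroid(inp)
              for name, (out, inp) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}
```

A unit whose performance is only described vaguely (e.g. output "about 10") is encoded as a triangular fuzzy number such as `(8, 10, 12)` and still yields a crisp relative-efficiency score.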


Contemporary Knowledge Management Platform - EPSS

Dr. H. W. Lee, National Chia-Yi University, Taiwan



The goal of an EPSS is to provide whatever is necessary to generate performance and learning at the moment of need.  Gery (1991) states that people have been given some of the help needed to accomplish this goal through powerful tools such as job aids and CBT. However, these tools are not an EPSS by themselves, although they can be part of one. The common denominator that differentiates an electronic performance support system from other types of systems or interactive resources is the degree to which it integrates information, tools, and methodology for the user. This paper examines the definition of an EPSS, reviews EPSS components, examines some course management tools currently used in educational programs, and explores the use of EPSS technology in education. Definitions of electronic performance support systems (EPSSs) vary, but they are generally agreed to be software programs or components that directly support a worker's performance when, how, and where the support is needed. EPSSs are intended to improve workers' ability to perform tasks, usually on a computer; they are related to, but more than, task-oriented online help.  In order to determine whether such systems can be used in education, a more complete definition is essential.  An examination of the literature will also reveal the current state of affairs regarding fully formed EPSSs in education and the adaptation of these systems to educational needs.  Finally, an effort will be made to draw conclusions that provide direction for educational settings. The integration of different tools to help the user perform a task is the key feature of an electronic performance support system (EPSS).  An EPSS is built to integrate resources and tools and to facilitate work on complex tasks (Laffey, 1995).  
It is a computer-based system that improves worker productivity by providing on-the-job access to integrated information, advice and learning experiences; it is the electronic infrastructure that captures, stores and distributes individual and corporate knowledge assets throughout an organization, enabling individuals to achieve required levels of performance in the fastest possible time and with a minimum of support from other people (Raybould, 1990).  It is also an integrated tool suite that supports the user of a complex system by providing embedded assistance within the system itself (McGraw, 1994). As a result, EPSSs are credited with reducing the time required to access information and bringing workers to an entry level of job competency (Bastiaens, Nijhof, & Abma, 1996; Tait, 1995; Lamy, 1994; Bramer & Ghenno, 1993; McGraw, 1994; Geber, 1991). The key characteristics that make EPSSs different from other computerized instruction or tools are that they (a) are computer-based, (b) provide access to the discrete, specific information needed to perform a task at the time the task is to be performed, (c) are used on the job, or in simulations or other practice of the job, (d) are controlled by the user, and (e) reduce the need for prior training to accomplish the task (Sleight, 1993). Learning may occur during the use of an EPSS, but the main purpose is to help the user perform a task and improve productivity (Witt & Wager, 1994).  The primary goal of an EPSS is to enable employees to achieve required levels of performance, resulting in higher productivity and quality of service.  An EPSS should include components such as task- and situation-specific information, task-oriented training, expert advice, customized tools, and an appropriate user interface design, and it should be available on demand at any time and any place, regardless of the situation. Carr (1991) lists four benefits of an EPSS.  
First, there is no delay between refresher training and the moment the knowledge is required.  Second, the employee always has access to the latest information and procedures. Third, expert and detailed advice is always available; and fourth, large potential savings are gained for an organization with employees in many different locations. An EPSS typically consists of four components: tools, advisory, information, and training (Sleight, 1993; Leighton, 1997; Desrosiers & Harmon, 1996; Gery, 1991; Ladd, 1993; Raybould, 1990). The advisory component is an interactive expert system, case-based reasoning system, or coaching facility that guides a user through performing procedures and making decisions, providing help whenever the user needs it.  The tools component usually consists of job aids in electronic form; examples are job-task automation, online help, custom-designed templates and forms, checklists, online calculators, spreadsheets, statistical analysis packages, etc. The information component provides all the information the users require to do their job; it consists of data and the tools the users can use to access the data.  Just-in-time and just-in-place information means quick and easy access to the right information when and where it is needed: on-line reference information (often called an "infobase"), hypertext on-line help facilities, statistical databases, multimedia databases, and case-history databases. The training component is provided so that the user can get training on demand; it usually uses traditional CBT technology.  Other examples of training tools or instructional resources are a video showing a procedure, simulations of tasks that allow the user to practice, and computer-based training such as interactive multimedia, tutorials, and scenarios. The user interface is the most important aspect of the EPSS.  
Without appropriate integration, the system will not be able to quickly or easily provide the required assistance to the user requesting system support (Milheim, 1992).   Task logic is built into the interface, and related knowledge is tightly coupled with that logic, either embedded within the display or accessible through "tell me," "guide me," and "teach me" options.
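The four-component structure described above can be sketched as a small data structure that gathers task-specific support at the moment of need. The class and method names are illustrative assumptions, not an established EPSS API.

```python
class EPSS:
    """Sketch of the four-component EPSS structure (tools, advisory,
    information, training): each component registers task-specific
    resources, and support() integrates whatever is available for a
    task at the moment of need."""

    COMPONENTS = ("tools", "advisory", "information", "training")

    def __init__(self):
        # One resource registry per component: {task: [resources]}.
        self.resources = {c: {} for c in self.COMPONENTS}

    def register(self, component, task, resource):
        # Attach a resource (job aid, infobase entry, tutorial...) to a task.
        self.resources[component].setdefault(task, []).append(resource)

    def support(self, task):
        # Integrate all components' resources for the requested task;
        # components with nothing registered return an empty list.
        return {c: self.resources[c].get(task, []) for c in self.COMPONENTS}
```

The integration emphasized in the text corresponds to `support()` always answering across all four components at once, rather than the user consulting each tool separately.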


An Empirical Investigation of Sexual Harassment Incidences in the Malaysian Workplace

Dr. Mohd Nazari Ismail, University of Malaya, Kuala Lumpur, Malaysia

Lee Kum Chee, University of Malaya, Kuala Lumpur, Malaysia



This paper presents the findings of a study investigating the factors that contribute to incidences of sexual harassment in the Malaysian workplace.  A questionnaire survey, partly based on the Sexual Experience Questionnaire (SEQ) developed by Fitzgerald et al (1988), was carried out involving 656 respondents. The findings show that sexual harassment is rampant in Malaysian workplaces and that it is aggravated by several factors related to both the organization and the individual worker.  Specifically, a working environment characterized by a lack of professionalism and by sexist attitudes biased against women makes female employees more prone to being sexually harassed.  When the various demographic characteristics were studied, the findings reveal that the women employees facing the greatest risk of sexual harassment tend to be unmarried, less educated, and Malay.  As competition intensifies at a rapid rate, workers remain an undeniably important factor in the competitiveness of modern organizations.   However, as more women enter the labor force, sexual harassment is increasingly becoming a workplace problem, affecting the competitiveness of the organizations involved.   According to a major study conducted in the United States in 1981 by the U.S. Merit Systems Protection Board of 23,000 federal government employees, as many as 42 percent of the women had experienced sexual harassment in the two years prior to the survey (cited in Stockdale and Hope 1997).  Sexual harassment also exists among women in 160 of the Fortune 500 companies surveyed (Frits, 1989): 15 percent of these workers had been sexually harassed within the past year, while 50 percent tried to ignore it.   In addition, 24 percent of victims took leave from work to avoid it, and five percent resigned after experiencing such incidents.  
Such incidents in turn result in resignations, absenteeism, and reduced productivity among victims.  A similar trend has been emerging in Malaysia as women enter the workforce in increasing numbers; by the year 2000 almost half of Malaysian women were economically active.  This, coupled with the simultaneous upward trend of women in traditionally male-dominated occupations, has set the stage for the sexual harassment threat.  As a consequence, sexual harassment has become a widespread problem in Malaysia, as recent studies show; in fact the rates of occurrence do not differ much from those found in the United States.  Between 35 percent and 53 percent of women have experienced sexual harassment at work (see, for example, Ng et al, 2003; Marican 1999; Muzaffar 1999).  This has in turn led to increased awareness of sexual harassment in the workplace and, subsequently, recognition that the problem must be formally addressed in order to safeguard the congeniality of the working environment.  The government started taking action to arrest the problem by launching the Code of Practice on the Prevention and Eradication of Sexual Harassment through its Ministry of Human Resources in 1999.  The Code was reinforced through a series of workshops for employers, and through these initial efforts it created awareness and provided guidelines for dealing with sexual harassment.  The Code has also helped clear up the confusion or ambiguity over the meaning of sexual harassment by defining it explicitly as “any unwanted conduct of a sexual nature having the effect of verbal, non-verbal, visual, psychological or physical harassment” that could be offensive, humiliating, or threatening to the victim.  It also specifies two categories of sexual harassment: sexual coercion and sexual annoyance.  Sexual coercion refers to sexual harassment that directly affects the employee’s benefits, e.g. 
salary, promotion, and other benefits, while sexual annoyance covers sexual conduct that is perceived as offensive, hostile or intimidating to the victim.  Although implementation of the Code is completely voluntary, its implication for Malaysian employers is that they must now accept some responsibility for any sexual harassment incidences within their premises.  As a consequence, disciplinary rules and penalties have begun to be installed in some companies (see Ng et al, 2003), providing victims with a means of redress.  Besides legal remedies, identifying the factors that can cause sexual harassment is deemed a necessary step toward minimizing the extent of, and ultimately even eradicating, this problem.  Several models have been postulated in the literature to explain the likely factors behind the phenomenon.  For example, the organizational model of Tangri et al (1992) suggests that several characteristics within an organization can contribute to the proliferation of sexual harassment incidences.  Their socio-cultural model contends that sexual harassment results from the dominant position of men over women in society at large: men are regarded as the key source of economic and political power, while women are seen as mere sexual objects.   Building on these sexist attitudes, Gutek proposed a sex-role spillover model, which explains a carryover from society into the workplace of gender-based expectations of behavior that are irrelevant or inappropriate to work.  Women’s expected sex roles then take precedence over their actual work roles.  This tendency holds when gender ratios are grossly imbalanced in the workplace, whether in favor of men or of women (Fitzgerald et al, 1997; Fitzgerald and Shullman, 1993; Gutek and Morasch, 1982).  
Despite the possibility that a variety of risk factors influence sexual harassment, a major gap remains in existing studies in Malaysia, as this important aspect tends to be ignored.     The purpose of this paper is thus to help fill the gap by examining some of the most likely causes of sexual harassment of female workers: the working environment in terms of the level of professionalism and sexist attitudes, manner of dress, and the demographic characteristics of the respondents.


Mortgage Decision … Lower Payment or Faster Payoff?

Dr. Ralph Gallay, Rider University, NJ



Homeowner financing decisions are complicated by the difficulty of comparing the benefits of long-term, low-monthly-payment mortgages with those of short-term, faster-payoff loans. The author presents a simplified return-on-investment perspective that allows an objective comparison of the two alternatives based on increased equity as well as reduced interest costs, relative to the extra payments made. This paper is positioned both as a borrower's or lender's tool for evaluating and explaining the relative merits of different mortgages, and as a pedagogical instrument for educators on personal financial issues. With interest rates in the United States near their lowest in over forty years, an entire new generation of homeowners is faced with the beneficial opportunity of refinancing. The ensuing discussion and calculations apply to every state in the country and affect all of the approximately 74 million owner-occupied households therein (2. Danter, p.1), many of whom are, or recently have been, engaged in seeking homeowner refinancing. In fact, “(2002) was surely one of the most memorable years ever experienced by the home mortgage market”, to quote Federal Reserve Board Chairman Alan Greenspan (1. AFSA, p.1), and refinancings represent more than sixty percent of all loans (6. MSN Money, p.1). The decision many people make in refinancing inevitably hinges largely on how monthly payments might be reduced, and most promotional efforts seem directed at emphasizing this point above all others. Many lending institutions do little to clarify the situation further; in fact, some seem intentionally to confuse and confound the consumer with their messages (5. Mokhiber, 2002). A second, less stressed concern is how quickly the loan is paid off, thereby reducing interest payments, which do relatively little to benefit the borrower. 
While on a United States itemized tax return a homeowner may qualify to deduct mortgage interest payments, usually one of the taxpayer's largest single deductions, for most this yields only about thirty cents of tax savings for every dollar of interest paid. Clearly, the best scenario is to avoid interest payments altogether unless one has a higher priority or better reason to borrow, such as for college, medical, or home improvement costs. Several studies have sought to explain the confusing alternatives homeowners face in selecting among various mortgage offerings. Most have focused on the choice between fixed and adjustable rate mortgages. In this regard, “… many borrowers are unable to correctly assess which type of mortgage had the higher expected cost … implying that consumer education programs designed to help borrowers make a mortgage choice are needed” (4. Lino 1992, p.272). While there is an abundance of studies of this type on choosing between fixed and adjustable rate mortgages, amazingly, there is almost no treatment of the advantages and disadvantages of a fifteen- versus a thirty-year mortgage. The problem for many homeowners, and one that has not been addressed, is how to compare the benefits of the lower monthly payments of longer loan periods, such as thirty years, with the benefits of the lower interest payments of shorter loan periods, such as fifteen years. One might presume that the choice between the two options would be influenced primarily by the size of the difference in monthly payments. As it turns out, however, with rates as low as they are, it is often possible to secure a 15-year mortgage with payments that are little more than what they might be for an older 30-year mortgage. Hence opportunity costs, or what else one might do with one's money, are not a main consideration if payments for both plans are comparable. 
The comparison is made more difficult because the lower monthly payments yield a very real addition to one's budget, while the reduction in interest payments is harder to envision, since it is not directly seen as more money in one's pocket. Knowing that there are some tax benefits associated with interest payments also makes it difficult to view them as undesirable. Bear in mind that this is not an exploration of the consumer's mental point of view and associated risk tolerance or comfort level with different lending instruments. This manuscript assumes that the decision maker is a mature adult who has already reached that point in the process of securing the mortgage and needs only to discern which of the several types offered would be the wisest choice. It has been suggested that, when considering a fifteen-year versus a thirty-year mortgage, one look beyond the shorter term's interest savings and the longer term's lower monthly payments. The advice is to compare the after-tax investment value of the cash flow savings of the longer term with the higher equity in the home at the end of the shorter loan, choosing the strategy that yields the greatest increase in net worth (7. Refinancing Your…Residence). While this is sound investment advice, it is fairly complex for all but the most financially sophisticated homeowners to evaluate. Moreover, it assumes that the homeowner will wisely and consistently invest the difference in monthly payments between the two mortgages, and can accurately estimate at the time of decision the future interest and tax rates affecting the outcome of such a choice. Neither assumption is very practical. As a result, the author suggests another perspective, based on a simpler calculation that makes no assumptions about the homeowner's ability to commit to future investments or about the interest and tax rates that might prevail in later years.
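The trade-off the author describes can be illustrated with the standard fixed-rate amortization formula. The rates, loan amount, and function names below are illustrative assumptions, not the paper's worked example.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization: payment = L*r / (1 - (1+r)^-n),
    where r is the monthly rate and n the number of monthly payments."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def payoff_comparison(principal, rate15, rate30):
    """Extra monthly payment required by the 15-year loan, and the total
    interest it saves relative to the 30-year loan on the same principal."""
    p15 = monthly_payment(principal, rate15, 15)
    p30 = monthly_payment(principal, rate30, 30)
    extra = p15 - p30
    # Principal cancels out of the interest difference:
    # (p30*360 - L) - (p15*180 - L) = p30*360 - p15*180.
    interest_saved = p30 * 360 - p15 * 180
    return extra, interest_saved
```

For example, on a $200,000 principal at a hypothetical 5% (15-year) versus 5.5% (30-year), the extra monthly payment is a few hundred dollars while the lifetime interest saved runs to six figures, which is the imbalance that makes the faster payoff hard to appreciate from monthly cash flow alone.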


Taiwan Multinational Companies and the Effects Fitness Between Subsidiary Strategic Roles and Organizational Configuration on Business Performance: Moderating Cultural Differences

Dr. Ming-Chu Yu, National Kaohsiung University of Applied Sciences, Kaohsiung City, Taiwan



The purpose of this study is to examine the relationships among Taiwanese overseas subsidiaries' strategic roles (including degree of integration and degree of localization), organizational configurations (including degree of resource dependence and degree of delegation), and business performance; these relationships also depend on the subsidiaries' cultural differences. Using regression analysis, we show that type of industry, stage of internationalization, degree of integration, degree of localization, and degree of resource dependence are the most important factors affecting the subsidiaries' perceived activity satisfaction. The results indicate that the sample of Taiwanese MNC affiliates falls into three subgroups (Autonomous Strategy, Receptive Strategy, and Active Strategy) depending on their global strategies: Active Subsidiaries are highly integrated and have high local responsiveness; Autonomous Subsidiaries have high local responsiveness but low integration; and Receptive Subsidiaries have low local responsiveness but are highly integrated. Corporate internationalization and globalization are focal points of the twenty-first century; the internationalization and liberalization of business activities are two important elements in the success of contemporary enterprises. Recently, enterprises in Taiwan have faced surging wages, rigorous requirements on worker benefits, and changing business climates. In addition, the establishment of regional economic cooperatives has further accelerated the pace of overseas corporate investment to gain competitive advantages and market niches. Most of these investments concentrate on Mainland China and South East Asia. There has been a profound impact on multinational corporations (MNCs) in Taiwan over the past decade.  Most MNCs operate in markets where competition is intense and the number of competitors is large. 
To succeed in these markets, MNCs must successfully implement strategies that provide them with a competitive advantage. As overseas subsidiaries grow in size and develop their own distinct resources, numerous researchers have become aware that corporate headquarters is no longer the only source of competitive edge for the MNC (Birkinshaw & Hood, 1998). Some scholars have developed the heterarchy and transnational models to explore the critical roles played by subsidiaries (Hedlund, 1986; Bartlett & Ghoshal, 1989). Furthermore, global expansion into countries with one or more overseas branches compels MNCs to pay attention to, and investigate, the effects of corporate global strategies and organizational configurations on corporate performance. This research examines the relationships among Taiwanese corporations' stages of internationalization, the patterns of subsidiary strategic roles, and the organizational configurations governing the interactions between parent companies and their subsidiaries. In striving to maintain competitive advantage and maximize global reach, MNCs are relying more heavily on the scope, nature and flexibility of the strategic and operating roles performed by their overseas subsidiaries. The subsidiary role may be defined by the corporate parent, it may be developed by the subsidiary itself, or, more likely, it may evolve from an interweaving of the two (Taggart, 1996). The primary focus of the MNC at the corporate level is its own global strategy, within which foreign subsidiaries are assumed to play a fairly passive role in developmental terms and an active role in operational terms. Among these frameworks, the best known and perhaps most strongly supported by empirical evidence are the integration-responsiveness framework proposed by Prahalad & Doz (1987) and the coordination-configuration framework developed by Porter (1986).
The former framework was adapted slightly by Jarillo and Martinez (1990) for an empirical study of 50 Spanish subsidiaries of manufacturing MNCs. They found only three classifications in the strategy space: Active Subsidiaries are highly integrated and have high local responsiveness; Autonomous Subsidiaries have high local responsiveness but low integration; Receptive Subsidiaries have low local responsiveness but are highly integrated. Given a subsidiary's need to favor either global integration or local responsiveness, or to strike some balance between the two, and the need for a proper fit both within the MNC and across its boundaries, we have reached the following hypothesis: Hypothesis 1: The global strategies of MNCs' subsidiaries will be segmented according to two dimensions of global strategy: global integration and local responsiveness. The sample of Taiwanese MNC affiliates will fall into three subgroups depending on their global strategies. These clusters will resemble the three types identified by Jarillo and Martinez (1990) (autonomous strategy, receptive strategy, and active strategy). Ghoshal and Bartlett (1991) suggested that the MNC itself is best regarded as the controller of a network of interrelated activities. The extent and form of these link-ups will rest on the resource and capability configuration of the MNC, which in turn will depend on the types of activities and the countries in which they are engaged. Sun (1997) used "perceived objective actualization" and "activity satisfaction" as performance measuring indices. Taggart (1999) pointed out that subsidiaries' different strategic roles make significant differences in their business performance. Birkinshaw and Morrison (1995) argued that matching an MNC's strategic roles and organizational configurations makes for better business performance.
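The two-dimensional strategy space described above can be illustrated with a small sketch. This is only a toy classifier, assuming 0-1 scores on each dimension and a hypothetical midpoint cutoff; it is not the measurement instrument used by Jarillo and Martinez (1990):

```python
def classify_subsidiary(integration, responsiveness, cutoff=0.5):
    """Classify a subsidiary on the two global-strategy dimensions.
    The 0-1 scales and the midpoint cutoff are illustrative assumptions."""
    high_i = integration >= cutoff
    high_r = responsiveness >= cutoff
    if high_i and high_r:
        return "Active"       # highly integrated, high local responsiveness
    if high_r:
        return "Autonomous"   # high responsiveness, low integration
    if high_i:
        return "Receptive"    # highly integrated, low responsiveness
    return "Unclassified"     # low on both dimensions

# Hypothetical scores for three subsidiaries:
roles = [classify_subsidiary(0.8, 0.9),
         classify_subsidiary(0.2, 0.9),
         classify_subsidiary(0.9, 0.1)]
```

In the empirical study the three clusters emerged from the data rather than from fixed cutoffs, but the sketch captures how the two dimensions jointly define the role types.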
We have reached the following hypotheses: Hypothesis 2: There are significant differences in organizational configuration and business performance among subsidiaries with different strategic roles. Hypothesis 3: A subsidiary's strategic role has a significant influence on the subsidiary's business performance. Hypothesis 4: A subsidiary's organizational configuration has a significant influence on the subsidiary's business performance. Hofstede (1983) identified four dimensions along which managers in multinational corporations tend to view cultural differences: power distance; individualism/collectivism; uncertainty avoidance; and masculinity/femininity. Baliga & Jaeger (1984) believe that as the cultural differences between a parent company and its subsidiaries become large, there will be more uncertainty in decision making and a significant negative effect on the subsidiary's business performance. Jaw (1994) found that an interaction exists between cultural difference and resource dependence and that this interaction influences business performance. We have reached the following hypotheses:


An Analysis of Global Retail Strategies: A Case of U.S. Based Retailers

Soo-Young Moon, The University of Wisconsin Oshkosh, Oshkosh, WI



Since international marketing channels play an important role in distributing products from international marketers to consumers around the world, and the impact of some global channel members extends far beyond the traditional functions of retailers, this study reviews the global strategic issues of U.S.-based retailers. Specifically, this study analyzes the global strategies of Wal-Mart, Home Depot, Kroger, Target, and Sears based on the concept of generic strategic options. Overall, the study found that globalization is not the answer for all retailers. If a retailer finds that globalization is consistent with its strategic advantages, it is critical to analyze this option along with others. Otherwise, the retailer should seek other alternatives. However, all major retailers may have to expand their operations to the global market due to its growth potential and the heavy competition in the U.S. market. Since Levitt (1983) advocated the importance of globalization in international business, much research (e.g., Backhaus, Muhlfeld, and Van Doorn, 2001; Czinkota and Ronkainen, 2003; Hong, Pecotich, and Schultz, 2002; Laroche, Kirpalani, Pons, and Zhou, 2001; Shoham, 2002; Zou and Cavusgil, 2002) has been conducted empirically or theoretically to identify which strategy, standardization or customization, is more appropriate for global expansion. However, most of the focus has been on how to market products in the global market using a standardization or customization approach. The literature review in this field indicates that, though marketing channels are an indispensable part of global marketing, there has not been enough research to identify the major channel issues.
Furthermore, some global channel members such as Wal-Mart, Carrefour, and Metro AG have become global retailers, and their impact extends far beyond the traditional functions of retailers. Thus, the objective of this study is to review the global strategic issues of U.S.-based retailers. Specifically, this study analyzes the global strategies of Wal-Mart, Home Depot, Kroger, Target, and Sears based on the concept of generic strategic options. All firms have four major strategic options: market penetration, product expansion, market expansion, and diversification (Aaker, 1995). Though these options originated with "goods" producers, they can easily be transferred to retailers for their strategic direction. Market penetration means that a retailer expands its business with its existing retail format (e.g., department store or convenience store) in its existing market. Here, the main survival tools are traditional marketing measures such as improved customer service, lower prices, wider product assortments, heavy in-store and non-store promotions, and better locations. As long as the market grows fast enough to meet the target sales of all existing local retailers, this option can be acceptable. However, the effectiveness of this option is questionable when the equilibrium of the market is broken by the entry of new retailers or a decline of the market. The adoption or continued use of this strategy will be constrained further if the new entrant is a retailer with a new format. For instance, the formidable competitors of supermarkets could be wholesale clubs or hypermarkets rather than other supermarkets. When local supermarkets face the entry of a wholesale club, an immediate challenge for the supermarkets is how to avoid or minimize losses caused by a price war with the wholesale club. The wholesale club, on the other hand, can make profits in the war since its cost structures are different.
It carries only limited merchandise lines, which produce high inventory turnover, and offers very low everyday prices with limited customer service. This case exemplifies that there is no longer a fixed rule in retailing competition, since all retailers face all types of competition: inter-format, intra-format, system, and global competition. Sears, Roebuck and Co., originating in 1886, has the longest history of the five U.S.-based global retailers (Sears 2002 Annual Report). Its annual report shows that it has 872 full-line stores in shopping malls and 1,300 specialty stores in free-standing, off-the-mall locations or high-traffic neighborhood shopping centers. Of the 1,300 specialty stores, 767 are independently owned stores that carry Sears brands as well as a wide assortment of national brands. In addition, Sears operates 474 automotive and hardware stores, including NTB (National Tire & Battery), Orchard Supply Hardware, and Sears Hardware, and provides home services (remodeling, appliance repairs) under the Sears HomeCentral brand. Sears acquired Lands' End, a leading direct merchant of traditionally styled clothing, and owns about 55% of Sears Canada. Davidson, Bates, and Bass (1976) argue that a retail life cycle can explain the life of each retail format, and that each format has a limited life span. Like the product life cycle, it has limitations: (1) there is no clear time dimension for each stage, (2) the exact dates of the birth and death of most formats are unknown, and (3) not all formats follow the exact bell curve. Nevertheless, this life cycle offers an excellent tool for finding where a particular format stands and where it will go. Their assessment indicates that the department store format is at the end of the maturity stage. This means that department stores in general face slow growth, and they have to find an option for survival rather than an option for expansion.
A few department stores will grow faster than other formats as long as they can find unique strategic advantages and maintain them. These successful department stores are the exception to the rule. Sears recognized the problems with its main retail format. Like other department stores, Sears has suffered from a shrinking customer base, heavy inter- and intra-format competition, and limited growth. Thus, though Sears has kept its basic retail format and has limited exposure to global markets outside Canada and Puerto Rico, it has made strategic changes. First, it added independent retailers to its traditional company-owned-and-operated system. Though their contribution seems relatively small, this is the beginning of two separate channel patterns: a corporate vertical marketing system and a contractual system. Second, it saw the limitations of its own non-store retailing and acquired Lands' End to promote its non-store business.


The Impact of Consumer Product Knowledge on the Effect of Terminology in Advertising

Shin-Chieh Chuang, Chaoyang University of Technology, Taiwan

Chia-Ching Tsai, Da-Yeh University, Taiwan



The use of terminology in advertising is popular and commonplace. Previous research suggested that using terminology in ads is intended to create a stronger vividness effect on the audience, who may adopt the "central route" of the Elaboration Likelihood Model (ELM) and be convinced by the terminology in the advertising message. However, we found that the vividness effect is stronger when subjects possess low product knowledge; conversely, the vividness effect is weaker when subjects possess high product knowledge. In recent years, terminology has been used in large quantities, especially in ads, to which much terminology is attached. Shibata (1983) pointed out that the use of monolingual messages was increasing in Japanese society, and that the English language was becoming more and more important and frequently used. Mueller (1992) studied the use of Western languages in Japanese ads in 1978 and 1989, and the results showed that the percentage of Japanese ads using English was rising, with an upward trend in the use of untranslated English in Japanese ads. The main reason for adopting terminology is that when audiences encounter terminology, a vividness effect occurs that captures their attention; audiences may then process ads containing terminology via the "central route" of the ELM, making the ads persuasive. Moreover, the use of terminology may appeal to professional recognition: it is associated with technology, and a professional image is formed that affects consumers' purchasing behavior. Previous studies suggest that ads containing terminology can increase advertising effectiveness (Hong, 2002). In further studies, Hong examined whether different effects occur for different product categories in which terminology is used.
The results of his study show that when the target products in ads are less innovative, such as daily necessities, subjects adopt a low-involvement information-search mode and do not need to collect much information; therefore, no obvious vividness effect occurs in response to terminology, and the persuasive effect of ads containing terminology drops dramatically. In contrast, when the target products in ads are more innovative, a high degree of persuasion occurs: because the products require high involvement, subjects employ a more thorough information-search mode, and ads containing terminology win more recognition from audiences and attract more attention and elaboration. As a result, such ads generate a better persuasive effect. This study focuses on whether ads containing terminology create different persuasive effects in consumers who possess high versus low levels of product knowledge. Under which circumstances does terminology make ads more persuasive (as opposed to ads without terminology)? Stewart and Koslow (1989) hold that the message in a print ad can distinguish it from competing brands so that a product can be recalled and becomes more persuasive. The message forms a so-called advertising value in the mind of the consumer and affects his purchasing behavior. Within this framework, a wide range of advertising techniques has been adopted and developed. The purpose of using terminology in ads is to make ads more persuasive. Terminology refers to advertising messages associated with the functions of products and with science and technology. Ads consist of all sorts of messages. Empirically, ads containing terminology create a preference for the ads and brands; furthermore, such ads increase consumers' desire to purchase the advertised products, especially innovative products.
Such ads can make a deeper impression on consumers, and consumers attach a high-tech image to the products, resulting in better ad attitudes (Hong, 2002). For consumers, terminology itself represents professionalism, which can capture their attention. Moreover, ads containing terminology apply "comparative advertising" to highlight the characteristics of products and to distinguish them from competitors. In other words, through the comparisons made in comparative advertising, consumers associate products promoted by ads containing terminology with uniqueness and more favorable thoughts. Thus, the presence of terminology can create better brand attitudes in consumers than its absence. The reason terminology makes ads more persuasive is the vividness effect. Terminology is associated with high technology and professionalism; thus, it can draw consumers' attention (a vivid impression). Such a vivid impression triggers what Frey and Eagly (1993) call the "vividness effect." They found that when products or the contents of ads create a vivid impression, they are more persuasive than those that do not. However, when clear and specific information is given, there is no vividness effect. In particular, ads containing terminology cause consumers to adopt the "central route" of the ELM to elaborate the ad message, and consumers devote more resources to noticing and processing the contents of the ads; consequently, the ads are more persuasive. Previous studies suggest that consumers make purchasing decisions based on their past memories. Biehal and Chakravarti (1983) discovered that consumers make choices after they acquire information, and also pointed out that consumers recall different information depending on their pattern of decision-making. Lynch, Marmorstein and Weigold (1988) showed that consumers make decisions based on the information existing in their memories.
Rao and Monroe (1988) found that product knowledge can influence how consumers assess products. Consumer product knowledge is an important conceptual variable in consumer behavior, affecting processes such as information collection (Brucks, 1985; Rao & Sieben, 1992) and information processing (Hutchinson & Alba, 1991; Bettman & Park, 1980; Johnson & Russo, 1984; Rao & Monroe, 1988). How much consumers know about products is an important factor before they make a purchase. When facing all sorts of products and a multitude of information, consumers need to get to know the products through the given information. Different consumers have different interests in, and different degrees of understanding of, products; consequently, consumers form different opinions about products. The impact of product knowledge is demonstrated mostly for durable goods, especially high-involvement and high-risk products. Consumers tend to make assessments based on their knowledge relevant to the product.


Analyzing Functional Performance of Hong Kong Firms: Planning, Budgeting, Forecasting, and Automation

Dr. Steven P. Landry, Monterey Institute of International Studies

Dr. Terrance Jalbert, University of Hawaii at Hilo

Dr. Canri Chan, Monterey Institute of International Studies



This paper presents a case study concerning benchmarking one particular set of performance attributes of firms. Specifically, the study addresses issues of planning, budgeting, forecasting and automation of the accounting and finance functions of Hong Kong firms. Students are required to compile raw data gathered from a survey into a format that can be utilized for benchmarking. Further, students are asked to address the inherent limitations of using low-return-rate, small-sample-size survey data for benchmarking. Students are also required to develop various tables and discuss how the data and tables might best be used. The case is appropriate for sophomore and junior undergraduate students, who should be able to complete it with one hour of preparation outside the classroom. In order to survive in a competitive economy, firms must continuously improve and enhance their products, services and operations. Benchmarking is the primary method by which firms assess their performance relative to their peers. Benchmarking involves comparing an attribute of one firm to the same attribute in a group of comparison firms. To make the comparison fair, the comparison firms should be similar to the firm being examined. Thus, comparison firms are generally selected from firms in the same industry that are similar in size and operate in a similar geographic area. By comparing a firm to its peers, managers can identify areas where they have a competitive advantage, areas where they have a competitive disadvantage, and areas for improvement. In so doing, firms can best position themselves in the market to deliver their value proposition and thereby maximize shareholder wealth. In many instances the data for benchmarking can be obtained from data collection and reporting firms, such as Robert Morris and Associates, which specialize in collecting benchmarking data.
These data are generally related to the overall financial operations of firms.  In some instances, data available from reporting firms may not be sufficient.  This might be the case when the firm operates in a highly specialized industry, in a unique geographical area, or when more specialized data than is provided by reporting firms is needed.  When faced with such data challenges, firms and/or associations must collect their own industry data to complete a benchmarking analysis.  This study examines one particular set of performance attributes of the Accounting and Finance functions of publicly traded firms in Hong Kong.  Because data regarding the accounting and finance functions of Hong Kong firms was not available from other sources, a survey of firms was conducted under the sponsorship of the Financial Management Committee (FMC) of the Hong Kong Society of Accountants (HKSA).  The survey contained thirty-two questions associated with the accounting and finance functions of the firms.  The questions were further divided into seven specific areas of interest.  An excerpt from the survey instrument, relating directly to the performance attributes of interest in this particular case, is presented in Appendix 1.  In this case, we are concerned with the two sections of the survey related to (1) Planning, Forecasting and Budgeting, and (2) Automation.  The data are attached in Appendix 2.  Using this information, students are required to compile the results of the survey for use as benchmarks for other firms.  The firms were classified as being a member of one of the following seven industry groups:  Consolidated Enterprises (CE), Financial (F), Hotels (H), Industrial (I), Property (P), Utility (U) and Miscellaneous (M).  Classifications were made based on the collective opinion of the HKSA Financial Management Committee.  Survey instruments were sent to 633 listed Hong Kong companies.  
Sixty-seven surveys were returned, representing an overall response rate of 10.6 percent. The response rate varied from 0% for the hotel industry to 25% for the utility industry. Question A: Based on the data provided in Appendix 2, and using the suggested worksheet formats shown below (Tables 2-8), develop the baselines for each industry classification. To complete this step, you should compute the number of observations for each industry, along with the mean, sample standard deviation and skewness of the data. Briefly interpret the figures you obtain from the tables prepared. A perfectly normal distribution will have a skewness measure of 0. A positively skewed distribution will have a skewness measure greater than zero, and a negatively skewed distribution will have a skewness measure less than zero. The range of possible skewness values depends upon the number of observations in the sample: larger sample sizes produce larger possible statistical values. The approximate maximum and minimum values of the statistic for varying sample sizes are presented in Table 1.
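The baseline computations asked for in Question A can be sketched in a few lines of Python. The function and the sample responses below are illustrative only (the actual survey data are in Appendix 2), and skewness is computed here with the simple Fisher-Pearson moment formula, one of several common conventions:

```python
import statistics

def benchmark_stats(values):
    """Compute the benchmarking baselines described in Question A:
    number of observations, mean, sample standard deviation,
    and (Fisher-Pearson) skewness of one industry's responses."""
    n = len(values)
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)      # sample (n-1) standard deviation
    m2 = sum((x - mean) ** 2 for x in values) / n
    m3 = sum((x - mean) ** 3 for x in values) / n
    skewness = m3 / m2 ** 1.5             # 0 for a perfectly symmetric sample
    return n, mean, stdev, skewness

# Hypothetical responses for one industry group:
n, mean, sd, skew = benchmark_stats([2, 3, 3, 4, 10])
```

For the sample above, the single large response of 10 pulls the mean above the median, so the skewness statistic comes out positive, which is the kind of interpretation students are asked to make.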


Risk Perception, Risk Propensity and Entrepreneurial Behaviour: The Greek Case

Dr. P. E. Petrakis, National and Kapodistrian University of Athens, Greece



The analysis clearly projects two main points: a) the way risk is perceived by the entrepreneur is a primary procedure that determines other important aspects of entrepreneurial behaviour and performance; b) there are no determining factors of risk perception, although risk perception influences the way that cultural idiosyncrasies, knowledge and flexibility characteristics develop, and affects the firm's performance. Risk propensity, on the contrary, is determined within the entrepreneurial behaviour framework and takes on a mediating role, transforming influences, mainly from the external macroeconomic environment, into important personal traits. It also mediates the entrepreneur's independence, his need for achievement and his risk perception. Research concerning the factors affecting entrepreneurial activity has a long history and extends into the fields of economics (Schumpeter, 1934), sociology (Weber, 1930), and psychology (McClelland, 1961). Entrepreneurial activation is the combined result of macro-level environmental conditions (Aldrich, 2000), which have economic or social origins, the characteristics of entrepreneurial opportunities (Christiansen, 1997), and aspects of human behaviour related to entrepreneurial motives (Shane, Locke and Collins, 2003) and cognitions (Mitchell, Smith, Morse, Seawright, Peredo and McKenzie, 2002). The issue of risk is central to the study of entrepreneurial behaviour and performance.
Different points of view are employed in the entrepreneurial risk research agenda (Norton and Moore, 2002): opportunity recognition (Hills, Shrader and Lumpkin, 1999; Rice and Kelley, 1997); opportunity evaluation (Hean Tat Keh, Maw Der Foo and Boon Chong Ling, 2002); decision making and problem framing (Kahneman and Tversky, 1979; Tversky and Kahneman, 1981; Schneider, 1992); risk propensity and cultural approval of risk (Wallach and Kogan, 1964; Brockhaus, 1980; Gomez-Mejia and Balkin, 1989; Rowe, 1997); and cognitive approaches to entrepreneurship (Palich and Bagby, 1995). Finally, the issue of entrepreneurial alertness in relation to risk has been restated (Norton and Moore, 2002) in the light of the Bayesian model. The present article focuses on the relationship between three different aspects of entrepreneurial risk, namely: the entrepreneur's risk propensity as a personal trait, the entrepreneur's risk perception, and finally the risk actually undertaken by the firm as observed in the firm's data. These three aspects of the risk concept are related to the main sub-frames which shape entrepreneurial behaviour: macro-environmental factors, cultural values, entrepreneurial motives and traits, cognitive variables, personal characteristics and, finally, the microeconomics of the project. Thus the paper tries to contribute to the study of ex ante and ex post entrepreneurial attitudes towards risk. Section 2 focuses on opportunity evaluation under risk, while section 3 examines entrepreneurial risk attitudes. The factors that determine entrepreneurial behaviour towards risk are presented in section 4; in section 5 we present the field research. Finally, conclusions are drawn in section 6. This paper concerns the effect of the rate of uncertainty and risk (in cases where it can be measured; see Knight, 1921) on the opportunity evaluation procedure.
Decision risk is defined here, extending Sitkin and Pablo's (1992) definition, as the extent to which there is uncertainty about whether potentially significant (satisfactory) and/or disappointing outcomes of decisions will be realized. Thus risk reflects the degree of uncertainty and potential loss associated with the outcomes that may follow from a given behaviour or set of behaviours (Forlani and Mullins, 2000). Yates and Stone (1992) identify the basic elements of risk construction: potential losses and the significance of those losses. The research focuses on how entrepreneurs cope with the risks inherent in their decisions, what determines the way they perceive the riskiness of their decisions, whether they possess character traits which predispose them to engage in uncertain behaviour, and whether they assess opportunities and threats differently from non-entrepreneurs (Norton and Moore, 2002). In order to do so, we should clarify the meaning of the two basic concepts and target variables we use. Risk perception is a subjective concept about the controllability of uncertainty (Sitkin and Pablo, 1992; Baird and Thomas, 1985). This subjective concept, generally speaking, develops according to problem framing (how the problem is presented to the entrepreneur, positively or negatively), outcome history (Sitkin and Weingart, 1995), the problem under consideration, and the cognitive process by which risk perception develops. The analysis of these factors is part of the function of this paper. This concept could be connected with society's general sense of the controllability of uncertainty as a social value, and it is formed at a personal level. Risk propensity is defined as an individual's current tendency to take or avoid risks (Sitkin and Pablo, 1992; Sitkin and Weingart, 1995). It is a clear personal trait and can also be influenced by general social values (which can influence all aspects of entrepreneurial behaviour).
In conclusion, when we speak of high risk perception we describe a situation where the individual believes that the uncertainty of outcomes is highly uncontrollable. Drawing on and extending the work of Sitkin and Pablo (1992) and Forlani and Mullins (2000), we take entrepreneurs' perception of risk and their decisions involving risk to be distinct and separate cognitive processes. Moreover, risk propensity is a cognitive process separate from risk perception. Following Sitkin and Weingart (1995), and in contrast with previous researchers (Derby and Keeney, 1981), we do not consider risk propensity a stable personal attribute. Thus, we employ a trait-based approach in which risk propensity is constructed as a cumulative tendency to take or avoid risks that can change as a result of experience.


Growth, Entrepreneurship, Structural Change, Time and Risk

Dr. P. E. Petrakis, National and Kapodistrian University of Athens, Greece



This article concerns the role of the entrepreneurial perception of time and risk vis-à-vis structural change and growth. Entrepreneurship is a basic constituent element of social capital, which in turn is a productive lubricant of the growth process. Different structural entrepreneurial prototypes with respect to time and risk have different structural change effects. Those structural changes (and any structural changes) are not neutral as far as the implications of growth rate changes are concerned. Therefore the time and risk characteristics of active entrepreneurship are reflected in the growth process either in the form of structural change and/or in the form of growth rate change. The paper is developed as follows: section 2 focuses on the relationship among social capital, entrepreneurship and growth; sections 3 and 4 relate to the analytics of risk and time; section 5 clarifies the time and risk dimensions of entrepreneurship; section 6 analyses the effects of entrepreneurial time and risk on structural change. Finally, conclusions are drawn. The concept of social capital was put forward alongside the traditional concepts of financial, real and human capital during the 1990s (Portes and Landolt, 1996) and has recently been related to entrepreneurship (Westlund and Bolton, 2003). According to Bourdieu and Wacquant (1992), social capital is an individual or group-related resource that accrues from possessing a durable network of more or less institutionalised relationships. According to Coleman (1988, 1990), it is to be found in the relations between individuals, and it includes obligations, expectations, information channels and social norms (Piazza-Georgi, 2002), such as high-trust and low-trust attitudes, or family-based vs community-based social trust (Fukuyama, 1995). Social capital should be regarded as the most diversified of capital forms.
The extent of the diversification will largely depend on how its basic nature is analysed: Coleman’s (1990) endogenous phenomenon of social relations vs Fukuyama’s (1995) view that it is the result of society’s trust and cooperation.  Woolcock (1998) and Fedderke et al. (1999) proposed that we should see in the concept of social capital two interacting dimensions: ‘transparency’ (the transaction-cost-lowering functions of social capital) and the rationalisation potential of maintaining increasing returns to scale, i.e. delaying the onset of diminishing returns. Two more notes were added to this by Piazza-Georgi: (a) social capital operates to a significant extent through human skills capital and entrepreneurial skills by lowering their creation costs; (b) there may be a significant substitution effect between human and social capital (towards human capital) through the increased cost of human time. If we then accept that investing in human capital is more efficient than investing in social capital, we have another reason for delaying the diminishing returns process. Thus growth and social capital are positively connected, since the accumulation of the latter fuels the growth process.  Yu (2001a), utilizing Kirzner’s theory of entrepreneurial discovery, Schumpeter’s two types of economic responses (extraordinary and adaptive) and the Austrian theory of institutions as building blocks, constructs an entrepreneurial theory of institutional change and social capital accumulation. Yu and others do not use the concepts of social capital and institutions interchangeably. However, the concept of social capital as it is defined (Westlund and Bolton, 2003) can be very closely compared with Yu’s definition of institutions (Yu, 2001a, 2001b). The process of institutional change is the continuous interaction between entrepreneurial exploration and exploitation of opportunities (Dosi and Malerba, 1996). 
Institutions (stocks of knowledge) emerge as a result of human agents attempting to reduce structural (vs neoclassical static) uncertainty. Therefore, entrepreneurship enlarges institutional development and social capital accumulation. There are obvious second-round effects, whereby social capital accumulation reinforces entrepreneurship through the production of externalities which promote the distribution of information and generate asymmetric information. At the same time, institutions reinforce entrepreneurial alertness and the discovery process (Yu, 2001b). Thus we may accept that entrepreneurship enlarges social capital accumulation and, therefore, positively affects growth. According to Brouwer (2000), in Knight’s view true uncertainty is the only source of profits, since profits would disappear as soon as change became predictable or hedgeable, at which point they would be converted into costs. Brouwer shows how the introduction of Knightian uncertainty can abate the diminishing returns of innovative investment. This can be done by R&D cooperation, that is, by creating social capital through R&D networks. So we can conclude that uncertainty makes perpetual innovation more likely; thus growth and uncertainty are positively related. Knight saw rates of return on entrepreneurial investment varying around an average, with relative entrepreneurial ability being rewarded. Entrepreneurs also create a great deal of uncertainty through Schumpeterian innovation, which creates confusion in the market. Lack of entrepreneurship means that we are locked into old structures, interpretations and understandings (Yu, 2001a). Thus, entrepreneurial activation is positively associated with uncertainty.  Portfolio theory, as shaped by Markowitz (1952), Tobin (1958) and Sharpe (1964), recognised the positive relationship between expected return and risk. 
When an individual creates a portfolio, optimisation is based on the risk-return relationship: risk is the result of systematic risk, of imperfectly diversified unsystematic risk due to project indivisibility or project interrelationships (Acemoglu and Zilibotti, 1997), or of a combination of both. Given these prerequisites, the risk premium that the economic agent enjoys is the entrepreneurship premium that we come across in entrepreneurship theories.
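The risk-return trade-off that portfolio theory formalises can be illustrated with a short sketch. This is a minimal two-asset example of the Markowitz framework; all return, volatility and correlation figures below are hypothetical, chosen only to show how imperfect correlation lets diversification pull portfolio risk below the weighted average of the individual risks.

```python
import math

# Hypothetical assets: a low-risk asset and a riskier "entrepreneurial" asset.
r = [0.04, 0.12]   # expected returns
s = [0.05, 0.30]   # standard deviations (risk)
rho = 0.2          # assumed correlation between the two assets

def portfolio(w):
    """Expected return and risk for weight w in asset 1, (1 - w) in asset 2."""
    ret = w * r[0] + (1 - w) * r[1]
    var = (w * s[0]) ** 2 + ((1 - w) * s[1]) ** 2 \
          + 2 * w * (1 - w) * s[0] * s[1] * rho
    return ret, math.sqrt(var)

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    ret, risk = portfolio(w)
    print(f"w={w:.2f}  E[r]={ret:.3f}  sigma={risk:.3f}")
```

At the 50/50 mix, portfolio risk is noticeably below the simple average of the two standard deviations, while expected return is exactly their average: the diversification effect that underlies the risk premium discussed above.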


The Effect of Genetically Modified Seed Technology on the Direct and Fixed Costs of Producing Cotton

Dr. D. W. Parvin, Mississippi State University, MS



The introduction of genetically modified seed technology dramatically changed cotton production practices.  Production systems based on reduced tillage and varieties containing genetically modified genes improved net returns by $47.35 per acre (53%) when compared to systems based on conventional tillage and non-transgenic varieties.  Changing the planting pattern to 30” 2x1 full skip and reducing the seeding rate to 3 seeds per foot of row increased net returns by 78% when compared to a 38” solid planting pattern and 4 seeds per foot of row.  Emerging harvesting technology, a cotton picker with an onboard module builder that eliminates boll buggies and module builders (and the tractors they require), will reduce harvest costs by 32%. Monsanto introduced its genetically modified seed (GMS) technology in 1996 and dramatically changed the way cotton is grown.  In 1995 most of the cotton grown in the Mid-south was based on conventional varieties and employed conventional tillage practices.  In 2004 approximately 95% of the Mid-south cotton acreage was planted with genetically modified varieties (Mississippi Agricultural Statistics Service) and was based on conservation tillage or no-till production practices.  The new technology has reduced the number of trips-over-the-field, reduced labor and equipment requirements per acre of cotton, and stimulated the development of other new technology.  Basically, the new systems of production employ fewer trips-over-the-field with wider equipment. The Department of Agricultural Economics, Mississippi State University, annual cost of production estimates (available online at Research/budgets.php) indicate that since the introduction of Monsanto’s genetically modified seed technology (1996-2004), tractor hours per acre of cotton have been reduced by 49% and labor hours have been cut by 43%.  Harvest is the most costly component of cotton production.  
Cotton harvesting systems require that the cotton picker be supported by a boll buggy (BB) and a module builder (MB), each of which requires a tractor.  Currently, on most cotton farms, more tractors are required during the harvest season than during any other period of the production cycle.  New harvesting technology, which eliminates boll buggies and module builders and the tractors that support them, is expected to increase the reduction in tractor hours to 74% and in labor hours to 64% relative to 1995 levels. Data on the cost per unit of production inputs such as labor, fuel, fertilizer, herbicides, insecticides, etc. are 2004 estimates (Cotton 2004 Planning Budgets).  Data associated with power units (tractors and cotton pickers) and towed equipment include 2004 estimates of price, length of life, annual hours of use, performance rate (hours per acre), repairs, salvage value, etc. (Cotton 2004 Planning Budgets).  The cost to producers of technology not yet marketed was estimated by contacting knowledgeable individuals in the cotton industry.  The experience of the author has been that estimates of this type involve errors of unknown magnitude, and that the cost of new technology is typically underestimated.  GMS technology has directly impacted two components of the cotton production system: insect control and weed control.  Genetic traits have been added that allow over-the-top application of selected herbicides and that control selected insect pests.  The new technology should be jointly evaluated as a component of an insect management subsystem and a component of a weed control subsystem within an overall cotton management system.  The Mississippi State University Budget Generator (Laughlin and Spurlock), a widely accepted computer algorithm which standardizes many accounting calculations, was utilized to estimate the impact of recent and anticipated changes in cotton production technology on the cost of producing cotton. 
This section examines the impacts of five changes in cotton production systems induced by the introduction of GMS: tillage practices, planting pattern and seed drop rate, wheat as a cover crop, pickers with onboard module builders (PWOBMB), and a single power unit that functions as a tractor, sprayer, and cotton picker.  The section begins by comparing cotton production systems based on 8-row (8R) planters and 4R pickers (the dominant 1995 equipment size) and ends by comparing systems based on 12R planters and 6R pickers (the current or emerging equipment size). The adoption of GMS, which allows over-the-top applications of selected herbicides (RR technology) and genetic control of selected insect pests (Bt technology), has opened new opportunities to consider reduced tillage and no-till cotton in situations where previously it was not practical.  Per acre budgets were estimated for three systems of cotton production: I. 8R-40” solid, conventional tillage, non-transgenic (conventional) variety [CT/CV]; II. 8R-40” solid, reduced tillage, BtRR variety [RT/BtRR]; III. 8R-40” solid, no-till, BtRR variety [NT/BtRR].  Reduced tillage is often referred to as conservation tillage or limited seedbed/chemical tillage.  Output prices selected were $0.64/lb. of lint and $0.04/lb. for cottonseed.  Yield was set to 825 lbs. of lint per acre, and 1.55 lbs. of cottonseed was assumed for each pound of lint.  System I was compared to the other systems in terms of selected costs, returns, labor, and equipment.  Table 1 reports estimated per acre revenue, selected costs, labor hours, and gallons of diesel fuel for each of the three systems of production. 
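The gross-revenue side of those per-acre budgets follows directly from the stated assumptions. As a minimal sketch (the prices, yield and seed-to-lint ratio are the figures quoted above; the function name and structure are our own, not the Budget Generator's):

```python
# Per-acre revenue assumptions quoted in the text.
LINT_PRICE = 0.64      # $/lb of lint
SEED_PRICE = 0.04      # $/lb of cottonseed
LINT_YIELD = 825       # lbs of lint per acre
SEED_PER_LINT = 1.55   # lbs of cottonseed per lb of lint

def per_acre_revenue(lint_yield=LINT_YIELD):
    """Gross revenue per acre from lint plus associated cottonseed."""
    seed_yield = lint_yield * SEED_PER_LINT
    return lint_yield * LINT_PRICE + seed_yield * SEED_PRICE

# 825 * 0.64 + 825 * 1.55 * 0.04 = $579.15 per acre
print(f"${per_acre_revenue():.2f}/acre")
```

Since yield is held constant across the three systems, revenue is identical for each, and the net-return differences reported in Table 1 come entirely from the cost side (tillage trips, seed, labor, fuel and equipment).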


E-Government – A Proactive Participant for E-Learning in Higher Education

Sangeetha Sridhar, Majan University College, Sultanate of Oman



The advent of Information and Communication Technology (ICT) has empowered both learners and teachers with capabilities to reach and resource beyond physical borders. The information age is changing the way people work, learn, spend their free time and interact with one another. In the knowledge era, communication technology has made it possible to have a global university campus in the true sense: a collaborative community, multinational staffing, distributed campus resources and multimedia technology. ICTs are driving down costs, improving efficiency and creating a climate of innovation, with competitiveness moving from the national to the global level. They are challenging existing methods of governance, commerce, education, communication and entertainment. This paper presents the findings of research into the role of e-governance in higher education, especially where ICT is incorporated. Recent trends in higher education demand knowledge creation, capture, dissemination and application for crafting sustainable development of the entire economy. The paper looks into the Vision 2020 statement for strategic goals, while analyzing Census 2003 data for trends and the direction of higher education in the local region. The research begins with an appraisal of the advent of higher education in Oman and its early centres of excellence and disciplines. It draws statistics from the Census 2003 to identify further requirements and potential for future direction and initiatives. A set of strategic issues in the practical implementation of e-learning in higher education through e-governance is presented, along with coverage of the ICT policy framework. The focus areas are the capabilities of the e-learning mode enabled by ICT and its implications in the region, Oman market trends, current levels of literacy, the legal framework, a nation-wide digital library, public services online and citizen awareness. 
The paper concludes with strong recommendations at both national and institutional levels, intended to serve as guidelines for future direction. E-government is the process of offering better government service to the public at a lower cost (1). The key challenge in implementing e-government with ICT is to balance the public’s ability to reach and access electronic information. Access to all official information and service offerings of the government administration is of prime focus, and the same applies to educational resources as well. As we move from an industrial to a knowledge economy, it is not what you produce but what your people know (and are capable of) that gives you competitive advantage. E-learning is about the use of networked technology to manage, design, deliver and support learning. In its simplest form, it is the use of technology to deliver and facilitate learning. It can cover a broad set of applications and processes such as online training and education (over the internet, intranets, and local or wide-area networks) and computer-based training (PC-based, CD-ROM). The definition of e-learning can also extend to digital collaboration, communities of interest, and information and knowledge sharing. Online technologies are as much about communication as they are about information access, and this creates a powerful combination in the context of learning.  E-learning, enabling the use of traditional print media along with electronic communication capabilities such as email, the Internet, instant message chats, discussion forums, and voice and video conferencing in higher education, has opened a plethora of opportunities as well as challenges.  The e-government vision requires a community that is information and technologically literate enough to access the information it requires. Information literacy involves more than computer literacy. 
It is the set of skills that allows people to find information, evaluate its accuracy and credibility, and apply it to help them solve real problems. Information literacy is perceived as a life skill. This requires an element of ‘trust’ from citizens, ensured by a 24/7 secure technology infrastructure.  The 1970s and 80s witnessed higher education in Oman pursued primarily through overseas scholarships, owing to a lack of local expertise and specialisations. This nevertheless enabled crucial personnel to gain valuable training and take over key positions across vital sectors. In 1994 the government addressed this mass demand for higher education by initiating a major expansion in the number and scope of higher education institutions, both government and private, to meet local market needs.  The Higher Education Council, set up by Royal Decree No.65/98 in 1998, draws up the general policy for higher education and scientific research in the sultanate’s higher education institutions. It also regulates student numbers and intake procedures (2). The council is also responsible for evaluating the performance of existing institutions and approving proposals for new universities. The Accreditation Council, set up by Royal Decree No.74/01 in 2001, is an independent body responsible to the Higher Education Council. Three permanent committees of this board make recommendations on the accreditation of higher education institutions, the accreditation of programmes of study, and quality control. The advent of higher educational institutions in Oman began with the establishment of the Omani Bankers Institute (currently the College of Banking and Financial Studies) in 1983 and the Intermediate Teachers’ Colleges (currently the Colleges of Education) in 1984. At the same time the Technical Industrial College (currently the Higher Technical Colleges) was opened to provide specialist vocational qualifications. 
Health institutes were opened to train Omani nurses, radiographers, physiotherapists and dental hygienists to work in the government hospitals. The former institute of Sharia jurisprudence is now divided into the Institute of Sharia Sciences and the Sharia and Law College. The private sector, which began operations after 1994, has scaled rapidly, with about 14 private colleges and private universities at Sohar, Nizwa and Salalah.


The Role of Affect and Cognition in the Perception of Outcome Acceptability Under Different Justice Conditions

Dr. Douglas Flint, University of New Brunswick, Canada

Dr. Pablo Hernandez-Marrero, University of Toronto, Canada

Dr. Martin Wielemaker, University of New Brunswick, Canada



Prior research has focused on negative affective responses to distributive justice.  This study extends that work to consider both affective and cognitive responses to various combinations of procedural and distributive justice.  The effects of these responses on perceptions of outcome acceptability are then determined.  Structural equation modeling is used to measure the interrelationships of the affective and cognitive effects. In their conceptualization of equity theory, Adams (1965) and Walster, Walster and Berscheid (1978) postulated that perceptions of injustice would lead to negative emotional states that would then motivate a search to redress the inequity. Since that time, a limited amount of research has confirmed the production of negative affective responses to injustice (Clayton, 1992, study 2; Hegtvedt, 1990; Mikula, 1986; Sprecher, 1992).  The most comprehensive study to date, by Mikula, Scherer and Athenstaedt (1998), involved 2,921 students who reported situations in which they had experienced positive and negative affective reactions.  Situations perceived as unjust elicited feelings that were longer in duration and more intense. The studies to date have focused on affective responses to distributive justice.  To the best of our knowledge, no studies have examined cognitive responses to justice.  Therefore, this study extends the literature by considering the effects of both procedural and distributive justice on cognitive and affective responses.  Further, we examine the effects of cognitive and affective responses on the formation of perceptions of the acceptability of an outcome.  We begin with a description of procedural and distributive justice. This is followed by a discussion of affect, cognition and their interrelationships.  Research hypotheses are then formulated about the effect of different interrelationships on the formation of perceptions of outcome acceptability. 
Organizational justice is concerned with the fair treatment of employees in organizations and is conceptualized as two factors: procedural and distributive justice. This study examines the role of cognition and affect in the formation of perceptions of the acceptability of decisions made under different conditions of procedural and distributive justice. Distributive justice asks: how fair is an outcome? For example, previous research has examined the fairness of pay (Folger & Konovsky, 1989) and performance evaluation (Greenberg, 1986) outcomes.  The present study considers the fairness of the outcome of the discipline of a student by a university. Procedural justice asks whether the process leading to an outcome is fair. Procedural justice is of interest because it has an impact on important organizational outcomes.  These include: performance (Ball, Trevino & Sims, 1995; Gilliland, 1994; Konovsky & Cropanzano, 1991; Welbourne, Balkin & Gomez-Mejia, 1995), organizational commitment (Brockner, 1992; Konovsky & Cropanzano, 1991; Schaubroeck, May & Brown, 1994), job satisfaction (Schaubroeck et al., 1994), organizational citizenship behavior (Ball et al., 1995), commitment to organizational decisions (Greenberg, 1994; Korsgaard, Schweiger & Sapienza, 1995; Lind, Kulik, Ambrose & de Vera Park, 1993), turnover intentions (Schaubroeck et al., 1994; Olson-Buchanan, 1996), theft (Greenberg, 1990, 1993), and retaliation against organizations (Skarlicki & Folger, 1997).  This study measures perceptions of the acceptability of an outcome under different conditions of procedural and distributive justice, and tests the impact of cognition and affect on the formation of those perceptions. There is a large body of social cognitive research that examines the relationships between cognition, affect and behavior. This literature is complicated by inconsistencies in the definition of cognition and disagreement as to the relationship between the three constructs.  
Some researchers equate cognition with beliefs (Ajzen & Fishbein, 1980; Fishbein, 1967), others with stereotypes (Breckler, 1984; Breckler & Wiggins, 1991; Zanna & Rempel, 1988) or with mental representations held in memory.  The lack of cohesion in this area can be seen in the numerous models of mental representation, which include associative network theory (Wyer & Carlston, 1979; Carlston & Skowronski, 1986), associative systems theory (Carlston, 1992), schema models (c.f. Mandler, 1979; Rumelhart, 1984; Wyer & Gordon, 1984), and the bin model (Wyer & Srull, 1986, 1989).  For the purposes of this discussion we use Breckler’s (1984) definition of cognition as beliefs, knowledge structures, perceptual responses, and thoughts.  Our first hypothesis concerns the influence of cognition on perceptions of outcome acceptability. Hypothesis 1: Cognitions directly influence perceptions of the acceptability of an outcome under different conditions of procedural and distributive justice. Affect refers to an emotional response or a “gut reaction” (Breckler, 1984). The view of affect employed here is that outlined by Crites, Fabrigar and Petty (1994).  These researchers define affective information as consisting of discrete, qualitatively different emotions that vary along an evaluative dimension and are accessible to the verbal system.  Our second hypothesis concerns the influence of affect on perceptions of outcome acceptability. Hypothesis 2: Affect directly influences perceptions of the acceptability of an outcome under different conditions of procedural and distributive justice. It has been found that the affective and cognitive components of attitude have distinct relationships to behavior (Breckler & Wiggins, 1989a; Miller & Tesser, 1986, 1989; Wilson & Dunn, 1986).  Several combinations for the relationship between cognition and affect have been proposed.  
Schachter and Singer (1962) proposed that cognitions influence affect to give meaning to undifferentiated emotional arousal.  Bower (1991) suggested that affect influences cognition by producing selective attention to stored cognitions.  Zajonc (1980) suggested that affect and cognition are independent effects of information processing. Our next three research hypotheses test these three possible relationships. 


Developing Measurements of Digital Capital in Employment Websites by Analytic Hierarchy Process

Dr. Chung-Chu Liu, National Taipei University

Dr. Shiou-Yu Chen, Chung-Yu Institute of Technology



The Internet has provided a place for businesses to come together with a speed and possibility for communication never possible before today’s information age. Digital capital, the currency of the future, refers to intangible assets gained through knowledge and relationships. This research developed 17 indicators to assess the digital capital of employment websites. The researchers used in-depth interviews to collect data, followed by content analysis and the Analytic Hierarchy Process (AHP) for analysis. According to the analytical results, the study identified four dimensions of digital capital: customer capital, innovation capital, service capital, and relational capital. These results provide a reference point to assist web-based organizations in determining the key digital capital of their employment websites.  The emergence of Internet technology and the World Wide Web as an electronic medium of commerce has brought tremendous change to the way businesses compete.  Companies that fail to make use of Internet technology are regarded as not delivering value-added services to their customers and are consequently at a competitive disadvantage.  Internet technologies provide businesses with tools to adapt to changing customer needs and can be used for economic, strategic and competitive advantage (Hamid & Kassim, 2004). Recruitment has emerged as a critical human resource management function for organizations, particularly in an environment of competitive labor markets and mobile employees.  Despite changes in the nature of work and the adoption of new technologies, organizational effectiveness is still largely dependent on the competency and motivation of individual employees (Allen, Scotter & Otondo, 2004).  Recently, employment websites in Taiwan have played an important role as a cost- and time-effective means of helping job hunters find new employment. 
According to a 1999 survey of HR professionals by the Society for Human Resource Management, while nearly two-thirds of human resources professionals placed classified ads in Sunday newspapers, almost 40% also relied on Internet job postings.  It was estimated that 32% of all recruitment-advertising budgets in the year 2000 would be spent on the Internet, while the share going to newspapers would be reduced from 70% to 52% (Mondy, Noe & Premeaux, 2002).  The significance of intellectual capital has risen proportionately with the boom of the information age and the virtual economy (Litan & Wallison, 2000; Blair & Wallman, 2000). As many authors point out, a major proportion of growth companies are valued beyond book value. The market value of a firm consists of its financial capital and “something else”. The financial capital refers to the firm’s book value and is formed by the organization’s financial and physical assets. The “something else” represents the firm’s intellectual capital, defined as resources created from internal learning and the development of valuable relationships (Pablos, 2002). Stewart (1997) defines intellectual capital as the intellectual material (knowledge, information, intellectual property and experience) that can be put to use to create wealth. Intellectual capital provides firms with a huge diversity of organizational values such as profit generation, strategic positioning (market share, leadership, name recognition, etc.), acquisition of innovations from other firms, customer loyalty, cost reductions, improved productivity and more. Successful firms are those which routinely maximize the value of their intellectual capital (Pablos, 2002).  The digital age is evolving all around us, in both our personal and business lives.  Business websites open doors to alternative business models, leading competitors to innovative concepts in connectivity, information sharing, business alliances, and customer relationships. 
According to Tapscott, Ticoll and Lowy (2000), digital assets add new dimensions to traditional business formats. It is within this context that the desire to construct a digital capital measurement model originates. The focus of this study is to define and measure the contents of digital capital and to design corresponding qualitative indices based on a thorough understanding and integration of prior research.  The resource-based view of the firm states that firms achieve competitive advantage and superior financial performance through acquiring, holding and subsequently using strategic assets (Wernerfelt, 1984). The resource-based view advocates that core skills central to a company’s competitive advantage must be developed internally within the company itself, while general technology can be acquired through outsourcing. The core skills are characterized by properties such as value, rareness, inimitability and immobility (Chen & Ku, 2004; Barney, 1991). This study tries to identify those inimitable competences that are generated within an employment website. Digital capital is defined as the value of intangible assets accumulated through Internet technology. As we witness these fast-paced changes, it is difficult to maintain a perspective on what has happened, what is happening, and what the future will bring. Tapscott, Ticoll and Lowy (2000) coined the term “b-web” for business web: a system of suppliers, distributors, commerce service providers, infrastructure providers, and customers that uses the Internet for its primary business communications and transactions. They differentiated b-webs along two dimensions: control and value integration. This research focuses on employment websites; as such, there are some differences from Tapscott, Ticoll and Lowy’s (2000) findings.
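The AHP step used to weight the digital-capital dimensions can be sketched briefly. In AHP, experts fill in a pairwise-comparison matrix on Saaty's 1-9 scale, priority weights are extracted (here via the common row geometric-mean approximation rather than the full eigenvector computation), and a consistency ratio checks that the judgments are coherent. The comparison matrix below is entirely hypothetical and is not taken from the study's interview data.

```python
import math

# The four digital-capital dimensions identified in the study.
DIMENSIONS = ["customer", "innovation", "service", "relational"]

# Hypothetical pairwise-comparison matrix (Saaty 1-9 scale):
# A[i][j] = judged importance of dimension i relative to dimension j.
A = [
    [1,   3,   5,   3],
    [1/3, 1,   3,   1],
    [1/5, 1/3, 1,   1/3],
    [1/3, 1,   3,   1],
]

def ahp_weights(matrix):
    """Priority weights via the row geometric-mean approximation."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

def consistency_ratio(matrix, weights):
    """Saaty's CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = len(matrix)
    # Estimate lambda_max as the mean of (A w)_i / w_i.
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # random index for n items
    return ci / ri

w = ahp_weights(A)
for name, weight in zip(DIMENSIONS, w):
    print(f"{name:10s} {weight:.3f}")
print(f"CR = {consistency_ratio(A, w):.3f}")  # CR < 0.10: acceptably consistent
```

With the matrix above, customer capital receives the largest weight and the consistency ratio falls well under the conventional 0.10 threshold; an inconsistent set of judgments would push CR above it and signal that the comparisons should be revised.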


Socializing a Capitalistic World: Redefining the Bottom Line

Dr. Joan Marques, Woodbury University, CA



As the trend toward global integration intensifies, awareness increases that no single ideology has proven flawless. Although still very much in an evaluative stage, and unfortunately not yet accepted to the same degree by all participants in the global playing field, significant progress can be ascertained in determining what the new, workable trend will be. Interestingly, this development happens synchronously at multiple levels: within the global environment and within the corporate world, for there, too, it has been proven that no single overbearing style produces lasting success. An integrative, local approach from all angles seems to be the way to go in the future. This paper reviews some of the current trends and criticisms, and presents a model for the new bottom line in the new world. While countries in today’s ever-intertwining world are gradually detaching themselves from the utilization of explicit ideologies and embracing a more moderate and meaningful approach, companies are mirroring this very trend in a more compact format. What, exactly, are we talking about? Well, just observe the global trends for a moment: countries that were known for decades for their strict capitalistic, socialistic, or communistic systems are now, due to their increased exposure to other cultures, mindsets and procedures, moving toward what is popularly referred to as mixed economies. How did this come about? Through the dazzling events and dynamic developments of the past century, of course! Cars, airplanes, and the Internet have enabled people from different cities, countries, and continents to observe, mingle with, and learn from one another in no time. 
This development gradually yet massively elicited the insight that no ideology works satisfactorily if performed to the extreme. Capitalism enhances the opportunity for enterprise and wealth creation, but it also increases the gap between rich and poor, and puts a firm price label on even the most elementary commodities such as medical care, education, and transportation. Rogoff (2005), for instance, foresees huge problems arising on the capitalistic medical horizon, explaining that “as rich countries grow richer, and as healthcare technology continues to improve, people will spend ever growing shares of their income on living longer and healthier lives” (p. 74).  Rogoff continues, “U.S. healthcare costs have already reached 15 percent of annual national income and could exceed 30 percent by the middle of this century-and other industrialized nations are not far behind” (p. 74).  Socialism enhances stability for the poor, the disabled, and the elderly, but it also increases mediocrity and aversion toward optimal performance due to high tax pressures and limited growth opportunities. Foreseeing a resurrected battle in weighing the pros and cons of capitalism versus socialism, Rogoff (2005) explains, “When the price of medical care takes up just a small percentage of national income, it is hard to argue with the notion that everyone should enjoy similar medical treatment” (p. 74). Yet, explains this author, “as health costs creep up to, say, 25 percent of national income, things get more complicated” (p. 74).  It is at this point that Rogoff clarifies the tendency of socialist-oriented countries to limit their citizens’ eligibility for decent - and often necessary - medical treatment, solely for the purpose of guarding the levels of their expenses. 
Communism enhances access to elementary existential needs, but it also increases, even more than socialism does, the lethargy and spiritual stifling of all who would like to undertake any entrepreneurial initiative. In this regard one need only read the many articles written by ex-Cubans who visited their island after having lived away from it for several decades, and note the dismay they express when describing the dilapidated state of affairs they encountered. One Cuban descendant recounts his confrontation with humiliating images in a backward town when he recently visited family members on the island, and the question he put to a local communist official from his old neighborhood: “When I left Cuba in 1956, workers had the right to strike, made more money, and worked less. Now, in this 'dictatorship of the workers,' they do not have the right to strike, make less money, and work more. Please explain this to me” (p. A. 13). So, what is happening in today’s world? If we zoom out and review our globe from a great distance, the following becomes obvious: through the increased exposure of various cultures to one another, and the consequent mutual learning of the past 50 years, smart economies are now reciprocally adopting each other’s best elements. Capitalist environments are trying (or, one hopes, will soon start) to soften the sharp edges of their practices by incorporating some socialist structures into their systems in order to enhance equality among citizens, while socialist and communist environments are upgrading their wealth-creation opportunities through the incorporation of some capitalist structures. Jossa (2004) reveals that, in recent years, a number of authors have presented “workable models of market socialism” (p. 546), thereby theorizing forms of socialism that have little in common with the strict Soviet model so often criticized. 
Jossa (2004) refers in particular to the model of “market socialism, or economic democracy, with state-controlled investment proposed by David Schweickart first in 1993 and 1998 and recently, in its final version” (p. 546). Mies and Kovel (2004) theorize a trend of “localization” rather than “globalization.” These thinkers consider themselves not anti-globalization but globalization-critical, as they refer to the prevailing contemporary perspective on globalization as “capitalist globalization,” meaning that the phenomenon is mainly utilized for the benefit of wealthy nations and the multinational corporations that represent them. Mies and Kovel (2004) strongly advocate enabling local economies to work their way up toward equity, while warning that “local doesn't only mean your neighborhood. It can be a bio-region, it can be different things” (p. 41). They elaborate on their perspective by asserting, “there are regions, including in [our] part of the world, in Germany, in Austria, in Switzerland, where people now realize that the global is in the local” (p. 41). Mies and Kovel subsequently voice a typical contemporary perspective on the spiritual mindset and modern management practices in business organizations, presenting the following viewpoint on the successful application of real globalization: “If we organize from below, with pressure from local areas and communities on governments, then we can say, ‘Here, this is impossible,’ and a difference can be made” (p. 43). 


A Study of Factors Affecting Effective Production and Workforce Planning

Dr. Halim Kazan, Gebze Institute of Technology, Gebze-Kocaeli, Turkey



Four hundred small and medium-sized manufacturing companies, operating in the iron-steel, construction materials, and food industries and registered with the Chambers of Commerce and Industry in Konya, Kayseri, and Kocaeli, were selected and their managers interviewed to identify those operational practices that have the greatest influence upon workforce planning and effective production. Internal factors explain 71.59% of the variance in effective production and workforce planning, whereas external factors, which are not under the firm’s control, explain the remaining 28.41%. Nowadays, global developments have a profound effect on local and regional industries, necessitating frequent and ongoing reevaluation and reorganization of business practices and priorities, particularly in the areas of workforce allocation and production planning. In this study, four hundred small and medium-sized manufacturing concerns in the iron-steel, construction materials, and food industries, registered with the Chambers of Commerce and Industry in Konya, Kayseri, and Kocaeli, have been examined with the object of determining those factors that most crucially affect their allocation of human and material resources. A range of issues related to effective production and workforce planning has been addressed in studies over the last couple of decades. They have considered workforce planning and effective production research from a general perspective (Alfares 2000); proposed three- and four-day work weeks (Browne and Nada [1987]; Burns et al. [1998]; Hung [1993, 1994]; Hung and Emmons [1993]; Steward and Larsen [1971]); and classified workforce planning problems into three categories (Baker 1976). Eylem Tekin, Wallace J. Hopp, and Mark P. Van Oyen (2004) have examined simple models of serial production systems with flexible servers operating under a constant work-in-process release policy; Esma S. Gel, Wallace J. Hopp, and Mark P. 
Van Oyen (2002) have considered how to optimize work sharing between two adjacent workers, each of whom performs a fixed task in addition to their shared task(s); and Wallace J. Hopp and Mark P. Van Oyen (2004) have investigated the assessment and classification of manufacturing and service operations to determine their suitability for the use of cross-trained (flexible) workers. Hooks and Cheramy (1994) and Wooton and Spruill (1994) have examined the increasing proportion of women among new public accounting recruits, who, according to a recent survey (AICPA 1999), comprise approximately one-half of all new hires. Kalyan Singhal (1992) has explored a noniterative algorithm for multiproduct production and workforce planning; and Joseph M. Milner and Edieal J. Pinker (2001) have developed mathematical models to describe the interaction between manufacturing firms and labor supply agencies when demand and supply are uncertain. Cathleen Stasz (2001) has investigated two theoretical means of measuring skills by incorporating economic and sociocultural perspectives; Edward G. Anderson Jr. (2001) has developed an optimal staffing policy at the strategic level to cope with nonstationary stochastic demand for a workforce divided between unproductive apprentice employees and fully productive experienced employees; and Ravi Kathuria and Elizabeth B. Davis (2001) have examined the synergistic effect on an organization of quality emphasis combined with appropriate workforce management practices. Linda S. Hartenian and Donald E. Gudmundson (2000) have scrutinized the effect of cultural diversity on the performance of a small business; and Michael J. Brusco and Larry W. Jacobs (2000) have examined a compact integer-programming model for large-scale continuous tour scheduling problems. Sanjay Ahire et al. (2000) have investigated workforce-constrained preventive maintenance scheduling by using evolution strategies. Linda L. and Robert A. 
Orwig (2000) have examined conflicting approaches to work allocation in a typical engineering consulting organization; and Eylem Tekin, Wallace J. Hopp, and Mark P. Van Oyen (2002) have investigated the logistical benefits of workforce agility. Planning entails a series of actions that engage every aspect of a firm’s activities. Basic information is assembled and tracked from its earliest stages and applied to a process by which future performance within all areas of the company’s operations may best be anticipated, designed, and implemented. The essential ingredient in the planning process is information, and the essential result is preparation. In particular, planning is the focal point of human and material resources management (Moynihan 2002, 344). Simply stated, workforce planning is a process by which a business ensures that the right people are in the right place at the right time to accomplish the firm’s mission, by identifying the areas of workforce competency most crucial to the achievement of the goals of the enterprise and developing strategies to access and optimally employ the personnel best suited to those areas. More directly, workforce planning is a systematic approach by which whatever gaps might exist between the workforce of today and the human capital needs of tomorrow are identified, addressed, and accounted for before they appear. According to James W. Woodard (2001, 36), three requisites for successful workforce planning are that: (1) workforce planning should be strategic; (2) workforce planning should be comprehensive; and (3) workforce planning should be tailored to the specific needs and capabilities of the firm.
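The roughly 72%-versus-28% internal/external split reported in the abstract above is a decomposition of explained variance between two groups of factors. A minimal sketch of how such a split can be computed, using entirely synthetic data and an ordinary least-squares fit in place of the study's factor analysis (all variable names and coefficients are illustrative assumptions, not the study's data):

```python
import numpy as np

# Hypothetical sketch: attribute the variance explained in a production-
# effectiveness score to "internal" vs. "external" factor groups.
# All data below are synthetic.
rng = np.random.default_rng(0)
n = 400  # matches the number of firms surveyed

internal = rng.normal(size=(n, 3))  # e.g. scheduling, training, maintenance (assumed)
external = rng.normal(size=(n, 2))  # e.g. market demand, regulation (assumed)

# Synthetic outcome, driven more strongly by the internal factors.
y = internal @ np.array([0.9, 0.7, 0.5]) + external @ np.array([0.4, 0.3])
y += rng.normal(scale=0.1, size=n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

total = r_squared(np.hstack([internal, external]), y)
internal_only = r_squared(internal, y)
internal_share = internal_only / total

print(f"internal factors' share of explained variance: {internal_share:.1%}")
print(f"external factors' share of explained variance: {1 - internal_share:.1%}")
```

The exact percentages depend entirely on the synthetic coefficients chosen here; the point is the mechanics of attributing explained variance to one group of factors versus another.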


The Response of Various Real Estate Assets to Devaluation in Argentina

Dr. Ricardo Ulivi, California State University, Dominguez Hills, CA

Jose Rozados, Arch. Reporte Inmobiliario, Argentina

Germán Gomez Picasso, Arch. Reporte Inmobiliario, Argentina



This study reviews the effects that Argentina’s currency devaluation had on the prices of residential, industrial, and office real estate in that country. The findings indicate that residential properties in the best locations kept their prices better, in dollar terms, than the other sub-markets, and that prices in the industrial and office markets were greatly affected initially by the devaluation, but that their subsequent performance was mainly determined by what happened to economic growth as a result of the devaluation. In Argentina’s case, the devaluation led to strong economic growth—two years of over 8% growth—which in turn helped the market prices of the industrial and office sectors recover. Real estate prices, like those of any other asset, are affected by the socioeconomic environment in which they are set. Generally speaking, market prices are set by the interaction of supply and demand factors. But what happens to real estate prices when a sudden devaluation occurs? Some lessons can be learned from the Argentine experience of recent years. By the end of 2001 and the beginning of 2002, the Republic of Argentina was shaken by a very profound financial crisis that drastically impacted the overall financial system and resulted in a currency devaluation of nearly 70%. How did this crisis and devaluation affect the prices of real estate? Was the reaction of all real estate submarkets similar, or can we establish differential behavior characteristics for each type of submarket? For example, was there any difference between the residential, industrial, and office markets? In an earlier paper, the authors concluded that the Argentine devaluation affected residential real estate prices as a function of location: the better the location, the smaller the fall in prices as a result of the devaluation.  
This paper analyzes residential, office, and industrial values before the devaluation and their price behavior after the crisis. We will try to explain how the market prices of each real estate submarket (“residential,” “office,” and “industrial”) behaved as a result of the devaluation. During the 1990s the Republic of Argentina had a money-exchange parity regime established by law. The so-called “Convertibility Law,” issued in 1991, established a fixed conversion of one peso into one dollar. This law was abolished in early 2002, and a free exchange market was established in its place. Even without the Convertibility Law, it was a well-established fact that most real estate assets were historically quoted in U.S. dollars. The City of Buenos Aires, Federal Capital of the Republic of Argentina, with 2,770,000 inhabitants, has historically held the highest average real estate values in the country. Buenos Aires presents residential sectors of varied quality and sociocultural levels, with a strong concentration of administrative activities in the so-called “center” (downtown) of the City, displacing industrial activities to the periphery or outskirts in the suburbs outside the City’s geopolitical limits. To establish a starting point and understand developments after the crisis, it is necessary to present a brief summary of the building typology for the years before the devaluation. First, however, we present the database used for this work. For this report, an analysis was made of the most representative product types of each market under study (residential, industrial, and office). 
For housing, the “apartment” typology, represented by 1- and 2-bedroom units located in the City of Buenos Aires’ Northern corridor; for “offices,” a comparison between those classified as A+, located in the City center area and in the City of Buenos Aires’ Northern corridor; and for “industries,” standard-type buildings suitable for logistics and industrial processes, located in the City of Buenos Aires’ peripheral areas and the suburban ring in the Province of Buenos Aires, were selected. Due to the lack of statistics on the issue, the values per square meter were produced by the joint contribution of Merlo Negocios Inmobiliarios, L. J. Ramos Brokers Inmobiliarios, and Toribio Achaval. The latter two are well-established realtors. During the second half of the nineties, the residential market was very active owing to the ease of obtaining home loans. The average value of a square meter for a used apartment in an average zone like Caballito was on the order of US$900 to US$1,000, while the same unit in the City’s Northern corridor, identified as a mid-high socioeconomic class level, was quoted on the order of US$1,000 to US$1,200. The office market was also subject to a major activity upsurge during the nineties, due to the growth of the economy’s service sector and as a result of the privatization of state-owned companies. Therefore, most office space built in that decade was in the AAA and A+ class, fitted with a service infrastructure lacking in the City’s stock of office buildings, with sale prices ranging from US$2,500 to US$3,000 per m2 and rent figures averaging US$30/sq.m. Class B and C offices showed quotations well below these, with average prices of US$1,200 for Class B and US$900 to US$1,000 for Class C, depending, obviously, on the scale and the number of square meters of each operation.
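The "nearly 70%" devaluation discussed above translates into exchange-rate and dollar-price arithmetic as follows. A small sketch with assumed round numbers (the US$1,000/m2 starting price is taken from the apartment figures above; everything else is illustrative):

```python
# Sketch of the devaluation arithmetic (assumed round numbers).
old_rate = 1.0        # pesos per US dollar under the Convertibility Law
devaluation = 0.70    # "nearly 70%" loss in the peso's dollar value
new_rate = old_rate / (1.0 - devaluation)  # pesos per dollar after the crisis

price_usd_before = 1000.0  # US$/m2, a typical Northern-corridor apartment

# If a property's peso price were held fixed through the crisis, its
# dollar price would fall by the full devaluation percentage:
price_ars = price_usd_before * old_rate
price_usd_after = price_ars / new_rate

print(f"post-crisis rate: {new_rate:.2f} pesos/US$")
print(f"dollar price if the peso price stayed fixed: US${price_usd_after:.0f}/m2")
```

This fixed-peso-price case is the worst-case bound; since Argentine real estate remained dollar-quoted, actual dollar prices fell by less, and, as the authors note, by the least in the best locations.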


Managing Marketing Standardization in a Global Context

Dr. Boris Sustar, University of Ljubljana, Slovenia

Rozana Sustar, GEA College, Slovenia



The article discusses the possibilities for standardizing the marketing programs of Slovenian firms. The study, using factor analysis, surveyed 298 exporting firms in Slovenia. The survey found that environmental factors, such as political and economic stability, significantly affected the possibilities for standardization, enabling firms to improve sales margins. The strategic control over distribution and promotion exercised by Slovenian managers was identified as a constraint on standardization. The study also found many country-specific variables constraining the degree to which standardization and its benefits could be pursued by Slovenian managers. The evolving world of international business is witnessing the emergence of additional players, including firms from the former Eastern bloc. These firms are playing a game of catch-up as they attempt to learn the intricacies of doing business in today’s global economy. The speed at which this process is occurring varies across nations. Firms in Slovenia, the Czech Republic, and Hungary, for example, are rapidly acquiring the skills necessary to compete on the world stage. These firms have adopted both general approaches to marketing and targeted actions influenced by the local environment. This article discusses the possibility of standardizing marketing programs and the factors influencing the process of cost reduction, as they apply to the case of Slovenian firms. The literature in this area broadly examines the numerous variables that affect standardization. Both internal and external components impinge upon the decision to standardize the marketing program of product, price, distribution, and promotion (Kreutzer, 1988). The magnitude of differences in local physical, economic, social, political, and cultural environments is being diminished by the globalization of markets. As a result, there may be no differences between domestic and international marketing (Perry, 1990). 
However, a standardized marketing program cannot be set once and for all. Matching firms’ resources with environmental requirements, anticipating changes in consumers’ needs, and forecasting competitors’ behaviour (Easton, 1988; Kogut, 1988) are critical business activities for developing effective standardized export marketing initiatives (Akhter and Laczniak, 1989). The literature concludes that the economic environment (Hooley et al., 1993; Huszagh et al., 1992; Sullivan and Bauerschmidt, 1988), political environment (Kobrin, 1988), and cultural environment (Jain, 1989) affect the standardization process. An objective of this study is to test the correlation between environment and standardization for emerging economies, in order to instruct firms’ management in Slovenia in pursuing efficient standardization. Based on the literature, we study how environmental factors shape standardization. Organizational characteristics, such as firm size, global marketing experience, and the marketing strategies of management in exporting firms (Koh, 1991), compose the internal factors that create the conditions for global standardization. The nature of a firm’s products, markets, technological orientation, and resources (Lim et al., 1993) determines competitive advantage in international markets and the possibilities for profitable standardized approaches. Strong corporate cultures and management practices with regard to quality, innovation, and product performance (e.g., the "quality, service, cleanliness and value" principle of McDonald's) are a further determinant of profitable marketing standardization (Schuh, 2000). Other studies (Michell et al., 1998) point out that products are much more standardized, while promotion, distribution, and price are more localized. In contrast to this conclusion, the high-price strategy seems to work well everywhere and can be standardized as well (Botschen and Hemetsberger, 1998). 
The efficient globalization of markets leads to global products, global brands, and global advertising, respectively (Ayal and Zif, 1979); however, standardized advertising often does not fit optimally with local cultures (Raaij, 1997). The literature is not completely uniform on whether distribution practices are the least standardized elements of the marketing mix (Ozsomer et al., 1991; Botschen and Hemetsberger, 1998). A further objective of this study is to test the correlation between marketing strategies and standardization for emerging economies, in order to instruct firms’ management in Slovenia in pursuing efficient standardization. Marketing mix strategies are versatile internal factors of standardization, in comparison with firm size and/or business experience, which are more fixed internal factors. As intensive restructuring of Slovenian firms is taking place, the study gives special attention to internal factors that are controllable in the short term. This does not diminish the need to study supplementary firm characteristics in order to explain detailed factors of standardization. Based on findings in the literature, we therefore limit the study to a demonstration of marketing mix elements and their impact on the standardization process. A written questionnaire was sent to a random sample of 1,230 Slovenian exporting firms, representing approximately 18 percent of all exporting firms in the country. From this sample, 298 responses were received, for a response rate of 24.2 percent. Two (2) respondents were bankrupt firms and were therefore not used in the study. The questionnaires were addressed to general managers or executives involved in making strategic business decisions. All fifteen (15) questions were of the closed type. Five-point and three-point Likert scales were used for the majority of the questions. Five (5) questions required yes/no responses or specific answers, and two (2) questions required numerical determination. 
Some of the respondents were queried by phone when responses were ambiguous or incomplete. Table 1 provides more detailed information about the variables that were included in the statistical analysis.


The Association Between Firm-Specific Characteristics and Disclosure:  The Case of Saudi Arabia

Khalid Alsaeed, CPA, CMA, CFM, CI, Institute of Public Administration, Riyadh, Saudi Arabia



The main thrust of this paper is to examine the effect of specific characteristics on the extent of voluntary disclosure by a sample of non-financial Saudi firms listed on the Saudi Stock Exchange for the period 2002-2003. The variables investigated were as follows: firm size, debt, ownership dispersion, age, profit margin, return on equity, liquidity, audit firm size, and industry type. The association between the level of disclosure and the various firm characteristics was examined using multiple linear regression analysis. It was hypothesized that, for the sample firms, the level of disclosure would be positively associated with firm characteristics. It was found that firm size was significantly positively associated with the level of disclosure. The remaining variables, however, were found to be insignificant in explaining disclosure. The purpose of this paper is twofold: first, to report on the extent of voluntary financial and nonfinancial disclosure by a set of non-financial Saudi publicly held corporations, and second, to empirically investigate the hypothesized influence of several firm characteristics on the extent of disclosure. This paper contributes to the growing literature on the determinants of corporate disclosure level. Two reasons underscore the importance of this study. First, voluntary disclosure, information in excess of mandatory disclosure, has been receiving a growing amount of attention in recent accounting studies. Because of the inadequacy of compulsory information, voluntary disclosure provides investors with the information necessary to make more informed decisions. Thus, this study attempts to assess the quality of voluntary disclosure reported by non-financial firms listed on the SSM, especially in the annual reports, which are the chief vehicle firms use to convey information to investors.  
Second, this study provides insight into how the effect of certain firm-specific characteristics, namely structural, performance, and market variables, may hold up in other international financial reporting and regulatory jurisdictions. The results of the analysis are expected to help explain the variation in current and prospective disclosure extent in light of the aforementioned firm-specific characteristics. A voluminous body of research relating the level of accounting disclosure to firm-specific characteristics has been conducted in developed countries (e.g., Barrett 1977; Cooke 1991; Lang and Lundholm 1993; Wallace et al. 1994; Camfferman and Cooke 2002). Little attention has been devoted, however, to the association between accounting disclosure and firm-specific characteristics in Middle Eastern countries, and more specifically Saudi Arabia. This paper has chosen Saudi Arabia for several reasons. First, Saudi Arabia holds a 25% share of the total Arab GDP and is the world’s 25th-largest exporter/importer. Second, the Saudi government has been undertaking far-reaching steps aimed at enhancing its investment climate to make it more appealing to domestic and foreign capital. Third, the SSM ranks 1st in the Arab region, 8th among emerging markets in terms of market capitalization, and 23rd worldwide. Lastly, the issuance of the conceptual framework of accounting in 1986 and the creation of the Saudi Organization of Certified Public Accountants (SOCPA) in 1992 were major landmarks in the development of the accounting and auditing profession. Taken together, these factors are expected to elevate financial reporting practice and thereby improve the quality of accounting disclosure. The remainder of the paper proceeds as follows.  
Section two examines the evolution of accounting and financial reporting in Saudi Arabia; section three surveys the associated literature on disclosure studies; section four presents the study variables and hypothesis development; section five outlines the research design, including sample description, index construction, and model development; section six reports the results obtained; and section seven presents the summary along with the study’s limitations. As early as the 1980s, the accounting and auditing profession in Saudi Arabia, responding to tremendous economic strides, witnessed dramatic developments. By 1981, King Saud University had begun to hold a series of annual symposiums aimed at elevating the performance of the accounting profession. Following this event, the Saudi Accounting Association was created to foster accounting thought, facilitate the exchange of opinions and ideas, and promote accounting studies. After frequent meetings and intense discussion over four years, Ministerial Resolution No. 692 was eventually issued. The resolution approved the objectives and concepts of financial reporting and the standards of presentation and disclosure as guidelines for all CPAs. Four years later, companies as well as auditors became obligated to adhere to Ministerial Resolution No. 692. In 1992, Royal Decree No. M/12 was released, which invalidated the original CPA Regulation of 1974 and authorized the passage of new CPA Regulations. Royal Decree No. M/12 also entrusted the newly established SOCPA with the responsibility for enhancing the accounting and auditing profession. Since its inception, SOCPA has actively issued a series of accounting and auditing standards through its specialized committees. To date, SOCPA has released approximately 17 accounting and 14 auditing standards.  
Apart from issuing standards, SOCPA bears the responsibility for overseeing the audit profession: by periodically reviewing the performance of audit firms through quality control programs, by holding the CPA exam semiannually, by designing and presenting continuing professional education, and by giving guidelines and answers in response to the various requests it receives.
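The multiple linear regression described in this abstract, regressing a disclosure index on firm characteristics, can be sketched as follows with synthetic data (the variables, coefficients, and the size-only effect are illustrative assumptions echoing, not reproducing, the study's finding):

```python
import numpy as np

# Hypothetical sketch of a disclosure-index regression. All variables and
# coefficients below are synthetic illustrations, not the study's data.
rng = np.random.default_rng(1)
n = 100  # assumed sample of listed non-financial firms

size = rng.normal(10.0, 1.0, n)   # log of total assets (proxy for firm size)
debt = rng.uniform(0.0, 0.8, n)   # debt ratio
age = rng.uniform(1.0, 40.0, n)   # years since establishment

# Disclosure index = items disclosed / items applicable, generated here so
# that only firm size has a real effect.
index = np.clip(0.04 * size + rng.normal(scale=0.05, size=n), 0.0, 1.0)

# Ordinary least squares: index ~ intercept + size + debt + age.
X = np.column_stack([np.ones(n), size, debt, age])
beta, *_ = np.linalg.lstsq(X, index, rcond=None)
print("coefficients (intercept, size, debt, age):", np.round(beta, 4))
```

A significantly positive size coefficient alongside insignificant remaining coefficients, as the study reports, corresponds to what this synthetic setup builds in by hand.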


A Scorecard on Intellectual Capital Performance in the Economy

Vernon P. Dorweiler, Michigan Technological University, Houghton, MI

Mehenna Yakhou, Georgia College & State University, Milledgeville, GA



The future of commerce is forecast to rely heavily on advances in Intellectual Capital (Caddy, 2000). This research seeks to establish: (1) the accepted concepts of Intellectual Capital, (2) its strategic uses, and (3) how it is gaining acceptance on a world basis. If the forecast is valid, then Intellectual Capital can be integrated into traditional accounting. The expectation is that once Intellectual Capital’s measurement issues have been resolved, its use will greatly expand (Roslender, 2004). This research examines the issues involved. In addition to individual businesses, a national program of China is also presented; an outline of that program is found in Exhibit 1. Intellectual Capital has come under scrutiny for its uses. A scorecard reports an excess of benefits over costs. Once an accounting viewpoint is established, Intellectual Capital will have an impact on the Income Statement and the Balance Sheet (Caddy, 2000). The Organization for Economic Cooperation and Development (OECD) conducted a survey of 1,800 companies on their uses of Intellectual Capital (Shalkh, 2004): in organization (structure), in business relations (with customers and with stakeholders), and with employees (competence). Results of the survey showed (i) the extent to which companies have adopted Intellectual Capital, and (ii) how many companies have exerted effort to fit Intellectual Capital within traditional accounting and management reporting. The survey defined how Intellectual Capital should be measured, including Human Assets and the use of a Balanced Scorecard. The survey asked specifically whether respondents exhibited an entrepreneurial spirit, clearly in order to determine whether companies with an interest in Intellectual Capital would also have an open management style. Results showed that the European and Australian countries lead in positive responses to the development and use of Intellectual Capital (Shalkh, 2004). 
A remaining issue is whether or not Intellectual Capital can be measured (Abeysekera, 2003). Measurement of Intellectual Capital is defined as: the difference between market and net book value; calculated intangible value; and valuation of knowledge capital. Abeysekera recognizes that these measures are not predictive, but recommends that ratios can be used for that purpose. Key ratios play a comprehensive role in valuing Intellectual Capital and are presented below. Why are forecasts significant? As noted, Intellectual Capital is expected to be included in traditional accounting. That inclusion means that traditional financial reports will be affected. Another significant reason is that Intellectual Capital is a necessary instrument in managing a business. The rationale for this comes from previous practice: managers have adapted uneasily to new measures. Including Intellectual Capital in traditional reports introduces changes to which managers must respond (Roslender, 2004). This also illustrates Intellectual Capital’s growth in the U.S. economy. The business categories included within Intellectual Capital (Abeysekera, 2003) are performance in the market, service to the customer, innovation of product, employee involvement, and goal achievement. The rationale is expanded in advancing the development of Intellectual Capital (Tayles, 2002): (1) motives for valuing, or understanding, Intellectual Capital (why); (2) approaches to measuring Intellectual Capital (how); and (3) use of classification schemes for Intellectual Capital (what). The why-how-what rationale given by Andriessen (2001) is to improve the management of Intellectual Capital. The reasons given for management include: (a) “what gets measured gets managed”; (b) improving the management of intangible resources, in addition to tangible resources; and (c) creating intangible resource-based strategies. 
The results expected are: (a) monitoring the effects of actions taken; (b) translating business strategy into action; and (c) weighing possible courses of action. Implementation is described as “knowledge management” (Bontis, 2001). Management is a recent recognition regarding Intellectual Capital. Management deals with implementing a business plan (Bernstein, 2000), as follows: (1) setting the plan; (2) performing the plan; (3) measuring performance; and (4) correcting performance for variation from the plan. As a consequence, management is a comprehensive term. Knowledge, in this context, denotes a field defined by the production and revenue of an organizational unit and the contribution of its associated processes. Knowledge management thus embodies measurement issues of quantity, quality, and value. The aim of knowledge management is to improve the business (Shalkh, 2004). Those improvements are achieved by providing: (a) decision support for management; (b) development of competencies, both individual and organizational; and (c) reporting to customers and to shareholders. These improvements are extensive and demanding on managers. Some managers consider them a concern of top management only. Interest remains in the continued development of Intellectual Capital (Bontis, 2001). Anderson (2004) describes ways to measure segments of Intellectual Capital: Human Resources; Intangible Asset Monitors; and the Balanced Scorecard.
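The first measurement approach named above, the difference between market and net book value, is simple arithmetic; a minimal sketch with hypothetical figures:

```python
# Hypothetical figures illustrating the market-to-net-book-value measure of
# Intellectual Capital (Abeysekera, 2003). Both inputs are assumed values.
market_cap = 850.0      # market value of equity, in millions (assumed)
net_book_value = 500.0  # balance-sheet net assets, in millions (assumed)

ic_estimate = market_cap - net_book_value     # implied intangible (IC) value
market_to_book = market_cap / net_book_value  # companion key ratio

print(f"implied Intellectual Capital: {ic_estimate:.0f} million")
print(f"market-to-book ratio: {market_to_book:.2f}")
```

As Abeysekera notes, such point-in-time measures are not predictive; the companion ratio is the form recommended for forecasting use.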


Enhancing Education Through Technology?

Dr. H. W. Lee, National Chia-Yi University, Taiwan



The purpose of this paper is to synthesize empirical research articles in which video and other multimedia tools are used in the subject matters of science and mathematics in teacher education. First, I will introduce what research in the field of instructional technology is and what kinds of research issues have mostly been addressed in this field. Second, two quantitative and two qualitative research articles related to the topic will be examined and discussed in terms of their reliability and validity, using the standards of research proposed by Reeve (1995), including whether the research is “pseudoscience” (Reeve, 1995), whether it keeps to the field’s norms, and whether it meets its social responsibility. In addition, I will discuss how and which video and other multimedia tools have been used in each of these research articles. Furthermore, some implications will be proposed for the future use of video and other multimedia tools in the subject matters of science and mathematics in teacher education. Video is one of the most popular instructional media in public schools, and in combination with other multimedia tools it has become increasingly prevalent in our schools. However, the number of videos and other multimedia tools in schools does not guarantee success; how teachers use these tools effectively indicates the quality and success of our education. Some research (Ferguson, Holcombe, Scrivner, Splawn, & Blake, 1997; Holmes & Wenrich, 1997; Kumar & Sherwood, 1997; Viglietta, 1992; Whitworth, 1999) explores different issues and aspects of video use in teacher education for teachers of the specific subjects of science and mathematics in public schools. These empirical studies also present the possibilities of combining other technologies with the use of video in the classroom and address some difficulties and limitations of video use for both teacher education and classroom settings in the public schools.
However, some of these studies need to be investigated further in terms of their validity and reliability before suggestions elicited from them are adopted in real classroom settings. Thus, I will use the criteria proposed by Reeve (1995) to examine whether some of these studies are “pseudoscience”, which denotes a lack of norms, standards, and social responsibility. Furthermore, I will discuss the suggestions made in these studies and draw some implications for the use of video in both teacher education and classrooms in public schools. After reading research articles in the field of instructional technology, I found that some studies do not meet the norms and standards that Reeve (1995) has proposed, and also may not be socially responsible. Thus, before discussing the content of these empirical studies, I will discuss the methods of research and some mistakes that have been made in these research designs. First, I will categorize the main research methods in the field of instructional technology. Second, I will explain what “pseudoscience” (Reeve, 1995) is and what mistakes some researchers have made. Moreover, I will propose what can be done to improve the reliability of these studies. Reeve (1995) reviews the research articles in Educational Technology Research and Development (ETR&D) from 1989-1994, and finds that most research in this field fits the categories of empirical quantitative, qualitative, literature review, and mixed methods of two or three. Although he also finds that different journals of instructional technology emphasize different types of research, he points out that empirical quantitative and qualitative research need to be addressed and examined more. By examining the articles in ETR&D, Reeve (1995) also found some common characteristics of mistakes in IT quantitative empirical research, and he characterizes them as pseudoscience.
These characteristics include: specification error, lack of linkage to robust theory, inadequate literature review, inadequate treatment implementation, measurement flaws, inconsequential outcome measures, inadequate sample size, inappropriate statistical analysis, and meaningless discussion of results. Among these characteristics, I found specification error, inadequate literature review, inappropriate statistical analysis, and meaningless discussion of results in some studies. In addition, I found one study with ambiguity in its sample selection. Using the characteristics of pseudoscience presented by Reeve (1995), I will examine two articles that both relate to teacher education in the subjects of science and mathematics in public school classrooms. One was about the applications of hypermedia in science and mathematics, conducted by Kumar and Sherwood (1997); the other was about teachers’ perceptions of the effectiveness of Windows on Science (WOS) as a curriculum teaching tool, conducted by Ferguson et al. (1997). First, I will examine the overall structure of these studies. Second, I will examine the specification, sample size, and statistical measurement of each article. Moreover, I will discuss some meaningful comments in these two articles.  Overall structure.  The overall structures serve as the basis for examining the quality of the studies. Both studies used quantitative methods to analyze the results of surveys or questionnaires. Scanning the headings of the articles serves to detect some errors in these studies. For example, Kumar and Sherwood’s (1997) study presents very simple content without a description of their research method, subjects, procedure, and results. Also, citations and reference lists are confusing and are not specifically listed in Table 2 of the article.
On the other hand, Ferguson et al. (1997) present a clear structure for their study, with a detailed literature review, purpose of the study, research method, results, and discussion of the results. They also discuss the limitations of their study and provide directions for future research.  Specification and sample size.  Reeve (1995) defines specification error as vague definitions of the primary independent variables. Kumar and Sherwood’s study compares the effectiveness of hypermedia tools between student teachers and practicum teachers. However, they do not mention how they define the terms “student teacher” and “practicum teacher”, how much prior teaching experience each group has on average, or how many hours the participants teach per week (or day). In addition, the sample size of the experimental groups is not mentioned. On the other hand, Ferguson et al. (1997) describe the sample characteristics and size clearly: “The sample consisted of approximately 200 teachers in elementary schools in grade 1 to 6. These graduate students were teachers in their teaching position, used WOS” (p. 48).


An Examination of Dynamic Panel Models Using Monte Carlo Method

Dr. Junesuh Yi, Information and Communications University (ICU), Daejeon, Korea



This paper investigates the most appropriate parameter estimator among ten recent prominent estimators for dynamic panel data by using Monte Carlo experiments. To find the most appropriate estimator, the bias and the RMSE (root mean square error) are calculated over various values of the parameters. This study finds that Alonso-Borrego and Arellano’s systematically normalized generalized method of moments (SNM) estimator and the limited information maximum likelihood (LIML) estimator show the least bias and dispersion. They turn out to outperform all other estimators, so SNM and LIML are observed to be the most appropriate estimators. Panel data sets, which consist of a cross section of samples observed at several moments in time, have recently become common in econometrics and finance. Most empirical data for analyzing phenomena over time are composed of cross-sectional time series, such as gross national product (GNP) per capita for several countries over a certain period, or the performance of technology stocks during the turbulent market periods of the 1990s. Panel data sets therefore allow a researcher to analyze a number of important economic or finance questions that cannot be addressed using conventional cross-sectional or time-series data sets. Initiated by Anderson and Hsiao (1981), studies of dynamic panel data analysis, which assume that the lagged value of the dependent variable is one of the explanatory variables, have led the mainstream in this area. Papers on dynamic panel data have proceeded by changing assumptions with respect to the additional moment conditions in order to find more efficient and consistent estimators. However, little agreement has been reached on the most appropriate additional moment conditions to impose in a generalized method of moments (GMM) framework.
In this paper, I examine which estimator is the most appropriate among ten prominent models by using Monte Carlo experiments as a standardized method applied to all estimators. Most methods for estimation and hypothesis testing discussed in econometrics, including those for dynamic panel data, are typically based on large-sample asymptotics. Therefore, if such dynamic models are estimated from small samples, the standard asymptotic approximations may be very poor. Unfortunately, since the time dimension (T) is often very short in practice, it is relatively difficult to interpret the results with statistical confidence. One method to address this problem is the Monte Carlo experiment. The Monte Carlo method is used in many disciplines to refer to procedures in which quantities of interest are approximated by generating many random realizations of some stochastic process and averaging them in some way. Therefore, in this paper, I investigate the finite-sample properties of ten prominent estimators and statistics developed by researchers using Monte Carlo experiments, and identify accurate estimators for the relevant data-generating processes with only small numbers of individuals and time periods. The Monte Carlo experiment used in this paper is a simple dynamic panel model with one exogenous explanatory variable; the ten prominent estimators to be tested are therefore adapted to this simple dynamic panel model. To find the most appropriate estimator, the bias and the dispersion are calculated over various values of the parameters. In summary, this paper reviews ten recent dynamic panel data models, including the Within estimator, and suggests the most appropriate estimator by calculating parameter estimates using Monte Carlo experiments.  Dynamic relationships are characterized by the presence of a lagged dependent variable among the regressors, i.e.
$y_{it} = \delta y_{i,t-1} + x_{it}\beta + u_{it}$,   $i = 1, \ldots, N$; $t = 1, \ldots, T$,   (1)
where $y_{it}$ is the observation on the dependent variable for individual $i$ at time period $t$, $x_{it}$ is a $1 \times K$ vector of explanatory variables with an unknown $K \times 1$ coefficient vector $\beta$, and $u_{it}$ is assumed to follow a one-way error component model, $u_{it} = \mu_i + \nu_{it}$; a generating equation is also specified for the exogenous explanatory variable. However, dynamic panel models are observed to suffer from asymptotic bias arising from the error components. That is, $y_{i,t-1}$ is correlated with the error term in equation (1), so that the OLS estimator is biased and inconsistent even though the $\nu_{it}$ are not serially correlated. For the fixed effects estimator, the Within transformation can wipe out the $\mu_i$, but the transformed lagged dependent variable is still correlated with the transformed error term. The same problem may occur with the random effects generalized least squares (GLS) estimator. Therefore, many approaches have been proposed to eliminate this bias problem, and this paper introduces ten estimators among them. The ten estimators are put into a standardized form that has a lagged dependent variable and one exogenous independent variable.
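The bias problems described above can be demonstrated with a minimal Monte Carlo sketch. This is not the paper's actual design: it omits the exogenous regressor and compares only pooled OLS against the Within (fixed effects) estimator rather than the ten estimators, and all parameter values (delta, N, T, number of replications) are assumptions chosen for illustration. It computes the same diagnostics the paper uses, bias and RMSE.

```python
import numpy as np

def simulate_panel(N, T, delta, rng, burn=50):
    """Generate y_it = delta*y_{i,t-1} + mu_i + nu_it (one-way error components)."""
    mu = rng.normal(0.0, 1.0, size=N)            # individual effects mu_i
    y = np.zeros((N, T + burn))
    for t in range(1, T + burn):
        y[:, t] = delta * y[:, t - 1] + mu + rng.normal(0.0, 1.0, size=N)
    return y[:, burn:]                            # discard burn-in to reach stationarity

def ols_and_within(y):
    """Estimate delta by pooled OLS and by the Within (fixed effects) estimator."""
    y_lag, y_cur = y[:, :-1], y[:, 1:]
    # Pooled OLS: regress y_it on y_{i,t-1}; mu_i stays in the error term.
    ols = (y_lag * y_cur).sum() / (y_lag ** 2).sum()
    # Within: demean each individual's series over time before regressing.
    yl = y_lag - y_lag.mean(axis=1, keepdims=True)
    yc = y_cur - y_cur.mean(axis=1, keepdims=True)
    within = (yl * yc).sum() / (yl ** 2).sum()
    return ols, within

def monte_carlo(delta=0.5, N=100, T=6, reps=500, seed=0):
    """Average estimates over many simulated panels; report bias and RMSE."""
    rng = np.random.default_rng(seed)
    est = np.array([ols_and_within(simulate_panel(N, T, delta, rng))
                    for _ in range(reps)])        # shape (reps, 2)
    bias = est.mean(axis=0) - delta
    rmse = np.sqrt(((est - delta) ** 2).mean(axis=0))
    return bias, rmse

bias, rmse = monte_carlo()
# With short T, pooled OLS is biased upward (mu_i correlates with the lag)
# and the Within estimator is biased downward (the Nickell bias).
```

This reproduces the two failures the text describes: the lagged dependent variable is correlated with the composite error under OLS, and the Within transformation induces correlation between the demeaned lag and the demeaned error when T is small.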


Rational Conduct, Fairness, and Reciprocity in Economic Transaction Processes

Prof. Dr. Josef Neuert, University of Applied Sciences Fulda, Germany

Jeanne Butin, University of Applied Sciences Fulda, Germany

Anne-Sopie Farfelan, University of Applied Sciences Fulda, Germany

Petr Kolar, University of Applied Sciences Fulda, Germany

Thilo Redlich, University of Applied Sciences Fulda, Germany



How would the ordinary “man on the street” describe the famous “Homo Oeconomicus”? Finding a truly right and all-encompassing answer is probably almost impossible. Nonetheless, this concept has been of great interest for years, decades, even centuries, since the beginnings of modern economic theory with Adam Smith in the 18th century. The term “Homo Oeconomicus” mirrors an interdisciplinary idea that has had tremendous influence on social and political as well as economic science. The following project aims to identify and evaluate the significance of the concept in reality – in other words, to examine whether humans really are that rationally oriented when making decisions. Moreover, the claim that individuals are focused merely on profit maximization provides the second focus researched in this paper. In the end, the final conclusion aims at either supporting or rejecting the Homo Oeconomicus as a valid theory. In the latter case, the more or less opposing approach to human behaviour – the theory of reciprocity – is deemed more useful. Of course, the evaluation of a theory requires thorough research. Therefore, the theoretical basis of both models is given in the first part of the paper. Afterwards, the results of the conducted survey are presented. The final conclusion is then drawn by relating the theoretical fundamentals to the outcomes of the practical study performed. Generally, the project and its findings are intended to support the further development of the model of individual behaviour in its entire scope, as it is one of the most important concepts in the social sciences, providing the ground for major advances in this field.
Having clarified the goals and intentions of this project, this chapter introduces the Homo Oeconomicus as a model of individual behaviour that, on the one hand, creates the basis for modern economic theory and, on the other, is also applied in other social sciences that perceive human behaviour as a rational choice among available alternatives. Within this context, the single human being is placed at the center of the analysis, facing a situation of scarcity – not all preferences or needs can be satisfied at once – requiring a decision for one out of several available alternatives. Consequently, an interesting question arises: is the individual socially (altruistically) or egoistically motivated when making decisions within this framework? The description is primarily based on the book “Homo Oeconomicus” by Kirchgässner, since it offers a comprehensive overview of the whole topic. Generally, all decision-making situations of an individual are strongly influenced by two elements: preferences and restrictions. Restrictions delimit the individual’s scope, whereas preferences reflect the norms and values that evolved during the individual’s socialisation. Given a specific decision-making situation, the different ways of acting lie within the individual’s scope and are delimited by the restrictions. The human being does not identify all opportunities in detail; usually, only a very limited part of the various courses of action and their induced consequences is known. Consequently, forming definite expectations or, even more often, postponing the actual decision and instead gathering additional information are typical behaviours observable in decision-making situations. Beyond that, all alternatives are evaluated based on the individual’s preferences. In other words, a cost-benefit analysis, weighing the pros and cons of the different alternatives, is carried out for all available opportunities.
Finally, the individual chooses the alternative that best complies with her preferences and promises the highest “net benefit”. The preferences themselves are difficult to identify and tend to be more stable than the restrictions. Therefore, alterations in the individual’s behaviour are mainly explained by changing restrictions that make certain options more, and others less, advantageous. The model of rational behaviour (Homo Oeconomicus) considers human behaviour a rational choice among available alternatives – in other words, it deals with benefit/profit maximization under given restrictions, preferences, and uncertainty. Within this model the Homo Oeconomicus does not evaluate only the monetary consequences but considers all available properties and consequences (including, e.g., aesthetic quality) related to a specific alternative. The final choice is purely based on the individual’s own preferences. Of course, the human being is aware of living in a society with others. Therefore, social orientations (e.g. the desire to live in a democratic political system) are automatically included in the individual’s preferences, as they affect her scope. Beyond that, the final choice is also affected by the rationality of the decision, meaning that the individual is able to decide for her own benefit – in other words, to estimate and evaluate her scope and then act correspondingly. Time pressure as well as the costs of acquiring additional information are also integrated into the final decision, as they can alter the individual’s scope, requiring a systematic re-evaluation of the available opportunities. Generally, the individual within this model of rational behaviour adapts to changing environmental conditions by adjusting her preferences in a systematic and thus predictable manner – with “predictable” referring to the average effect that changing environmental conditions have on all individuals.
Furthermore, acting is limited to the single human being, so that collective problem-solving is considered the aggregation of individual decisions rather than cumulated decision-making. Nonetheless, group interactions can create new opportunities, resulting in the formulation of new preferences. Two problems concerning the strict separation of preferences and restrictions can be found in the literature: the assumption of constant preferences, and the role of profit/benefit maximization versus altruism in the model. The second problem, because of its significance for this project, is discussed in a separate chapter. Usually, the economic model of behaviour assumes that individuals’ preferences change remarkably more slowly than the restrictions, and thus preferences are considered constant. Since preferences cannot be identified independently of the individuals’ actions, the question arises whether explaining changes in human behaviour through alterations in preferences is really useful: every change in behaviour could then simply be explained by referring to alterations of preferences. Therefore, it seems more appropriate to use the independently identifiable restrictions to explain changes in human behaviour.
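The decision rule of the model, restrictions delimit the feasible scope, preferences assign benefits, and the alternative with the highest net benefit is chosen, can be sketched as a simple procedure. This is only an illustration of the logic, not part of the paper; the alternative names, benefit and cost figures, and the single budget restriction are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    benefit: float   # subjective benefit under the individual's preferences
    cost: float      # monetary and non-monetary cost

def rational_choice(alternatives, budget):
    """Choose the feasible alternative with the highest net benefit."""
    # Restrictions delimit the individual's scope: infeasible options drop out.
    feasible = [a for a in alternatives if a.cost <= budget]
    if not feasible:
        return None
    # Cost-benefit analysis: maximize net benefit over the remaining scope.
    return max(feasible, key=lambda a: a.benefit - a.cost)

options = [
    Alternative("train", benefit=8.0, cost=5.0),
    Alternative("taxi", benefit=9.0, cost=12.0),
    Alternative("walk", benefit=2.0, cost=0.0),
]
best = rational_choice(options, budget=10.0)  # taxi is infeasible; train maximizes net benefit
```

A change in restrictions (a larger budget) can change the outcome without any change in preferences, which mirrors the argument above that behavioural change is better explained through restrictions than through preferences.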

