The Journal of American Academy of Business, Cambridge
Vol. 5 * Num. 1 & 2 * September 2004
The Library of Congress, Washington, DC * ISSN: 1540-7780
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double blind peer review process.
The primary goal of the journal is to provide business-related academicians and professionals from various fields, in a global realm, with a single source in which to publish their papers. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. The Journal of American Academy of Business, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide our authors with venues recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread before submission; professional proofreading and editing services such as www.editavenue.com may be used.
The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. E-mail: email@example.com; Journal: JAABC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should also be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright 2000-2017. All Rights Reserved
An International Comparative Study of Economic Development:
The Recent Evidence
Dr. Tyler T. Yu, Mercer University, Atlanta, GA
Dr. Miranda M. Zhang, Mercer University, Atlanta, GA
Dr. Lloyd Southern, Mercer University, Atlanta, GA
Dr. Carl Joiner, Mercer University, Atlanta, GA
Using data from 1990 to 2000 collected for 30 countries, spanning high-income, middle-income, and low-income countries, this paper evaluates economic development by examining changes in socioeconomic factors between 1990 and 2000. The U.S. is used as the base country against which other countries are compared. Five social and economic variables are selected and examined for each country: life expectancy at birth, infant mortality rate, adult illiteracy rate, merchandise trade exports, and GDP. An ANOVA analysis is then conducted to estimate the significance of the differences among the three income groups in terms of socioeconomic performance over the 10-year period. The worldwide uneven distribution of wealth and of the ability to achieve economic development has long concerned economists, political decision makers, and others. Despite the trend toward globalization and liberalization of markets, which provides opportunities for lifting developing countries out of poverty, there seem to be persistent, if not increasing, inequalities among countries. According to Kevin Watkins (2002), a Senior Policy Advisor with Oxfam, globalization is exacerbating inequalities at various levels: income gaps based on access to markets, productive assets, and education are widening, acting as a brake on poverty-reduction efforts. The question is: why are some countries more successful than others in closing the development gaps? To answer that question, we must first know where the gaps are and how large they have become. The purpose of this paper is therefore to examine and compare worldwide economic development.
Five socioeconomic variables are selected and examined for each country: life expectancy at birth, infant mortality rate, adult illiteracy rate, merchandise trade exports, and Gross Domestic Product (GDP). We first review the existing research shedding light on worldwide economic development. A comparative analysis is then conducted to examine recent changes in economic development, and an ANOVA analysis tests the significance of the differences among the three country categories with respect to the five socioeconomic variables. Finally, we draw conclusions based on the empirical findings of the study. In general, many emerging-economy countries use economic liberalization as the primary engine for growth. Hoskisson et al. (2000) examined two subsets of emerging-economy countries: the developing countries in Asia, Latin America, Africa, and the Middle East, and the transition countries in the former Soviet Union and China. Both private and public enterprises have had to take different paths and use different strategies in dealing with these two distinct subsets of the emerging-economy sector. The research examined the strategies and implementation paths used by private and public businesses from three primary theoretical perspectives: institutional theory, transaction cost economics, and the resource-based view of the firm. An article published in the Economist argues that the income gap between developed and developing countries has been widening again since the late 1990s. Due at least partially to the financial crisis and economic recession of the late 1990s, many countries, especially developing ones, were struggling to maintain their income levels.
Meanwhile, however, rich countries, notably the U.S., were still able to secure their economies because of the information-technology revolution that was initiated and developed mainly by these countries. Understanding the process of economic development and growth has been an ongoing human quest. Rogers (2003) conducted a survey examining how economists approach this task: the author reviewed the major issues and current debates, discussed the models and conceptual frameworks, and summarized the empirical results of existing research. Recently, several books have been published reporting studies of the economic development and growth of specific countries. Peebles and Wilson (2002) provide an overview of Singapore's economic development, following a brief history of the economy and a snapshot of contemporary Singaporean society; specifically, the book examines the institutional and political framework within which Singapore has risen to developed-country levels of per-capita GDP. Hanson (2003) published a book on the economic history of the USSR from 1945 to its collapse in the early 1990s, emphasizing the former empire's economic growth rates, trends in consumption, economic reforms, and changes in specific sectors (agriculture and foreign economic relations). Mukherjee (2002) published a book on the historical development of the political economy of India, exploring the role of Indian capitalists and institutions in the development of both colonial and nationalist economic policies before independence. The book entitled "The Vietnamese Economy: Awakening the Dormant Dragon," edited by Binh Tran-Nam and Chi Do Pham (2003), contains 20 articles by Vietnamese economists examining a set of economic issues currently facing the Vietnamese economy.
The authors discuss, among other things, the need for further economic reforms, the improvement of macroeconomic policies, and the need to accelerate exports.
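The comparative analysis described above rests on a one-way ANOVA across the three income groups. A minimal sketch of that test, using invented illustrative numbers rather than the paper's actual data, might look like this:

```python
# Hypothetical illustration of the paper's ANOVA: testing whether the mean
# change in one socioeconomic indicator (here, life-expectancy gain in years,
# 1990-2000) differs across the three income groups. Data are made up.
from scipy import stats

high_income   = [2.1, 1.8, 2.4, 1.9, 2.2]
middle_income = [3.0, 2.5, 3.4, 2.8, 3.1]
low_income    = [0.5, 1.2, -0.3, 0.9, 0.7]

f_stat, p_value = stats.f_oneway(high_income, middle_income, low_income)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: mean gains differ across income groups")
```

The same call would be repeated for each of the five variables; a significant F statistic indicates that at least one group mean differs from the others.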
The Hewlett Packard – Compaq Computers Merger: Insight from the Resource-Based View and the Dynamic Capabilities Perspective
Preeta Roy, The Wharton School, University of Pennsylvania, Philadelphia, PA
Probir Roy, University of Missouri-Kansas City, Kansas City, MO
In this paper, we investigate the ongoing challenges posed by consolidation in the technology industry. We focus on two different paradigms to explore value creation in acquisition events, the resource-based view (RBV) and the dynamic capabilities perspective, and use them to analyze the potential of technology mergers by focusing specifically on the merger of HP and Compaq. The HP-Compaq merger presents an interesting case in which these two paradigms can be used to gain insight into potential outcomes. We begin with an overview of relevant literature. We then analyze HP and Compaq in terms of resource mix and the combined synergies that might arise from related resources. This is followed by an analysis of each company's acquisition experience to determine whether the (dynamic) capability to integrate exists. Mergers and acquisitions have been, and continue to be, a topic of great interest to researchers trying to understand why some firms perform better than others in managing the acquisition process. In managerial practice as well as in academic writing, management of the post-acquisition integration phase is established as the single most important determinant of shareholder value creation (or value destruction) in the acquisition process (Zollo, 2001). As Zollo and Singh (2001) find, the type of acquisition (horizontal or market extension) is an important variable for understanding performance implications. In horizontal acquisitions, there exists a higher potential for efficiency-driven cost reductions. This position pertains to the resource-based view of the firm and the impact of resource (and market) relatedness between the two firms. On the other hand, such acquisitions require a more complex integration process.
There are a greater number of potential overlaps of resources and activities across the organizations, and consequently a large array of simultaneous, independent decisions and action steps is necessary to accomplish the integration. In such a case, the set of post-acquisition decisions about the manipulation of resources, the (dynamic) capability to do so, and the match between these two factors seem to matter most (Zollo and Singh, 2001). To deepen our understanding of how resources are applied and combined in obtaining strategic advantage, the RBV framework is utilized. The RBV is a model built on the notion that firms are heterogeneous in their resources (Teece, Pisano and Shuen, 1997; Barney, 1991; Wernerfelt, 1984). Resources include all assets, capabilities, organizational processes, firm attributes, information, and knowledge controlled by a firm that enable it to conceive of and implement strategies that improve its efficiency and effectiveness (Barney, 1991). Further, resource endowments are sticky: firms are to some degree stuck with what they have and may have to live with what they lack (Teece, Pisano and Shuen, 1997). As Teece, Pisano, and Shuen (1997) state, resources are sticky because 1) business development is an extremely complex process; 2) some assets are not readily tradeable; and 3) even when an asset can be purchased, firms may stand to gain little from doing so, as the price paid for the asset fully capitalizes the rents from that asset (unless the firm is lucky or possesses superior information). It is this third point that is central to this paper: can the acquisition of a firm create value beyond the competitive market price paid for it? Relatedness of resources between the acquirer and the target might account for enhanced performance of the combined entity.
Premiums paid to gain control of the target underestimate the potential synergies that could be gained from relatedness, notably in consolidation-oriented acquisitions (Singh and Zollo, 2000). Consolidation-oriented acquisitions do result in positive abnormal returns, as well as significantly higher post-acquisition cash flows, for several reasons: 1) the sharing of critical resources and the replacement or dismissal of redundant pre-existing resources; 2) the ability to exploit economies of scale and scope; 3) the opportunity to create a unique and non-imitable combination of assets that earns positive abnormal rents on investments; and 4) the eventual consolidation of the industry. Since related resources enhance performance outcomes, it is important to understand how resources can be related. Resources between firms can be related in two ways: supplementary, when the target offers more of the resources that the acquirer already possesses, and complementary, when the target's resources combine effectively with what the acquirer already possesses (Wernerfelt, 1984). The benefits of relatedness (supplementary or complementary) are expected to emerge when firms integrate their resources extensively (Singh and Zollo, 2000).
Using Six-sigma to Improve Loan Portfolio Performance
Dr. Fataneh Taghaboni-Dutta, University of Michigan-Flint, Flint MI
Dr. Keith Moreland, University of Michigan-Flint, Flint MI
Six-sigma is a customer-driven quality program that identifies critical customer requirements and incorporates them into process selection, design, and implementation. Many manufacturing companies have implemented six-sigma initiatives to improve the quality and precision of processes and outputs; applications to service firms have been much less common. In this case study, we explore how six-sigma is used to design, measure, and analyze process loss and to guide process improvements with respect to guarantor and refinancing decisions in the student loan industry. The firm studied experienced poor performance in its portfolio of student loans acquired for rehabilitation and subsequent refinancing with the Student Loan Marketing Association (Sallie Mae or SLMA). Benchmarking and process review under six-sigma identified incongruence between employee incentives and organizational goals, together with poor up-front evaluation of the ability to rehabilitate (put in good standing) an acquired student loan, as the primary causes of poor loan portfolio performance. The company could improve performance by matching employee (customer representative) incentives with organizational goals and through a comprehensive, more standardized, systematic review of loans considered for acquisition, rehabilitation, and refinancing with SLMA. The six-sigma strategy is a customer-driven quality program that uses focus groups and surveys to identify critical customer requirements and incorporate them into process selection and design. A successful six-sigma strategy will move an organization toward zero defects (Mikel and Schroeder 2000). The strategy originated with Motorola and was made popular by Jack Welch, former CEO of General Electric. Six-sigma strategy focuses on the elimination of hidden costs generated as a result of producing defective products and services.
These costs are often difficult to measure, but their elimination can add 30 to 40 percent to a company's profits (Mikel and Schroeder 2000). They include poor training, rework time, process bottlenecks, litigation, lost credibility, prevention costs, customer dissatisfaction, delays, defective work, misused resources, communication problems, and the costs incurred when product or service quality does not match consumer expectations. The focus of six-sigma is to create processes in which only random causes of variation are present. In a normal distribution, 95.44% of the measurable output occurs within two standard deviations of the mean, and 99.73% within three standard deviations. Most manufacturers operate within these limits; in other words, between 95.44% and 99.73% of their products are free from errors. However, a process operating at a two- to three-sigma level will produce 45,600 to 2,700 defective parts per million (PPM). An organization operating at the six-sigma level can expect 99.99966% of its products and services to be free from defects; this quality level allows no more than 3.4 defective parts per million. Achieving this level of quality output means reducing process variation through a technique called DMAIC: define, measure, analyze, improve, and control. At this point, the objective of six-sigma is to ensure that the "voice of the process" matches the "voice of the customer." Processes that are not meeting customer specifications are improved incrementally through the DMAIC model. The statistical tools used in the DMAIC model include process maps, Pareto charts, control charts, cause-and-effect diagrams, hypothesis testing, box plots, and process capability ratios. In some cases more advanced statistical tools are used, but six-sigma relies heavily on simple charting and mathematical models.
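The sigma-to-PPM figures quoted above can be reproduced directly from the normal distribution. A quick sketch follows; note that the familiar 3.4-PPM benchmark conventionally assumes a 1.5-sigma drift of the process mean, an assumption the discussion above does not state explicitly:

```python
# Defective parts per million (PPM) at a given sigma level, from the
# normal CDF. An unshifted +/-6-sigma process would yield ~0.002 PPM;
# the conventional six-sigma figure of 3.4 PPM assumes a 1.5-sigma shift.
from scipy.stats import norm

def defective_ppm(sigma_level, mean_shift=0.0):
    """PPM outside spec limits placed at +/- sigma_level, with the
    process mean shifted by mean_shift standard deviations."""
    inside = norm.cdf(sigma_level - mean_shift) - norm.cdf(-sigma_level - mean_shift)
    return (1 - inside) * 1_000_000

print(round(defective_ppm(2)))          # roughly 45,500 PPM at two sigma
print(round(defective_ppm(3)))          # roughly 2,700 PPM at three sigma
print(round(defective_ppm(6, 1.5), 1))  # roughly 3.4 PPM: the six-sigma benchmark
```

The small gap between the computed two-sigma value and the 45,600 quoted in the text comes from rounding the 95.44% figure.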
The objective for the process is to reduce by one half the distance between the mean and the upper and lower specification limits. Through DMAIC, corporations identify and eliminate special-cause variation from their processes; as a result, the process baseline improves incrementally until six-sigma quality output is achieved. The real challenge with six-sigma lies in its implementation. There is more than one method available for achieving the six-sigma performance level. Some programs require expensive training, ranging between $5,000 and $30,000 per employee. Nearly every program requires outspoken executive leadership and a basic understanding of statistics. Project leaders are frequently called "Black Belts," a name given to them by General Electric; there are also Green Belts, Yellow Belts, and Champions, titles used to distinguish statistical knowledge and experience. Since six-sigma is tailored to fit each type of business, no one model is suitable for all companies. Nevertheless, the primary objective remains the same: reduce output variation to reflect near-perfect quality. A crucial step in implementing six-sigma is to understand customer needs, which can be especially challenging for service firms (Biolos 2002). Information must be gathered through surveys, focus groups, and research, then analyzed to identify the critical characteristics that must be met to satisfy the customer. Next, a company should define product or service defects. Defects include any product or service that does not meet specifications or that causes customer discontent. It is important that defects be quantifiable so that they can be traced and eliminated from the process. Equally important, output must be measurable and must bring some quantifiable utility to the customer. Service firms need to engage actively in quality measurement in order to reduce cost and increase customer service (Ograjensek 2002).
Although many manufacturing companies have implemented six-sigma initiatives successfully, implementation remains more difficult for service firms. The traditional financial functions within an organization need to understand the bottom-line cost of poor quality (Freidman and Gitlow 2002) and to explain the financial benefits achieved through six-sigma to all stakeholders (Neuscheler-Fritsch and Norris 2001). Juran has advocated implementing quality initiatives for service firms (Juran and Bingham 1999).
International Pharmaceuticals Industry: The New Marketing Paradigm in the United States and Unresolved Issues of Public Policy
Dr. Lee Richardson, University of Baltimore, Baltimore, MD
Dr. Vince Luchsinger, University of Baltimore, Baltimore, MD
The pharmaceutical industry is one of the major industries in the world and is increasingly owned by American companies. One of the industry's major marketing tools within the United States is direct-to-consumer (DTC) advertising. Approved on an interim basis in 1997 by the industry's regulator, the Food and Drug Administration, DTC expenditures already approach $3 billion annually. The industry faces increasing resistance to many of its practices, including rising prices, and numerous criticisms of DTC remain unresolved. Future trends for DTC depend on a number of factors, and it is unrealistic to make firm forecasts. Trends that have contributed to the strong growth of the pharmaceutical industry include strong firm productivity, robust and innovative research and development, demographic trends that produce a heavily prescribed older population, and a free market. Senior populations themselves are growing at an accelerated rate. Last but not least is the success of marketing strategies, especially traditional personal selling, joined recently by aggressive direct-to-consumer advertising programs (DTC), the core topic of this paper. DTC has broken with the tradition of promotion aimed at physicians who prescribe the products, supplemented by lesser effort aimed at pharmacists who fill the prescriptions; DTC instead influences the consumers and patients, who in turn may seek particular pharmaceuticals through their physicians. DTC can be understood through, first, an analysis of the industry and, then, a look at how it tries to solve problems, especially with its array of marketing tools. The landscape of the international pharmaceutical industry is turbulent and changing. Advances in research and development (R&D) and mergers and acquisitions (M&A) are altering the nature and operations of the international industry.
Mergers provide economies of scale, which are important in providing a stream of new products at lower cost for the marketplace. Contributions from R&D cross international borders in alliances and combinations that attempt to put new products on shelves and in the medicine chests of consumers. Global pharmaceutical sales for the 12 months ending in June 2003 were $430 billion, a 12% increase over 2001. Standard and Poor's (2003) expects worldwide drug sales to increase based on a well-prescribed and growing population of seniors. In fact, IMS, a Connecticut pharmaceutical marketing research firm, projects growth of 6% to 7% globally over the 2002-2007 span, a slowdown from the 7.9% annual growth seen during 1995-1999. The North American countries (chiefly the United States) constituted 51% of the world market for drugs in calendar year 2002. Other segments of the world market were Europe (25%), Japan (12%), other Asian markets, Africa, and Australia (8%), and Latin America (5%). Growth rates for the 2003-2005 period are estimated by IMS at 12% for the U.S., compared with 7% in the United Kingdom, 5% in Germany, 4% in France, and 1.5% in Japan, according to Standard and Poor's (2003) estimates. Sales in Latin America declined 10% in the face of unfavorable economic and financial environments. The United States hosts the largest and fastest-growing pharmaceutical industry in the world; IMS projects that U.S. drug sales will increase at a 12% compound rate over 2003-2005. Interestingly, 73% of total U.S. sales were within the country, with the remaining 27% being sales by American companies to customers beyond U.S. borders; this attests to the borderless nature of product flows and sales of goods such as pharmaceuticals. The U.S. pharmaceutical market reached $219 billion in size in 2002, according to NDCHealth, a health information services firm.
The customer sectors of that market consisted of retail pharmacies (67%), hospitals (15%), mail order (8%), clinics (7%), and nursing homes (3%). Mail order had the fastest growth in 2002, some 31%; hospital sales grew 10%, and retail pharmacy sales increased 9.4%. Sales growth has remained consistent. The leading pharmaceutical companies in the United States by year-2002 sales in the global market, as reported by Pharmaceutical Executive, were as follows:
U.S. Trade Deficits with China and Mexico: The Heckscher-Ohlin Theorem Revisited
Dr. Farhad F. Ghannadian, Mercer University, Atlanta, GA
The trade deficit of the U.S. and the response of the second Bush Administration have been criticized by the media and by politicians who demand more restrictions on trade. A huge portion of this deficit lies with two countries with which the U.S. trades heavily: China and Mexico. The two countries differ in nearly every aspect one can imagine; Mexico is a close neighbor, while China is over five thousand miles away. Yet both countries and the United States exhibit the behavior predicted by two Swedish economists, Heckscher and Ohlin. The abundance of capital in the U.S. relative to China and Mexico, and the abundance of labor in those countries relative to the U.S., move each country to produce the goods that are least expensive for it in terms of labor and capital. This article looks at the products traded between the U.S. and China and Mexico and recommends strategies the U.S. government could employ to improve the trade balance with these countries, rather than fighting the natural economic transition in the respective economies through legislation or trade barriers. Recent trade talks between China and the U.S., with the new Chinese Premier's visit to the U.S. in December 2003, have created additional questions on trade. China's trade surplus of over $120 billion and its holdings of U.S. Treasury bonds in excess of $100 billion have made its economy clearly interconnected with the U.S. In addition, Chinese monthly wages are only $120 a month in the more expensive urban areas, and another 500 million peasants living in the countryside are waiting to enter the industrial labor force. Recently the Bush Administration lifted the U.S. steel tariffs after twenty months of threats by the European Union to impose sanctions if they were not removed. Most people living in the United States consume many goods manufactured overseas. For example, many U.S. residents will have their children's bicycles manufactured in China, a dining room set from Italy, the tablecloth covering the table from Hong Kong, and computers from a country in Southeast Asia, and the list goes on. For the first half of 2003, according to the Manufacturing Alliance, a research group, imports took two out of every three dollars in manufacturing shipments. U.S. manufacturing is running at three-quarters capacity, a forty-year low. The tremendous U.S. trade deficits of over $400 billion in each of the past three years may be indicative of problems embedded in trade and economic theories. This trade deficit is almost 5% of GDP, almost a million U.S. dollars per minute. According to U.S. trade statistics, the trade deficit in actual dollar value is the highest it has been in the last fifty years. A sampling of the most popular imports suggests the exact problem the U.S. is facing. Table 1 shows that the biggest contributors to the worsening trade deficit are in Asia and the Americas. Americans have enjoyed less expensively produced consumer goods, industrial goods, and electronics from Asia and other regions of the world. Canada's economy is so linked to the U.S. that it is hard to classify it as foreign, even though for statistical and political purposes it is classified as such. When the trading dilemma is examined more closely, a huge portion of the deficit lies with China and Mexico. The two countries differ in nearly every aspect one can imagine: one is a neighbor and one is over five thousand miles away; one is among the most populous nations and the other has a population half the size of the United States'. Yet both countries and the United States exhibit behavior very much as predicted by the two Swedish economists, Heckscher and Ohlin. This paper attempts to show that their trade model has predicted the pattern almost to perfection, given several key assumptions.
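A toy numeric sketch can make the Heckscher-Ohlin logic concrete. The endowments, factor requirements, and stylized factor prices below are invented for illustration, not taken from the paper's data: the capital-abundant country ends up with the lower relative cost in the capital-intensive good and so exports it.

```python
# Hypothetical two-country, two-good, two-factor illustration of the
# Heckscher-Ohlin prediction. All numbers are invented for this sketch.
endowments = {             # (capital, labor) endowments, arbitrary units
    "U.S.":  (100, 50),
    "China": (40, 400),
}
goods = {                  # unit factor requirements: (capital, labor)
    "machinery": (4, 1),   # capital-intensive good
    "apparel":   (1, 5),   # labor-intensive good
}

relative_cost = {}
for country, (K, L) in endowments.items():
    r, w = 1 / K, 1 / L    # stylized factor prices: the scarce factor is dear
    cost = {g: k * r + l * w for g, (k, l) in goods.items()}
    relative_cost[country] = cost["machinery"] / cost["apparel"]

# The country with the lower relative cost of machinery exports machinery
exporter = min(relative_cost, key=relative_cost.get)
print(relative_cost)
print(f"{exporter} exports the capital-intensive good")
```

With these invented numbers, the capital-abundant country's relative cost of the capital-intensive good is well below the labor-abundant country's, which is the factor-proportions pattern the paper argues holds for U.S.-China and U.S.-Mexico trade.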
The article also introduces some prescriptions for dealing effectively with the problems at hand. As manufacturers of a wide range of goods complain to Congress about losing orders to Chinese counterparts, Congress will initiate a response in the 2004 election year. Despite the growth of the trade deficit in 2002 from $102 billion to an estimated $130 billion, China remains the fastest-growing export market for American goods; the U.S. sells everything from airplanes to soybeans to the Chinese. In 2002 U.S. companies increased their exports by 19%, and for 2003 this figure is estimated to be in the neighborhood of 22-23%. Table 2 gives a brief description of U.S. exports to China, which were over twenty-two billion dollars in 2002. Table 3 shows Chinese exports to the U.S., an astonishing 125 billion dollars, mostly in the form of consumer electronics, computers, shoes, and clothing. The protests by many of the U.S. manufacturers are
An Analysis of the Incentives to Licensing in U.S. Information Technology
Dr. YoungJun Kim, The George Washington University, Washington D.C.
This paper investigates the validity of potential factors that might affect companies' incentives to license out their technology. Empirical analysis is provided with the help of a panel data set of observed licensing transactions worldwide involving information technology (IT) companies publicly traded in the United States. Our results show that transaction cost, market competition, and knowledge appropriability considerations weigh heavily in explaining licensing behavior. The important explanatory factors relate to the firm's prior involvement in technology licensing, industry concentration, sales growth, and the propensity to receive patents in the primary industry of the company. The company's stock of technological knowledge (patents), company size, and R&D intensity also play a key role in determining managers' licensing incentives. There is anecdotal evidence that the market for technology is less developed than socially desirable and does not function well. For example, a study by the British Technology Group found that large companies in the United States, Western Europe, and Japan ignore a large share of their patented technologies, which could be licensed or profitably sold (British Technology Group, 1998). The inefficiency of the market for technology is caused by a number of impediments. The best-known obstacle is the "appropriability problem": in an early paper, Arrow (1962) argues that once an idea is disclosed to a potential buyer, the buyer can use the information without paying for it. Because of this concern, a potential licensor is reluctant to disclose the core of the technology, depriving a potential licensee of the chance to evaluate it; without being able to evaluate the technology, buyers are unwilling to buy. This leads to a typical "market failure".
Nelson and Winter (1982) point out that innovation is largely the outcome of organizational routines and hence is more effectively performed within organizations. "Cognitive" limitations in the transfer of technology to another context require extensive adaptations and impose costs (Arora and Gambardella, 1994). Additional difficulty arises in subdividing a given problem-solving task into subtasks: it can be difficult to partition the innovation process into independent tasks (Kline and Rosenberg, 1986; Von Hippel, 1990). There can also be problems in exchanging tacit knowledge and know-how through arm's-length contracts, such as moral hazard and asymmetric information between licensor and licensee (Caves, 1996; Hart, 1995; Menard, 1996). In spite of these impediments, however, there is also extensive evidence of the increasing use of licensing in technology-intensive industries. For instance, a recent study by Arora, Fosfuri and Gambardella (2001) shows that technology licensing transactions with a total value of over $320 billion, an average of nearly $25 billion per year, occurred worldwide in the period 1985-1997. Thomson Financial's SDC database, used in this paper, lists more than 10,000 publicly announced licensing agreements during the 1990s. What, then, are the factors that affect technology holders' incentives to license? Are there differences in licensing activities across firms? What causes such differences? How do firms' operational, organizational, and primary-industry characteristics affect managers' decisions on technology licensing? We address these questions. This paper investigates the validity of the potential factors that might affect companies' incentives to license out their technology.
The probability of selling technology licenses is explained by firm-level variables (sales, R&D intensity, capital investment, profitability, prior licensing experience) and the firm's primary-industry-level variables (concentration, sales growth, market size, the strength of intellectual property rights protection). Empirical analysis is provided with the help of a unique panel data set of observed licensing transactions worldwide. We focus on U.S. public companies operating in the information technology (IT) sector: an area that has grown in economic importance over the last few decades. Limiting the analysis to firms in a single technology sector ensures that the dimensions on which firms are characterized will be of comparable importance. The organization of the rest of the paper is as follows. Section 2 proposes the theoretical perspective on technology licensing. Section 3 describes the data. The model is specified in Section 4. Section 5 discusses the main results. Section 6, finally, concludes. The incentive of a company to sell its technology to prospective competitors is driven by two principal effects on the licensor's profits working in opposite directions: the revenue effect and the rent dissipation effect (competition effect) (Arora, et al., 2001). The revenue effect is given by the profits that accrue to the licensor in the form of licensing payments (i.e., a fixed licensing fee or royalty) from licensees. The licensor essentially increases the aggregate market share of products produced with its own technology by adding a licensee, and raises profits from licensing payments. Thus the revenue effect has a positive impact on the licensor's profits.
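The binary decision to license out, explained by firm- and industry-level variables, is the kind of outcome typically estimated with a logit or probit model. A minimal sketch follows, using synthetic firm-year data and illustrative variable names (the paper's actual specification and data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firm-year observations (synthetic, for illustration only).
n = 500
X = np.column_stack([
    np.ones(n),                    # intercept
    rng.normal(size=n),            # log sales (standardized)
    rng.normal(size=n),            # R&D intensity
    rng.binomial(1, 0.3, n),       # prior licensing experience (0/1)
    rng.normal(size=n),            # industry concentration
])
beta_true = np.array([-1.0, 0.4, 0.6, 1.2, -0.3])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))  # 1 = licensed out

def fit_logit(X, y, iters=25):
    """Maximum-likelihood logit via Newton-Raphson iterations."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        W = p * (1.0 - p)                     # observation weights
        H = X.T @ (X * W[:, None])            # negative Hessian of log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

beta_hat = fit_logit(X, y)
```

With enough observations, the estimated coefficient on prior licensing experience comes out positive, mirroring the finding that prior involvement in technology licensing raises the probability of licensing out.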
The Impacts of Country-of-Origin on Brand Equity
Dr. Chien-Huang Lin, National Central University, Taiwan
Danny T. Kao, National Central University, Taiwan
In the era of global marketing, corporations have to keep an eye on the marketing environment to survive in the long run. Branding strategies, as the key element in the marketing mix, are increasingly viewed as a powerful tool to obtain sustainable competitive advantages, fully utilize available resources, and avoid bruising price competition (Aaker and Keller, 1990). Brand equity is widely acknowledged as an index for measuring the effectiveness of branding strategies. However, when facing numerous unfamiliar brands, consumers may fall into a dilemma. The country-of-origin (COO) therefore becomes a critical external cue for consumers to depend on. However, the country of brand origin is not necessarily identical to the country of manufacture, owing to international OEM business. It is interesting to observe how this phenomenon impacts brand equity. The concept of brand equity was addressed by advertising agents several decades ago (Barwise, 1993). Although brand equity does not currently appear as an account in financial statements, it is significantly influential on revenues. For example, Philip Morris, the manufacturer of Marlboro, acquired Kraft Foods and Miller Beer. The merger of AOL and Time Warner is another example. In the process of M&A, the assessment of brand value is absolutely indispensable, and thus the importance of brand equity is even more conspicuous. For the time being, almost every industry is suffering from an enduring economic downturn and trying to reduce operational costs while raising profits. Brand equity is thus receiving more attention.
While brand equity helps consumers screen out messages in the chaos, reinforces confidence in purchase decisions, and creates greater satisfaction, it is also conducive for sellers to increase marketing effectiveness and efficiency, establish brand loyalty, improve profitability, and distinguish themselves from competitors (Huang, 2001). Therefore, it is suggested that marketers spare no effort to maintain and strengthen brand awareness and loyalty, upgrade the quality perceived by consumers, and build up positive brand associations, thus increasing comprehensive brand equity. The COO effect, concerning consumers' viewpoints and assessments of a brand, has become a popular topic in past decades. Researchers observed that, in addition to the company that manufactured the product, the country a product was made in also has potential impacts on purchase decisions, and in turn a secondary association will emerge (Keller, 1998). In other words, a country may carry an exclusive reputation or stereotype for specific products in consumers' minds: for example, perfume with France, home appliances with Japan, and wristwatches with Switzerland. However, some brands may decide to shift their manufacturing bases to other countries or regions, or even authorize foreign firms to produce key components or parts (so-called OEM) for lower production costs. This phenomenon means that the country a brand belongs to and the country it was actually made in are not identical, which may affect brand equity. Therefore, we present a conceptual framework and some propositions to interpret their relationship. Brand Awareness: Simply speaking, brand awareness refers to "the ability of a potential buyer to recognize or recall that a brand is a member of a certain product category" (Aaker, 1991). According to Keller (1993), brand awareness consists of two sub-dimensions: brand recall and brand recognition.
For example, most consumers know that Colgate produces dental care products. However, many brands have started to stretch into unrelated industries through the overuse of product diversification. Such ill-advised decisions can bog a brand down in ambiguous positioning, which may further harm brand assets (Ries, 1999). Consumers tend to feel confident in known brands for a first-time purchase if they are not familiar with the product category. From a psychological perspective, brands with high awareness are most likely to be chosen in the final stage of purchase decisions. Hence, the importance of brand awareness is clear. Brand Association: Brand association is anything "linked" in memory to a brand (Aaker, 1991). For example, Porsche reminds consumers of a sports car with modern style, a streamlined shape, and a top price. That is what Porsche is like in consumers' minds. Simply put, we can view the brand image as the set of collective impressions of a brand. The magnitude of brand association is tied to the exposure frequency of a brand and consumers' personal usage experience. The more use, or the more messages associated with a brand, the stronger (positive or negative) the brand association will be. Marketers have to be alert to guard against potential negative brand associations in marketing events. Perceived Quality: Perceived quality can be defined as "the consumer's judgment about a product's overall excellence or superiority" (Zeithaml, 1988). Marketers across all product and service categories increasingly recognize the importance of perceived quality in brand decisions (Morton, 1994). In fact, perceived quality is a subjective judgment and cannot always be interpreted scientifically. Some elements, such as channels, brand images, countries-of-origin, prices, designs, and accredited certificates, may moderate perceived quality.
Brand Loyalty: Brand loyalty is "the degree to which a buying unit, such as a household, concentrates its purchases over time on a particular brand within a product category" (Schoell & Guiltinan, 1990). Brand loyalty reflects the outcome of a satisfying first-time purchase experience. However, brand loyalty may be affected by external factors such as competitors' promotional events. Price premium is one of the most critical indicators for measuring brand loyalty; it means "the premium a consumer would pay for a branded product or service, compared to an identical unbranded version of the same product/service" (Biel, 1993). The purpose of this research is to pinpoint the relationship between COO and brand equity, as well as the moderators between those two constructs. It is widely accepted that COO is an important external cue for judging product quality when consumers face an unfamiliar brand. In a nutshell, consumer perceptions of a brand may largely rest on COO, which in turn leads to actual purchase behavior to some extent. However, some intervening factors may interfere with the impact of COO on brand equity. Both issues mentioned above still leave much to be desired in the previous literature. Therefore, we present a conceptual framework to explain these issues and make some useful suggestions. To marketers, understanding the relationship between COO and brand equity will benefit the effectiveness of marketing strategies for survival in an increasingly competitive business jungle. To consumers, making adequate purchase decisions will thus become easier. The simplified chart below illustrates the hypothesized relationship between the COO effect and brand equity. This research is exploratory in nature. The product categories applied in this research are sedans and personal digital assistants (PDAs), because of their prevalence and multinational character.
The research questions are as follows: To examine the effects of COO on brand equity. To explore the moderators between COO and brand equity.
Making the Most of International Assignments: A Training Model for Non-resident Expatriates
Dr. Spero C. Peppas, Mercer University, Atlanta, GA
Studies indicate that, as a result of globalization, most firms expect the number of employees they send on international assignments to grow. However, with downsizing and an increased focus on the bottom line, the three to five year expatriate posting has all but disappeared, giving way to commuter, frequent flyer and virtual assignments. With the evolution of these alternatives comes the need for new paradigms to prepare employees to perform effectively in different environments. This paper sets forth a 3-step, time- and cost-effective model to provide non-resident expatriates with basic macro-environment information and to acculturate them to their international destinations. Given globalization trends, companies are realizing the importance of having employees who can function well in the international arena. A 2002 study by PricewaterhouseCoopers found that 75% of firms surveyed expected an increase in the number of employees on international assignments. At the same time, 82% of firms viewed cost reduction as a priority in international assignments (International assignments: key trends 2002, 2003). Despite high costs, companies use expatriates for a variety of reasons, for example, to establish a business presence quickly in response to market developments; to provide skills that are not available in a particular country; to transfer technical knowledge as well as company culture and policy after mergers and acquisitions; and to allow employees to gain international experience as part of company management development programs (van der Boon, 2001). However, long-term (three to five year) expatriate assignments, the corporate rule in the past, are giving way to new alternatives. Such long-term assignments are complex and cause disruption to employees as well as to their families.
Employers must give consideration to housing, transportation, children's education, taxation, health insurance, retirement plans, dual career implications, as well as to the possibility of failed assignments due to various factors, including adjustment problems for the trailing spouse and family (PricewaterhouseCoopers expatriate survey, 2000; Global relocation trends 2001 survey report, 2002). A recent study (Hyde, 2002) found that, today, the majority (77%) of long-term assignments are for less than three years, and that many companies are turning to the alternatives described below. Short-term assignments are defined as those with a specified duration, usually less than one year. While the family may accompany the relocated employee, this is generally not the case. These assignments are less complex to administer than traditional long-term assignments and therefore the cost to the company is considerably less (Measuring the value of international assignments, 2003; IHRM update, 2002). International commuter assignments involve having employees travel from their home countries to work in an office setting in a destination country or countries, usually on a weekly or bi-weekly basis. The employee is away only for short intervals and, since the family stays at home in this arrangement, this option greatly reduces costs to the company. A variant of the international commuter is the international frequent flyer, an employee who makes frequent international business trips, generally lasting only a few days (Measuring the value of international assignments, 2003). Faced with the difficulty of getting qualified employees to commit to any international assignment, some companies have turned to advances in technology for another solution: virtual assignments. Most often associated with international teamwork, companies manage virtual international assignments via electronic communication such as video- and teleconferencing.
While not on a regular basis, some international travel is also a part of this type of assignment (Greene, 2001). Approximately two-thirds of companies provide at least one day of employee cross-cultural training for international relocation assignments (Global relocation trends 2001 survey report, 2002), but only 20% of expatriates rate the preparation they receive as "good" (HR update: family values emerge in expatriate study, 2002). Noteworthy is that pre-departure preparation for "shorter" assignments appears to be much reduced, when, according to some, the need for the employee to hit the ground running would suggest a need for even better preparation (Pinnell, 2003; Joinson, 2002). While commuter, frequent flyer, and virtual international assignments may be easier for companies to manage from an administrative point of view, process models and implementation plans are not readily available for these types of assignments (IHRM update, 2002). This paper helps to fill this void by proposing a 3-step model for training non-resident expatriates.
Corporate Governance: Theory and Practice
Dr. Malek Lashgari, CFA, University of Hartford, West Hartford, CT
Various theories and philosophies have provided the foundation for the development of alternative forms of corporate governance systems around the world. Furthermore, as economies have evolved through time, it appears that corporate executives have deviated from the sole objective of maximizing shareholders' wealth. Owners of capital have responded to these forces for the purpose of preserving their wealth and earning a reasonable return on their invested capital. While internal corporate control, external financial market forces, and institutional investors' responses have been effective in securing shareholders' wealth, legal protection still needs to be provided for them. As a legal entity, a corporation enters into contracts to produce goods and services and has the right to own property. Furthermore, the firm can borrow from various lenders and raise cash by issuing shares of its ownership. Shareholders not only benefit from the earnings generated by the corporation but, by electing members of the board of directors, can indirectly oversee actions undertaken by the managers. These managers, as agents of the shareholders, are expected to act in the best interest of the owners of the corporation. Corporate managers can add value for common stockholders without decreasing the welfare of the other corporate stakeholders. For example, borrowing a portion of the capital needed to finance the activities of the firm would lead to a higher return to common stockholders, because borrowing is generally inexpensive for the firm given the taxation benefits available to business enterprises. Executive decisions may also result in a transfer of wealth from one group of stakeholders to another. For example, by undertaking risky investment projects, greater rewards may become available to common stockholders without any such benefits to bondholders, who instead suffer the excessive risk. Corporate managers can also destroy wealth.
History offers numerous examples in which actions undertaken by corporate executives have resulted in the bankruptcy of the firm. The managers of a business enterprise, however, could add value for all corporate stakeholders, including owners of capital, labor, and society at large. This would be a case of Pareto optimality, in which the welfare of some group is increased without any decrease in benefits to the others. Corporate governance is concerned with managing the relationships among various corporate stakeholders. Roe (1994) states that the American corporate governance system emerged as a result of both economic evolution and the country's democratic philosophy. In effect, the government, by deliberately weakening commercial banks, gave corporate managers excessive power. U.S. banks were prevented from becoming corporate shareholders, let alone large shareholders. U.S. laws further restrained the activities of large shareholders. In this manner, the profile of American corporate shareholding became as widely dispersed as possible. The idea, as expressed by the Coase Theorem, was that management would then need to obtain the agreement of numerous dispersed shareholders, and thereby act in the best interests of them all. The political view on corporate governance was based on the belief that banks, as lenders to the corporation, should not be able to affect the payoffs to common stockholders. The modern view on corporate governance, as expressed by North (1994), depicts formal and informal contractual agreements among corporate stakeholders. These may include the payoff structure for suppliers of capital such as stockholders and lenders, the incentive structure for corporate managers, and the organizational structure for maintaining an effective balance in the bargaining power of the corporation's employees. This humanly designed organizational structure involves transaction costs for maintaining and enforcing agreements.
The neoclassical view assumes that institutions do not matter. Modigliani and Miller (1958), for example, hypothesize that, assuming the investment policy of the firm is known to the market, its total market value would be independent of the mix of debt and equity used to finance the firm's assets. In particular, the firm's structure of capital claims would not affect its overall cost of capital. As a consequence, the investment and financing decisions of the firm would remain independent of each other. In this view, the corporate governance structure of the firm would not contribute to the creation of value for shareholders. In contrast to the neoclassical view, Williamson (1988) states that debt and equity are not merely alternative financing instruments, but rather alternative governance structures. Furthermore, whether a project should be financed by debt or equity depends principally on the characteristics of the assets: re-deployable assets can be financed by debt, while projects that are not re-deployable should be financed by equity.
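The Modigliani-Miller irrelevance result mentioned above can be stated compactly in its standard textbook form (this is the usual frictionless, no-tax statement, not reproduced from the 1958 paper itself). For a levered firm with debt D, equity E, and total value V = D + E:

```latex
% MM Proposition I (no taxes): firm value is independent of leverage
V_L = V_U
% MM Proposition II: the required return on equity rises with leverage,
% leaving the overall cost of capital r_A unchanged
r_E = r_A + (r_A - r_D)\,\frac{D}{E},
\qquad
\text{WACC} = \frac{E}{V}\,r_E + \frac{D}{V}\,r_D = r_A
```

Substituting the expression for r_E into the WACC confirms that the leverage terms cancel, which is precisely the sense in which the structure of capital claims does not affect the overall cost of capital.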
A Review of Employee Motivation Theories and their Implications for Employee Retention within Organizations
Dr. Sunil Ramlall, University of St. Thomas, Minneapolis, MN
The article provides a synthesis of employee motivation theories and offers an explanation of how employee motivation affects employee retention and other behaviors within organizations. In addition to explaining why it is important to retain critical employees, the author describes the relevant motivation theories and explains their implications for developing and implementing employee retention practices. The final segment of the paper provides an illustration, with explanation, of how effective employee retention practices can be explained through motivation theories and how these efforts serve as a strategy for increasing organizational performance. In today's highly competitive labor market, there is extensive evidence that organizations, regardless of size, technological advances, market focus, and other factors, are facing retention challenges. Prior to the September 11 terrorist attacks, a report by the Bureau of National Affairs (1998) showed that turnover rates were soaring to their highest levels of the decade, at 1.3% per month. There are indeed many employee retention practices within organizations, but they are seldom developed from sound theories. Swanson (2001) emphasized that theory is required to be both scholarly in itself and validated in practice, and can be the basis of significant advances. Given the large investments in employee retention efforts within organizations, it is rational to identify, analyze, and critique the motivation theories underlying employee retention in organizations. Low unemployment levels can force many organizations to re-examine employee retention strategies as part of their efforts to maintain and increase their competitiveness, yet these strategies are rarely developed from existing theories.
The author therefore describes the importance of retaining critical employees and explains how employee retention practices can be made more effective by identifying, analyzing, and critiquing employee motivation theories and showing the relationship between employee motivation and employee retention. Furthermore, Hale (1998) stated that 86% of employers were experiencing difficulty attracting new employees and 58% of organizations claimed that they were experiencing difficulty retaining their employees. Even when unemployment is high, organizations are particularly concerned about retaining their best employees. The article provides a synthesis of employee motivation theories and offers an explanation of how employee motivation affects employee retention within organizations. In addition to explaining why it is important to retain critical employees, the author describes the relevant motivation theories and explains their implications for developing and implementing employee retention practices. The final segment of the paper provides an illustration, with explanation, of how effective employee retention practices can be explained through motivation theories and how these practices serve as a strategy for increasing organizational performance. In today's business environment, the future belongs to those managers who can best manage change. To manage change, organizations must have employees committed to the demands of rapid change, and such committed employees are the source of competitive advantage (Dessler, 1993). "Commitment is critical to organizational performance, but it is not a panacea. In achieving important organizational ends, there are other ingredients that need to be added to the mix. When blended in the right complements, motivation is the result" (O'Malley, 2000, p.13).
Fitz-enz (1997) stated that the average company loses approximately $1 million with every 10 managerial and professional employees who leave the organization. Combining direct and indirect costs, the total cost of an exempt employee's turnover is a minimum of one year's pay and benefits, and a maximum of two years' pay and benefits. There is a significant economic impact when an organization loses any of its critical employees, especially given the knowledge that is lost with the employee's departure. This is the knowledge that is used to meet the needs and expectations of customers. Knowledge management is the process of creating, capturing, and using knowledge to enhance organizational performance (Bassi, 1997). Furthermore, Toracco (2000) stated that although knowledge is now recognized as one of an organization's most valuable assets, most organizations lack the supportive systems required to retain and leverage the value of knowledge. Organizations cannot afford to take a passive stance toward knowledge management in the hope that people are acquiring and using knowledge, and that sources of knowledge are known and accessed throughout the organization. Instead, organizations seeking to sustain competitive advantage have moved quickly to develop systems to leverage the value of knowledge for this purpose (Robinson & Stern, 1997; Stewart, 1997). Thus, it is easy to see the dramatic effect of losing employees who hold valuable knowledge.
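The cost figures above imply simple back-of-the-envelope arithmetic. A minimal sketch (the function name and input figures are illustrative, not from the article):

```python
def turnover_cost_range(annual_pay: float, annual_benefits: float):
    """Fitz-enz-style rule of thumb: total turnover cost for an exempt
    employee runs from one to two years of pay and benefits."""
    one_year = annual_pay + annual_benefits
    return (one_year, 2 * one_year)

# ~$1 million lost per 10 departing managerial/professional employees
# implies roughly $100,000 per departure on average.
avg_loss_per_departure = 1_000_000 / 10

# Invented example: $80k salary plus $20k benefits.
low, high = turnover_cost_range(80_000, 20_000)
```

For this invented salary, the rule of thumb brackets the cost between one and two years of total compensation, consistent with the $100,000-per-departure average cited above.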
Determining Success Indicators of E-Commerce Companies Using Rough Set Approach
Faudziah Ahmad, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
Dr. Abdul Razak Hamdan, Professor, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
Dr. Azuraliza Abu Bakar, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
The success of E-Commerce companies (ECC) depends on a large number of indicators. To include all relevant indicators in measuring success would present a tremendous burden in terms of data collection, analysis, and cost. Evidence in the literature indicates that there are a limited number of critical areas necessary for the successful functioning of organizations (Rockart, 1979). Globerson (1985) found that three indicators were commonly used. While there is no hard and fast rule as to the correct number of indicators to use, it has been recommended that no more than seven indicators be used in measuring performance (Globerson et al., 1991). A rough set classification technique is proposed to identify the best set of ECC success indicators. The set of indicators is identified from the reducts that produce rules with the highest classification accuracy. The indicators are ranked by computing their frequencies of occurrence in the reduct sets. The experiments identified the important indicators ranked in the top ten positions, as well as a reduced set of indicators. The internet-based revolution is far from over, and more and more companies have realized the opportunities it offers. A study on ECC conducted by UNCTAD revealed that the global E-Commerce market was worth around US$ 615.30 billion and was expected to grow to US$ 4,600 billion by 2005. An estimate by Forrester Research indicated that global online sales accounted for approximately US$ 2,293.50 billion of world trade during 2002 (Nasscom, 2003). The study of organizational performance has long been conducted by many research groups. Growth, profit, net income, and earnings per share are some of the indicators examined when inquiring about a company's performance. Good performance indicates that a company is successful; poor performance indicates otherwise.
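The reduct-frequency ranking step described above can be sketched in a few lines. This assumes the reduct sets have already been produced by a rough set algorithm; the reducts and indicator names below are hypothetical, for illustration only:

```python
from collections import Counter

# Hypothetical reducts: each is a minimal subset of indicators that
# preserves the classification of companies into successful / not successful.
reducts = [
    {"net_income", "sales_growth", "eps"},
    {"net_income", "current_ratio", "eps"},
    {"sales_growth", "eps", "stock_price"},
    {"net_income", "eps", "working_capital"},
]

# Rank indicators by their frequency of occurrence across the reduct sets.
freq = Counter(ind for r in reducts for ind in r)
ranking = [ind for ind, _ in freq.most_common()]
```

Indicators appearing in the most reducts (here `eps`, then `net_income`) rank highest, mirroring the paper's frequency-of-occurrence criterion for selecting the best indicator set.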
In general, "success" as defined by the Oxford Advanced Learner's Dictionary (1989) is "an achievement of a desired end, social position or wealth". In relation to an organization, success is when an organization accomplishes its objectives, which cover long-term achievements in terms of survival, effectiveness and efficiency, and productivity (Mescon, 1985). Some of the frameworks on company performance that have received much attention are Information Economics (Parker and Benson, 1988), the Balanced Scorecard (Kaplan & Norton, 1992, 1996), the Business Excellence Model (EFQM, 1999) and the Performance Prism (Neely et al., 2001). Each framework identifies a set of variables or indicators that contribute to the performance of ECC. These variables were gathered from different aspects of an organization. Parker and Benson looked at an organization through adjusted ROI, business value, IT (infrastructure) value, and risks and uncertainty. The Performance Prism, on the other hand, focused on people when evaluating performance. According to Kaplan and Norton, performance measurement of an organization should cover all aspects; that is why they named their method the "Balanced Scorecard". They analyze an organization from four different perspectives, namely internal process, customer, innovation, and finance. Each aspect has several indicators that can influence organizational performance. All these approaches used data mainly from surveys, interviews, and observations. Investors, suppliers, partners, merger candidates, and other external interested parties also look at companies' performance when making decisions on investment matters. These parties seek financial data to look for trends, ratios, and other numerical information. Financial data has been a popular source used to analyze companies' performance by many research companies such as Multex Investor, Media General Financial Services, Nasdaq, and Reuters.
Financial analysis, which focuses on financial information, can in general be categorized into profitability ratios, efficiency ratios, and price ratios. Measures from these categories are many; among them are the current ratio, quick ratio, net income, working capital, operating income, revenue, sales growth, earnings per share, gross profit, book value, stock price, stock volume, and others (Corrado & Jordon, 2000). These measures have been found to have great influence on the performance of companies and are indicators of companies' success. All these measures are relevant indicators in measuring success. However, to include all relevant indicators in measuring success would present a tremendous burden in terms of data collection, analysis, and cost. Evidence in the literature indicates that there are a limited number of critical areas necessary for the successful functioning of organizations (Rockart, 1979). Globerson (1985) found that three indicators were commonly used. While there is no hard and fast rule as to the correct number of indicators to use, it has been recommended that no more than seven indicators be used in measuring performance (Globerson et al., 1991).
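A few of the ratio-type measures listed above can be computed directly from balance-sheet and income-statement items. A minimal sketch using standard textbook definitions (the example figures are invented):

```python
def financial_ratios(current_assets, inventory, current_liabilities,
                     net_income, shares_outstanding):
    """Standard definitions of three of the measures named above."""
    return {
        "current_ratio": current_assets / current_liabilities,
        # Quick ratio excludes inventory, the least liquid current asset.
        "quick_ratio": (current_assets - inventory) / current_liabilities,
        "eps": net_income / shares_outstanding,  # earnings per share
    }

# Invented example figures (in millions, except shares outstanding):
r = financial_ratios(current_assets=200.0, inventory=50.0,
                     current_liabilities=100.0, net_income=30.0,
                     shares_outstanding=10.0)
```

With these invented inputs the current ratio is 2.0, the quick ratio 1.5, and EPS 3.0; in the rough set experiments such ratios serve as candidate condition attributes.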
The Implementation of Total Quality Management Strategy in Australia: Some Empirical Observations
Dr. Richard Yu-Yuan Hung, Toko University, Chia-Yi, Taiwan
This paper reports a study designed to examine the key concepts of TQM implementation and their effects on organizational performance. Process Alignment and People Involvement are two key concepts for the successful implementation of TQM. The purpose of this research is to discuss how these two constructs affect organizational performance. The research hypotheses were empirically tested using a cross-sectional mail survey. Based on 207 responses from Australia's top 1000 companies and perception data from CEOs and MDs, this research confirms several existing research findings and presents some new results. The research provides useful insight into organizations that use TQM as an organizational development program. TQM strategies represent a paradigm shift from the earlier strategies of the 1980s in the approach to management science. Studies showed that TQM was positively associated with performance outcomes, such as financial performance and profitability (Cummings & Worley, 2001; Lawler et al, 1995), as well as with human outcomes, such as employee satisfaction, employee relations, and customer satisfaction (Lawler et al, 1995). Although many TQM studies have discussed the concept and its principles, the key to a successful TQM program is not fully understood (Weintraub, 1993). According to Rivers and Bae (1999), successful implementation of TQM requires a transformation of the organizational information system infrastructure and other management systems so that they are aligned with the new TQM environment. Powell (1995) suggested that tacit resources such as organizational culture, commitment, empowerment and business processes drive TQM success. Sahney (1991) pointed out key concepts for implementing TQM, which included: top management leadership, creating a corporate framework for quality, transforming the corporate culture, a collaborative approach to process improvement, and integration with the process.
However, for TQM to be successful, management processes must be aligned and integrated within a TQM environment. For example, the bureaucratic system must be transformed, strategies must be aligned, and information systems must be integrated to ensure TQM success. Some studies showed that it is important for top management to take a leadership role and show strong commitment when implementing TQM (Lee & Asllani, 1997; Rivers & Bae, 1999; Weintraub, 1993). Weintraub (1993) also pointed out that the quality management process will be successful only when it becomes integrated with every employee’s activities. Although reports of TQM success are plentiful in the popular literature, there are also reports of problems (Cummings & Worley, 2001; Powell, 1995; Fortune, 1991). Among the many factors that affect whether TQM practices result in performance, core processes and people are the two major drivers of TQM success. However, more empirical studies are needed to show the contribution of organizational variables such as structure, strategy, information technology, human resources, leadership, culture, and employee participation to the success of TQM programs (Nadler, 1998; Tushman and O’Reilly, 2002). It is the intention of this study to examine the impact of organizational variables, specifically the alignment of structure, strategy, and information technology, together with executive commitment and employee empowerment, on TQM programs and the final performance outcomes. Therefore, this study suggests that Process Alignment and People Involvement are two constructs that influence organizational performance when organizations undertake TQM initiatives. Process Alignment (PALI) consists of three variables – structural, strategic, and IT alignment. People Involvement (PINV) consists of two variables – executive commitment and employee empowerment.
The purpose of this research is to examine the relationship between two TQM concepts and their impact on organizational performance. Specifically, this study addresses the following research questions: What is the relationship between Process Alignment and Organizational Performance, especially when organizations practice TQM initiatives? To what extent do the components of Process Alignment affect Organizational Performance? What is the relationship between People Involvement and Organizational Performance, especially when organizations practice TQM initiatives? To what extent do the components of People Involvement affect Organizational Performance? The theoretical foundation for this study comprises TQM, Process Alignment, and People Involvement. Process Alignment can be interpreted as the organizational effort needed to make processes the platform for organizational structure, for strategic planning, and for information technology (Hammer, 1996; Spector, 1995). The aim of Process Alignment is to arrange the various parts of the company to work in harmony in pursuit of common organizational goals, in order to improve performance and sustain competitive advantage (Weiser, 2000). According to organizational theory, organizations are required to design their structures and systems to align with the contingencies of environment, strategy, technology, and so on for survival and success (Daft, 1998; Lewin, 1999). Many previous studies have empirically demonstrated the positive effect of alignment on organizational effectiveness (Lawrence and Lorsch, 1967; Gresov, 1989; Roth et al., 1991). Alignment theory (Semler, 1997) suggests that employee behavior and organizational goals are brought into consonance through structural change, strategy deployment, and culture transformation.
Specifically, Weiser (2000) stated that in order to link all areas of the organization and serve as an informational lifeline throughout the change and alignment process, the organizational structure needs to be redesigned to be cross-functional. Grover et al. (1997) pointed out that IT, as a transformational subsystem, is imperative in culture transformation. Therefore, when an organization is appropriately aligned, organizational structure, strategic planning, and IT correspond to the organization's core processes and objectives, ensuring competitive advantage.
Urban Land Pricing Under Uncertainty: An Introductory Model
Dr. Bruce Lindeman, University of Arkansas at Little Rock, Little Rock, AR
In the downtown areas of smaller cities, large buildings are rarely built. However, when the pressures of rising rents become great enough, it becomes likely that at some time in the near future a developer will purchase a suitable plot of available land and construct a new building. As the likelihood of this event becomes greater, speculators will begin to show interest in available plots of land, hoping to reap a profit by buying one (or more) and ultimately selling to the developer. However, until the developer ultimately decides upon a single plot, uncertainty will prevail among speculators as to which it will be. This paper develops a simple model involving two acceptable plots of land and varying numbers of speculators. The analysis shows that when only two speculators are involved, it is possible for them to buy the available plots at “bargain” prices. However, when three or more speculators show interest, the plots will be either “fully” priced or overpriced. Further, the more uncertainty that prevails among speculators, the greater the likelihood of more significant overpricing. Speculation in land is a common practice. The objective is to buy land cheaply, when its immediate development prospects are poor, and to hold it for some time while it “ripens” into more valuable property suitable for development. This paper develops a simple model appropriate to the downtown areas of smaller cities where large buildings, such as high-rise office structures, are only occasionally built. From time to time, however, it becomes apparent that new construction will occur, and that in the near future a developer will seek a suitable plot of available land and construct a new building. The model focuses upon this interim speculative period. We define this period as the time during which it is certain that a new building soon will be built, but before a developer actually purchases a site.
During this period, speculators (whom we will call buyers) bid on and try to buy suitable sites. This simple model assumes only two available sites, and examines situations involving varying numbers of buyers, varying configurations of buyers’ bids for the two sites, and the effect of uncertainty upon the process. We assume that there exist two plots of land (1 and 2) equally suitable for such construction. During the speculative period, it is unknown (and therefore uncertain) among buyers which one ultimately will be chosen for development. However, it is known to all buyers what price developers will pay for the chosen parcel. For the sake of simplicity, we will designate this ultimate price of the chosen parcel to be 1; once it is chosen, the value of the other parcel will become 0, since it will no longer be of any use. For simplification, we will ignore time as a factor, along with the time value of money. During the speculative period, each buyer must determine how much he is willing to pay for either parcel. This is done using an expected value approach, based upon the buyer’s view of the probability that each individual parcel will be chosen. Each buyer will have his own view of those probabilities; buyers will not necessarily agree on these probabilities because they may have differing subjective interpretations of widely-known information. Also, they may believe that they have what they consider to be unique (inside) information that can give them an edge; they would not want to share such information with anyone else. We designate O1i and O2i to be the prices offered by buyer i for plots 1 and 2 respectively. Each buyer will assign to each plot a probability of ultimately being chosen for development.
If buyer i assigns to plot 1 a probability of pi1, then he must necessarily assign to plot 2 a probability of 1 − pi1; since there are only two plots available, and it is assured that one ultimately will be chosen for development, the probabilities of being chosen for development must sum to 1. The ultimate value of the chosen plot will be 1, and that of the other will be 0, so the ultimate summed value of both plots together will be 1. Therefore, a rational expected value approach requires that O1i + O2i = 1, since the maximum return to a buyer owning both plots would be 1. Bidding: During the speculative period, buyers bid on each plot; the highest bidder for each plot acquires it. We can rank-order, from highest to lowest, the buyers’ offers for, say, plot 1 as follows: O11 ≥ O12 ≥ . . . ≥ O1i. We use the ≥ sign because buyers need not each have different bids for a given plot. We will designate the “winning” bids for plots 1 and 2 to be P1 and P2 respectively. In the situation where the two highest bids are different (so that, for plot 1 for example, O11 > O12), P1 (the winning bid for plot 1) depends upon O12, the amount bid by the second-highest buyer, since the “winning” bidder is required only to outbid the second-highest bidder, and not necessarily to bid as much as he is willing to. Because the “winner” must outbid the next-best offer, there must be some minimum increment by which the winning bid must exceed any other; we will call that increment ε. Therefore, P1 = O12 + ε.
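A minimal numerical sketch of this bidding logic (our illustration, not part of the paper; the probabilities and the size of the increment are arbitrary assumptions) shows how the winning prices emerge from the buyers' expected-value offers:

```python
# Illustrative sketch of the two-plot speculation model: each buyer i assigns
# probability p_i to plot 1 being chosen, so a rational expected-value offer
# is O1_i = p_i for plot 1 and O2_i = 1 - p_i for plot 2.

EPS = 0.01  # minimum increment by which a winning bid must exceed the runner-up

def winning_price(offers, eps=EPS):
    """Price paid by the highest bidder: the runner-up's offer plus the
    increment, capped at the winner's own maximum willingness to pay."""
    ranked = sorted(offers, reverse=True)
    if len(ranked) == 1:
        return ranked[0]          # no competition: pays up to his own maximum
    return min(ranked[0], ranked[1] + eps)

# Three buyers with differing subjective probabilities for plot 1
p = [0.6, 0.5, 0.3]
offers_plot1 = [pi for pi in p]          # O1_i = p_i
offers_plot2 = [1 - pi for pi in p]      # O2_i = 1 - p_i

P1 = winning_price(offers_plot1)
P2 = winning_price(offers_plot2)
print(P1, P2, P1 + P2)   # with three or more buyers, P1 + P2 can exceed 1
```

With these illustrative probabilities each plot sells for 0.51, so the two plots together cost more than the summed ultimate value of 1, consistent with the paper's claim that three or more speculators produce full pricing or overpricing.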
Identifying Global Leadership Competencies: An Exploratory Study
Cristina Moro Bueno, Grupo Antolin, North America
Dr. Stewart L. Tubbs, Eastern Michigan University, Michigan
The influence of globalization and technology requires new business paradigms and new leadership competencies. The goal of this study was two-fold: first, to test the Global Leadership Competencies Model developed by Chin, Gu, and Tubbs (2001), and second, to identify global leadership competencies. The model consists of a pyramidal hierarchy that represents developmental phases analogous to Maslow’s need hierarchy. The phases are (1) ignorance, (2) awareness, (3) understanding, (4) appreciation, (5) acceptance/internalization, and (6) transformation, as leaders mature as a result of their international experiences. For this qualitative study, 26 interviews were conducted with international leaders from several countries whose average international expatriate experience was 48 months. The results obtained demonstrated that the model was predictive. The results also indicate that leaders consider the following to be some of the most important global leadership competencies: (1) communication skills, (2) motivation to learn, (3) flexibility, (4) open-mindedness, (5) respect for others, and (6) sensitivity. For the full text, see: Cristina Bueno, Global Leadership Competencies (GLC) Model, MBA Thesis, Eastern Michigan University, Ypsilanti, Michigan, 2003. Research in leadership development has recently turned toward identifying leadership competencies (knowledge, skills, abilities, and behaviors) (Charan, Drotter, and Noel, 2001; Fulmer and Goldsmith, 2001; Goleman, Boyatzis, and McKee, 2002; Tubbs and Moss, 2003; Tubbs, 2004; Vicere and Fulmer, 1997). The logic is that once the competencies can be identified, the leadership development process can focus more effectively on improving the deficiencies identified in each individual. It is also known that all leadership occurs in some context.
The word competency comes from a Latin word meaning “suitable.” An individual’s competency refers to his or her ability to respond to the demands of the environment. The most important leadership competencies are those that can best transfer across cultures, both within organizations and from one country to another (Acuff, 1997; Deal and Kennedy, 1982; McGee, 2003; Rogers, 1995; Trompenaars and Hampden-Turner, 1998). The purpose of this study was to further investigate the Global Leadership Competencies Model developed by Chin, Gu and Tubbs (2001). It was an attempt to advance the research and development on this topic. The purpose was to study different leadership styles and to identify the competencies required for global effectiveness. Leaders must improve from deficiency levels to competency levels in order to succeed in conducting international business (see also Rosen et al., 2000, and Hampden-Turner et al., 2000). Hampden-Turner and Trompenaars (2000) have found that global leadership competencies develop over a long period of time. If research can identify the most important leadership behaviors, practitioners can perhaps shorten the process of developing the most important competencies (McCall and Hollenbeck, 2002). The GLC model hypothesizes that the levels are learned in a predictable sequence, as described below. “At the lowest level of the pyramid the individual begins with a state of global leadership deficiencies. In other words, it is difficult to move to the next level higher in the hierarchy until one has moved through the lower level. In addition, through negative experiences, it is possible to have individuals ‘backslide’ and move from a higher level on the pyramid to a lower level. At the highest level of the pyramid, an individual can achieve some level of global leadership competencies” (Chin et al., 2001, p. 23).
In other words, various stages are involved for a successful adjustment (Sanchez et al., 2000). The pyramid levels (from lower to higher) are: (a) ignorance, (b) awareness, (c) understanding, (d) appreciation, (e) acceptance/internalization, and (f) transformation (Chin, Gu & Tubbs, 2001). A better understanding of the stages involved in a successful adjustment to a foreign environment should help in the development of a global mindset (Sanchez et al.).
Public Policy Failure in Health Care
Dr. W. Guy Scott, Massey University at Wellington, New Zealand
Most governments in developed countries have evolved a health policy to improve allocative efficiency and distributive equity in the delivery of health care. Many of these policies fail. This paper discusses policy formulation and analysis in health care and why policies fail, and suggests some solutions. A range of actions is necessary to minimise policy failures. Policy makers must take into account the multidimensional nature of health status and the complex web of determinants of both health status and health policy. Appropriate perspectives must be adopted. Tradeoffs between social equity and economic efficiency objectives, and between stakeholder interests, must be addressed. All important costs and effects must be identified and taken into account. Policy initiators, implementers, evaluators and consumers must communicate with each other. Monitoring the effectiveness of a policy in achieving its equity and efficiency objectives should be an essential step in the policy cycle. The primary objective of public policies for health should be to improve the health status of a nation’s population in an equitable and cost-effective manner (Evans, 1977). Delivery of health care is regulated and predominantly funded by the state (1) in the majority of developed economies. Public policies for health have evolved because it is unlikely that health care will be delivered both efficiently and equitably if resource allocation decisions are left entirely to the free market. Governments may impose regulations, introduce a national health insurance scheme, establish a national health service, or state-fund or subsidise private providers of health care. Health care is but one of the many determinants (2) of the health status (3) of individuals and populations.
The influence diagram (Figure 1) summarises the main determinants of health status and shows the dominant directions of causation. In this context demographics embrace a wide range of factors including age, gender, marital status, education, occupation, religion, ethnicity, family size, income, employment status and geographic location. Although the prime objective of health care should be to improve the health outcomes of the population in an equitable and cost-effective manner, the health system produces health services (4) (Figure 2) (such as medicines, hospital admissions and medical consultations), not health outcomes. Uncontrolled and unregulated free markets may fail to yield the optimal outcome for society (with respect to society’s goals of efficiency and equity). Where there is market efficiency (in all markets) and markets exist for all goods (including leisure time), the “invisible hand” of the market place ensures that marginal social benefit and marginal social cost are equated. However, it is unlikely that such markets exist in the provision of all forms of health care (5). In addition, the market for health services will have no influence on many of the determinants of health status (for example, the physical environment is a major determinant of health status but cannot be changed directly by the healthcare market). State intervention is thus necessary to correct for market failure, or for the lack of markets. But even where efficient and complete markets exist, government involvement may be necessary to achieve society’s equity goals. A government may therefore wish to intervene and attempt to correct for these shortcomings by developing and implementing a health policy (Figure 3).
From Cultural Models to Cultural Categories: A Framework for Cultural Analysis
Dr. Nitish Singh, California State University, Chico, Chico, CA
In the marketing literature, culture has been predominantly measured by cultural values. But cultural values measure only the behavioral aspect of culture. To understand and analyze culture in its totality we need to take into account not only cultural values, but also cultural forms, propositions, routines, customs, symbols and artifacts. The main objective of this paper is to propose a conceptual framework, which derives cultural categories by analyzing the various stages of cultural formation, so as to provide a broader and more complete framework for analyzing culture. In other words, this paper proposes a conceptual framework for cultural analysis that takes into account the perceptual, behavioral and symbolic dimensions of culture and puts forth operational constructs to measure them. In attempting to analyze cultural phenomena, researchers have proposed cultural categories that can in some way operationalize and measure culture. One of the earliest attempts to propose cultural categories for analyzing culture came from Kluckhohn and Strodtbeck (1961). They proposed six cultural dimensions, namely (1) the nature of people, (2) the person’s relationship to nature, (3) the person’s relationship to others, (4) the modality of human activity, (5) the temporal focus of human activity and (6) the conception of space. Similar attempts to categorize culture in terms of unique value orientations have come from Hall (1976), Hall & Hall (1990), Hofstede (1980), and Trompenaars (1994). The four cultural value dimensions of individualism-collectivism, power distance, uncertainty avoidance, and masculinity-femininity proposed by Hofstede (1980) have been extensively used in the marketing and advertising literature to study cross-national differences. One limitation of all these cultural categorization studies is that they categorize culture only on the basis of dominant cultural value orientations.
In fact, culture can be studied not only at the level of cultural values, but also at the level of cultural forms, propositions, recipes, routines, customs, and systems of customs (Goodenough, 1981). According to D’Andrade (1984) and McCort and Malhotra (1993), culture should be understood from both its behavioral and public aspects and its cognitive and private aspects. Moreover, cultural meanings are created and maintained by interaction between an extrapersonal world of objects and symbols and the intrapersonal world of the individual's mind (Strauss and Quinn, 1997). Thus, to understand and analyze culture in its totality we need to take into account both the intrapersonal world of cultural values and the extrapersonal world of cultural symbols and artifacts. The main objective of this paper is to propose a conceptual framework, which derives cultural categories by analyzing the various stages of cultural formation, so as to provide marketers and academics with a broader and more complete framework for analyzing culture. In other words, this paper proposes a conceptual framework for cultural analysis that takes into account the perceptual, behavioral and symbolic dimensions of culture, in an attempt to provide a more holistic understanding and analysis of culture. The paper, in quest of a unique cultural understanding, borrows concepts from cultural anthropology, psychology, and sociology to enrich the marketing literature. Finally, an attempt is made to put forth three levels of cultural analysis and operational constructs to measure them. To understand how culture is formed, shared, and interpreted, it is important to analyze the various schools of cultural thought. The fields of anthropology and, to an extent, psychology and sociology have contributed enormously to cultural understanding.
There are four main schools of thought in cultural anthropology: the structuralists (Levi-Strauss, 1963), the interpretivists (Geertz, 1973), the cognitivists (D’Andrade, 1984; Strauss and Quinn, 1997), and the post-structuralists (Butler, 1990; Clifford, 1986). Each of these schools limits the explanation of culture to its own breadth and scope. Thus, to understand culture from various perspectives, the paper attempts to compare and contrast the various schools of cultural thought, and provides a new synthetic approach to the study of culture. Structuralists: The structuralists have a superorganic-cohesive view of culture. According to structuralists, culture is a stable system. Levi-Straussian structuralists (1963) place emphasis on the stability of the structure of ideas in the form of texts and symbols rather than on behavior. To them, social reality exists in verbal statements (Leach, 1976). For example, a structuralist may interpret and analyze cultural phenomena by studying the codes of ethics or other verbal texts of the society that are passed from generation to generation. The preoccupation of structuralists with signs and texts, and with the stability of systems, makes their approach rigid and discounts the possibility of intracultural variation. Interpretivists: According to interpretivists, to categorize culture as a self-contained "superorganic" reality with forces and purposes of its own is a reductionist approach (Geertz, 1973). To interpretivists like Geertz, "Culture is not a power, something to which social events, behaviors, institutions or processes can be causally attributed; it is a context, something within which they can be intelligibly - that is, thickly - described" (Geertz, 1973, p. 14). Moreover, to interpretivists, culture is public because meaning is, and meaning is stored and transmitted through the symbols of society. For example, one of Geertz’s famous works analyzes cultural phenomena by studying cock-fighting in Balinese culture.
The drawback of this approach is its preoccupation with the expressive role of symbols and the world of cultural objects, to the extent of overshadowing the importance of cultural symbols serving as external stimuli for cultural internalization. Another important criticism of the interpretivist school is that it discounts the importance of unobservable psychological states in cultural formation (Strauss and Quinn, 1997).
Understanding the Location Strategies of the European Firms in Asian Countries
Dr. Rizwan Tahir, The University of Auckland, Auckland, New Zealand
Dr. Jorma Larimo, University of Vaasa, Vaasa, Finland
The purpose of this paper is to empirically investigate how location-specific variables and strategic motives have influenced the location strategies of Finnish firms in ten South and Southeast Asian countries from 1980 to 2000. Despite the increased interest in FDI, very few studies have empirically analyzed the influential location-specific variables together with strategic advantages in order to explain the FDI choices of foreign investors. To the best of our knowledge, the strategic motives in particular have remained primarily anecdotal. This is apparently the first study to empirically analyze how location-specific variables and strategic motives have influenced the location strategies of Finnish manufacturing firms in Asian countries. The research results indicate that a large host-country market size, low cultural distance between the host and home countries, and low host-country wage rates increase the probability of undertaking market-seeking and efficiency-seeking FDIs. Similarly, it has been found that low levels of inflation, low levels of risk, and high levels of exchange rate fluctuation in the target country increase the probability of undertaking risk-reduction-seeking FDIs. Foreign direct investment (FDI) has always played an important role in the development of the global economy. In the early 1980s, the world economy was weakened by the two oil shocks of the 1970s, which caused a deterioration in the balance of payments and resulted in increased external indebtedness and domestic inflation in many countries around the globe. One of the key strategies for economic recovery in most countries was the promotion of foreign private investment and manufactured exports. The impressive growth, particularly in East Asian economies, could not have been achieved without the flow of FDI from Japan, the EU and the US.
Given the long-term benefits of private investment and their pressing employment problems, countries around the globe cannot afford to lose foreign direct investment if they are to sustain economic growth and industrialization. In Asia, FDI inflows reached a record level of $143 billion in 2001 (World Investment Report 2002). Most Asian countries have replaced their traditional inward-oriented import-substitution policies with export-oriented development strategies. The prevailing view is that FDI constitutes a combination of resources much needed in developing countries, such as technology, capital, management and marketing techniques. Asian countries have increasingly recognized these advantages, and a number of countries have either reviewed their existing policies or introduced new policies to create a favorable investment environment and thus attract FDI. All these are reasons to identify the determinants and motivations of FDI. The purpose of this study is to identify how location-specific variables and strategic motivations could influence the location strategies of Finnish manufacturing firms in ten South and Southeast Asian countries from 1980 to 2000. Behrman (1962) and Dunning (1993:56) identified the strategic motives of FDI: market-seeking (MS), efficiency-seeking (ES) and risk-reduction seeking (RRS). This study differs from previous research in two respects. First, to date little FDI research has empirically analyzed the influential location-specific variables along with the strategic motives in order to explain the location choices of investing firms. Empirical analysis of the strategic motives together with the location-specific variables can not only add to our understanding of the eclectic paradigm but also enrich our knowledge of FDI in general.
Secondly, this study focuses on firms based in Finland, a small industrialized country where domestic market conditions are very different from those of the multinationals that have dominated past research attention. Moreover, studies on the determinants of FDI rarely combine location-specific variables with the strategic motivations of the investing firms in Asian markets. To the best of our knowledge, this is apparently the first study to analyze how different location-specific variables and strategic motives have influenced the location choices of Finnish manufacturing firms in Asian countries. The remainder of the paper is organized as follows. The following section reviews the previous literature on location-specific variables and the strategic motives of investing firms in foreign markets, and sets out the hypotheses of the study. Next, the methodology of the study is set out and the characteristics of the sample are reported. Empirical results are presented in the fifth section. Finally, a summary and conclusions are presented in the last section.
A Transaction Cost Perspective on Motives for R&D Alliances: Evidence from the Biotechnology Industry
Yongliang “Stanley” Han, California State University, Sacramento, CA
In this paper, we examine the motives for R&D alliances formed by large pharmaceutical companies (LPCs) with new biotechnology firms (NBFs). Using a sample of 638 R&D alliances formed by 15 global pharmaceutical firms between 1985 and 1998, we seek to interpret empirical evidence in the light of a transaction cost explanation for the motives behind R&D alliances. The results seem to be inconsistent with the transaction cost explanation. With the emergence of the “new biotechnology”, which differs from earlier biotechnology in its focus on engineering specific changes in the genetic structure of microorganisms, over a thousand “new biotechnology firms” (NBFs) have been founded in the United States since the 1970s (Kenny, 1986; Pisano, Shan and Teece, 1988). As drug research is switching from a chemical to a biological basis, biotechnology has been widely perceived as a destructive or “competence-destroying” innovation for the pharmaceutical industry (Tushman and Anderson, 1986; Powell, Koput and Smith-Doerr, 1996). Large pharmaceutical companies (LPCs) entered relatively late into the biotechnology industry. The emergence of biotechnology has changed to a great extent the way in which LPCs obtain critical R&D capabilities. Due to the complex nature of biotechnology, knowledge transfer in biotechnology R&D often entails severe problems such as uncertainty and weak appropriability (Pisano, 1990). Therefore, exchange of knowledge in biotechnology cannot be mediated by arm’s-length market transactions. Instead, it requires stronger governance structures such as strategic alliances (e.g., R&D contracts, R&D collaborations, joint ventures) and vertical integration (Williamson, 1985, 1991). Before the new biotechnology was invented, LPCs had few if any strategic alliances with small R&D firms (Pisano et al., 1988; Barley, Freeman and Hybels, 1992). 
To catch up with the new technological wave, LPCs have not only invested considerable resources in internal R&D projects in biotechnology, but have also built various linkages with other firms and research institutions. Among all the alliance partners, NBFs were the most critical, due to their comparative advantage in conducting biotechnology R&D projects (Gambardella, 1995). While there have been a number of studies of NBFs’ alliance strategies (e.g., Barley et al., 1992; Kogut, Shan and Walker, 1992; Shan, Walker and Kogut, 1994; Liebeskind et al., 1996; Powell et al., 1996; Baum, 2000), only a few studies to date have examined the alliance strategies of LPCs in biotechnology (e.g., Arora and Gambardella, 1990; Gambardella, 1995). In this study, we attempt to fill this gap by examining R&D alliances established by LPCs with NBFs. These R&D alliances include R&D contracts, R&D collaborations, minority-equity-based R&D projects, and joint ventures. Using data on 638 R&D alliances formed by 15 LPCs and their patenting activity in biotechnology, we conduct extensive demographic analyses to explore issues including the patterns of development of LPCs’ internal R&D capabilities in biotechnology, their R&D alliance formation behavior, and the relationship between these activities over time. We seek to interpret the empirical evidence in the light of a transaction cost explanation for the motives behind R&D alliances. The results seem to offer no support for the transaction cost explanation. Transaction cost economics (TCE) provides a set of coherent arguments as to when contracts will be organized within a “firm” as opposed to taking place between separate parties. According to TCE, the firm is seen as a “nexus of contracts” between a multitude of parties.
The main hypothesis of TCE is that contractual designs or “governance structures” are created to minimize the sum of production costs and transaction costs between specialized factors of production (Coase, 1937; Klein, Crawford and Alchian, 1978). Williamson (1975, 1985) has identified uncertainty and asset specificity as two factors that play a critical role in the choice of governance structure. If transactional features do not match the governance structure, then either inefficiency or hold-up hazards will ensue. Specifically, if transactions with low levels of uncertainty and asset specificity are conducted exclusively in a hierarchical organization, then the organization may not be able to achieve the same efficiency as the market does, due to slow external adaptation and lowered incentives. On the other hand, if transactions with high levels of uncertainty and asset specificity are conducted in a governance structure without sufficient administrative controls and safeguard mechanisms, then hold-up will become a severe problem. A second source of transaction hazards, namely, the hazard of misappropriation, has been identified more recently by Teece (1980, 1982, 1986). Arrow (1962) points out that knowledge is inherently a public good. In order to garner profits from knowledge, the firm must prevent its dissipation to, and its use by, its competitors. In other words, the firm’s knowledge must be protected by a tight “regime of appropriability”. Misappropriation hazards arise when profits generated from knowledge are improperly captured by competitors of the original owner of the knowledge.
A Structural Equation Modeling of CEO Pay-Performance Relationships
Dr. Freddie Choo, San Francisco State University, San Francisco, CA
Dr. Kim B. Tan, California State University, Stanislaus, Turlock, CA
Previous CEO pay-performance research found a contemporaneous relationship between CEO pay and firm performance. We extended this contemporaneous relationship into its synchronous and lagged causal relationships by using structural equation modeling (SEM) to analyze (1) the direction of causality between pay and performance, (2) the effect of prior pay on future pay, and (3) the effect of prior performance on future performance. We found (1) the direction of causality was from pay to performance, and not vice versa, (2) prior pay affected future pay, and (3) prior performance did not always affect future performance. Previous CEO pay-performance research (e.g., Murphy, 1985; Lambert and Larcker, 1985; Jensen and Murphy, 1990; Hall and Liebman, 1997) found a contemporaneous relationship between CEO pay and firm performance. We extended this contemporaneous relationship into its synchronous and lagged causal relationships by using structural equation modeling (hereafter SEM) to analyze the synchronous relationships regarding the direction of causality between pay and performance; specifically, did pay affect performance, or was it vice versa? We also used SEM to analyze the lagged relationships regarding whether prior pay affected future pay, and whether prior performance affected future performance. A contemporaneous pay-performance relationship is a correlational relationship measured at one point in time. Hall and Liebman (1997) argued that a significant contemporaneous pay-performance relationship does not imply an efficient relationship. They suggested that an efficient pay-performance relationship could be determined by the direction of causality within the contemporaneous relationship. This direction of causality can be determined by examining the synchronous relationships (Felson and Bohrnstedt, 1979; Arbuckle, 1995; Finkel, 1995) of whether pay affects performance and/or performance affects pay.
An efficient relationship is one in which the direction of causality runs from pay to performance, indicating that better pay leads to higher firm performance. In other words, pay is a performance motivator. If the direction of causality is from performance to pay, then it simply shows that higher performing firms pay better. Lagged or longitudinal relationships (Bentler, 1990a; Rosenthal and Rosnow, 1991) may also affect the pay-performance relationship. Rosenthal and Rosnow stated that, "The longitudinal [lagged] measurements of the same two variables, A and B [such as pay and performance], should potentially provide information about any causal relationships between them" (1991, p. 98).
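The synchronous and lagged relationships described above can be expressed as a cross-lagged structural model. The following is a generic sketch of such a specification (the variable definitions, subscripts, and the paper's actual SEM specification may differ):

```latex
\begin{align}
\text{Pay}_t  &= \alpha_1\,\text{Pay}_{t-1} + \beta_1\,\text{Perf}_t + \varepsilon_{1t} \\
\text{Perf}_t &= \alpha_2\,\text{Perf}_{t-1} + \beta_2\,\text{Pay}_t + \varepsilon_{2t}
\end{align}
```

Here $\beta_1$ and $\beta_2$ capture the synchronous effects (of performance on pay and of pay on performance, respectively), while $\alpha_1$ and $\alpha_2$ capture the lagged effects of prior pay and prior performance. In this notation, a significant $\beta_2$ together with an insignificant $\beta_1$ would correspond to the finding that causality runs from pay to performance.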
Surveying the Topic of “Effective Leadership”
Dr. Xin-An Lu, Shippensburg University of Pennsylvania, Shippensburg, PA
With our world becoming more and more complicated, scholars of organizational studies are realizing that management alone cannot solve our problems. More and more of their attention is turning to leadership, which, scholars believe, may promise a solution to the myriad problems we face. Many (e.g. Covey, 1996; Deming, 1993; and Senge, 1990) believe that management deals with the area of things, control, and efficiency, all of which only strike at the branches of the evil. Leadership, on the other hand, deals with the area of people, “release” (a tapping of the energy reservoir of the people, the opposite of the concept of “control”, see for example Covey, 1996) and effectiveness, all of which represent an effort to strike at the root of the evil. Although scholars have some agreement on leadership, there is not much congruence in their opinions on the question of what effective leadership is or what constitutes real leadership. After reading the prominent articles and books on this topic, I feel that four categories of ideas may emerge within the scholarly studies and opinions on leadership. Those that don’t quite fit neatly into a specific category are usually a combination of two or more of these categories. These categories may be designated as the following: (1) Leaders are people who know what to do with themselves; (2) Leaders are people who know what to do with their people; (3) Leaders are people who know what to do with the communication channels/environment within their organizations; and (4) Leaders are people who have a holistic picture of what is going on within their organization. Before elaborating on each category of ideas, I’d like to talk a bit about how these four categories fit with each other. If you look at these categories from a bird’s eye viewpoint, you will see that the first three make up the components of an organization: the leader, the led, and the connection between them, i.e., the organizational communication channels.
The fourth category is a synthesis of the first three, looking at the organization in a systemic manner.
Methodological Issues in Research on Business Casual Dress
Dr. Steven D. Norton, Indiana University South Bend, South Bend, Indiana
Dr. Timothy M. Franz, St. John Fisher College, Rochester, New York
Questions with different formats regarding mode of dress at work were administered to 91 MBA students. The questions were based on an earlier factor analytic study by the authors. The predominant mode of dress was Business Casual. Mode of dress was correlated with a number of personal and job characteristics. Although the correlations among the various Mode of Dress questions were quite high, the format of these questions did have a substantial impact on correlations of dress with personal and job characteristics. The more formal the reported dress policy, the more likely employees are to report a higher level of Conscientiousness. Employees who prefer a more formal dress policy report a higher level of Time Commitment, Conscientiousness, and Job Satisfaction. They are more likely to report having a Higher Level Job and to directly supervise more employees. Employees who prefer to wear more formal clothes themselves report a higher level of Time Commitment, Work Intensity, feeling of Fairness, Conscientiousness, and Job Satisfaction as well as lower Stress. They directly supervise more employees. We provide suggested items and approaches for further research on Mode of Dress at work. During the past decade, a majority of the workforce has shifted from traditional, formal business attire to business casual or casual dress. For example, a poll by the Society for Human Resources Management shows that 90% of U.S. office workers go to work in business casual clothes at least once per week (Walter, 1996). The poll found that employees were more satisfied with their company after it moved to a casual dress program. These practitioner surveys and similar popular press articles overwhelmingly portray casual dress policies as positive.
For example, an article in the New York Times suggests that, when compared to traditional business attire, dressing casually eases tensions, improves communication between management and employees, and instills a sense of togetherness in organizations (Bragg, 1994). An article in the San Francisco Chronicle suggests that casual dress policies help eliminate the natural communication barriers between managers and employees (Kazakoff, 1996). A handful of studies have examined the impact of types of dress. Research has examined the effect of dress policies on bottom-line results, but ignored how they affect individual employees. Specifically, Yates and Jones (1998) found that companies who initiate casual dress report a subsequent decrease in employee absenteeism. They argue that this is because it humanizes the workplace, eliminates barriers to communication, and fosters work-family harmony. However, they did not examine the impact of dress on employee productivity. According to Franz and Norton (2001), business casual dress may result in different outcomes in the workplace. On one hand, business casual may lead to enhanced productivity because employees feel more comfortable in and positive about the workplace (e.g., Yates & Jones, 1998). If this is the case, people who dress casually should work harder, regardless of the task. There is no other scientific research that we know of that has carefully investigated whether casual dress affects organizational outcomes.
A Study of Golf Courses Management: The In-depth Interview Approach
Tai-heng Chen, University of South Dakota, Vermillion, SD
With the increasing popularity of golf in Taiwan, many new golf courses are being developed, creating a need for knowledgeable people to manage these courses. Golf course managers are responsible for the entire golf course. They are responsible for golfers’ motivation, grounds management, pest control, and environmental protection. They are in charge of making the game fun for golfers. A professional golf course manager uses not only what he or she has learned from formal education but also what he or she has learned in the field, including matters of technology, planning, and the environment. The purpose of this study was to construct golf course management interview questions to examine the opinions of golf course managers on trends in golf and golf course management in Taiwan. With the increasing popularity of golf in Taiwan, many new golf courses are being developed, and this creates a need for knowledgeable people to manage these courses. This education is necessary for the people of Taiwan, especially for international students majoring in recreation. The hope is to raise the standards of these new golf courses in order to make the links internationally competitive. Even in the United States, golf managers are more educated than ever. According to Landscape Management, “Approximately 75 percent of GCSAA [Golf Course Superintendents Association of America] members have two or four-year degrees or have attended graduate school” (McGinnis, 1997, p. 6G). The degree of professionalism is increasing in this field, and golf course managers face new challenges. This paper indicates what these challenges are and points out solutions discovered by practicing golf course managers.
A Structural Equation Modelling Analysis of Fairness Heuristic Theory
Douglas Flint, University of New Brunswick, Fredericton, NB
Pablo Hernandez-Marrero, University of Toronto, Toronto, Ont.
Prior research on fairness heuristic theory of organizational justice has shown that procedural information is used when distributive information is lacking. This study extends consideration of the fairness heuristic in two ways: by testing fairness heuristic effects across four different combinations of procedural and distributive justice, and by testing an ambiguous distributive outcome. Structural equation modeling is used to measure the directionality of procedural and distributive justice effects. Procedural justice is of interest to organizations because of its impact on important organizational outcomes. These include: performance (Ball, Trevino & Sims, 1995; Gilliland, 1994; Konovsky & Cropanzano, 1991; Welbourne, Balkin & Gomez-Mejia, 1995), organizational commitment (Brockner, 1992; Konovsky & Cropanzano, 1991; Schaubroeck, May & Brown, 1994), job satisfaction (Schaubroeck et al., 1994), organizational citizenship behavior (Ball et al., 1995), commitment to organizational decisions (Greenberg, 1994; Korsgaard, Schweiger & Sapienza, 1995; Lind, Kulik, Ambrose & de Vera Park, 1993), turnover intentions (Schaubroeck et al., 1994; Olson-Buchanan, 1996), theft (Greenberg, 1990, 1993), and retaliation against organizations (Skarlicki & Folger, 1997). Organizational systems that have been linked to procedural justice include: employee discipline (Cole & Latham, 1997), inter-group conflict (Huo, Smith, Tyler & Lind, 1996), institutional racism (Jeanquart-Barone, 1996), performance appraisal (Barclay & Harland, 1995), pay for performance (St. Onge, 2000), and employee benefits (Tremblay, Sire & Balkin, 2000). There are some conditions under which procedural justice is more salient than others. Fairness heuristic theory provides an explanation for these conditions. Fairness heuristic theory deals with the impact of perceptions of procedural and distributive justice on the formation of organizational justice judgments.
The theory argues that “in incomplete or insufficient information conditions, people process information heuristically; for example, they use other information—such as procedural or outcome fairness—to substitute for information that would be most directly relevant but that is actually missing” (Van den Bos, 2001). This study seeks to expand consideration of fairness heuristic theory in two ways. First, the effects of fairness heuristic are considered across different combinations of procedural and distributive justice. Second, considerations of the effect of fairness heuristic theory are extended to ambiguous distributive outcomes. This theory has traditionally been tested only in the absence of distributive outcomes.
Decolonization and International Trade: The Ghana Case
Dr. Albert J. Milhomme, Texas State University at San Marcos, TX
Many countries, former colonies of colonial powers like Great Britain and France, acceded to political independence in the second half of the twentieth century. What about their economic independence? A measure of this economic independence could be reflected in the evolution of their international trade, exports as well as imports, and the share of this trade within their gross domestic product. This study, centered on Ghana, a former colony of Great Britain, might shed some light on the pace of this evolution and on whether this country has achieved economic independence. In 1957, as a colony of Great Britain, Ghana exported 38% of its total exports to Great Britain and imported from Great Britain 45% of its total imports. The United Kingdom at that time had a dominant position, the result of more than a century of effort to create and protect trade, to pump in finished products and pump out raw materials. Has the United Kingdom kept an important position in Ghana today in 2003, 45 years after independence? This is the type of question some people have answered with a definite “yes”. British companies are still very active in many formerly colonized countries and do a majority of their “International Business” in their old colonies. The reasons are basically to be found in the cultural ties and traditions established during colonial rule. Other people feel differently. Because of historical events preceding independence, they believe that many formerly colonized countries would spurn companies from the former colonial powers. Ostracism was everywhere. If dependence may have existed for a short while, it did not last, a former colonizer quickly losing its historically acquired economic advantages.
A study of the evolution of Ghana's international trade with its former colonial master, with individual industrial countries, and with the world over the past 45 years might provide some interesting information on the decolonization process and on the degree, if any, of economic independence achieved.
The Effects of Mentoring on Perceived Career Success, Commitment and Turnover Intentions
Dr. Therese A. Joiner, Dr. Timothy Bartram, and Terese Garreffa, La Trobe University, Australia
Few studies have empirically examined the relationship between mentoring and protégé turnover intentions. This paper, which is largely exploratory, examines the relationships among mentoring, perceived career success, and organizational commitment, and their effect on protégés’ turnover intentions. Empirical data are drawn from an Australian subsidiary of a large US multi-national firm. Results suggest that a successful mentoring program may be an important factor in positively influencing protégés’ perceptions of career success and organizational commitment, which in turn is likely to reduce their turnover intentions. Additional qualitative data also revealed that both career enhancement and psycho-social functions of the mentoring process were valued by the protégé. Implications for practitioners and future research are discussed. Mentoring in organizations can be viewed as a developmental relationship whereby managers provide assistance and support to particular subordinates (protégés) on an individual basis (Kram, 1985; Orpen, 1997; Higgins and Kram, 2001). The mentoring process can serve both career enhancement and psycho-social functions for the protégé. Career enhancement roles in mentoring include sponsorship, coaching, exposure, protection and provision of challenging assignments. The psycho-social functions include acceptance, counselling, emotional support and role modelling (Kram, 1985). Practitioners and academics alike have underscored the importance of mentoring because of the benefits that accrue to the protégé as well as the organization (Dansky, 1996; Broadbridge, 1999; MacGregor, 2000). Organizational benefits include improved recruitment and induction procedures (Clutterbuck, 1991), leadership development, improved succession planning (Clutterbuck, 1991; Zey, 1984), and increased organization commitment (Baugh, Lankau and Scandura, 1996; Orpen, 1997; Scandura, 1997).
Few studies have empirically examined the relationship between mentoring and protégé turnover intentions (Kleinman et al., 2001 and Scandura and Viator, 1994 are notable exceptions). Given that intention to leave is the best predictor of actual turnover (Lee and Mowday, 1987), and given the significant negative consequences of high turnover (e.g., increases in training costs and productivity losses), one of the aims of this study is to explore the association between mentoring and the protégé’s intention to leave the organization.
Internet Shopper Demographics and Buying Behaviour in Australia
Dr. Joshua Chang, Charles Sturt University, Australia
Dr. Nicholas Samuel, The University of Canberra, Australia
There has been a rapid growth of online shopping amongst Australian consumers. The development creates a need for a greater understanding of the association between the demographic characteristics of shoppers and their online shopping behaviour. This study suggests that gender, age, income and location are associated with different patterns of online purchasing frequency and expenditure. The findings enable a better understanding of online shoppers relevant to market segmentation variables. Businesses have developed strategies for consumer markets in order to gain leverage in rapidly expanding e-markets. Due to benchmarking processes, most companies are already interlocking business strategies and e-commerce, causing e-commerce to replace conventional and physical marketing channels for cutting-edge solutions (Merrilees and Miller 1996). Many changes have occurred in the area of retailing, and these include changing retail structures, improving technological developments, changing market conditions, and the emergence of more affluent, mobile and time-scarce consumers (Shim and Eastlick 1998). Changing consumer lifestyles and lack of time may make it more difficult for consumers to shop at physical locations such as stores and shopping malls, making the option of online shopping a viable alternative to shopping at physical locations. The changing nature of consumer lifestyles at home and at work is altering where, how, and when consumers shop (Davies 1995). The Australian household structure has undergone dramatic change since the 1970s. According to Cheeseman and Breddin (1995), nearly half of all Australian families are two-income families. While in the early 1970s, around 40 percent of married women participated in the workforce in some capacity, this is now well over 60 percent. This changing household structure has led to an increased premium on leisure time.
Internationalizing the Business Curriculum: Developing Intercultural Competence
Dr. A. G. Cant, Central Washington University, Ellensburg, WA
American businesses are confronted with the need to operate outside the comfort of their own cultural environment. Success as a global manager requires the development of five key global cultural competencies: cultural self-awareness, cultural consciousness, the ability to lead multicultural teams, the ability to negotiate across cultures, and a global mindset. Given U.S. business students’ very limited understanding of other societies and their cultures, business colleges face a major challenge in preparing students for global assignments. Of the three methods used by colleges to internationalize the business curriculum, only one approach provides the opportunity for students to develop the global cultural competencies necessary for them to succeed in a global career. The creation of an international business degree or major allows students to gain insight from humanities and language courses and from actual international experience. While business has always operated between communities and across national boundaries, the world now faces a new era of unprecedented global economic interactions. This highly competitive marketplace requires sophisticated management competencies necessary to work with staff, customers, suppliers, and government officials with fundamentally different values, assumptions, beliefs and traditions. Managers in domestically focused firms have had the relative comfort of working within their own culture, whereas in the international marketplace cross-cultural management is the norm.
Mominka Fileva, Ph.D., Davenport University, Dearborn, Michigan
This paper reveals an experiential approach to teaching a graduate course in Organizational Behavior. Based on Kanter’s (1997) concepts of change-adept organizations in business, the whole course was designed as an experiment creating a change-adept organization in the classroom environment. The end result was a learning environment that was flexible, less structured, and flatter in hierarchy. The key elements included empowerment of students, learning contracts drafted by students, and a grading process involving a peer review system, designed by students. The role of the instructor in such a classroom naturally shifted to balancing, juggling contradictions, and providing guidance. Creating a classroom based on the key characteristics of Kanter’s (1997) concept of business change-adept organizations gave the students the opportunity to experience empowerment and the increased responsibilities that come with it, self-discipline and self-management, autonomous decision-making, peer review of performance, freedom and desire to innovate, decentralization, and learning in all directions. The change-adept classroom provided students with better learning power to understand the nature of empowerment and decentralization and when, how, and to what level they would efficiently work in a business environment. There is no doubt that the traditional way of teaching management classes – lectures, case analysis, role plays, research, etc. – gives students adequate theoretical knowledge and practical insights into the real business world. However, this all remains second-hand information. In most cases, the learning process is most successful when students have first-hand knowledge or can experience the phenomenon being studied (Obach, 2000; Wedell and Wynd, 1994). This report presents an experiential approach towards teaching Organizational Behavior by creating a classroom that is a model of a change-adept organization.
The purpose of this experiential approach was to create an environment that allows students to experience first-hand the effects of empowerment on the organization and on the innovation process, and to analyze the preexisting conditions and factors that make flatter, less hierarchical organizations successful. In the change-adept classroom, given specific objectives and guidance from the instructor, the students were fully empowered to participate in designing the syllabus, which included choosing instructional methodologies and the types and topics of assignments; to design and implement a peer review evaluation system that was an integral part of the grading process; to draft and discuss with the instructor their learning contract; to plan and conduct or facilitate discussions and brainstorming sessions; and to organize themselves in teams in order to accomplish chosen assignments.
Critical Success Factors of Transferring Nursing Knowledge in Hospital’s Clinical Practice
Dr. Ming-Tien Tsai, National Cheng Kung University, Taiwan, R.O.C.
Ling-Long Tsai, Meiho Institute of Technology & Ph.D. Candidate, National Cheng Kung University, Taiwan
This paper aims to explore the critical success factors of transferring nursing knowledge during hospital clinical practice. The researchers conducted 3 focus group interviews and consulted with 17 clinical instructors, resulting in a 78-item questionnaire. Of the 460 nursing students selected as the sample, 443 returned questionnaires, of which 422 were complete. The analysis of their responses combined means and factor analysis. The results indicate three critical factors that make the transfer of nursing knowledge successful. The first factor, motivation, implies that efficient knowledge transfer is based on nursing students’ willingness to learn. The second factor, link to practice, indicates that techniques connecting theory and practice are necessary. The third factor, nursing skills, indicates that practicing sufficient nursing skills before clinical practice is a prerequisite for nursing students. It is widely accepted that a school’s primary function is education. In professional education, schools offer students curricula through which to gain knowledge, while the apprenticeship system provides the opportunity to apply that knowledge to the target issue. Traditional learned professions always include two dimensions: theory and practice. In nursing programs, schools teach nursing students nursing theories and skills. However, students may not realize how to apply their knowledge until hospital clinical practice. Severinsson (1998) found that a gap exists between nursing theory and practice. In order to improve the integration of theory and practice, a high standard of clinical practice is necessary. Clinical supervision may assist nursing students in digesting the nursing process. Clinical training serves as a bridge connecting theory and practice. Performing nursing skills in the hospital is regarded as applying nursing knowledge within a theory-based framework.
It is crucial to know how nursing knowledge can be transferred to nursing students. To date, only a minority of studies have investigated nursing knowledge transfer, especially at the individual level. The purpose of the present study was to explore the factors that enable nursing students to transfer nursing knowledge successfully during hospital clinical practice.
An Investigation of Critical Success Factors in the Adoption of B2BEC by Taiwanese Companies
Dr. Hsiu-Yuan Tsao and Dr. Koong H.-C. Lin, Ming Hsin University of Science & Technology, Taiwan
Dr. Chad Lin, Edith Cowan University, Joondalup, Western Australia, Australia
This exploratory study examines what critical success factors are relevant to adopting business-to-business electronic commerce (B2BEC) in the small and medium-size enterprise (SME) sector in Taiwan. Well known for the vibrancy of this sector, Taiwan is embracing and aggressively promoting information technology and e-commerce (i.e., B2BEC) and this is particularly true among many of the leaders in the electronic manufacturing industry. Since the economy of Taiwan is so heavily dependent on the performance of small and medium-size electronic enterprises, enhancing their competitiveness is a major, pressing issue. We propose some critical factors and examine whether or not the SME sector uses them to leverage the Internet and realize the benefits of adopting B2BEC. Business-to-business e-commerce (B2BEC), in the form of Electronic Data Interchange (EDI), serves as a cheaper alternative for small-medium enterprises (SME) to do business online and reach potential customers worldwide. Electronic commerce technology is particularly useful in allowing businesses in the SME sector to collaborate on providing better service to customers as well as to compete more effectively against major competitors (Loughlin, 1999). Many electronic markets have been set up to facilitate inter-organization transactions and to increase market access for both suppliers and buyers (Giaglis et al., 2002). However, despite the widespread use of B2BEC in the SME sector, few organizations have realized much of the benefit expected from its adoption (Hart and Estrin, 1991; Lee et al., 1999). Moreover, its progress has been particularly hampered in Asia by many unforeseen technical, organizational, legal and economic difficulties, which have diminished its value (Lynch and Beck, 2001).
Although the Internet and electronic commerce have attracted considerable research interest in Taiwan, relatively scant attention has been paid to developing comprehensive methods of designing, applying, and implementing B2BEC by small-medium enterprises. In this paper, therefore, we aim to identify the critical success factors behind B2BEC in that industrial sector.
A Model to Estimate the Default Risks for Callable Corporate Bonds: Evidence from the U.S. Market
Dr. David Wang, Golden Gate University, San Francisco, CA
This paper presents a model for estimating the default risks implicit in the prices of callable corporate bonds. The model considers three essential ingredients in the pricing of callable corporate bonds: stochastic interest rate, default risk, and call provision. The stochastic interest rate is modeled as a square-root diffusion process. The default risk is modeled as a constant spread, with the magnitude of this spread impacting the probability of a Poisson process governing the arrival of the default event. The call provision is modeled as a constraint on the value of the bond in the finite difference scheme. The empirical results are encouraging. First, the estimated default probabilities are consistent with Moody’s ratings. The estimated default probabilities rise with lower ratings and fall with higher ratings. Second, the relationship between the estimated default probabilities and other bond characteristics is consistent with the intuition. The estimated default probabilities are negatively correlated with maturity and positively correlated with coupon payment, age, and issue size. This paper can be used both as a benchmark for models for estimating the default risks associated with callable corporate bonds and as a direction for future research. Default risk has always been a major topic of concern for financial intermediaries and any agents committed to a financial contract. The standard theoretical paradigm for modeling default risks is the contingent claims approach pioneered by Black and Scholes (1973). Much of the literature follows Merton (1974) by explicitly linking the risk of a firm’s default to the variability in the firm’s asset value. Although this line of research has proven very useful in addressing the qualitatively important aspects of estimating default risks, it has been less successful in practical applications. The lack of success owes to the difficulty of modeling realistic boundary conditions. 
These boundaries include both the conditions under which default occurs and, in the event of default, the division of the firm's value among claimants. Firms' capital structures are typically quite complex, and priority rules are often violated. In response to these difficulties, an alternative modeling approach has been pursued in a number of articles, including Madan and Unal (1994), Jarrow and Turnbull (1995), and Duffie and Singleton (1999). At each instant, there is some probability that a firm defaults on its obligations; this is called the instantaneous probability of default.
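The abstract does not reproduce the model's equations. As a rough illustration of its two stochastic ingredients, the sketch below simulates a square-root (CIR-type) short-rate diffusion by Euler discretization and a constant-intensity Poisson default arrival; all parameter values are illustrative, not taken from the paper, and the call-provision constraint in the finite difference scheme is not shown.

```python
import math
import random

def simulate_cir_path(r0, kappa, theta, sigma, dt, n_steps, rng):
    """Euler discretization of the square-root diffusion
    dr = kappa*(theta - r) dt + sigma*sqrt(r) dW,
    truncated at zero so the simulated rate stays non-negative."""
    rates = [r0]
    r = r0
    for _ in range(n_steps):
        drift = kappa * (theta - r) * dt
        shock = sigma * math.sqrt(max(r, 0.0) * dt) * rng.gauss(0.0, 1.0)
        r = max(r + drift + shock, 0.0)
        rates.append(r)
    return rates

def default_time(intensity, dt, n_steps, rng):
    """First arrival of a Poisson process with constant intensity
    (the constant-spread default model described in the abstract);
    returns the step index of default, or None over the horizon."""
    p_step = 1.0 - math.exp(-intensity * dt)
    for step in range(n_steps):
        if rng.random() < p_step:
            return step
    return None

rng = random.Random(42)
path = simulate_cir_path(r0=0.05, kappa=0.5, theta=0.05, sigma=0.1,
                         dt=1 / 252, n_steps=252, rng=rng)
d = default_time(intensity=0.02, dt=1 / 252, n_steps=252, rng=rng)
```

In a full implementation, paths like these would feed the finite difference scheme, with the bond value capped at the call price at each grid node.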
Purchasing Power Parity: Evidence from Asia Pacific Countries
Dr. Shyam Bhati and Dr. Michael McCrae, The University of Wollongong, Wollongong, Australia
Long-run purchasing power parity between countries of the Asia Pacific region is investigated using a cointegration approach. Quarterly data on the exchange rates of Australia, Indonesia, Malaysia, the Philippines, New Zealand, Singapore, South Korea and Thailand are used in this study. The results provide evidence of the existence of purchasing power parity between Australia and other countries in the Asia Pacific region. We have also compared results obtained using the CPI and the WPI of these countries. We are not able to confirm whether the wholesale price index is a better indicator of purchasing power parity than the consumer price index for the countries studied. The theory of purchasing power parity (PPP) plays an important role in the determination of exchange rates. PPP explains the relation between the price levels of any two countries and the exchange rate between their currencies. Although a number of studies have been conducted, there is no agreement among authors [Diebold, Husted and Rush (1991); Cheung and Lai (1993); Liu and Burkett (1995); Veramini (1998); and Bahmani-Oskooee (1998)] on whether PPP holds in the short term or the long term. One of the significant issues that emerges from some of the country-specific studies, however, is that the US dollar, Japanese yen and German mark do not comprise an optimum currency area for testing the theory of PPP. This view is supported by the study of Sarno (1997) for European Monetary System countries. Recently Bleaney (1998) has come to a similar conclusion: the Sterling-U.S. dollar-Swiss franc triangle tends to yield unfavourable results, with the evidence favouring PPP when no more than one of these currencies is used in any bilateral comparison. Bleaney's (1998) investigations followed the study of Taylor and McMahon (1988), who conducted tests on bilateral rates between the French franc, Sterling and the U.S.
dollar, and concluded in favour of PPP except in the case of the Sterling-U.S. dollar rate, where their evidence was ambiguous. If the arguments of Bleaney (1998) and Sarno (1997) are extended, then it should be possible to observe PPP in any bilateral comparison of currencies that does not involve four particular currencies: the US dollar, the British pound, the Japanese yen and the Swiss franc. A combination of currencies other than these four should therefore be a good combination for the study of PPP. It may accordingly be appropriate to examine PPP between the Australian dollar as "base currency" and the currencies of its trading partners in the Asia Pacific region. If PPP is observed between the Australian dollar and the currencies of other countries in the Asia Pacific region, then Bleaney's (1998) argument about the unsuitability of the Sterling-U.S. dollar-Swiss franc-yen quadrant for the study of PPP will be supported. If no PPP is observed between the Australian dollar and the currencies of its trading partners in the Asia Pacific region, then Bleaney's (1998) argument about the unsuitability of that quadrant may not hold. The selection of currencies in the Asia Pacific region may, however, be questioned on the grounds that some of these countries (e.g. Indonesia, Thailand) have experienced high inflation in the recent past (from 1997), and therefore the choice of the currencies of these countries for the study of PPP may not be appropriate. This ambiguity about the choice of currencies can be eliminated if the time period selected for the study is prior to 1997, when these countries started experiencing financial problems.
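The cointegration approach the authors use can be sketched as an Engle-Granger two-step procedure: regress the (log) exchange rate on the relative price level, then test the residuals for stationarity. The pure-Python sketch below runs on synthetic data constructed to be cointegrated; the series, sample size and noise levels are illustrative assumptions, and a real test would use augmented lags and tabulated (MacKinnon) critical values rather than the raw Dickey-Fuller statistic computed here.

```python
import math
import random

def ols(x, y):
    """OLS of y on x with intercept; returns (alpha, beta, residuals)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx
    resid = [yi - alpha - beta * xi for xi, yi in zip(x, y)]
    return alpha, beta, resid

def df_statistic(resid):
    """Dickey-Fuller t-statistic for the residual regression
    du_t = rho * u_{t-1} + e_t (no intercept); a large negative value
    suggests the residuals are stationary, i.e. the series cointegrate."""
    u_lag = resid[:-1]
    du = [resid[t + 1] - resid[t] for t in range(len(resid) - 1)]
    rho = sum(l * d for l, d in zip(u_lag, du)) / sum(l * l for l in u_lag)
    e = [d - rho * l for l, d in zip(u_lag, du)]
    s2 = sum(ei * ei for ei in e) / (len(e) - 1)
    se = math.sqrt(s2 / sum(l * l for l in u_lag))
    return rho / se

# Synthetic illustration: an exchange rate that tracks the relative price
# level (a random walk) plus stationary noise, so the two series cointegrate
# as PPP predicts.
rng = random.Random(0)
rel_price = [0.0]
for _ in range(199):
    rel_price.append(rel_price[-1] + rng.gauss(0.0, 0.01))
fx = [p + rng.gauss(0.0, 0.02) for p in rel_price]

_, beta, resid = ols(rel_price, fx)
stat = df_statistic(resid)
```

Under PPP the slope `beta` should be close to one and the residual Dickey-Fuller statistic strongly negative; failing the residual test is the sense in which the literature finds "no PPP" for a currency pair.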
Is Technology Preemption a Motive for R&D Alliances? Evidence from the Biotechnology Industry
Dr. Yongliang “Stanley” Han, California State University, Sacramento, CA
Using data on 638 R&D alliances formed by 15 large pharmaceutical companies (LPCs) with new biotechnology firms (NBFs), we conduct extensive demographic analyses of the nature, frequency and exclusivity of these alliances. We seek to examine the possibility of technology preemption as a motive for the R&D alliances formed by LPCs with NBFs. We argue that equity investment in NBFs may enable LPCs to monitor the technological advancements made by the NBFs more closely, and possibly to block access by competing LPCs to the same technologies. The empirical results presented in this study offer only very weak support for the argument that LPCs' R&D alliances with NBFs are motivated by a desire to technologically preempt competitors. Strategic alliances have become an increasingly important mode of interorganizational collaboration for firms seeking to gain competitive advantage in their current industries or to explore fresh opportunities in new areas (Hagedoorn, 1993; Powell, Koput and Smith-Doerr, 1996). A number of motives for alliances have been identified. They include sharing the costs and risks of innovation (Mowery, 1988; Mowery, Oxley and Silverman, 1997); obtaining access to new markets and technologies (Powell et al., 1996); combining complementary skills (Teece, 1986; Arora and Gambardella, 1990); and preserving prospective learning opportunities (Hamel, 1991). The biotechnology industry provides a dynamic and rich setting in which to examine the forces that fundamentally drive the formation of R&D alliances between large pharmaceutical companies (LPCs) and new biotechnology firms (NBFs). Since the 1970s, over a thousand NBFs have been founded and hundreds of R&D alliances have been established between NBFs and LPCs. Several explanations for the motives behind R&D alliances in biotechnology have been offered.
Among them, the most prominent are the transaction cost explanation (Pisano, 1990) and the learning-with-flexibility explanation (Arora and Gambardella, 1990; Powell and Brantley, 1992; Powell et al., 1996). In the previous literature, however, one possible motive for LPCs to form R&D alliances with NBFs has received relatively little attention: technology preemption.
Enhancement of Customer Network Relationship via Governance Mechanism of Inter-Organizational Core Resource and Core Knowledge Strategic Alliance
Tsai-Lung Liu, I-Shou University and Tajen Institute of Technology, Taiwan
Reviewing the nature of inter-organizational strategic alliances, the resource-based perspective, knowledge management and customer network relationships, this paper integrates five formation factors of inter-organizational strategic alliance: degree of inter-industrial competition, market demand uncertainty, task knowledge ambiguity, resource complementarity and degree of marketing intensiveness. The paper then explores several research questions: How do the different formation factors of inter-organizational strategic alliance affect core resource and core knowledge strategic alliances? How do they affect customer network relationships? And how can customer network relationships be enhanced via the governance mechanism of an inter-organizational core resource and core knowledge strategic alliance? Drawing the related variables from the past literature, and through analysis and inference, this paper develops 14 propositions and builds a conceptual model. The paper finds that the different formation factors of inter-organizational strategic alliance have significant positive and negative effects on core resource and core knowledge strategic alliances and on customer network relationships. Another important finding is that the governance mechanism of an inter-organizational core resource and core knowledge strategic alliance exerts a mediating effect and helps to enhance customer network relationships.
Determinants of Satisfaction with Pay Among Nursing Home Administrators
Douglas Singh, Frank Fujita, Indiana University South Bend, South Bend, Indiana
Dr. Steven D. Norton, Indiana University South Bend, South Bend, Indiana
Satisfaction with pay, controlled for salary level, was studied in 258 non-owner nursing home administrators in Indiana and Michigan. Theories of satisfaction with pay are applied to nursing home administrators, and the likely consequences of low satisfaction are discussed. Significant differences in actual salaries or satisfaction with pay occurred when comparing the administrators of public-sector, private, and not-for-profit nursing homes, and those in Michigan vs. Indiana. Significant differences were also associated with the administrators' level of education, gender, and marital status. In a stepwise multiple regression, satisfaction with pay, controlled for salary level, was best predicted by professional development, commitment, bonus, skill compatibility, young age, small facility size, academic training not in nursing, opportunities for career advancement, and fewer hours worked. Although women and non-married administrators were paid less, factors other than gender and marital status explain satisfaction with pay controlled for salary level. Implications of our findings for management practices regarding nursing home administrators are discussed. Employment represents an exchange relationship between the employer and the employee (Young, 1997); in the context of this paper, nursing home owners and corporations are the employers, and employed nursing home administrators (NHAs) are the employees. At the center of this relationship are certain inputs and outcomes. The inputs include, first, the human capital an employee brings to a new job in the form of training, education, and prior experience, and second, personal factors such as age, gender, marital status, and race, as well as certain personality traits and attitudes. Once on the job, the employee must provide additional inputs in the form of hours worked, performance, and tenure in the position.
Outcomes are what the employee both anticipates and receives from the organization, such as monetary rewards, recognition, and job satisfaction. In the case of NHAs, input measures such as education, prior experience, previous number of jobs held, and length of employment in the current position have been found to be correlated with salary levels (Singh, 2002). That study did not evaluate fringe benefits, which are an important, but very complex, element of compensation.
Dynamics of Business Network Embeddedness
Dr. Chung-Jen Chen, National Cheng Kung University, Tainan, Taiwan, R.O.C.
Lien-Sheng Chang, National Cheng Kung University, Tainan, Taiwan, R.O.C.
This study explores the complex relationship between inter-firm characteristics and business network embeddedness. It suggests that firms strive to increase resource value and reduce transaction costs through inter-firm specialization, relational capital and routines, letting the business network become gradually embedded in an evolutionary process that facilitates incremental innovation but hinders radical innovation. Firms have been aggressively building business networks since the 1980s. Inter-firm collaboration has increased rapidly and shaped a tide of collective competition, and these networked firms are aggressively engaged in innovation. Traditional businesses have striven to fight back and are often unable to maintain their leadership position (Moore, 1993). New questions are raised: Why can most incumbent business networks not respond rapidly to radical environmental change? Are business networks rigid like individual firms? Most scholars have concentrated on the advantages of business networks. Recently, some researchers have begun to emphasize their limitations and disadvantages (Gulati et al., 2000). Paradoxically, however, both the advantages and the disadvantages of business networks are related to their embeddedness. Important factors that affect the embeddedness of a business network are identified here; these include inter-firm specialization, relational capital and routines. The dynamics of embeddedness are explained from the perspectives of transaction cost economics and resources.
The Economic Impact of a One-Time Sporting Event: The Breeders’ Cup Thoroughbred Racing Championship Day
Dr. Ralph Haug, Roosevelt University, Schaumburg, IL
Dr. Alan Krabbenhoft, Roosevelt University, Schaumburg, IL
Dr. Steven Tippins, Roosevelt University, Schaumburg, IL
Using a large sporting event, the 2002 Breeders' Cup, this paper reports the economic impact of such an event. The methodology provides a standard of comparison for other studies and begins a line of research that, over time, will develop into a body of work with academic importance as well as practical application when large events are evaluated under the criteria of economic justification. Large events have always taken place. In recent times, those that fund these events have tried to determine their economic impact. While not an exact science, methods have evolved to do just that. This paper explores the impact that a single-day annual event in horse racing, the Breeders' Cup Thoroughbred Racing Championship (hereafter referred to as the Breeders' Cup), has on its host locations. In the early 1980s the image of thoroughbred racing was mixed and confusing. On the one hand, the American sportsperson could follow horse racing by watching the Triple Crown races (Kentucky Derby, Preakness Stakes, Belmont Stakes) over three Saturday afternoons on television and, if he or she lived near a major city with a race track, attend local race meets. As a result, the fan received a very narrow and limited view of the sport. The other side of racing's image was darker. Press coverage often included stories about the possible drugging of horses. In addition, it was perceived by some that those who attended the races had less than sterling reputations. The thoroughbred racing community felt that this image had to be changed. Led by John Gaines, owner of Gainesway Farm in Lexington, Kentucky, and other major breeders, a vision was created whereby owners, trainers, and racetracks could work together to create a series of championship races all held on the same day at the same track.
The hope was that the resulting national exposure would lift the image of the sport, increase the television viewing audience and the resulting track attendance at racecourses around North America, and increase the profits of all involved. The result was the creation of the Breeders' Cup Thoroughbred Racing Championship Day. The first meeting was held in 1984 at Hollywood Park in California. Each of the seven races had a purse of $1,000,000. There were races for juveniles, juvenile fillies, older fillies or mares, sprinters, horses trained to race on turf, and the traditional mile. The first Breeders' Cup was an outstanding success, with over 64,000 fans in attendance. NBC had four hours of coverage, scored high ratings, and even won the Eclipse Award for national television coverage (Privman, 2000).
Are Strategic Assets Contributions or Constraints for SMEs to Go International? An Empirical Study of the US Manufacturing Sector
Dr. Chiung-Hui Tseng, National Cheng Kung University, Taiwan, R.O.C.
Dr. Patriya S. Tansuhaj and Dr. Jerman Rose, Washington State University, Pullman, WA
It is widely perceived that most small and medium-sized enterprises (SMEs) go international in a passive manner, without any proactive plan. This study seeks to go beyond this conventional viewpoint by focusing on the strategic assets available to SMEs, which are closely tied to their international expansion, rather than on their resource limitations. Building on the international business and entrepreneurship literatures, we develop three hypotheses that relate technology capability, personal networks, and owner/manager experience to the multinationality of SMEs. Using a sample of 117 US SMEs to test these hypotheses, we find that the more technology capability a firm possesses, the more it has expanded internationally. The data also support our hypothesis that the more firms concentrate on domestic networks, the lower their degree of international expansion. In addition, the hypothesis that owner/manager international experience leads to greater multinationality is supported. To be successful in international expansion in the long run, SME executives are urged to invest in technology capability and to gain more international experience, while not becoming locked into building domestic networks. The topic of international expansion has long been at the heart of international business and strategic management research. Owing to the historical dominance of large, well-established firms in international markets, previous scholars have concentrated on the behavior of such firms (e.g., Contractor et al., 2003; Hitt et al., 1997; Kotabe et al., 2002; Tallman and Li, 1996). As small and medium-sized enterprises (SMEs) have become increasingly active in the international arena, investigation of the topic in the small business context has attracted more attention in recent years (e.g., Lu and Beamish, 2001; Qian, 2002). To date, these efforts have been directed mostly toward the impact of international expansion on SME performance.
By contrast, understanding of the fundamental issue of what enables SMEs to engage in international business activities is relatively lacking.
3^(n-p) Fractional Factorials with Blocking
Dr. C. P. Kartha, University of Michigan-Flint, Flint, MI
A systematic method for constructing fractional factorial designs split into blocks, when the factors are each at three levels, is discussed in this paper. The method consists of first obtaining the independent treatment combinations in the 'key' block, by adding columns to a unit matrix of appropriate order using Galois field theory, and then deriving the rest of the blocks from this block. By this method, designs can be constructed in which a minimum number of lower-order interactions are confounded. When a factorial experiment involves several factors, each of which is tested at various levels, it is well known that economy of space and material may be attained by observing only a fraction of all possible combinations of the factor levels. This technique is known as fractional replication of a factorial experiment. One essential assumption needed to make such designs useful is that the higher-order interactions are negligible. Though several methods of constructing symmetrical fractional factorials are available in the literature, it appears that there has been no attempt to obtain such designs split into blocks by a systematic method of construction. Moreover, almost all the available methods are for two-level factors, and relatively little work has been done on the construction of designs with factors at three levels. In this paper a method is presented for constructing fractional factorial designs with blocking for experiments with factors each at three levels. These designs are optimal in the sense that a minimum number of lower-order interactions are confounded. A convenient way to represent the treatment combinations of the general s^n factorial arrangement is by (x1, x2, ..., xn), where xi is the level of the ith factor and takes values from 0 to (s-1). The s^n - 1 degrees of freedom among the s^n combinations may be partitioned into (s^n - 1)/(s - 1) sets of (s - 1) degrees of freedom.
Each set of (s-1) degrees of freedom is given by the contrasts among the s sets of s^(n-1) treatment combinations specified by the equations

a1 x1 + a2 x2 + ... + an xn = 0
a1 x1 + a2 x2 + ... + an xn = 1
...
a1 x1 + a2 x2 + ... + an xn = s-1

where the right-hand sides are elements of the Galois field GF(s). The ai's must be integers between 0 and (s-1), not all equal to zero, and, for uniqueness, the first nonzero ai equals unity. Thus the interaction A^a1 B^a2 ... K^an corresponds to the equation whose left-hand side is a1 x1 + a2 x2 + ... + an xn. All additions and multiplications are done within GF(s).
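The partition above can be illustrated directly by enumeration (this is not the paper's unit-matrix construction, just a brute-force sketch): for each defining contrast with coefficient vector a over GF(3), treatment combinations with the same value of a1 x1 + ... + an xn (mod 3) fall in the same block, and the 'key' block is the one where every contrast evaluates to zero. The choice of n = 4 and the single contrast ABCD (a = (1,1,1,1)) below are illustrative.

```python
from itertools import product

def blocks_3n(n, alphas):
    """Split the 3^n treatment combinations into blocks according to the
    defining contrasts `alphas` (each a length-n coefficient vector over
    GF(3)).  Combinations sharing the same tuple of contrast values
    (sum_i a_i * x_i mod 3, one value per contrast) form one block; the
    'key' block is the one whose contrast values are all zero."""
    blocks = {}
    for combo in product(range(3), repeat=n):
        key = tuple(sum(a * x for a, x in zip(alpha, combo)) % 3
                    for alpha in alphas)
        blocks.setdefault(key, []).append(combo)
    return blocks

# One defining contrast ABCD, i.e. a = (1,1,1,1), splits the 3^4 = 81
# combinations into 3 blocks of 27; the key block solves
# x1 + x2 + x3 + x4 = 0 (mod 3).
blocks = blocks_3n(4, [(1, 1, 1, 1)])
key_block = blocks[(0,)]
```

With p contrasts (and their generalized interactions) the same function yields 3^p blocks of 3^(n-p) runs each, which is the blocking structure the paper constructs systematically.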
Money Laundering: A Global Challenge
Dr. Philip S. Russel, Philadelphia University, Philadelphia, PA
Money laundering poses a growing threat to the global financial and economic system. Recent terrorist incidents (such as the September 11 tragedy) and international financial scandals (such as the BCCI collapse in 1991, the Bank of New York scandal, and the laundering of millions of dollars by a former Nigerian dictator) have exposed the fragility of our financial structure. Today money laundering has expanded its reach to include drug and non-drug crimes, banks and non-banking institutions, and physical and cyber transfers. In this paper, we examine the money laundering process and review some of the global initiatives that have been taken to combat the problem. Money laundering experts estimate that perhaps US $1 trillion is laundered globally every year (a significant portion of it through the United States), making it one of the largest industries. The dollar value of money laundered demands that it be controlled, as otherwise it could seriously distort domestic and international macroeconomic policies and the optimal allocation of resources. The lack of effective measures to combat this menace poses a threat to the economic, moral and social fiber of our society. The fight against money laundering has now attained global status and cuts across different drug and law enforcement agencies around the world, which have called for the cooperation not only of banks but also of accountants, lawyers, and other professionals. These steps have had a noticeable impact, but there are still many loopholes, and money laundering continues to flourish (almost) unhindered. As the International Narcotics Control Strategy Report (1997) cynically observed, "the race between criminals seeking new venues and oversight bodies seeking more widespread compliance still goes to the crooks." In this paper we review some of the initiatives taken by the United States and by international organizations to combat the money laundering business.
We observe that a global problem such as money laundering requires an integrated global solution involving diverse institutions. Only when different countries and institutions are able to unite and launch a concerted attack will we see any dent in the money laundering business. Isolated efforts by individual countries or law enforcement agencies will simply move the center of operation from a “tough” country with strict supervisory regulations and penalties to a lenient country with less vigilant detection infrastructure.
Research on Impacts of Team Leadership on Team Effectiveness
Chia-Chen Kuo, Graduate School of Management, I-Shou University, Taiwan
From a leadership perspective, this research categorizes leadership into transactional, transformational and paternalistic styles to examine how different kinds of leadership behavior affect team effectiveness. It employs team social capital and team diversity as moderators. The research has four purposes: 1. To examine how transactional, transformational and paternalistic leadership styles forecast team effectiveness. 2. To determine which of these three leadership behaviors is the most significant predictor of team effectiveness. 3. To examine whether team social capital has a moderating effect on team effectiveness. 4. To examine whether team diversity has a moderating effect on team effectiveness. After a review, analysis and integration of the related literature, this research develops a total of seven propositions and related sub-propositions, and builds a conceptual research model. The main results are as follows: 1. Transactional, transformational and paternalistic leadership styles all have a positive forecast effect on team effectiveness, with transformational leadership having the most significant impact. 2. Team social capital moderates team effectiveness: communication frequency, degree of informal interaction, overall feeling of trust and shared values positively raise team effectiveness. 3. Team diversity also moderates team effectiveness, but only the variable of job functional background has a positive impact; diversity in the age, gender, experience and education level of the team has a negative impact on team effectiveness.
Finally, the expected outcomes of this research are offered as implications for the practical training and implementation of team leadership, and as useful starting points for researchers addressing problems of team effectiveness.
Making Appropriate Decision on Organizational Boundary and Creating Organizational Value of Foreign Investment of Multinational Enterprise (MNE)
Chia-Chen Kuo, I-Shou University, Taiwan
This research paper explores the factors behind a multinational enterprise's (MNE's) motivation to internationalize, and the factors that influence an MNE's strategic choice of foreign market entry mode. The paper considers three foreign market entry modes for the MNE: export, contractual cooperation and direct investment. By means of literature review and analysis, it studies the problems of transaction cost and agency cost that arise in the foreign investment activity of an MNE. After inference over the related variables, the paper proposes that the problems of transaction cost and agency cost have a significant impact on the MNE's strategic choice of foreign market entry mode, its organizational boundary and its value creation. Finally, it proposes a conceptual inter-organizational model and shows its implications for academic research and business practices. The model concludes that when the difference between market governance cost (MGC) and firm governance cost (FGC), i.e. (MGC - FGC), is large, and when the difference between market agency cost (MAC) and firm agency cost (FAC), i.e. (MAC - FAC), is large, the foreign operation mechanism of the MNE tends toward internalization, enjoying minimum transaction and agency costs, fixing an appropriate organizational boundary, and creating organizational value. The business activity of foreign investment is a complicated decision-making process. Since a domestic business is unfamiliar with the environment of a foreign market, foreign investment always encounters cross-cultural, economic and political problems, as well as trading obstacles, industrial competition, and so on. These problems affect inter-organizational strategic decision making. Why does a business undertake international activity? What are its motivating factors?
The main reason is that when a domestic business is confronted with the threats of high production cost and loss of competitive advantage in the current economic and trading environment, the business has to seek an appropriate foreign market entry mode or strategy so as to create the maximum value in the foreign country. After reviewing the literature on the foreign investment theory of international business, this paper finds that most research papers focus on the formation factors of foreign investment, as shown in Table 1.
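The model's comparative-cost conclusion can be sketched as a toy decision rule; the threshold and the mapping to the three entry modes below are illustrative simplifications, not the paper's own operationalization.

```python
def preferred_mode(mgc, fgc, mac, fac, threshold=0.0):
    """Toy encoding of the model's conclusion: when both (MGC - FGC)
    and (MAC - FAC) are large, internal governance beats the market on
    both transaction-cost and agency-cost grounds, so the MNE tends
    toward internalization (direct investment); otherwise it leans
    toward market-based modes (export or contractual cooperation)."""
    if (mgc - fgc) > threshold and (mac - fac) > threshold:
        return "direct investment (internalization)"
    return "export or contractual cooperation"
```

The rule makes explicit that internalization is predicted only when both cost differences point the same way; if either market-based cost is the cheaper one, a market-based entry mode remains preferable.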
The Federation of Euro-Asian Stock Exchanges: Returns Distribution, Volatilities and Performance (1)
Dr. Francisca M. Beer, California State University, San Bernardino, CA
Dr. Mo Vaziri, California State University, San Bernardino, CA
This study investigates the return distributions, volatility and performance of eleven of the twelve (12) founding members of the Federation of Euro-Asian Stock Exchanges (FEAS). The performance of the FEAS exchanges is captured using the traditional performance measures of Treynor (1965), Sharpe (1966, 1994) and Jensen (1968). Using monthly data from January 1995 until December 200, results show that only one of the FEAS exchanges, Tehran, outperformed the S&P 500. Results, however, also show that when domestic securities are combined with FEAS securities, the combined portfolios significantly outperform a portfolio including solely domestic securities. This study focuses on a new and still under-researched group of emerging markets. Specifically, it investigates the return distributions, volatility and performance of eleven (11) of the twelve (12) founding members of the Federation of Euro-Asian Stock Exchanges (FEAS). The FEAS was established May 16, 1995 with 12 founding members and has grown to 24 members in 22 countries. Membership in the Federation is open to emerging stock exchanges in Europe and Asia (2). The literature on emerging stock exchanges can be divided into three categories. The first category studies the return distributions of emerging equity markets. The second examines the adequacy of standard global asset pricing models when using emerging markets data. The third tries to explain why stock markets are interdependent, by either decomposing or modeling stock market correlations. A number of studies have examined the return distributions of emerging equity markets in comparison with those of developed ones (e.g. Harvey (1995), Bekaert, Erb, Harvey, and Viskanta (1997, 1998), and Bekaert and Harvey (1995)).
Five distributional characteristics have been documented: (1) high long-horizon returns; (2) high volatility; (3) time-variation of skewness and kurtosis; (4) high autocorrelation; and (5) low correlation both with developed markets and with other emerging markets (Niu and Cui (2002)). Previous studies also indicate that standard global asset pricing models, which assume complete integration of capital markets, fail to explain the cross-section of average returns in emerging countries. An analysis of the predictability of returns reveals that emerging market returns are more likely than those of developed countries to be influenced by local information. Most of these studies, however, use data from before 1997 (Harvey (1995)).
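The three classic performance measures the study applies (Sharpe, Treynor, Jensen) can be computed from a return series in a few lines. The pure-Python sketch below uses deliberately simple illustrative data (a constant per-period risk-free rate and a portfolio that is exactly twice the benchmark) rather than anything from the study.

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    """Sample covariance of two equal-length return series."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def performance_measures(port, market, rf):
    """Sharpe (1966), Treynor (1965) and Jensen (1968) measures from
    per-period portfolio returns `port`, benchmark returns `market`
    (e.g. the S&P 500) and a constant per-period risk-free rate `rf`."""
    mean_ex_p = mean(port) - rf
    mean_ex_m = mean(market) - rf
    beta = cov(port, market) / cov(market, market)
    return {
        "sharpe": mean_ex_p / cov(port, port) ** 0.5,  # per unit of total risk
        "treynor": mean_ex_p / beta,                   # per unit of systematic risk
        "jensen": mean_ex_p - beta * mean_ex_m,        # CAPM abnormal return (alpha)
        "beta": beta,
    }

market = [0.01, 0.02, 0.03, 0.04]
port = [0.02, 0.04, 0.06, 0.08]   # exactly twice the market: beta = 2, alpha = 0
m = performance_measures(port, market, rf=0.0)
```

Because the toy portfolio is a pure leveraged version of the benchmark, its Jensen alpha is zero and its Treynor ratio equals the benchmark's mean excess return, which is the kind of cross-check these measures allow when comparing FEAS exchanges against the S&P 500.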
Relationship Marketing in the Export Sector: Empirical Evidence from Dubai’s Jebel Ali Free Trade Zone
Dr. Ali Hammoutene, University of Sharjah, United Arab Emirates
Despite the recent growth in the volume of exports to and from the United Arab Emirates (UAE), scant evidence exists on individual export companies' relationships with overseas customers. Based on a sample of selected Dubai Jebel Ali FTZ exporters, this paper draws a comparison between pure and hybrid types of export activity. The findings suggest that, as opposed to hybrid-type exporters, companies engaged in pure exporting are more experienced, employ more people, and exhibit more active behavior in conducting their foreign business. Such firms sell to a greater number of export markets, deal with more foreign customers, and obtain more foreign orders. The paper also shows that pure exporters are distinguished by greater dependence, trust, understanding, commitment, communication, and cooperation, but less distance, uncertainty, and conflict between the parties. Finally, the paper provides implications for future research. Exporting represents one of the most common means of entering the global arena. Its advantages over other market entry strategies are based on reduced financial risk, a lower commitment of resources and a high degree of flexibility (Stottinger and Schlegelmilch, 1998). Exporting and export behavior, mostly from the viewpoint of small- and medium-sized firms, have been the focus of a large body of literature (Aaby and Slater, 1989; Bilkey, 1978; Cavusgil and Nevin, 1981; Douglas and Craig, 1992; Gemünden, 1991; Leonidou, 1995a, 1995b; Leonidou and Katsikeas, 1996; Li and Cavusgil, 1991; Miesenbock, 1988). The existing export marketing literature has mainly focused on firms from Western countries, i.e. the US, UK, Japan and Europe. This paper, however, brings new evidence from major international firms operating from their Dubai Jebel Ali export base, which is increasingly gaining recognition as a major trading hub both regionally and internationally.
Moreover, the Jebel Ali free trade facility is the cornerstone of the UAE's diversification efforts away from traditional oil and gas exports. The volume of exports from the UAE's free trade zones increased from USD 4.0 billion in 1997 to USD 6.8 billion in 2001, an increase of 70% (IMF, 2003). In 2001, exports from UAE free trade zones amounted to 62.4% of the country's non-hydrocarbon exports (IMF, 2003). This increasing contribution of the UAE export sector, particularly the free trade zones, to the UAE's balance of payments can be explained by at least two factors: (a) the UAE's diversification efforts away from the oil and gas sectors, and (b) the fifty-year tax-free incentive granted by the UAE government in 2002, which has prompted more international companies to move their operations to the Jebel Ali free trade zone from Bahrain, Africa, the US, and the UK. Company executives interviewed strongly indicated that Jebel Ali's attraction is largely attributable to the UAE government's efforts to provide state-of-the-art infrastructure that facilitates business transactions.
Technostress in the Workplace: Managing Stress in the Electronic Workplace
Peter E. Brillhart, California Lutheran University, Thousand Oaks, CA
Computer and technology related stressors have become a mainstay in our electronic age. While stress will be defined in general, this paper primarily discusses a subset called technostress. Symptoms and types of technostress will be discussed along with the overall stress cycle, as will personal and organizational strategies for dealing with both stress itself and the technostress subset. Statistics will be presented on how stress ties into organizational costs. You are late for work, your cell phone is ringing off the hook, and your pager is going off every five seconds. It's 7:30 am and you are virtually already at work, although you are stuck in traffic trying to talk on the cell phone and drive your vehicle. You worked half the night from home on a key project that is due this morning. The company has provided you with a laptop so that you can work from anywhere as needed, and a virtual private network gives you access to the company network wherever you have an Internet connection. In essence, your workday from yesterday never really ended. Does this scenario sound familiar? If so, welcome to the world of technostress, a world where you can work from anywhere, be called upon at any time, and have virtually no downtime from the stress and rigors of the job. Do you think computers and advanced communications have made your job easier, or have they just allowed you to work harder and longer through the multitasking they help you perform? If you can perform three tasks at once, you are working efficiently for your employer, right? Yet what does multitasking do to you and your well-being on a daily basis? Are you constantly tired? Do you find sleeping difficult, think constantly of work tasks to be completed, or wonder how you are going to finish the numerous projects you have going in the time allotted by your employer?
If you have any of these symptoms, you are experiencing the effect of technology on your overall stress level. All of these symptoms are tied together in a group of stressors called technostress.
Compounded Agency Problem: An Empirical Examination of Public-Private Partnerships
Dr. Jeff W. Trailer, The California State University, Chico, CA
Dr. Paula L. Rechner, The California State University, Chico, CA
Dr. Robert C. Hill, The California State University, Chico, CA
This study proposes a compounded agency view intended to add a new dimension to the agency theory of the firm by addressing the complexity of multiple, simultaneous conflicting interests associated with the multiple constituency view of the firm. This view is applied to public-private partnerships. Public-private partnerships have existed throughout the history of the U.S. (Olasky, 1986). A significant increase in the use of these hybrid organizational forms appears to have started during the 1980s. Several authors have argued that increasing global competition during the 1980s, and the corresponding perception that national technological competitiveness was declining, motivated greater government intervention in the market (Brown, 1993; Scott, 1993; Spencer and Grindley, 1993). Partnerships between public and private firms offered a means of addressing national competitiveness issues without direct, highly visible manipulation of the market by government. Further reinforcing this trend was the apparently overwhelming success of Japanese international firms that employed these hybrid, public-private organizational forms. Thus, it has been argued that to remain competitive nationally, the U.S. government has to "cooperate" with business, not just stand by and enforce "fair play" (MacDonald, 1994). Until recently, partnerships have been avoided in the U.S. on the argument that cooperation between producers of demand-side substitutes naturally results in price fixing. Cooperation between firms that allows the partners to appropriate consumer surplus is not socially desirable, is labeled "conspiracy," and is a criminal offense in the U.S. (U.S. Code, 2004). The traditional argument against collusion, in terms of social welfare, has been based on the negative effects of minimizing consumer surplus and the corresponding reductions in efficient resource allocation within the market.
However, Tullock (1990) has pointed out that, in addition to these consequences, social welfare is reduced because consumers who would have been able to benefit from the product at the efficient market price are unable to consume it at the higher price. Thus, in addition to the direct economic loss to consumers and the allocative efficiency loss, there exists an opportunity cost to collusion as well.
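Tullock's argument can be made concrete with a standard linear-demand illustration (a textbook sketch with hypothetical numbers, not a model from the paper): collusion at the joint-profit-maximizing price both transfers surplus from consumers to producers and prices some consumers out of the market entirely.

```python
def collusion_welfare(a, b, c):
    """Welfare under competition vs. collusion for linear demand P = a - b*Q
    and constant marginal cost c (textbook sketch, hypothetical parameters)."""
    q_comp = (a - c) / b                 # competitive output, priced at marginal cost
    q_coll = q_comp / 2                  # joint-profit-maximizing (colluding) output
    p_coll = a - b * q_coll              # the higher collusive price
    cs_comp = 0.5 * (a - c) * q_comp     # consumer surplus under competition
    cs_coll = 0.5 * (a - p_coll) * q_coll         # surplus consumers retain under collusion
    transfer = (p_coll - c) * q_coll              # surplus appropriated by producers
    dwl = 0.5 * (p_coll - c) * (q_comp - q_coll)  # loss from consumers priced out of the market
    return cs_comp, cs_coll, transfer, dwl
```

With a = 10, b = 1, c = 2, the competitive consumer surplus of 32 splits under collusion into 8 retained by consumers, 16 transferred to producers, and 8 lost outright, the last being the loss to priced-out consumers that Tullock emphasizes.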
Teaching Workloads of Marketing Program Leaders and Faculty and Criteria for Granting Load Relief
Dr. Ron Colley, State University of West Georgia, Carrollton, GA
Dr. Ara Volkan, Florida Gulf Coast University, Ft. Myers, FL
This study first examines the distributions for teaching loads and number of course preparations of marketing program leaders and faculty in two categories of marketing programs, based on degree-level and AACSBI accreditation. In addition, official maximum loads and the extent to which faculty and program leaders in given institutions teach different levels of loads are presented. Second, the reasons for granting teaching workload reductions to marketing faculty are examined. Where appropriate, statistical tests are performed to report differences among the means of the results observed in the two categories, for both faculty and program leaders and public and private institutions. The overall results show that there are no statistical differences between the responses of program leaders and faculty and between the private and public institution outcomes. While the former indicates good communications among marketing educators, the latter shows that free market competition operates as an equalizing force. When specified, the official maximum load is usually 24 semester hours (eight courses) per year, but few faculty members teach the maximum load. Also, there are significant differences between the average common teaching loads and common number of course preparations in the two categories. While there are some notable differences among the reasons cited for load relief in the two categories of programs analyzed, overall results indicate that publication activities are the main factors underlying load relief, followed by editing a journal and institutional service (e.g., directing programs). Thus, advocates of rewarding the scholarship of teaching and professional development activities at levels at least equal to research activities have not achieved their goal. 
Given increasing pressures to do more with less and calls from some legislatures to mandate minimum teaching loads, marketing program leaders and faculty need to be informed of how their teaching workloads compare to those at institutions with characteristics similar to theirs. The nationwide results reported in this paper can be used for such comparisons, as evidence during discussions with university administrators, and when filling or changing jobs.
Conflict Management Styles: A Comparative Study of University Academics and High School Teachers
Dr. Munevver Olcum Cetin, Marmara University, Istanbul, Turkey
Ozge Hacıfazlıoğlu, University of Bahcesehir, Istanbul, Turkey
“Conflict” is inevitable wherever there is a human factor. Whether a conflict proves beneficial or destructive depends on how it is channeled; for this reason, managing conflict is more important than reducing it. When carefully managed, conflict becomes one of the most important tools in the development of organizations. The purpose of this study is to determine to what extent, and how, conflict management styles differ in educational settings by investigating academics’ and high school teachers’ conflict management styles. A series of steps was undertaken to collect data for the research. The related literature and different questionnaires on conflict developed by researchers (Rahim, 1983; Thomas, 1977; Olcum & Hacıfazlıoğlu, 2004) were analyzed, and a draft questionnaire was prepared. The “academics’ conflict management questionnaire” previously developed by Olcum and Hacıfazlıoğlu (2004) was revised and adapted for high school teachers. The questionnaire was given its final shape after obtaining high school teachers’ and academics’ comments on the topic. The sample was chosen randomly: 10 high schools and 4 universities in Istanbul, Turkey, constitute the scope of this study. The SPSS 10.0 package was used in the analysis of the data. Nonparametric tests (Kruskal-Wallis and Mann-Whitney U) were used to determine significant findings related to different variables. This study is believed to give administrators insights into channeling conflict in a positive way. Conflict has been defined as a “process in which one party perceives that its interests are being opposed or negatively affected by another party” (Wall & Callister, 1995; Bowling, Leslie, & Marks, 2001). Rahim (1992) identifies conflict as an “interactive process manifested in incompatibility, disagreement or dissonance within or between social entities” (Antonioni, 1998).
Conflict can occur between individuals, groups, organizations, and even nations. Organizations are becoming increasingly dependent on groups as the central unit of work. While groups have the advantage of pooling their collective resources, their interdependent nature inevitably creates conflict (Green, Leslie, Michelle, 2001). Conflict is thus a common phenomenon, since it is an inseparable part of any organization.
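The Mann-Whitney U statistic underlying such group comparisons can be sketched in a few lines (a pure-Python illustration with hypothetical scores; the study itself used SPSS 10.0, which also supplies the significance test):

```python
def mann_whitney_u(group_a, group_b):
    """U statistic for group_a: the number of cross-group pairs (x, y)
    with x > y, counting ties as 0.5. U_a + U_b == len(a) * len(b)."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in group_a for y in group_b)

# hypothetical conflict-management style scores for two groups
academics = [4.1, 3.8, 4.5, 3.9]
teachers = [3.2, 3.6, 4.0, 3.1]
u_academics = mann_whitney_u(academics, teachers)   # 14.0
u_teachers = mann_whitney_u(teachers, academics)    # 2.0
```

In practice the smaller U is compared against a critical value or converted to a z score; a U far from n·m/2 (here 8) suggests the two groups' score distributions differ.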
A Comparison Between Financial Reporting in Health Care Versus Other Industries
Dr. Nerissa M. Robert, Telica, Inc., Marlborough, MA
Dr. Saeed Mohaghegh, Assumption College, Worcester, MA
Sound financial reporting is essential in any organization, particularly now that organizations are subject to increased scrutiny by regulatory agencies. Fraud and financial misconduct by some high-profile organizations have heightened the need for accurate financial reporting. This paper first addresses the general financial reporting requirements and analysis unique to the health care industry. It then looks specifically at one health maintenance organization (HMO) serving central Massachusetts and compares it to a high-tech start-up company located in the same region with sales offices in various parts of the country. The results of this study show that although there are some similarities between the financial reporting of health care and non-health care organizations, there are also significant differences between them. Health care is one of the most rapidly expanding industries. By 2005, it is projected that health care expenditures will make up 15.6% of the gross domestic product, as opposed to 13.1% in 1998 (Centers for Medicare and Medicaid Services, released January 2002). As with any other industry, the need for financial analysis within health care organizations is great. There are, however, some notable differences between the financial reporting and analysis required in health care versus other industries, the most significant being the reporting health care organizations must do to regulatory agencies. While there is a lack of previous research comparing the accounting practices and financial reporting conventions of these organizations, adequate research has been conducted on the financial reporting of HMOs, IPAs (corporate entities that contract with physician groups and with HMOs), group practices (groups of two or more physicians in one or more specialties, in one or several locations, that enter into contracts with an IPA or an HMO), and government healthcare entities.
Both IPAs and group practices should monitor their accounting and cash management activities, which can be negatively affected by weak structure, management, and/or information technology. They need to make sure that their financial reporting and cost accounting techniques provide the accurate and relevant information needed to make sound business decisions (Karling & Pyper, 1999).
A Field Research about Implications of Organizational Downsizing on Employees Working for Turkish Public Banks
Dr. Cemal Zehir, Gebze Institute of Technology, Turkey
Dr. Fatma Zehra Savi, Kastamonu College, Ankara University, Turkey
The strategy of organizational downsizing has been one of the most widely applied strategies since the 1990s. Layoffs emerge as a result of implementing this strategy, and these developments have negative effects not only on the employees still working in the organization but also on those who have left it. In this study, we investigated the public banks restructured after the Turkish economic crisis of February 2001; organizational downsizing was applied during the restructuring process. We investigated the impact of downsizing on the emotional and behavioral commitment of current employees working at the Turkish public banks. Rapid technological change was experienced in the 1990s due to increasing industrial and commercial competition in national and international environments. Companies prefer new strategic alternatives in order to allocate their resources rationally according to internal and external conditions. Until the 1990s, growth had been considered a sign of corporate health, and downsizing a recovery measure for an ‘ill’ company (Koçel, 2001, p. 349). However, while company managers still regarded growth this way at the beginning of the 1980s, downsizing began to be used more commonly in the management and organization field in the early 1990s, as did many other new concepts and applications. In the early 1990s the concept of “downsizing” took on a new meaning; with improvements in science and technology and new approaches in management, “organizational downsizing” became widely accepted as a way of increasing competitiveness and achieving recovery.
Living in Dilbert's World: A Cubicle Eye's View of Job Dissatisfaction
Dr. William Burmeister, Elizabethtown College, Elizabethtown, PA
"Living in Dilbert's World: A Cubicle Eye's View of Job Dissatisfaction" takes a "man in the trenches" approach to examine how organizations managed by individuals who do not understand, appreciate, or foster employees' needs for fair and equitable treatment are actually creating an atmosphere of resentment and hostility, thereby contributing to a variety of negative organizational outcomes. Anyone familiar with Scott Adams' business-oriented "Dilbert" comic strip, and Dilbert's pointy-haired boss, is intimately familiar with the pandemic affliction now ravaging the workplace: job dissatisfaction. More than half of the employees in the United States have negative feelings about their work, according to a study by global personnel consultant Towers Perrin and its research partner, Gang and Gang of Salem, Massachusetts. Understanding job dissatisfaction is not necessarily the exercise in logic and rational analysis you might imagine; individual perspectives and personal biases make it all but impossible to identify precisely what we are talking about. Job dissatisfaction, in general, is the degree to which individuals feel negatively about their jobs. It is an emotional response to the tasks, as well as to the physical and social conditions associated with the workplace (Smith, Kendall, and Hulin, 1969). It is often the latter half of this accepted definition that is misunderstood or greatly underestimated by management, and painfully felt by the employee. It is critical that "the powers that be" clearly understand that the issue at hand is the emotional response of the employee not only to the tasks to be performed, but to all of the physical, psychological, and social conditions that are involved in the execution of those tasks.
If an organization's management does not understand the myriad of factors contributing to employee dissatisfaction, it may unknowingly or unwittingly make decisions that further contribute to the problem or implement inappropriate programs that fail to foster the kind of work environment that builds strong, positive emotions (Patrick, 2003).
Teaching the Job Satisfaction Audit Project to Business School Students
Dr. Gene Milbourn, Jr., University of Baltimore, Maryland
Dr. Tim Haight, California State University, Los Angeles
This paper provides an outline for structuring a consulting project for business school students on the topic of the job satisfaction audit. It suggests a step-by-step program for improving job satisfaction in a company. Specifically, the paper assists students in (1) selecting an appropriate measurement instrument; (2) stratifying employees for analysis; (3) selecting useful statistical analyses; and (4) detailing how to profile and structure the feedback of attitudinal data for organizational development purposes. The models of Patricia Cain Smith on job satisfaction and of David Bowers on survey feedback are featured. Robbins (2003, p. 72) says that job satisfaction refers “to an individual’s general attitude toward his or her job. A person with a high level of job satisfaction holds a positive attitude about the job, while a person who is dissatisfied with his or her job holds negative attitudes about the job.” Smith et al. (1969), in their groundbreaking book, define job satisfaction as a feeling (an approach or avoidance emotion) an employee has about his work, pay, promotional opportunities, supervisor, and co-workers. More specifically, it is the "pleasurable emotional state resulting from the appraisal of one's job as achieving or facilitating the achievement of one's job values." Robbins (2003) reviewed the literature on job satisfaction and found that (1) it is better to measure facets of satisfaction than a single global measure; (2) satisfaction has been declining over the last 15 years; (3) job dissatisfaction is related to higher levels of absenteeism and turnover in American companies; and (4) while job satisfaction is related to overall company productivity, it is not generally related to individual performance. There is a positive relationship between job satisfaction and what is now called “organizational citizenship behavior,” such as helping others and talking positively about the company and its policies (LePine et al., 2002).
In an early longitudinal study of 20,000 managers and 200,000 workers, Likert and Bowers (1973) found that satisfaction accounts for 17% of the variance in “total production efficiency.”
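Steps (2) through (4) of the audit, stratifying respondents and profiling facet scores for feedback, can be sketched as follows (hypothetical strata, facets, and scores; the JDI facets in Smith et al. (1969) are work, pay, promotion, supervision, and co-workers):

```python
from collections import defaultdict
from statistics import mean

def facet_profile(responses):
    """Average each satisfaction facet within each employee stratum,
    producing the kind of profile fed back for organizational development."""
    buckets = defaultdict(list)
    for stratum, facet, score in responses:
        buckets[(stratum, facet)].append(score)
    return {key: mean(scores) for key, scores in buckets.items()}

# hypothetical responses: (department, facet, score on a 1-5 scale)
responses = [
    ("sales", "pay", 2), ("sales", "pay", 3), ("sales", "supervision", 4),
    ("it", "pay", 4), ("it", "pay", 5), ("it", "supervision", 3),
]
profile = facet_profile(responses)   # profile[("sales", "pay")] == 2.5
```

Profiling by stratum rather than company-wide is what makes the feedback actionable: a low pay-facet score confined to one department calls for a different intervention than a uniformly low one.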
WIP Management Model for Semiconductor Back-end Manufacturing
Fan-Yun Pai, National Taiwan University, Taiwan, R.O.C.
Meeting agreed due dates is essential for customer satisfaction, a critical factor for survival in today's highly competitive semiconductor market. However, undesirable and inevitable production variations make it difficult to maintain and improve a factory's due-date performance, especially for the back-end factories closer to customers. We therefore propose a WIP (work-in-process) management model to help production managers manage WIP levels effectively, compensate for the impact of unexpected production variations, and achieve better due-date performance. The WIP management model consists of two parts: an AWDL (Available WIP Deviation Level) determination model, designed to obtain proper AWDLs for monitored workstations, and a WIP correction action, proposed to bring abnormal WIP levels back to normal as soon as possible. Simulation experiments are conducted to evaluate the proposed WIP management model. Results show that the model gives back-end factories better performance in terms of average on-time delivery percentage (AOTDP). IC (Integrated Circuit) manufacturing is a complicated multistage process that transforms silicon, in the form of thin, polished disks, into integrated circuits. The entire process basically includes four main steps: wafer processing or wafer fabrication (Fab), wafer probe, IC packaging, and functional testing and burn-in (Chen, 1988). Facing stiff worldwide competition, semiconductor manufacturers strive to provide cost-effective, time-to-market solution services for their customers. Wafer fabrication is generally referred to as the "front-end" operation, and the following stages (wafer probing, IC packaging, and final testing) are referred to as the "back-end" in the turnkey service. Figure 1 shows a typical semiconductor back-end manufacturing flow.
Although wafer fabrication is the most technologically complex and the most capital-intensive of the four stages, the back-end operations are much closer to the customers, and the on-time-delivery performance of the whole supply chain depends on performance in the back-end processes. Furthermore, the unique characteristics of the production environment, with dynamic orders, a huge number of complicated product types, and short cycle times, complicate the production management tasks (Lee et al., 1993 and 2000; Manzione, 1990; Uzsoy et al., 1992; Uzsoy et al., 1993). The features of back-end manufacturing flow are summarized in Table 1.
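The two-part model described above might be reduced to a monitoring rule of the following shape (a hypothetical sketch with invented parameter and action names; in the paper, the AWDLs themselves come from the simulation-based determination model):

```python
def wip_action(wip_level, target, awdl):
    """Flag abnormal WIP at a monitored workstation. If |WIP - target|
    exceeds the workstation's AWDL (Available WIP Deviation Level),
    return a correction that would bring WIP back to the target:
    hold lots upstream when WIP is too high, release more when too low."""
    deviation = wip_level - target
    if abs(deviation) <= awdl:
        return ("normal", 0)
    action = "hold_upstream" if deviation > 0 else "release_more"
    return (action, abs(deviation))
```

For a workstation targeted at 100 lots with an AWDL of 15, a WIP of 108 is left alone, while 120 triggers holding 20 lots upstream; the tolerance band absorbs routine variation so that corrections fire only on genuinely abnormal WIP.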
Total Quality Management: Context and Performance
Dr. Esin Sadikoglu, Gebze Institute of Technology, Turkey
Empirical results on the relationship between total quality management (TQM) implementation and performance, with respect to contextual factors, are scarce and mixed. The objective of this study was to examine the relationships among TQM implementation, its acceptance, and companies' operational performance, considering the contextual factors of company size, union presence, and industry type. Questionnaires were mailed to 437 companies in different industries located in the Midwest, U.S. The study found that most of the companies implemented TQM with a high degree of acceptance. Also, company size, union presence, and industry type did not significantly affect acceptance of TQM or TQM performance. Quality-oriented companies should also improve their internal efficiency in order to improve their productivity, profit, and competitiveness. Total quality management (TQM) is a management philosophy that has been applied by many quality-oriented organizations in order to achieve such benefits as enhanced customer satisfaction, improved quality of goods and services, higher productivity and profits, and reduced waste and cost, among other benefits (Evans et al., 1993; Choi et al., 1998; Elmuti et al., 1994; Schuler et al., 1991). Although TQM emerged from the manufacturing industry, customer focus and the use of traditional quality control techniques outside the production area have enabled TQM to be used in the service industry, government agencies, private industries, health-care organizations, and education (Harris, 1995; Saylor, 1996). Results on the effect of TQM on performance are mixed.
Some authors found that effective implementation of TQM improved financial performance (Hendricks et al., 2001), operational performance (Shah et al., 2003), customer satisfaction and plant performance (Choi et al., 1998), market share and productivity while reducing cost, employee turnover, and employee complaints (Schuler et al., 1991), perceptions of quality of work life, employee productivity, and quality of products (Elmuti et al., 1994), and various performance levels (Kaynak, 2003). However, some companies did not gain from TQM implementation. For example, Yeung et al. (1998) found from case studies that manufacturing companies with a quality management system in Hong Kong did not gain operational efficiency or financial benefits. McCabe et al. (1998) explain the failure of TQM implementation from a case study of a medium-sized bank. Many authors (Bohan, 1998; Smith et al., 1994; Masters, 1996; Whalen et al., 1994) give reasons for TQM failures and the barriers to TQM implementation.
Strategic Perspectives Associated With the Golf Industry
Dr. Alan D. Smith, Robert Morris University, Pittsburgh, PA
Dr. Gayle Marco, Robert Morris University, Pittsburgh, PA
Companies that own strategic assets achieve superior profits because they control valuable resources that are hard to replicate. The golf industry must deal with a myriad of strategic issues if it is to remain competitive. In 1986, the average number of golfers per course was estimated at 1,900. This figure grew to a high of 2,250 in 1990 and has declined back into the 1,950 range today. Since the peak of participation in the early 1990s, additions to supply have been outpacing the growth in demand, and the industry faces a potentially difficult problem from this growth of supply and flattening of demand. Strategically, golf courses must increase utilization levels at existing courses in order to improve profitability. There is nevertheless a very large opportunity for golf at this time: over 40 million people in the US alone either want to golf or want to golf more, and awareness of golf is at an all-time high. Unfortunately, without creating a better golfing experience for more people, the golf industry will be hard pressed to convert interest into commitment. The basic purpose of this paper is to review the pertinent academic and practitioner business literature and to outline the managerial processes needed to secure the future of golf as a mature industry. Michalisin, Kline, and Smith (1997, 2000) demonstrated the ideas of the Resource-Based View of the Firm by showing how intangible assets can create a sustainable competitive advantage and superior profits. Intangibles such as employee know-how, reputation, and organizational culture are considered strategic assets because they are rare, inimitable, and valuable. Companies that own such strategic assets achieve superior profits because they control valuable resources that are hard to replicate.
Michalisin, Kline, and Smith reference Hall’s studies on intangibles in the workplace and say that managers should select, retain, and manage their rare resources in order to outperform their competitors.
Strategic Disintermediation Within the Context of E-Commerce: The Effect on Distributors and Re-Sellers
Dr. Alan D. Smith, Robert Morris University, Pittsburgh, PA
Dr. Dean R. Manna, Robert Morris University, Pittsburgh, PA
As larger companies have raced to enter the world of e-commerce, they have been confronted with a wide variety of problems: e-mail viruses, credit card theft and fraud, and slow and confusing web pages riddled with dead links, to name a few. The Internet Tax Freedom Act provides a three-year moratorium on any new Internet taxes; it also bars state and local governments from imposing new taxes on Internet access and prohibits any new e-commerce taxes. However, this "hands-off" attitude does not seem to be shared by local governments, which are also being hit hard by out-of-state mail order business and the Internet. Given the amount of controversy the Internet has already caused in this area, it can be assumed that when the moratorium ends, the tax structure will change. The fact that these problems have been so widespread may have been enough to deter distributors from diving into the e-commerce waters headfirst. These distributors have a different mindset: they have worked long and hard to build and maintain customer relationships, and they will not risk losing them to a technology that is largely beyond their control. It is still their belief that customers are people first, and that it is the efforts of people, not a great web page, that will keep those people satisfied and coming back. As the Internet continues to grow exponentially in size and popularity, an increasing number of companies are entering the world of e-commerce. Manufacturers are finding that doing business over the Internet allows them to essentially eliminate the middleman, a phenomenon commonly described in the literature as disintermediation, which allows manufacturers and retailers to cut costs and increase profit margins (Seminario, 1999; Seminario, 2000).
However, while manufacturers may find the increased profit margins offered by the Internet attractive, the distributors and re-sellers upon whom these companies have relied so heavily are now faced with several difficult questions: If manufacturers continue to eliminate the middleman, where will this leave us? Should all distributors and re-sellers rush to start doing business on the Internet right away? Will the government step in and regulate the Internet? And what advantages does the Internet hold for marketing professionals? This paper examines these issues and explores the effect of e-commerce on distributors and re-sellers.
IT Project Management: A Conceptual View
Dr. Sharlett Gillard, University of Southern Indiana, IN
The development of a large-scale information system involves some unique features that are particularly difficult to manage. It involves large project management teams; it is challenging to measure progress or quality short of completion; if not done right the first time, costs increase exponentially; it has historically been plagued by high turnover of personnel; and it requires careful stewardship of enormous organizational resources. This paper highlights organizational structures and presents a tri-dimensional view of the IT project management environment typically found in major information systems initiatives. Organizational structures evolve to adapt to changing business environments; many structural changes have taken place during the last century, primarily a movement from centralized to decentralized organizations. Large-scale information systems are typically developed within a matrix organization. According to the most recent Chaos report from industry analyst Standish Group, only one-third of all IT projects can be deemed successes. The report also shows that time overruns in projects have increased significantly, from a low of 63 percent in 2000 to 82 percent in 2003. According to industry research firm Gartner, poor project manager competency accounts for the bulk (60 percent) of project failures (MacInnis, 2003), due in part to the complexity of the management role in product development. Management within a matrix structure involves large project management teams, historically plagued by high turnover of personnel; it involves team members who have two supervisors, the functional manager and the project manager, creating conflicting loyalties; and it requires careful stewardship of enormous organizational resources. As fledgling sole-owner businesses grew and hired employees, defining the division of work and responsibilities improved efficiency and bolstered profits.
Over time, as businesses continued to grow, the division of work and responsibilities became more frequently observed and studied, and division was noted to have evolved differently in different organizations. Definitions were applied to organizational structure itself as well as to the various configurations of lines of authority or control that comprised the organizational structure. March and Simon (1958) defined organizational structure as the hierarchical relations among members of the organization. Child (1972) defined it in terms of the allocation of tasks and responsibilities between individual organization members and groups to ensure effective communication and integration of effort (El Louadi 1998).
The Milieu of the IASB
Dr. Alistair M. Brown, Curtin University of Technology, Perth, Western Australia
This paper considers the milieu of the International Accounting Standards Board (IASB) and the role it plays in providing accounting service to the wider public. Substantially funded by large multinational corporations and elite accounting firms, the IASB is dominated by what Brown, Tower and Taplin (2004) describe as core-financial interest-based stakeholder groups. This milieu may offer a cosy arrangement for a narrow band of stakeholders, but it does little to include other member groups of the global community that it purports to serve. Business communities rely on sound financial information, yet many countries do not have the capacity to generate accounting standards acceptable to these communities (Hopper and Hoque, 2004). As a consequence, many transnational agencies, such as the Asian Development Bank, the European Bank for Reconstruction and Development, the International Monetary Fund, USAID and the World Bank, have formulated accounting practices and policies in emerging countries which do not always meet the acclaim of the developing accounting literature (Sucher and Alexander, 2004; Mserembo and Hopper, 2004). Indeed, this criticism of accounting standard setters is not confined to the developing world. The radical accounting literature is highly critical of mainstream accounting in the developed capitalist world (Neu, Cooper and Everett, 2001). It asserts that the market is over-emphasised as a mechanism for allocating resources, its alleged market efficiency benefiting only a part of society (Lehman, 2001); in a global sense, an acknowledgement of this position has been made by transnational agencies such as the IMF and World Bank (see Rogoff, 2002). The radical position also asserts that corporations are owned, organised and operated to establish and exploit power relationships (Cooper and Sherer, 1984), a theme emphasised and well-documented in a global context by Monbiot (2002), Pilger (2002) and Stiglitz (2002). 
In addition, the radical literature holds that the developed accounting profession maintains the status quo by siding with one party (capital) to the exclusion of another (labour) (Mathews, 1997), and that accounting, in fact, is socially constructed and socially constructing (Tinker, 1985). It is within the spirit of these developing and radical views that this paper reflects on three aspects of the IASB: one, its powerful funders; two, its closed membership; and three, its narrow activities. The paper’s purpose is to inform the greater business community of the potential consequences of the designs of the IASB and of the possibilities of widening what Werhane and Freeman (1997) term the moral viewpoint. This discussion could be of interest to national accounting standard-setters, business communities, accountants and those with an interest in the accounting harmonisation process.
The Role of Trust and Collaboration in the Internet-enabled Supply Chain
Dr. Martin Grossman, American Intercontinental University, Weston, FL
Collaborative computer-based information systems have become a major trend in today’s business environment. Such systems are being used to link companies with suppliers, distributors, and/or customers, thus enabling the flow of information across the supply chain. The trend has accelerated with the emergence of the Internet and the wide scale adoption of e-business. While there are many potential benefits to such collaborative information systems, there are also a number of obstacles which make them difficult to implement. This paper traces the history of interorganizational systems and examines the critical role trust plays in their successful implementation. Several current-day collaborative technologies, specifically pertaining to supply chain management, are examined. It is difficult to pick up a trade magazine today without encountering the words 'trust' or 'collaboration' in relation to current business practices, particularly in the context of information sharing via the Internet. A collaborative approach to business, we are told, allows for much greater efficiencies along the supply chain and therefore greater customer satisfaction. As counterintuitive as this may seem to our competitive and independent sensibilities, we are further instructed that unless businesses quickly embrace this new paradigm they will not be able to compete in the emerging digital economy. Today’s business transactions have become increasingly dependent on the exchange of information between the various links along the supply chain, which may include suppliers, customers, and even outright competitors. As opposed to the traditional approach, in which organizations cautiously guard their separate 'silos' of information, companies today are being forced to embrace a major transformation, where organizational boundaries are being eroded and information is ‘visible’ across the entire supply chain. 
All of this is made possible, of course, due to the emergence of the ubiquitous digital infrastructure known as the Internet.
Intent to Leave Among Geographically Isolated Branch Office Employees: An Empirical Study
Dr. Philip W. Morris, Sam Houston State University, Huntsville, TX
Dr. N. Ross Quarles, Sam Houston State University, Huntsville, TX
Dr. Colbert Rhodes, University of Texas--Permian Basin, Odessa, TX
The Price-Mueller Job Satisfaction/Intent to Leave model has been used on a number of different populations. The Price-Mueller model is a causal model in which the job determinants and pay are estimated to affect intent to leave indirectly, through job satisfaction, as well as directly. The job determinants include Participation, Distributive Justice, Instrumental Communication, Promotional Opportunities, Integration, and Routinization. In order to test the robustness of the model, this study has applied it to a new population and geographic region. White-collar clerical, semi-professional and professional employees in West Texas were surveyed. The results, for the most part, conform to and support the usefulness of the Price-Mueller model. However, Participation had a direct effect on Intent to Leave rather than an indirect effect through Job Satisfaction. This suggests that the role of Participation in the Price-Mueller model needs to be studied further. This study represents a further test of the model on a population and region of the United States not previously examined. The model emerged out of Price and Mueller's years of empirical and theoretical research (Price, 1977; Price and Bluedorn, 1979; Price and Mueller, 1981, 1986; Mueller and Price, 1990; Mueller, 1994). The Price-Mueller model has been used in the United States on such professional and semi-professional employees as nurses (Price & Mueller, 1981, 1986; Gurney et al., 1997), all categories of hospital employees (Blegen et al., 1988), dental hygienists (Mueller et al., 1994), and physicians (Kim et al., 1996). In this study, the survey was administered to a sample of clerical, semi-professional and professional employees of the regional branches of two national companies located in West Texas. The Price-Mueller model is influenced by expectation theory. 
Beliefs about the nature of the work environment are referred to as expectations, and preferences for specific actions are referred to as values. Vroom (1964) initiated research using expectation theory; later proponents were Porter and Steers (1973) and Mowday, Porter and Steers (1982). Central to expectation theory is an elaboration of the specific expectations and values employees wish to see fulfilled by an organization; if these are not realized, employees will leave.
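The causal structure the abstract describes, in which a job determinant acts on intent to leave both directly and indirectly through job satisfaction, can be illustrated with a minimal path-analysis sketch. The variable names, data, and coefficients below are hypothetical and are not taken from the study; the sketch only shows how an indirect effect is computed as the product of two regression slopes.

```python
# Illustrative sketch of a Price-Mueller-style path:
# participation -> job satisfaction -> intent to leave.
# Synthetic, noise-free data; all numbers are hypothetical.

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

participation = [1.0, 2.0, 3.0, 4.0, 5.0]
satisfaction = [2 * p for p in participation]           # built so a = 2
intent_to_leave = [10 - 0.5 * s for s in satisfaction]  # built so b = -0.5

a = ols_slope(participation, satisfaction)    # determinant -> satisfaction
b = ols_slope(satisfaction, intent_to_leave)  # satisfaction -> intent
indirect_effect = a * b                       # path-analytic indirect effect

print(indirect_effect)  # -1.0: more participation, lower intent to leave
```

In the study's anomalous finding for Participation, a regression of intent to leave on the determinant would retain a significant slope even after satisfaction is controlled for, which is what distinguishes a direct from an indirect effect.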
Integration of TAM Based Electronic Commerce Models for Trust
Teoh Kung Keat, Multimedia University, Malaysia
Dr. Avvari Mohan, Multimedia University, Malaysia
There have been many studies (Gefen et al., 2003; Pavlou, 2003; Egger, 2003) on user trust in electronic commerce. The relationship between electronic commerce and trust crosses disciplinary lines, drawing from psychology, marketing, interface design, security, social studies, and information technology. Researchers have often approached this topic from a disciplinary perspective. However, due to the nature of electronic commerce and trust, there is a need for these studies to converge in a singular effort to understand how user trust can enhance the acceptance of electronic commerce. This interdisciplinary convergence must be built upon a strong foundation to facilitate efforts to benchmark effectiveness accurately. We propose to use the Technology Acceptance Model (TAM) proposed by Davis (1989) as the foundation of this study of users’ acceptance of electronic commerce. We then review the key research models using TAM for electronic commerce trust from different perspectives and integrate them into a single model. Electronic commerce has long intrigued researchers. The exciting growth of internet use over the years signals the possibilities of reengineering old business models. Yet the secret to the success of any electronic commerce venture can be summed up in one word: trust. In order to explain the factors that persuade consumers to trust electronic commerce sites, researchers have introduced many trust models. These models identify key determinants of building trust in electronic commerce. The basic building blocks for testing user acceptance of these models, however, centered on the Technology Acceptance Model introduced by Davis (1989). While a considerable amount of research examines online consumer trust within different disciplinary confines, little of this research makes an effort to integrate the findings of different disciplines. 
In this article, we examine the work of researchers who have attempted to explain the relationship between consumer trust and electronic commerce by applying the Technology Acceptance Model to find common ground among their respective models. We will then propose a model which will provide a stronger theoretical understanding of consumer trust for electronic commerce. The objective of this article is to mine useful information within the research area and to integrate it into a single model that explains the relationship between consumer trust and electronic commerce.
The Requisite Holism of Information in a Virtual Business Organization's Management (1)
Dr. Vojko Potocan, University of Maribor, Maribor, Slovenia
Dr. Matjaz Mulej, University of Maribor, Maribor, Slovenia
Any successful organization (such as a business) is based on (informal) systems thinking. Such thinking is in fact required by several very influential international organizations, but these organizations do not at present define what they actually mean by systems thinking. We provide an overview of possible versions of systems thinking and our suggestion of a definition. In the second half we suggest that requisite holism should be the criterion for the practical application of systems thinking. We describe and give examples of preconditions for the requisite holism of information in the management of a virtual business organization. Since a virtual business organization is complex, it requires holistic, rather than one-sided, narrowly specialized, thinking in order to be managed well. So does the provision of information to and by its managers and their coworkers. The term holistic thinking is mostly linked quite closely with the science called systems theory as a basis for systemic thinking. Holistic thinking is required by numerous national and important international organizations and nations, such as the United Nations (e.g. for thinking about and managing nuclear weapons, world peace, and sustainable development), the International Standards Organization (e.g. in ISO 9000:2000, which requires business quality to be seen as a system of nine interdependent groups of criteria based on learning and innovation), the European Union (e.g. requiring explicit systems thinking in documents about the promotion of innovation), and the US (requiring consideration of interdependence in the talk given by President Clinton to the UN General Assembly in September 2000) (Ecimovic et al., 2002; Potocan, 2002). Yet none of these documents explicitly defines what its authors and legislators mean by systems thinking. 
Neither do the authors writing about systems theory, virtual business organizations, or information management define what they mean by systems thinking.
Teacher as Leader and Student as Follower: The Implementation of Expectancy Theory in the English Classes of Taiwanese College
Dr. Ping-Yu Wang, Kuang Wu Institute of Technology, Taiwan, R.O.C.
In Taiwan, most college teachers and students still play their traditional roles, which include teaching for academic excellence and driving students toward passing grades. In this study, the relationship between college teachers’ and students’ teaching and learning motivations is investigated, and their traditional roles are challenged through the implementation of principles associated with the well-known expectancy theory (Vroom, 1964). In order to ensure success in implementing expectancy theory, teachers need to become leaders of their students. When teachers become leaders and students become followers, a highly enthusiastic and active learning atmosphere develops in the class. Furthermore, survey results from the author’s previous study are used here to help Taiwanese teachers apply expectancy theory to their teaching. One of the traditions of Taiwanese society is that scholars have the highest status, higher than farmers, engineers, and businessmen. This tradition has shaped the value Taiwanese parents place on their children’s education. They believe that their children can succeed in the future only if they receive a good education; thus a college education, and higher education generally, is highly valued in Taiwanese society. In Taiwan there are about 154 universities and colleges, so almost any high school student can easily continue his or her education. Therefore, to ensure a better future, more and more college students continue their studies in graduate schools. Not surprisingly, the competition for entering Taiwanese colleges and universities is now low, but the competition for graduate schools is very high.
Incorporating Value Judgments into Data Envelopment Analysis to Improve Decision Quality for Organization
Chun-Chu Liu, Ph.D. Candidate, National Cheng-Kung University and Associate Professor, Chang Jung Christian University, Taiwan, R. O. C.
Chia-Yon Chen, Professor, National Cheng-Kung University, Taiwan, R. O. C.
Data Envelopment Analysis (DEA) is a method that uses a mathematical programming model to obtain the relative efficiency of decision-making units (DMUs) and yields an optimal set of weights for the input and output factors. Boussofiane, Dyson, and Thanassoulis (1991) believe that the weights computed in the DEA model have both advantages and disadvantages. The advantage is that the weights generated are fair and equitable, unaffected by subjective factors. The disadvantage is that if the weights are selected intentionally, a DMU may appear relatively efficient, with its efficiency coming not from inherent efficiency but from the selection of weights. Is the relative efficiency obtained by imposing such predetermined weights fair, reasonable, and acceptable? We address this question by integrating the subjective and objective weight-restriction methods, so that the evaluation result can be more realistic, and finally take the garbage disposal teams in the districts of Kaohsiung City in Taiwan as an illustrative example. The DEA method has been commonly used in the efficiency evaluation of multiple inputs and outputs, particularly for non-profit organizations and governmental departments, since Charnes, Cooper, and Rhodes proposed the mathematical programming model in 1978. The method not only considers an organization as a whole but also provides an improvement direction for the decision maker, and can be considered more appropriate than traditional methods such as ratio analysis and regression analysis. However, the present method also has its shortcomings, as Boussofiane, Dyson, and Thanassoulis (1991) noted.
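As a concrete illustration of DEA relative efficiency, consider the special case of a single input and a single output, where each DMU's efficiency reduces to its output-input ratio divided by the best observed ratio. The sketch below uses hypothetical team data; the general multi-input, multi-output case requires solving a linear program per DMU, and the weight-restriction question the paper raises arises precisely because those programs let each DMU pick its own most favorable weights.

```python
# Relative efficiency in the single-input/single-output special case.
# Each DMU's efficiency is its output/input ratio scaled so that the
# best performer scores 1.0. All data are hypothetical.

dmus = {
    "Team A": {"input": 10.0, "output": 20.0},
    "Team B": {"input": 8.0,  "output": 20.0},
    "Team C": {"input": 12.0, "output": 18.0},
}

ratios = {name: d["output"] / d["input"] for name, d in dmus.items()}
best = max(ratios.values())
efficiency = {name: r / best for name, r in ratios.items()}

for name, e in efficiency.items():
    print(f"{name}: {e:.3f}")
# Team B (2.5 units of output per unit of input) defines the frontier
# and scores 1.0; the other teams score below 1.0.
```

With multiple inputs and outputs there is no single ratio, which is why DEA lets weights vary per DMU and why the paper's subjective/objective weight restrictions matter.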
An Investigation of the Relationship of Organizational Structure, Employee’s Personality and Organizational Citizenship Behaviors
Min-Huei Chien, The Overseas Chinese Institute of Technology, Taiwan, R.O.C.
The purpose of this paper is to explain how to improve organizational citizenship behavior (OCB) and how to develop a plan to obtain continual OCB through a formal system and an informal environmental setting in the workplace. OCBs describe actions in which employees are willing to go above and beyond their prescribed role requirements. Some studies have shown that OCBs are positively related to indicators of individual, unit, and organizational performance. This paper focuses on clearly defining the relationship between organizational effectiveness and OCB. It will also discuss the implications of OCB and try to find ways to improve it. Results indicate that a positive work climate, organizational resources, employees’ personalities, organizational culture, and other factors are all related to OCB. This research is important for any business that wants to improve competence and organizational effectiveness: improving OCB is the lowest-cost and best way for businesses to reach organizational effectiveness. The world is looking forward to high-performance organizations that provide high job satisfaction to their employees and also cherish excellence and effectiveness. This could be achieved if we could develop organizational citizenship. Research on organizational citizenship behaviors has been extensive since their introduction around twenty years ago (Bateman & Organ, 1983). The vast majority of OCB research since has focused on the effects of such behavior on individual and organizational performance. There is consensus in the field that organizational citizenship behaviors are salient behaviors for organizational enterprises. However, the antecedents of OCBs are not well established.
Examining the Effect of Organization Culture and Leadership Behaviors on Organizational Commitment, Job Satisfaction, and Job Performance
at Small and Middle-sized Firms of Taiwan
Dr. Li Yueh Chen, Chungchou Institute of Technology, Taiwan
Organization culture has a significant effect on how employees view their organizational responsibilities and their commitment. Leaders affect their subordinates both directly through their interactions and through the organization’s culture. A case can be made that the combination of these influences can create effective organizations with a conscience, or organizations where employees have limited commitment and share fewer values, leading to reduced success. With increasing globalization, greater knowledge of the interaction of these factors in non-western cultures can be beneficial for assessing the effectiveness of current theory as well as benefiting practicing leaders and decision makers. This study examines specific employee behaviors associated with transformational and transactional leadership and how they both moderate and mediate the effects of organizational culture and commitment. Surveys were distributed to 84 Taiwanese manufacturing and service organizations with a total of 1,451 employees. Significant findings are: (1) idealized influence leadership with an innovative culture is positively related to organizational commitment, (2) the mediating effect of organizational commitment in the relationship between transformational leadership behaviors and job satisfaction is not influenced by the organizational culture, and (3) organizational commitment mediates the relationship between transformational leadership behaviors and job performance in supportive and bureaucratic cultures. A survey of most admired companies conducted by Fortune indicated that the CEO respondents believed corporate culture was their most important lever in enhancing this key capability (Anonymous, 1998). Given the importance of organizational culture and its effect on organizational outcomes, it is currently one of the hottest business topics in both academic research and the popular business press. 
Today’s business leaders are confronted with frequent unpredictable challenges, which require a high degree of flexibility on their part. Recent organizational crises have emphasized the need for leadership and personal commitment from organizational decision makers which, then, become more critical for organizational success (Earle, 1996).
An Investigation of the Diffusion of Online Games in Taiwan: An Application of Roger’s Diffusion of Innovation Theory
Dr. Julian M. S. Cheng and Leticia L. Y. Kao, National Central University, Chung-Li City, Taiwan
Julia Ying-Chao Lin, Tainan Woman’s College of Arts and Technology, Tainan County, Taiwan
Online games have shown the potential to grow from a small to a major portion of the global entertainment sector. The increasing maturity of broadband technology and infrastructure development will facilitate this growth. In these circumstances, an investigation of the diffusion of online games in a social system will provide some insight into gamer behaviors and further the development of online games in the near future. Nevertheless, a review of the literature reveals a lack of research on the subject. In this paper, therefore, an investigation of the diffusion of online games in Taiwan is conducted. Rogers’ (2003) Diffusion of Innovation model (DOI) is applied in the investigation. Cluster analysis is utilized to divide the current diffusion stages of online games into innovators, early adopters, and the early majority. The differences in attributes and characteristics among the three categories are assessed. Implications based on the research findings are discussed, and future research suggestions are also provided. Online games, a form of interactive electronic games rooted in the Internet, have shown the potential to grow from a small to a major portion of the global entertainment sector. Global sales revenue was estimated to rise to second place in the total game market in 2003 (IDC, 2003). In addition, numerous breakthroughs in technology and broadband infrastructure development will further facilitate this growth (Fu, 2003). Hence, studies on the diffusion of online games within a social system, providing better insight into online gamer profiles, have become essential. A review of the literature reveals only a limited number of academic papers on the subject (e.g. Morahan-Martin and Schumacher, 2000; Kim et al., 2002; Choi and Kim, 2004). 
In order to shed more light on this subject, an attempt has been made to study the diffusion of online games in Taiwan, the second largest market worldwide (IDC, 2003). Rogers’ (2003) Diffusion of Innovation model (DOI hereafter), a frequently adopted theory for explaining innovation diffusion across various products, is applied for the study. The current online game diffusion stage is investigated, and online gamer profiles within each stage are revealed. Because innovation diffusion rates vary across products (Martinez, Polo, and Flavian, 1998), a comparison of players’ innovative attitudes toward online games and toward products in general is also conducted to reveal possible reasons.
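For readers unfamiliar with Rogers' adopter categories, the standard DOI rule assigns each adopter according to how many standard deviations their adoption time falls before or after the mean adoption time: innovators earlier than mean minus two standard deviations, early adopters between two and one standard deviations early, the early majority between one standard deviation early and the mean, and so on. A minimal sketch of that rule on hypothetical adoption times follows; note that the study itself formed its three categories by cluster analysis rather than by these fixed cut-offs.

```python
import statistics

def rogers_category(t, mean, sd):
    """Classify one adopter by adoption time, per Rogers' standard cut-offs."""
    z = (t - mean) / sd
    if z <= -2:
        return "innovator"
    if z <= -1:
        return "early adopter"
    if z <= 0:
        return "early majority"
    if z <= 1:
        return "late majority"
    return "laggard"

# Hypothetical months-since-launch at which each gamer adopted.
times = [1, 2, 3, 5, 6, 6, 7, 8, 9, 10, 11, 12, 14, 18, 24]
mean = statistics.mean(times)
sd = statistics.pstdev(times)

categories = [rogers_category(t, mean, sd) for t in times]
print(categories.count("early majority"), "gamers fall in the early majority")
```

Cluster analysis, as used in the paper, instead lets the data determine the group boundaries, which is useful when the diffusion curve departs from the normal shape Rogers' cut-offs assume.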
Reflections on Academic Misconduct: An Investigating Officer’s Experiences and Ethics Supplements
Dr. Ceil Pillsbury, University of Wisconsin—Milwaukee, WI
One of the inherent core elements of our American system of higher education is the community of trust which exists between professors and students. Professors trust that students will honestly complete the assigned work and rely strictly on their own ability when taking examinations. The unfortunate reality is that research indicates academic misconduct has become a persistent and growing problem in the American system of higher education. This, of course, destroys the trust that plays such an important role in fostering the educational process and allowing learning to flourish. This paper discusses my experiences as the academic misconduct investigating officer of a large Midwestern state university. I include lessons that I have learned in this capacity and offer teaching aids that I have developed to enhance students’ awareness of the problem and to enhance their ethical development. These aids have been used by me and others and appear to be effective in reducing the occurrence of academic misconduct and in encouraging students to focus on ethical values. The New York Times calls it a “plague”. Campus administrators across the country call it out of control. Many say it will lead to the downfall of academe (Warning 2003; Howard 2001; Mullins 2001). What I am referring to, unfortunately, is the tidal wave of academic misconduct that is sweeping the country. Like an epidemic, an insidious presence of cheating is undermining one of the basic tenets of the American higher education system: that learning occurs in an environment of trust between professor and student. Students trust that professors will remain current in their fields, select useful learning materials, provide fair evaluation procedures and maintain appropriate relationships with their students. Professors trust that students will attend class, study diligently and not engage in academic misconduct.
Deregulation and Globalisation: Process, Effects and Future Challenges to Air Transport Markets
Dr. Zhi H. Wang, Charles Sturt University, Australia
Through a sequence of examinations, the research identifies that the liberalisation process of air transport markets has differed across regions, and that this has affected the configuration of strategic airline alliances. Deregulation and strategic alliances, which have contributed to the globalisation of the air transport market to the benefit of travellers' welfare, carriers, and each country's economic development, should be pursued simultaneously in reducing the existing barriers to entry into the regulated air transport market. However, increased complexity in international business environments stems from a number of sources that pose long-term issues, challenging both the globalisation process and its approaches. Future air transport market consolidation or fragmentation will therefore have implications for airline operations as well as for State governance. With conflict still ongoing, the full impact of September 11th is difficult to discern at this stage. After September 11th, some economists indicated that the world had indeed changed, and in significant ways. The airline industry as a whole sustained a massive impact from the events of that day, when aircraft of both American Airlines (AA) and United Airlines (UA) were hijacked and used in the terrorist attacks. The attacks caused a dramatic decrease in passenger numbers and in the flight frequency of airline services. The immediate problems posed to airline companies after September 11th included the higher costs associated with new airline security directives, the companies’ ability to raise additional financing, the cost of such financing, and the price and availability of jet fuel. These problems were followed by airlines cutting capacity, grounding aircraft and deferring certain aircraft deliveries to future years, sharply reducing capital spending, closing facilities, trimming food service and reducing workforces. 
The continuing Gulf and SARS crises further deteriorated the situation. In an attempt to get SARS-wary passengers back on planes, airlines continually cut airfares (Philling, 2001; O’Toole, 2001). The resulting ‘price war’ causes the carriers to suffer. The characteristics of the airline industry show that any single factor, such as terrorist attacks, war, financial crisis, economic recession, business competition or even SARS, can have a huge impact on airline operations. The idea of airline mergers and acquisitions has been pursued by carriers since the 1990s to reduce risk and increase competitiveness. Strategic airline alliances developed rapidly, and various types of formation appeared in competing air transport markets over the last decade. Airlines are also driven toward strategic alliances by the commercial aspects of international air transport, which have generally been governed by bilateral air treaties between the countries involved. Further, multinational enterprises have been very slow to develop in this sector because of legal restrictions on foreign equity ownership in national carriers and restrictions on the operation of foreign carriers on domestic routes (Staniland, 1997).
Assessing the Health Insurance Literature 1999- 2003: A Citation Analysis
Shih-Chieh Chuang, M.D., Military Pintung Hospital, Graduate School of Management, I-Shou University, Taiwan
The purpose of this study was to use citation analysis to identify major themes and contributors to the health insurance literature over the past five years. Looking for data about health insurance worldwide may be the first step toward resolving the problem. A citation analysis was performed on a database of 326 articles in the interrelated literature, within which more than 10,000 articles were cited from January 1999 through January 2003. A list of search terms was applied to the database of approximately 340 journals. The 20 articles and 20 authors with the most citations were then evaluated by checking each for applicability to health insurance. This study attempts to characterize the health-insurance-related literature of the last five years in order to identify the newest information and trends in health insurance. Citation analysis is a useful tool for identifying important contributions to an interrelated literature and for detecting longitudinal trends in the topics present in a body of science or social science literature. No previous attempt at an exhaustive citation analysis of the field of health insurance has been published. Analogous to systematic reviews of the literature for a specific question, citation analyses have been used, although not extensively, to describe the trends and direction of other literatures. In a search of MEDLINE from 1966 to 2003, relatively few citation analyses of fields related to health insurance were found, covering health policy, emergency medicine, human services, internal medicine, neurosurgery, otorhinolaryngology, pediatrics, hospital management, health economics, law and medicine, etc. (Li & Tsui, 2002; Lin & Tsai, 2002; Smith, 1981; Roy, Hughes & Jones et al., 2002; Cai, 2002; Andrews, 2003; Cui, 1999; Fava, Ottolini & Sonino, 2001; Vishwanatham, 1998; Fang, 1989; Johnson & Leising, 1986). 
Just as systematic reviews begin with a clear definition of search terms, bodies of literature must be focused and well defined to make citation analysis a useful tool for widespread assessment of the health insurance literature (Lin & Tsai, 2002). Using the objective measures of citation analysis and an understanding of the type of literature appropriate for one's purpose of study, researchers, scientists, teachers and students can avoid the pitfalls of information overload created by the proliferation of journals and information technology, while effectively extracting the clinical and basic science knowledge that will aid their reading and research.
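The counting step the study describes — tallying how often each article and author is cited across the database, then ranking the top entries — can be sketched as below. The records, field names, and cited works are invented for illustration; they are not the authors' actual database schema or data.

```python
from collections import Counter

# Hypothetical records: each source article lists the works it cites as
# (author, title) pairs. Names and titles are illustrative only.
articles = [
    {"cites": [("Smith", "Health policy A"), ("Li", "Insurance B")]},
    {"cites": [("Smith", "Health policy A"), ("Cai", "Economics C")]},
    {"cites": [("Smith", "Health policy A"), ("Li", "Insurance B")]},
]

# Tally citations per article and per author across the whole database.
article_counts = Counter(ref for a in articles for ref in a["cites"])
author_counts = Counter(author for a in articles for author, _ in a["cites"])

# Rank the most-cited entries, as in the study's top-20 lists.
top_articles = article_counts.most_common(2)
top_authors = author_counts.most_common(2)
```

Each ranked entry would then be checked by hand for its applicability to health insurance, as the abstract describes.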
A Two-Dimensional Model for Allocating Resources to R&D Programs
Chun-Chu Liu, National Cheng-Kung University, Taiwan, R.O.C.
Chang Jung Christian University, Taiwan, R.O.C.
Professor Chia-Yon Chen, National Cheng-Kung University, Taiwan, R.O.C.
A decision model is developed to help managers select the most appropriate sequences of plans for product research and development (R&D) projects under strict budget and resource constraints. In recent years, many organizations have shifted from a discipline-oriented structure to a focus on integrated programs and related outcomes. For the decision-maker of such high-profile R&D programs, it is critical to understand which activities are the most important, considering both investment feasibility and cost-effectiveness. This paper proposes a two-dimensional decision model that integrates the analytic hierarchy process and data envelopment analysis to perform this essential task. Using the information from these two decision-science tools, the model constructs a two-axis evaluation space for research alternatives. By locating particular activities in this decision space, a program manager can compare and prioritize alternative research investments. A large corporation often has to make decisions on the scope of product R&D projects. The main criteria for project evaluation are budget and resource constraints. The selection of a balanced R&D portfolio that combines corporate goals, resources, and constraints is therefore an important but venturesome task (Islei, 1991). Research portfolio analysis and decision models can be effective tools for promoting organizational participation in complex decision-making. This involvement develops a consensus for, and understanding of, organizational goals and the associated performance metrics. To achieve this goal, decision models should provide managerial information without the distraction of excessive complexity (Howard, 1988). Specifically, models should provide benefits that exceed the difficulty and effort required for model development, use, and maintenance.
This study proposes integrating two complementary decision tools that hold particular promise in the R&D management environment: the analytic hierarchy process (AHP) and data envelopment analysis (DEA). The major concerns of the two-dimensional decision model are comparing and prioritizing alternative research investments and allocating the corporation's resources to the selected projects in the best way. Specifically, the two-dimensional model incorporates the following features.
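The AHP half of such a model can be sketched in a few lines: criterion weights are derived from a pairwise comparison matrix, here by the standard row-geometric-mean approximation of the priority vector. The three criteria and the pairwise judgments are invented for illustration and are not taken from the paper.

```python
import math

# Hypothetical pairwise comparison matrix for three R&D evaluation
# criteria (e.g. feasibility, cost, strategic fit); entry [i][j] is how
# strongly criterion i is preferred over criterion j on Saaty's scale.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]

# Row geometric means, normalized, approximate the AHP priority vector.
geo_means = [math.prod(row) ** (1 / len(row)) for row in pairwise]
total = sum(geo_means)
weights = [g / total for g in geo_means]  # priority weights, sum to 1
```

In the two-dimensional model, each project's AHP priority would supply one axis of the decision space, with its DEA efficiency score supplying the other, so that a program manager can compare alternatives on feasibility and cost-effectiveness at once.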
Organisational Change from Public to Private Sector: A UK-Based Reflective Case Study
Dr. David Cooper, University of Salford, Manchester, England
Informed by the UK Government-led context of ‘Rethinking Construction’ and ‘Commitment to People’, this paper discusses how public sector organisations can be supported in making the transition to managing and behaving commercially while continuing to focus on the provision and continuous improvement of an effective service. The key issues examined include UK political and social responsibility, potential sector differences in internal organisational culture, the role of HEIs in assisting change, the assessment and specificity of training needs, implications for systems and procedural change, and the need for ongoing cultural and climatic development. A work-in-progress case study is used to aid illustration and explanation and to allow scope for reflective commentary. The case describes the factors affecting a UK Metropolitan Borough (Local Government) Council housing division as it attempts to adopt a private sector capital structure and private sector management techniques. The findings will be of interest to managers of public services, regardless of country. In essence, the case chosen is one of four key medium- to long-term interventions in which the University of Salford, Manchester, UK is currently involved. It provides insight into the process of change leveraged by the decision to transfer housing stock from Council to private sector Housing Trust ownership, a change that entails restructuring the new organisation’s financing arrangements and work practices. The process may sound simple, but it affects the very foundation of the organisational culture, ‘the way they do things around there’. In many ways, such organisations need to pull and push their way through a process of organisational metamorphosis. The decision to transfer housing stock normally means that a section of people from a local government body must work within a pseudo-private-sector environment.
The change pressures the organisation and its staff to adopt a different approach, to acquire new skills and knowledge, and to be given the opportunity to exercise them. It would be fair to state that most people are adaptable and that, in time, they can perform effectively in any environment. However, what may be required under the conditions described above is a boost to the process of people and organisational development. This is where Higher Education Institutes (HEIs) can and should help.
Enhancing the Competitive Advantage of Hospitals through Linguistic Evaluation of Customer Perceived Value
Feng-Chuan Pan, Tajen Institute of Technology & I-Shou University, Taiwan
Chi-Shan Chen, Shih Chien University, Kaohsiung Campus, Taiwan
Owing to the struggle to retain customer loyalty in an extremely competitive market, hospitals that offer a full line of healthcare services are striving to generate sufficient revenue for survival. As loyal customers, patients contribute above-average returns at lower cost and with less service effort. To earn this loyalty, hospitals must develop and deliver valuable services consistent with customers’ needs. Using a linguistic approach rather than traditional statistical analysis, as this paper proposes, can more precisely reveal the value attributes perceived by customers. This research is a pioneering value-perception study for healthcare services; it contributes to the industry by providing clear insight for accurately identifying the target customers who are most valuable in the long term. The findings indicate that patients/customers perceive more value from the quality delivered by physician competence than from updated facilities. Personal care and a comfortable atmosphere are more important value attributes than a gorgeous, modern building, and price is, surprisingly, a value attribute as significant as a hospital’s reputation. The hospitals in this research are characterized by diverse value attributes (in terms of the five individual value factors studied). Nevertheless, quality remains the strongest value driver. Physician competence, along with the correctness and speed of emergency services, is the most valued criterion customers seek in a healthcare service, as this research reveals. It can therefore be concluded that a top-rate emergency room staffed with an expert medical team is of utmost importance in making a particular hospital a standout in this industry. Vast environmental changes have brought massive challenges to the healthcare service industry over the past decade. Increasing global integration has made cross-border expansion possible, and this has stiffened the competition. Many times it is the customer/patient’s choice.
For example, hundreds of wealthy Taiwanese patients seek to purchase special services from healthcare institutes in advanced or neighboring countries that they perceive to supply better value. The rapid development and diffusion of healthcare knowledge, such as updated medical and pharmaceutical information that is easily accessible through the Internet, has also made patients much more complex customers to serve. Customers today are more knowledgeable, more demanding, and more concerned about the gap between expectation and experience. We propose that customers are now more savvy and vocal about what they value most, especially since they may possess knowledge comparable to that of their healthcare service providers. It is harder than ever to establish patient loyalty toward healthcare service providers who ignore the value perceived by patients.
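The abstract does not spell out its linguistic method, but one common way to operationalize linguistic evaluation of perceived value is to map verbal ratings onto triangular fuzzy numbers, average them across respondents, and defuzzify by centroid. The sketch below illustrates that generic approach; the scale, attributes, and responses are all assumed for illustration and are not the paper's own.

```python
# Generic five-term linguistic scale mapped to triangular fuzzy numbers
# (lower bound, mode, upper bound). An illustrative assumption, not the
# authors' actual scale.
SCALE = {
    "very low":  (0.0, 0.0, 0.25),
    "low":       (0.0, 0.25, 0.5),
    "medium":    (0.25, 0.5, 0.75),
    "high":      (0.5, 0.75, 1.0),
    "very high": (0.75, 1.0, 1.0),
}

def aggregate(ratings):
    """Average the fuzzy numbers for one value attribute across customers."""
    fuzzies = [SCALE[r] for r in ratings]
    n = len(fuzzies)
    return tuple(sum(f[i] for f in fuzzies) / n for i in range(3))

def centroid(tfn):
    """Defuzzify a triangular fuzzy number to a single crisp score."""
    return sum(tfn) / 3

# Hypothetical customer ratings for two value attributes.
quality = aggregate(["high", "very high", "high"])
price = aggregate(["medium", "high", "medium"])
```

Comparing the crisp scores then ranks the attributes; with the invented data above, quality outranks price, echoing the finding that quality is the strongest value driver.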
The Effects of Web Operational Factors on Marketing Performance
Yuan-shuh Lii, Feng Chia University, Taichung, Taiwan
Dr. Hyung J. Lim, Lizard Tech, Inc.
L. P. Douglas Tseng, Portland State University, Portland, OR
This paper presents the results of empirical research examining the impact of several Web operational factors on marketing performance. Using structural equation modeling, the model reveals three key factors that have a significant effect on Web operational effectiveness: reliability, accessibility, and feature enhancement. Of these three, reliability had the strongest effect on Web operational effectiveness. The accessibility factor had a small, negative effect on online marketing performance and marketing productivity. The feature enhancement factor, defined by entertainment and multimedia experiences, had a strong positive effect on Web operational effectiveness and a modest positive effect on online marketing performance. The size and growth of the Internet and the World Wide Web are phenomenal; still, only a small fraction of the power of the Internet has been harnessed. To realize its true potential, companies must formulate a sound Internet strategy and incorporate it into their long-range plans (Basch, 2000; Hoffman, Novak, & Chatterjee, 1995). Despite the enormous business potential presented by the Web, companies connected through the Internet are seeing only limited success (Preston, 1999). Many industry experts attribute this shortcoming to firms’ inability to understand the medium correctly, which leaves them without the right strategy for using it most effectively (Preston, 1999; Stuck, 1996; Fortune, 1996). The Web has transformed the buyer-seller relationship by tipping the balance of power in favor of consumers. Interactive technology gives marketers a cost-effective way of attracting consumers into one-to-one relationships fueled by two-way communication. The interactive nature of the technology puts the consumer in control (Korgaonkar & Wolin, 1999).
In order to take full advantage of an e-commerce medium such as the Web, firms must first see this market as an interactive, multimedia environment, where “many-to-many” communication occurs, something that is dramatically different from the traditional “one-to-many” communication model of the media. Practitioners and researchers must pay more careful attention to the needs of Web users to explore the potential of this new and different channel (Korgaonkar & Wolin, 1999). Although the field of e-commerce is not a new one, there is a lack of statistical data and empirical research by academicians and business researchers (Huizingh, 2002; Nagendra, 2000; Bloch, Pigneur, & Segev, 1996). The objective of this research is to perform empirical research examining the impact of various Web operational factors on marketing performance, first by identifying the key factors that influence marketing performance and then by investigating the nature and dynamics of the relationships between Web operational factors and performance.
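The paper's structural equation model estimates paths among latent factors; as a greatly simplified stand-in, a single structural path can be sketched as an ordinary least squares slope between two observed composite scores. The variable names and five-site data below are invented for illustration and do not reproduce the study's measures.

```python
def ols_slope(x, y):
    """Slope of y regressed on x; with standardized variables this is
    the path coefficient in a simple path model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Hypothetical composite scores for five Web sites: a reliability rating
# and an overall "Web operational effectiveness" score.
reliability = [1.0, 2.0, 3.0, 4.0, 5.0]
effectiveness = [1.2, 2.1, 2.9, 4.2, 5.1]

# A positive slope mirrors the paper's finding that reliability has the
# strongest positive effect on Web operational effectiveness.
path = ols_slope(reliability, effectiveness)
```

A full structural equation model would add measurement models for each latent factor and estimate all paths simultaneously; this sketch shows only the direction-of-effect logic.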
Globalization Effects, Co-Marketing Alliances, and Performance
Dr. Amonrat Thoumrungroje, Assumption University, Thailand
Dr. Patriya Tansuhaj, Washington State University
Drawing from the marketing, international business, and strategic management literature, this paper proposes a conceptual framework for investigating the relationships among globalization effects, the degree of cooperation in co-marketing alliances, and international marketing performance. The paper explores how global market opportunities and threats, the two major effects of globalization, influence the degree of cooperation in international marketing activities among firms participating in co-marketing alliances. It also emphasizes how firms can enhance their performance in the global marketplace through increased cooperation in such alliances. Globalization refers to the process of increasing social and cultural interconnectedness, political interdependence, and economic, financial, and market integration (Eden and Lenway, 2001; Giddens, 1990; Molle, 2002; Orozco, 2002). During the past two decades, globalization has caused dramatic changes in business around the world, yet few studies have investigated how globalization actually affects firms and how firms respond to those effects. This paper therefore examines such effects by focusing on how globalization influences the degree of a firm’s international marketing cooperation, which ultimately affects its international marketing performance. With the emergence of global market opportunities and global market threats, firms have been forced to respond quickly. Unlike other environmental changes, the effects of globalization are far more pervasive, affecting every individual, business, industry, and country (Garrette, 2000). The environment surrounding business today is characterized as “hypercompetitive”: a faster and more aggressive competitive environment (D’Aveni, 1994).
Major forms of business restructuring in response to the dramatic changes brought by globalization include, for example, investments in new technologies, downsizing and reengineering, the formation of strategic alliances and networks, and a shift from international and multinational to global and transnational strategies (Jones, 2002). Among these various forms of business restructuring designed to manage globalization effects, alliance formation is considered the most remarkable business trend in the past decades (Hwang and Burgers, 1997). Therefore, it is of interest to both academics and practitioners to explore how alliances help firms achieve superior international marketing performance in the globalization era.
The Store Loyalty of the UK’s Retail Consumers
Dr. Sudaporn Sawmong and Dr. Ogenyi Omar, South Bank University, London, UK
Multinational grocery retailers from countries such as the United States, the Netherlands, and Japan are expanding their retail businesses into the UK. While some of these retailers are very successful, others are not. To increase the likelihood of success, retailers must understand the consumer behavioural processes that affect the performance and competitive position of most retailers. Most retailers would like to have a hard core of loyal customers who continue to frequent their outlets. Generally this is achieved, but the questions remain whether there are enough of these customers and whether they are the right customers (Sullivan and Dennish, 2002). With so many retailers in today’s marketplace, it is necessary to create, evaluate, and retain customer loyalty. This paper aims to measure the store loyalty of UK grocery retail consumers, taking into consideration differences in store loyalty among retail consumers, with a theoretical framework based on Oliver’s four-stage loyalty model (cognitive, affective, conative, and action). The measurement applied a twenty-eight-item questionnaire to identify these stages, and the mean average technique was applied to the resulting scores. A behavioural perspective on loyalty first appeared in the 1970s, after a period in which the majority of researchers measured loyalty as a pattern of repeat purchasing (Oliver, 1997). Omar (1999) emphasised that store loyalty is the single most important factor in retail marketing success and store longevity. He further observed that without loyalty toward the retail organization, the competitive advantage for which retail management strives does not exist, and the store is likely to be unsuccessful.
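The mean average technique the abstract describes — grouping questionnaire items by Oliver's four loyalty stages and averaging each group — can be sketched as follows. The item groupings and responses are invented for illustration; the actual study used twenty-eight items.

```python
# Hypothetical 5-point Likert responses, grouped by Oliver's four
# loyalty stages (a subset of items, for illustration only).
responses = {
    "cognitive": [4, 5, 3, 4],   # beliefs about the store
    "affective": [5, 4, 4, 5],   # liking of the store
    "conative":  [3, 4, 3, 3],   # intention to repurchase
    "action":    [4, 4, 5, 4],   # actual repeat patronage
}

# Mean score per stage: the "mean average technique".
stage_means = {stage: sum(v) / len(v) for stage, v in responses.items()}

# The stage with the highest mean suggests where consumers sit in the
# cognitive -> affective -> conative -> action loyalty sequence.
strongest = max(stage_means, key=stage_means.get)
```

Comparing the four stage means across consumer groups would then reveal differences in store loyalty of the kind the paper sets out to measure.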
Copyright 2000-2016. All Rights Reserved