The Journal of American Academy of Business, Cambridge
Vol. 10 * Num. 2 * March 2007
The Library of Congress, Washington, DC * ISSN: 1540 – 7780
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double blind peer review process.
The primary goal of the journal is to provide business-related academicians and professionals from various fields around the world with a single venue in which to publish their work. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities both for publishing researchers' papers and for viewing others' work. The Journal of American Academy of Business, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations, to ensure that our publications provide authors with venues recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread before submission; a service such as www.editavenue.com may be used for professional proofreading and editing.
The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. The e-mail: email@example.com; Journal: JAABC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright 2000-2017. All Rights Reserved
Dr. Canri Chan, Monterey Institute of International Studies, CA
This paper examines profit shifting, focusing on the influence of tax effects on the transfer pricing decision. The experimental results show that decision makers considered the profit effects of tax rates. Nevertheless, while influenced by tax rates, decision makers did not optimize overall corporate profitability. The findings suggest that managers seeking transfer pricing decisions that optimize profits may need to adopt incentive schemes that induce profit-optimizing behavior in the selection of transfer prices. One major issue of interest in studies of transfer price choice has been whether multinational corporations (MNCs) manage earnings across subsidiaries in disparate tax jurisdictions by manipulating transfer prices and thereby shift profits among divisions and subsidiaries. Despite the seemingly obvious benefits of “tax shifting” via transfer pricing, prior research in this vein has yielded inconclusive results, and understanding why has proved elusive. Several transfer pricing studies using extant databases have found inconsistent results as to whether multinational companies minimize worldwide tax liabilities (e.g., Chan and Chow 1997; Crain and Stitts 1994; Harris 1993; Klassen et al. 1993; Shackelford 1993). Some empirical studies provided evidence that gross profit margins were significantly lower for foreign corporations in low tax jurisdictions, just the opposite of expectations, whether or not the foreign corporations operated through U.S. subsidiaries (Crain and Stitts 1994). Other studies found similar results in developing countries. For instance, Chan and Chow’s (1997) results revealed that MNCs shifted income out of China even though their home countries’ income tax rates were much higher than their income tax rates in China.
This study examines how environmental factors, particularly tax rates, affect corporate profitability, which in turn affects decision making regarding international transfer price choice. An experimental accounting setting was used to provide a better understanding of the cause-and-effect relationship. The study and its findings make an important contribution toward explaining how tax rates affect individual decision making regarding international transfer price choices. The findings also shed further light on prior inconsistent results on whether MNCs minimize worldwide tax liabilities in order to maximize overall corporate profits. The findings show that decision makers were concerned with overall corporate profitability and managed corporate earnings by sourcing profits in low versus high tax jurisdictions via transfer pricing decisions. However, contrary to transfer pricing survey studies, individuals did not necessarily maximize overall corporate profitability in their decision making. Regarding the organization of this article, Section II covers the hypotheses development, Section III presents the experimental method, and Section IV discusses statistical results. Section V provides the discussion and implications of the study, followed by a discussion of limitations and recommendations for future research. Several empirical studies have attempted to provide evidence regarding multinational corporations' income shifting in response to varying tax rates (Harris 1993; Scholes et al. 1992; Grubert and Mutti 1989). Borkowski (1997) and Stoughton and Talmor (1994) noted that multinational corporations have an incentive to shift income from high-tax countries to low-tax countries in order to reduce tax liabilities.
As noted in the introduction, prior studies hypothesized that MNCs should minimize overall worldwide tax liabilities by shifting income from high tax jurisdictions to low tax countries through transfer pricing (Stoughton and Talmor 1994; Klassen et al. 1993; Scholes et al. 1992; Grubert and Mutti 1989). However, these findings were mixed. Shackelford (1993) argued that the results on whether multinational corporations shifted income into the U.S. after the Tax Reform Act of 1986 in order to minimize worldwide tax liabilities, as reported by Harris (1993) and Klassen et al. (1993), were inconclusive and puzzling. Klassen et al. (1993) noted possible explanations for their inconsistent findings. They suggested that the reversal of predicted income shifting into the US in 1987-1988 was due to the reduction of income tax rates in other countries in 1988. “Alternatively, the nontax costs of shifting income from the non-U.S. operations may have been larger than anticipated and it was only after the change was made in 1987 that the full extent of these costs was realized” (Klassen et al. 1993, 172).
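The tax-shifting mechanism these studies test can be illustrated with a small worked example. The sketch below is entirely hypothetical (the two-entity structure, quantities, and tax rates are illustrative assumptions of ours, not drawn from any of the studies cited): a parent in a high-tax jurisdiction sells to a subsidiary in a low-tax jurisdiction, and the transfer price determines where the profit is booked.

```python
def consolidated_after_tax_profit(transfer_price, qty, unit_cost,
                                  resale_price, t_parent, t_sub):
    """After-tax profit of a hypothetical two-entity MNC.

    The parent manufactures at unit_cost and sells qty units to the
    subsidiary at transfer_price; the subsidiary resells at resale_price.
    The transfer price only moves profit between the two tax jurisdictions;
    pre-tax consolidated profit is unchanged.
    """
    parent_profit = (transfer_price - unit_cost) * qty
    sub_profit = (resale_price - transfer_price) * qty
    return parent_profit * (1 - t_parent) + sub_profit * (1 - t_sub)

# Parent taxed at 40%, subsidiary at 15%: booking more of the profit in
# the low-tax subsidiary (via a lower transfer price) raises the
# consolidated after-tax result.
low = consolidated_after_tax_profit(60, 1000, 50, 120, 0.40, 0.15)
high = consolidated_after_tax_profit(100, 1000, 50, 120, 0.40, 0.15)
```

Under these assumed rates the lower transfer price yields the higher consolidated after-tax profit; this is precisely the optimizing behavior that, per the experimental findings above, decision makers did not consistently exhibit.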
Purchasing Power Parity, Inflation and Central Bank Independence: A Panel Cointegration Test of Emerging Economies
Dr. Hermann Sintim-Aboagye, Montclair State University, NJ
Dr. Chandana Chakroborty, Montclair State University, NJ
Using a panel cointegration framework, this paper investigates the possible effect of the inverse relationship between measures of central bank independence (CBI) and the level and stability of inflation on the long-run relationship between prices and exchange rates. The study covers 26 emerging economies from 1970 to 2000. Results of cointegration tests involving all 26 countries fail to reject the no-cointegration null hypothesis. However, on an individual-country basis, 54% of the countries significantly confirm the long-run relationship between prices and exchange rates. Notably, most of these countries experienced high inflation over the period of study. Panel results for the sub-divided groups of high and low CBI countries fail to reject the no-cointegration null hypothesis as well. However, on a country-by-country basis, 75% of countries in the low CBI group provide evidence in support of cointegration of prices and exchange rates. In contrast, cointegration is supported by only 36% of the high CBI countries. These outcomes support the central hypothesis of this paper that measures of CBI probably influence the long-run relationship between prices and exchange rates. A growing literature in economics and finance has drawn considerable attention to emerging economies. Bekaert and Campbell (2002) provide an overview of the existing research on emerging economies and opportunities for the future. An area with little exposure in the literature is the reform or transformation of monetary policy institutions and its impact on the behavior of financial and economic variables. This study contributes to closing this apparent gap in the literature on the dynamics of emerging economies by investigating the influence of varying measures of central bank independence on the Purchasing Power Parity hypothesis (PPP hereafter).
Specifically, we investigate whether the degree of CBI affects the long-run relationship between price levels and exchange rates. Existing theoretical and empirical work provides support for an inverse relationship between measures of central bank independence and the level and stability of inflation. In effect, higher levels of CBI are associated with low and stable inflation, and vice versa. Published work by Alesina and Summers (1993), Cukierman, Webb, and Neyapti (1992), Neyapti (2003), Diana and Sidiropoulos (2004), Down (2004), and Siklos (2004), among others, provides empirical evidence for this inverse relationship between CBI levels and inflation rates. Evidence on the relationship between relative prices and exchange rates, however, is mixed. While the literature provides some support for the PPP hypothesis in the mid- to long-run time frames (Dornbusch 1980; Enders and Dibooglu 2001), support for the hypothesis in the short run is especially weak. As an extension of existing studies, this paper attempts to bring the two related phenomena of CBI and the PPP hypothesis together. Specifically, this study examines how the interaction of the CBI relationship with the behavior of general prices affects the test results of the PPP hypothesis in twenty-six emerging economies from 1970 to 2000 (1) (countries are listed in Table 1). Also, given the influence of CBI on price changes, this paper reexamines the view in the literature that the PPP hypothesis appears to hold more strongly in high- rather than low-inflation economies (Melvin 1992; Mahdavi and Zhou 1994; Zhou 1997). If the latter holds, we expect to see more evidence of PPP in low CBI countries with relatively high inflation than in high CBI economies. The paper employs the relatively new and innovative panel cointegration procedure proposed by Pedroni (1995, 1999).
Unlike existing cointegration procedures for PPP tests, this approach allows for simultaneous testing of a group of countries without homogenizing the vector of cointegration estimates between prices and exchange rates across countries. In essence, it reaps the economies of group estimation while allowing, and capturing, the peculiarities of how each country's variables interact with each other in transition (Pedroni, 2001). To the extent that these transitional dynamics affect the long-run equilibrium, this approach appears to be more effective at using information relevant to testing the cointegration of prices and exchange rates. In line with existing work, this study employs Cukierman's turnover rate (TOR) as the measure of CBI. TOR measures the rate of change of the chief executive of a nation's central bank. Turnover rates are considered more appropriate gauges of the practical degree of central bank independence in relatively new democracies and developing economies than 'legal' measures of CBI, because in emerging economies actual operational independence tends to deviate from the 'legal' independence indicated in the central bank charter. Empirical results confirm this observation (Cukierman 1992).
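The country-by-country leg of such an analysis can be sketched with a simple two-step, Engle-Granger-style residual check (the panel machinery of Pedroni's procedure is considerably more involved and is not reproduced here). The data below are synthetic, and the residual AR(1) coefficient is used only as an informal indicator of mean reversion, not as a formal test statistic with proper critical values:

```python
import numpy as np

def eg_residual_ar1(p, s):
    """Two-step residual check for a long-run relationship between p and s.

    Step 1: OLS regression of (log) prices on (log) exchange rates.
    Step 2: AR(1) coefficient of the residuals. Values well below 1
    suggest the residuals revert to the regression line, consistent
    with cointegration; values near 1 suggest a spurious regression.
    """
    X = np.column_stack([np.ones_like(s), s])
    beta, *_ = np.linalg.lstsq(X, p, rcond=None)
    resid = p - X @ beta
    return np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)

rng = np.random.default_rng(0)
T = 2000
s = np.cumsum(rng.normal(size=T))        # random-walk exchange rate
p_coint = 0.9 * s + rng.normal(size=T)   # tied to s by a long-run relation
p_indep = np.cumsum(rng.normal(size=T))  # unrelated random walk

rho_c = eg_residual_ar1(p_coint, s)      # near 0: residuals revert
rho_i = eg_residual_ar1(p_indep, s)      # near 1: no long-run relation
```

Running this check separately for each country, and counting the share of countries whose residuals revert, mirrors the individual-country tallies (54%, 75%, 36%) reported above.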
Zero-Investment Trading Strategies and the Measurement with Recognition of Short-Selling Constraints: A Theoretical Perspective
Dr. Yan Xiong, California State University Sacramento, Sacramento, CA
Dr. Charles Davis, California State University Sacramento, Sacramento, CA
This paper reviews the zero-investment trading strategy literature and the measurement of returns using this strategy. A zero-investment trading strategy typically involves forming a long portfolio in one set of securities and a short portfolio in another, both identified by the same trading rule. The difference in the two portfolios’ abnormal returns represents an unbiased estimate of the economic profitability of the strategy under the assumption of perfect markets, and thus serves as an indication of market efficiency. The paper also addresses the failure of previous research to consider the constraints on short selling imposed by the Federal Reserve Board under the Securities Exchange Act of 1934 when measuring abnormal returns on zero-investment trading strategies. Lastly, the paper proposes modifications to the CAPM to measure abnormal returns of zero-investment trading strategies by incorporating those short-selling constraints. A zero-investment trading strategy consists of taking a long position in one set of securities and a short position in another. The two sets are identified using a trading rule: one set is expected to have positive abnormal returns, and the second is expected to have negative abnormal returns. In an ideal setting, the proceeds from the short sale are used to purchase the securities held in the long position, so the strategy involves no net investment (“zero investment”). Investors earn positive returns on this strategy through any combination of price decreases on the short positions and price increases or dividends on the long positions. Zero-investment trading strategies are commonly applied in academic research to test various market inefficiencies. The first purpose of this paper is to review some important studies in the finance and accounting areas that employ zero-investment trading strategies to test market inefficiency issues.
Moreover, this paper reviews the different measurements of abnormal returns of zero-investment trading strategies and the controversial issues surrounding their measurement. Lastly, this paper focuses on one of the most important market frictions affecting the implementation of zero-investment trading strategies: the failure to recognize the constraints imposed by the Federal Reserve Board on short selling. This paper proposes modifications to the CAPM to measure abnormal returns of zero-investment trading strategies by incorporating a procedure developed by Alexander (2000) to recognize the short-selling constraints. The seminal work on the zero-investment trading strategy is the De Bondt and Thaler (1985) study of market overreaction. Motivated by the Kahneman and Tversky (1982) study in experimental psychology demonstrating that individuals tend to overreact to unexpected and dramatic events, De Bondt and Thaler (1985) form 16 sets of offsetting portfolios. Their portfolios are labeled the winner portfolios and the loser portfolios, based on market performance in a series of nonoverlapping three-year formation periods between January 1930 and December 1977. For each of the 16 nonoverlapping three-year periods, firms whose market performance is ranked in the top decile are assigned to the winner portfolio, and firms in the bottom decile are assigned to the loser portfolio. Winners are sold short on the expectation that the prices of those securities will decline; losers are purchased long. The difference between the raw returns of the winner and loser portfolios is then computed over a three-year observation period. The authors found that the average return attributable to their zero-investment strategy exceeded eight percent per year during the observation period.
The authors concluded that the securities with the worst formation-period performance outperformed the securities with the best formation-period performance over the next three years. Their results support the market overreaction hypothesis: the market overreacts to news, so winners tend to be overvalued and losers undervalued, and an investment strategy of buying recent losers and selling recent winners short will be successful. Using a single unifying framework to analyze sources of profits for a wide spectrum of trading strategies implemented in the literature over the 1926-1989 period, with holding periods ranging from one week to 36 months, Conrad and Kaul (1998) find patterns consistent with those of De Bondt and Thaler (1985). In their review of the current debate on market efficiency, Merton and Mason (1985) consider the work of De Bondt and Thaler noteworthy because it represents a first attempt to formally test theories of cognitive misperception as applied to the general stock market.
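The portfolio formation behind this strategy can be sketched in a few lines. This is a stylized illustration with made-up return vectors, not De Bondt and Thaler's data or their exact ranking procedure:

```python
import numpy as np

def zero_investment_return(formation_ret, holding_ret, decile=0.10):
    """Return on a long-loser / short-winner decile strategy.

    Securities are ranked on formation-period returns; the bottom decile
    (losers) is bought and the top decile (winners) is sold short, so the
    short-sale proceeds finance the long side (zero net investment).
    """
    n = len(formation_ret)
    k = max(1, int(n * decile))
    order = np.argsort(formation_ret)
    losers, winners = order[:k], order[-k:]
    return holding_ret[losers].mean() - holding_ret[winners].mean()

# If prices overreact and then partially reverse, past losers outperform
# past winners in the holding period, and the strategy earns a positive
# return (synthetic mean-reverting returns below).
rng = np.random.default_rng(1)
formation = rng.normal(0.0, 0.30, size=200)
holding = -0.3 * formation + rng.normal(0.0, 0.05, size=200)
strat = zero_investment_return(formation, holding)
```

If holding-period returns were instead unrelated to formation-period returns, the expected strategy return would be zero, which is why a persistently positive return is read as evidence against market efficiency.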
Duality of Alliance Performance
Dr. Noushi Rahman, Lubin School of Business, Pace University, New York, NY
While alliance research has proliferated and branched out into several areas over the past decade, alliance performance remains a misunderstood and little-studied area. A review of alliance performance suggests that it comprises two elements: goal accomplishment and relational harmony. Both are necessary to ensure alliance performance. This paper reviews four theoretical streams in organization research that are relevant to alliance performance. Evidently, extant research has attended to alliance relationship management much more than to alliance goal accomplishment. This review highlights the need to extend existing theoretical streams in certain directions to further explain alliance performance. The literature on strategic alliances has flourished tremendously over the past decade. Strategic alliances are enduring, yet temporary, interfirm exchanges that member firms join to jointly accomplish their respective goals. In his review of the state of the alliance literature, Gulati (1998) identified five avenues along which the literature has spread: formation, governance, evolution, performance, and performance consequences. Of these five paths, research on alliance performance has received the least attention: “the performance of alliances remains one of the most interesting and also one of the most vexing questions” (Gulati, 1998: 309). Strategic management research is generally geared toward better performance of the firm. While conceptualizing and measuring firm performance is quite straightforward, the involvement of more than one firm and the permeable boundary of the alliance entity (with the exception of joint ventures) make conceptualizing and measuring alliance performance a messy and daunting task. Performance of an alliance is conceptualized as the extent to which member-specific goals are accomplished by the alliance. However, alliance members may find it difficult to work with each other because of a lack of trust and the threat of opportunism.
Consequently, an alliance may fail to perform despite its ability to accomplish alliance-specific goals. Given the importance of maintaining a good working relationship between partner firms, many studies have focused on relational issues arising within alliances. Ironically, as will become evident toward the end of the paper, the current state of strategic management research seldom focuses on the goal-accomplishing, or task-oriented, aspect of alliance performance. The purpose of this article is to review how major theoretical streams in organization management research explain alliance performance and how these theories can be extended to further our understanding of it. The paper is divided into four parts. First, I delineate the nature of alliance performance. Second, I review major theoretical streams in organization management as they pertain to alliance performance. Third, I discuss the research implications of this paper. Finally, I describe how alliance managers can benefit from the theoretical conclusions drawn here. Alliances are unique in that they are the only form of economic organization that requires maintaining a relationship in addition to concentrating on performance issues. Independent firms, and firms engaged in spot transactions, do not have to maintain relationships. This peculiarity of alliances has drawn tremendous research attention to the topic.
Therefore, it is not surprising that lately the majority of research seems to be focusing on relational angles of alliances, such as trust (Gulati, 1995; Perry, Sengupta and Krapfel, 2004), relational risk (Delerue, 2004; Nooteboom, Berger and Noorderhaven, 1997), opportunism (Parkhe, 1993; Provan and Skinner, 1989; Brown, Dev and Lee, 2000), commitment (Gundlach, Achrol and Mentzer, 1995; Perry et al., 2004), reciprocity (Kashlak, Chandran and Di Benedetto, 1998; Wu and Cavusgil, 2003), relational capital (Heide, 1994; Kale, Singh and Perlmutter, 2000), and relational quality (Arino, de la Torre and Ring, 2001). While the relational issues are critical to alliance effectiveness, another critical element of alliance performance is goal accomplishment. Existing theoretical streams explain alliance performance in terms of either relationship maintenance or goal accomplishment. Of course, conceptualizing alliance performance is different from measuring alliance performance, which can take various paths as well. To avoid the mess of explaining relational and goal-based conceptualization of alliance performance, scholars have adopted alliance satisfaction as a measure of alliance performance (Habib and Barnett, 1989; Killing, 1983; Lui and Ngo, 2005). Alliance satisfaction is, however, reflective of more than just alliance performance. In the words of Hatfield, Pearce, Sleeth and Pitts (1998: 368): “Because the respondents were those individuals in the partner firm who were closest to the joint venture operation, the positive relationship between partner satisfaction and JV survival may reflect a bias for maintaining one’s sphere of influence and power.”
Valuation of a Bank Credit-Card Portfolio
Dr. Riaz Hussain, University of Scranton, Scranton, PA
This paper presents a simple model of the valuation of a portfolio of credit cards held by a bank. Using discounted cash-flow analysis, the model takes into account various factors that may influence the value of the portfolio, including the balances on the cards, fees and penalties, interest rates, and cardholder default rates. The model is then tested using actual data. First issued in 1950, the Diners Club card was the forerunner of the modern credit card. It carried the names of 28 New York restaurants where customers could charge food and drink and receive a bill at the end of the month. Credit cards have since become a permanent fixture on the national scene. At the end of 2004, Americans carried 657 million bank credit cards (4). Some of the largest banks have millions of cards in the hands of cardholders. With 88 million credit cards, JPMorgan Chase is the nation's largest issuer, with $134.7 billion of outstanding loans (15). Some of the other large portfolios belong to Citigroup ($115 billion) and MBNA ($83.5 billion) (9). In 2005, Bank of America acquired MBNA. There is fierce competition among card issuers. Having saturated the adult population, banks are now offering credit cards to students and young adults. To gain customers, most card issuers have dropped annual fees and are offering promotional rates as low as 0% for the first six months. Card issuers sent 1.285 billion direct mail solicitations in the first three months of 2004, an average of 5.3 solicitations per household per month; the response rate was 0.4% (10). There is also consolidation in the credit-card industry: many smaller regional banks are moving out of this business, selling their credit-card portfolios to national banks. An investment-banking firm, R. K. Hammer, negotiated 75 portfolio sales in 2004, with a total value of $30.57 billion (14). A paper by Trench et al.
(2) provides an excellent survey of the actual management of a credit card operation. The authors designed a portfolio control and optimization system using Markov decision processes to select the interest rate and credit line for each cardholder that maximize the net present value of the portfolio. They identify the main sources of income: interest, merchant fees, and various other fees. Offering a higher credit limit, coupled with lower interest rates, induces customers to charge more on their credit cards. However, a higher credit limit also increases default risk, and a lower interest rate reduces the bank's income. Chakravorti and Shah (1) analyze the relationship between cardholders, merchants, banks, and card networks. They observe a lack of competition between the networks, which restrict member banks from issuing rival credit cards. The article describes the nature of merchant fees and their impact on selling prices. R. K. Hammer (14), an investment-banking firm active in the negotiated sales of credit-card portfolios, lists several factors considered in the valuation of these portfolios: (1) credit quality, as evidenced by original credit criteria, credit bureau risk scores, behavior scores, bankruptcy scores, and the trends of those score patterns; (2) attrition rate, the percentage of accounts and balances (and the profitability of those accounts) that close voluntarily (customer-requested closure) vs. involuntarily (bank-revoked); (3) income yields, including the APR, annual fee structure, nuisance fee structure, teaser rates outstanding, and the percentage revolving; and (4) open vs. closed, the percentage of accounts and balances that are open to buy vs. those that are closed (but which may be paying as agreed and, therefore, not delinquent). Their methodology is proprietary. This paper presents a simple valuation model based on discounted cash flow analysis.
The model analyzes the impact of various factors on the premium that a buyer should pay over the receivables of the credit card accounts. In this section, we develop a simple model for the valuation of a credit-card portfolio, from which we may estimate the value of a portfolio and thus the value of a single card to the issuer. Suppose a bank has issued N cards in all. The bank charges credit card customers several different fees: annual fees, over-the-limit fees, late fees, foreign currency transaction fees, cash advance or convenience check fees, and others. Dividing the total fees collected by the total number of cards outstanding, we may find the current average fee F charged across all cardholders. The average fee F collected on a credit card is strongly correlated with the revolving credit balance C on the account. As a first approximation, we may set
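A minimal sketch of a discounted cash-flow valuation of this kind is given below. The parameterization is an illustrative assumption of ours, not the paper's model: each year the surviving balance earns interest and fees, loses a fraction to charge-offs, shrinks through defaults and voluntary attrition, and the resulting net flows are discounted at the buyer's required rate of return.

```python
def card_portfolio_value(balance, fee_rate, apr, default_rate,
                         attrition_rate, discount_rate, horizon=10):
    """Present value of a credit-card portfolio's net cash flows (hypothetical).

    balance        : current total revolving balance
    fee_rate       : average annual fees as a fraction of balance
    apr            : average interest yield on the revolving balance
    default_rate   : fraction of balance charged off each year
    attrition_rate : fraction of balances closing voluntarily each year
    discount_rate  : buyer's required rate of return
    """
    value, b = 0.0, balance
    for t in range(1, horizon + 1):
        cash = b * (apr + fee_rate) - b * default_rate   # net annual flow
        value += cash / (1 + discount_rate) ** t
        b *= (1 - default_rate) * (1 - attrition_rate)   # surviving balance
    return value

# Illustrative inputs: $100M balance, 3% fees, 14% APR, 12% discount rate.
v = card_portfolio_value(100e6, 0.03, 0.14, 0.05, 0.10, 0.12)
v_risky = card_portfolio_value(100e6, 0.03, 0.14, 0.10, 0.10, 0.12)
```

Comparing the computed value with the face amount of the receivables gives the premium a buyer might pay; raising the default rate lowers both the annual flow and the surviving balance, so the value falls, consistent with the credit-quality factor in the Hammer list above.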
Effectiveness and Dynamics of Cross-Functional Teams: A Case Study of Northerntranspo Ltd.
Dr. Steven H. Appelbaum, Concordia University, Canada
Frederic Gonzalo, VIA Rail Canada
A conceptual model was developed from the literature, based on twelve criteria considered key to successful and effective team dynamics, and tested within an organization for congruence. This article combines a research design and a case study. A model was developed and contrasted with an organization (NTL) where cross-functional teams were implemented early in 2002. In November 2004, quantitative and qualitative research was conducted to compare the conceptual model and twelve hypotheses with how members and leaders perceived dynamics to be performing within their respective cross-functional team environments. The organizational structure had begun a shift away from functional units towards a matrix-like structure. There were now issues affecting the effectiveness and evolution of these cross-functional teams, impeding their potential to spread to middle and lower management within the company and to gain acceptance with a wider internal audience. This critical data is presented. The first part of this article presents literature on the barriers cross-functional teams face and addresses the specific situation at Northern Transpo Ltd. (NTL), a federal corporation that provides regular passenger rail services across a country-wide network, in order to test the congruence of the literature. As part of its most recent reorganization in late 2000, a matrix structure was implemented, with the novelty of four permanent cross-functional teams looking after the four key regions of the corporation. A recent history of the company will be provided, along with how the concept of cross-functionality came about. The essence of this article is to contrast how the dynamics of cross-functional teams (CFTs) at NTL compare with the generally accepted view of a successful CFT in the literature.
A survey was conducted and will be presented, including its methodology and how the discrepancies between the model and the existing perceived dynamics within the CFTs at NTL were interpreted. An analysis of the results will be provided that contrasts the results with the literature, along with feedback from the quantitative and qualitative data collected as part of this research. How effective are CFTs presently at NTL? What elements of their actual team dynamics are working well, and which may need improvement? Recommendations and conclusions will be based on the results of this research, but will also take into consideration developments over the past three months, during which senior management decided to re-energize its efforts across the company. The literature is rife with positive factors bearing on the success of cross-functional teams, described as a group with a clear purpose representing a variety of functions or disciplines in the organization whose combined efforts are necessary for achieving the team's purpose (Parker, 2003). Trust and leadership are not the only important elements of an effective CFT. Other key components include empowerment, training, a clear goal, the right mix of players, and an adequate reward system. The following explores these key components individually. There are also many factors contributing to CFT failures. Although a leader plays an important role in any team, leadership of a cross-functional team is both more important and more difficult, creating limitations in team leadership (Parker, 2003). The team leader has to have the technical background to understand both the subject and the contributions made by people from a variety of backgrounds. Then there are the people management skills needed to facilitate interactions between team members.
According to various surveys, the major complaints about team leaders include their inability to run good meetings, involve everyone in discussions, resolve conflicts, and effectively use all of the team's human resources. A team that lacks empowerment can become confused, leading to a lack of consistency. This confusion over the team's authority is linked to the team's leadership: some CFTs were found to operate on the old axiom, "It's easier to get forgiveness than permission," while CFTs with more conservative leadership feel the need to seek approval for every key decision. Often, members of a team are clear about what pieces they themselves have to deliver but have little sense of where their pieces fit into the whole. There is a lack of vision, or goal ambiguity, and vision is not always provided by the team leader. It is easier to be an operational analyzer and taskmaster than a developmental and visionary motivator (Quinn, 1996). Collaborative behaviors emerge when team members agree on a common agenda, openly share concerns and power, and commit to building trust (Jassawalla and Sashittal, 1999).
A Macromarketing Perspective on the US Hospice Industry’s Shift to For-Profit Providers
Dr. John J. Newbold, Sam Houston State University, Huntsville, TX
Reacting to changing social mores and an aging population in need of better alternatives for end-of-life care, the United States federal government permanently enacted the Medicare Hospice Benefit in 1986. At the time, the hospice industry was relatively small and dominated by small, independently run non-profit organizations. However, the past 20 years have seen the hospice industry grow in exponential fashion, from fewer than 200,000 patients per year to a number that now exceeds 1 million patients per year. In addition, the pricing umbrella afforded by the establishment of the Medicare hospice benefit has attracted for-profit entities into this traditionally non-profit market sector. This paper discusses the higher-order societal implications of the entrance of for-profit firms into a traditionally non-profit market sector. Macromarketing impacts are discussed from the perspective of dominant social paradigm (DSP) theory. Finally, areas for future exploration are set forth. As the US population ages, the need to find better ways of caring for dying people and their loved ones is becoming more acute. The hospice industry in the US is a relatively small and fragmented component of the overall healthcare industry, generating aggregate annual revenues of about $4.5 billion in 2003. However, the growth in this sub-sector has been quite dramatic: Medicare spending on hospice care grew at a 13% compounded annual growth rate between 1995 and 2002, while aggregate patient volume grew at an 11% compounded annual growth rate between 1985 and 2002 (Shattuck Hammond Partners 2004). There are several factors driving this growth: 1. The overall aging trend in the US and the increasing size of the over-65 population. 2. The increasing role of advocacy groups in promoting hospice care over other end-of-life alternatives. 3. Favorable regulatory trends. The Center for Medicare and Medicaid Services (CMS), a Federal agency within the U.S. 
Department of Health and Human Services, appears to be promoting hospice care through its liberal policies for reimbursement, at least in part because hospice is viewed as a lower cost alternative to traditional, hospital-based end-of-life care. 4. Higher usage rates. Hospice care is being viewed as a more accepted and appealing alternative by doctors, patients, and families. This is particularly true for usage rates by non-cancer patients. (Shattuck Hammond Partners 2004) Hospice care is defined by the Hospice Association of America as: “…comprehensive, palliative medical care (treatment to provide for the reduction or abatement of pain and other troubling symptoms, rather than treatment aimed at cure) and supportive social, emotional, and spiritual services to the terminally ill and their families, primarily in the patient’s home. The hospice interdisciplinary team, composed of professionals and volunteers, coordinates an individualized plan of care for each patient and family.” (Hospice Association of America website, 2005) Palliative care differs from curative care in that its objective is to ameliorate pain and suffering, both physical and mental, as opposed to curing the patient of the illness. In 1986, Congress permanently enacted the Medicare Hospice benefit. A significant jump in usage of hospices occurred at this time. Figure 1 depicts the exponential growth rate of patients choosing hospice services. In 1996, the federal government initiated a program (“Operation Restore Trust”) focused on preventing Medicare fraud across all provider groups. This increased level of regulatory scrutiny, while probably needed, likely inhibited referrals of patients and reduced average and median lengths of stay industry-wide. The Balanced Budget Act of 1997, which attempted to control the rate of Medicare spending, further negatively impacted reimbursement rates. Among the impacts of these budget cutbacks was the further reduction in the growth rate of hospice sites. 
The net impact of the ever-increasing number of hospice patients and the relatively flat growth curve in the number of hospice sites has been the increasing average size of the hospices in terms of patients, as Figure 3 attests. Over the past 20 years, the average number of patients cared for by a single hospice in a year has nearly tripled.
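The compounded annual growth rates quoted in this section follow the standard CAGR formula; the sketch below uses the rounded patient-volume endpoints cited above (under 200,000 growing to over 1 million in roughly 20 years), so it gives only a ballpark figure rather than the exact rates reported:

```python
def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

# Rounded patient-volume endpoints cited above (figures are approximate).
growth = cagr(200_000, 1_000_000, 20)
print(f"{growth:.1%}")  # a fivefold rise over 20 years is roughly 8.4% per year
```

The same function reproduces any of the section's growth rates once exact endpoint values are substituted.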
Globalization and Performance in the New Millennium: A Look at Firms from Developed and Developing Nations
Dr. Sally Sledge, Christopher Newport University, Newport News, VA
Although firms from developing countries are accounting for increasingly large portions of the global economy in the new millennium, they have not been studied as frequently as their counterparts in developed countries. It is helpful to analyze the factors associated with going global for these new world competitors. This paper compares the performance of multinational corporations from developing and developed nations for the period 2000 – 2004. Developing country firms are shown to rely more heavily on foreign employees for success, while developed country firms lean more on foreign affiliates, yet each group relies on foreign sales. The implications of internationalization for both groups are discussed, and directions for future research are given. There is much evidence that firms from developing nations are making significant inroads into the global economy, as indicated by Fortune Magazine's Global 500, which ranks the largest corporations in the world by revenues. In 1990, no firms from the developing world appeared in any of the world's top business listings. Yet in 2005, a record 9 firms from developing countries were in the Fortune Global 100 and 57 were listed in the Fortune Global 500, meaning that more than one in ten firms on this elite list now come from the developing world. The ranking shows that in 2005, revenues of the firms from the 10 largest developing economies totaled $1,631.5 billion, approximately 10% of firm revenues from the 10 largest developed economies, which amounted to $16,073.7 billion. Additional verification of the increasing market power of firms from developing countries comes in the form of frequent references in business textbooks and the popular press (Hill, 2004; Peng, 2005). Companies such as cement maker Cemex of Mexico, Singapore Airlines, Acer Computers of Taiwan, petroleum giant Petrobras of Brazil and LG Electronics of South Korea are among those cited as success stories. 
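The approximately 10% figure above follows directly from the two revenue totals; a quick check:

```python
# Aggregate 2005 revenues quoted above, in billions of US dollars.
developing = 1_631.5   # firms from the 10 largest developing economies
developed = 16_073.7   # firms from the 10 largest developed economies

share = developing / developed
print(f"{share:.1%}")  # roughly 10%
```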
According to the United States Patent and Trademark Office, a total of 28,315 patents were assigned to individuals and businesses in developing countries during the period 2001-2003, more than in any previous period on record (USPTO data, 2004). This number attests to the fact that firms from the developing world are becoming more innovative and thus more competitive with their rivals from developed countries. Other trends that speak to the increasing importance of these firms include the growing prominence of the banking, natural resources and tourism industries in international business. Large emerging markets such as India, Brazil and China are increasingly becoming major sources and consumers of these products. Most economists project that the growth and influence of firms from the developing world will continue to increase steadily through the first half of the 21st century (Global Economic Prospects, 2005). It has been over 10 years since the United Nations Conference on Trade and Development (UNCTAD) began tracking multinational corporations (MNCs) from developing economies, so it is appropriate to study these firms now that more than a decade of data has been collected (World Investment Report, 2004). This paper is an exploratory step in this direction. It will investigate the links between internationalization and performance among firms from the developed world and firms from the developing world. To that end, the following research questions will be addressed: 1. How global are the top MNCs from developing nations and the top MNCs from developed nations? 2. Does the relationship between globalization and firm performance differ for these two groups? This information will lead to a greater understanding of the challenge of globalization and of what measures can be taken to attain optimal performance for MNCs from both areas. 
Relatively few studies in the strategic management literature analyze company performance between developing and developed countries. Of those that do, many have found differences in the two groups, thus substantiating the need to evaluate them separately. For example, Ataay (2006) found that information technology impacted labor productivity differently in developing countries than it had in developed countries. Elmawazini et al. (2005) noted that foreign direct investment impacted productivity growth among businesses to a greater degree in developed countries than it did in developing countries. In a 36 country study, Espiritu (2003) found that the digital divide accounted for a significant difference in economic growth when comparing developed nations and developing nations. Merchant (2005) discovered that international joint venture performance varied among groups that contained partners from developing countries and those that contained partners from developed nations. Thus there is evidence that firms from these two worlds respond differently to the demands of going global. In the international business literature, many scholars have acknowledged the need for more research using data from firms located in developing countries (Piercy, Low and Cravens, 2004). Some studies have compared and contrasted firm performance between developed country MNCs and developing country MNCs. Makino, Isobe and Chan (2004) discovered that for developed country MNCs, corporate and affiliate effects explained the most variation in firm performance whereas for MNCs from developing countries, country and industry effects accounted for more differences in firm performance. In a study of Japanese firms, foreign direct investment in lesser developed nations outperformed foreign direct investment in developed nations (Makino, Beamish and Zhao, 2004). 
Dewan and Kraemer (2000) found significant differences in the performance of information technology investments among technology-based firms from developed and developing countries.
A Model Continuous Improvement Based ERP Applications Class
Gary B. McCombs, Eastern Michigan University, Ypsilanti, MI
The author believes that it is important to include enterprise resource planning (ERP) skills and applications in academic programs. Although difficulties were expected in implementing an ERP applications class, the author nonetheless pursued the matter and started such a course. This paper reports on those efforts as they relate to an Oracle accounting applications class, but it includes information on course content that is adaptable to a variety of ERP platforms and disciplines. The course was segmented into “hands-on” components and more traditional academic components. Student ratings and feedback have been obtained and are presented for all significant class assignments across four semesters of offering the course. These ratings serve as the underpinnings for a continuous improvement mode of course offering; the feedback and work submitted by students each semester are used to update and improve the components of the course in each subsequent offering. The author believes that a number of the ideas, assignments and projects presented will prove useful regardless of the specific ERP or applications chosen, and that they will be particularly useful to schools that are in the process of implementing or considering an ERP classroom system. Enterprise resource planning (ERP) systems essentially promise better and more information, with the anticipation of lower costs, in an enterprise-wide, real-time environment (Krumwiede and Jordan, 2000). ERPs are business management systems that integrate all facets of the business, including planning, manufacturing, finance, sales and marketing, into a tightly integrated business process / information system, facilitating a seamless exchange of information across the organization (Mabert et al., 2001; Stevens, 2003). 
ERP packages are information systems that help companies integrate their operations for better speed, efficiency and agility, although successful implementation requires a clear business strategy (Gadson, 2001; Kocakulah and Willett, 2003). To be successful, ERPs need to be business solutions, not just technology solutions (Davenport, 2000). Their success is driven by the ability to cross departmental, global and other organizational boundaries and to remove information ownership through the use of a common database. Further, the software makers, in their efforts to continue their revenue increases after the sales push from Y2K and Euro conversions, are targeting their products at smaller and smaller companies (Jones, 2002). It is no wonder, then, that many colleges and universities wish to develop and provide courses with significant ERP content, as evidenced by the hundreds of international college and university members in the Oracle and SAP academic alliances. This desire will extend to all departments and disciplines that provide knowledge in ERP-related areas, although they will have significant concerns about committing substantial resources to an ERP initiative. Further, given its very nature, ERP provides an opportunity for curriculum integration across disciplines, especially in the business area (Joseph and George, 2002). There is also growing evidence that students with ERP skills obtain higher salaries (Sager et al., 2006). This paper reports on the efforts that culminated in the creation and offering of an ERP applications class in a relatively short time period. The success in dealing with implementation and financial issues is shared, as well as the continuous improvement effort that the author believes should be an essential element of such an important endeavor. The paper starts with the ERP software selection and implementation process, and then gives the class objectives. 
Oracle lab assignment information is then presented, followed by the more traditional academic components. The purpose of this paper is to provide what is hoped to be useful information for those in any discipline who might wish to develop and implement an ERP applications course of their own. It also shows how to continuously improve the quality of the course simply by focusing on the most important stakeholder: the students. Most academic institutions wish to provide current, real-world software experience for their students. Often a driver in this decision is what employers demand. The understanding of business processes appears to be among the critical skills desired by employers in general, and ERP is a facilitating vehicle to accomplish this goal. However, ERPs frequently involve resource issues for software, hardware, training and ongoing administration. This is especially true as applications become far larger and more complex, as in ERP systems. Fortunately, more and more “academic” initiatives are being made available by the software companies. Following corporate acquisitions in recent years, the bulk of the market is now captured by SAP and Oracle. Products are also becoming available at the lower end, such as Microsoft's Great Plains. Arens and Ward (2006) have published Great Plains materials that could serve as an introduction to more complex systems for the small and medium-sized business market. This option may be the best one for programs with very little time or resources to commit.
Revised Mean Absolute Percentage Errors (MAPE) for Independent Normal Time Series
Dr. Luh-Yu (Louie) Ren, University of Houston-Victoria, Sugar Land, Texas
Commonly used Mean Absolute Percentage Errors (MAPE) and the author's revised Mean Absolute Percentage Errors (RMAPE) are applied to measure the forecasting accuracy of different Moving Average Methods for independent normal time series. Simulation results show that both MAPE and RMAPE provide sensitive forecasting accuracy measurements on Moving Average Methods only when the coefficient of variation (c.v.) of an independent normal time series is smaller than 0.4 or greater than 2.0. For independent time series with moderate c.v.'s, the complexity of the ratios of MAPE and RMAPE will mislead researchers in distinguishing the forecasting accuracies of different Moving Average Methods. This complexity is resolved only when the c.v. is very small or very large. Therefore, when data come from independent normal time series, the Mean Absolute Deviation (MAD), and not MAPE or RMAPE, reveals valid forecasting accuracies for the various Moving Average Methods. The Mean Absolute Percentage Error, MAPE = (1/n) Σ |A_t − F_t| / A_t, is a widely used accuracy measurement in forecasting with non-negative actual observations, for instance on monthly or quarterly sales. It expresses forecasting errors from different measurement units as percentage errors on the actual observations. One common criticism of the MAPE concerns its existence when the actual observation A_t is equal to 0. Moreover, in practical forecasts such as forecasts of profits, actual observations may take negative values. In this paper, a revised definition of the mean absolute percentage error, RMAPE = (1/n) Σ |A_t − F_t| / |A_t|, is considered. Simulation results show that neither MAPE nor RMAPE is a sensitive forecasting accuracy measurement for comparing different Moving Average Methods with moving periods (p) of 1, 3, 5, 7, 9, and averaging periods k of 3, 5, 7, 9, and 11, on independent normal time series with coefficients of variation between 0.4 and 2.0. 
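The difference between the two measures can be illustrated on a toy profit series containing a negative actual observation, where the absolute value in the RMAPE denominator keeps every term non-negative; a minimal sketch (the data are invented for illustration):

```python
def mape(actuals, forecasts):
    # Classic definition: |A - F| / A; negative actuals produce negative terms.
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def rmape(actuals, forecasts):
    # Revised definition: |A - F| / |A|; defined for any nonzero actual.
    return sum(abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)) / len(actuals)

profits = [120.0, -40.0, 80.0]     # toy quarterly profits, one negative
forecasts = [100.0, -50.0, 90.0]

print(mape(profits, forecasts))    # the negative actual drags the average down
print(rmape(profits, forecasts))   # every term contributes positively
```

Neither function is defined when an actual observation equals 0, which is the other criticism noted above.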
10,000 random observations are simulated from each of the Normal Distributions with a mean of 1 and a standard deviation of 0.1, 0.2, …, 1.9, 2.0, 3.0, 4.0 and 5.0 (i.e., with coefficients of variation (c.v.) of 0.1, 0.2, …, 1.9, 2.0, 3.0, 4.0 and 5.0). The data are then grouped into 1,000 groups with 20 observations each. The first nine (9) observations in each group are treated as historical observations, and the tenth (10th) through twentieth (20th) observations are treated as the future 11 observations. Moving average methods with moving periods of 1, 3, 7, and 9 are applied to the historical observations, and their forecasts are compared with the first future observation (the 10th observation). The Absolute Percentage Deviation, APD_t = |A_t − F_t| / |A_t|, is calculated for the first future observation. The 10th observation is then included in the new historical group: moving average methods with moving periods of 1, 3, 7, and 9 are applied to the most recent 9 historical observations counted back from the 10th observation (i.e., the first observation in the old historical data group is eliminated), and their forecasts are compared with the “new” first future observation (the 11th observation), for which the Absolute Percentage Deviation is calculated. The process continues until the 11th Absolute Percentage Deviation is obtained. The Revised Mean Absolute Percentage Error, RMAPE = (1/k) Σ APD_t, can then be calculated for k from 1 to 11. In this paper, we show only the results of analyzing MAPEs for k = 3, 5, 7, 9, and 11. A numerical example for simulated data from a normal distribution with a c.v. of 1 is listed in Table 1 for illustration purposes. For instance, the italicized figure 0.9247135 in Table 1 for the forecast of the 11th period from MA(3) comes from (0.7658188 + 2.0950225 − 0.0867006)/3. The remaining figures are obtained similarly. 
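The rolling-origin procedure described above can be sketched as follows; a minimal illustration for a single group and a single moving period, with the group size (20), history length (9), and forecast horizon (11) taken from the description (function and variable names are hypothetical):

```python
import random
import statistics

def rolling_ma_apds(series, p, history=9, horizon=11):
    """Rolling one-step moving-average forecasts of order p.

    The first `history` observations seed the window; each of the next
    `horizon` observations is forecast from the mean of the p most recent
    observations, then absorbed into the history (rolling origin).
    Returns the Absolute Percentage Deviations |A - F| / |A|.
    """
    apds = []
    for t in range(history, history + horizon):
        forecast = statistics.mean(series[t - p:t])
        actual = series[t]
        apds.append(abs(actual - forecast) / abs(actual))
    return apds

random.seed(0)
group = [random.gauss(1, 1) for _ in range(20)]  # one group; c.v. = 1
apds = rolling_ma_apds(group, p=3)
rmape = statistics.mean(apds)  # RMAPE over the 11 future observations
```

Repeating this over 1,000 groups for each moving period and each c.v. reproduces the structure of the study's simulation.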
For example, the italicized figure 1.2607029 for the cumulative absolute forecasting error of the 11th period from MA(3) comes from Table 1: |-0.0867006 - 0.5590845| + |0.3097958 - 0.9247135| = 0.6457851 + 0.6149177. The remaining figures are obtained similarly. Following the same procedure, we can obtain the cumulative absolute percentage errors in Table 3. For instance, the italicized figure 9.433359783 in Table 3 for the cumulative absolute percentage error of the 11th period from MA(3) comes from |-0.0867006 - 0.5590845| / |-0.0867006| + |0.3097958 - 0.9247135| / |0.3097958| = 7.4484502 + 1.9849129. The remaining figures can be obtained similarly.
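The worked figures above can be reproduced directly from the four quoted Table 1 values; small differences in the last digits come from rounding of the quoted inputs:

```python
# (actual, MA(3) forecast) pairs for the 10th and 11th periods,
# as quoted from the Table 1 example (c.v. = 1 simulation).
pairs = [(-0.0867006, 0.5590845), (0.3097958, 0.9247135)]

cum_abs_error = sum(abs(a - f) for a, f in pairs)
cum_abs_pct_error = sum(abs(a - f) / abs(a) for a, f in pairs)

print(round(cum_abs_error, 7))      # close to the quoted 1.2607029
print(round(cum_abs_pct_error, 5))  # close to the quoted 9.433359783
```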
Taiwanese Executives’ Leadership Styles and Their Preferred Decision-Making Models Used in Mainland China
Peng-Hsiang Kao, Ph.D., China Industrial & Commercial Research Institute, Taipei, Taiwan
Hsin Kao, Ph.D., China Industrial & Commercial Research Institute, Taipei, Taiwan
This research measured the relationship between the leadership styles and preferred decision-making models used by executives at traditional Taiwanese-investment companies in Shanghai, China. This study used a quantitative research methodology. The Leader Behavior Description Questionnaire XII and the General Decision-Making Style Scale were used to measure perceived leadership style and decision-making style, respectively. Four leadership styles were measured based on different intensities of the combination of consideration behavior with initiating structure behavior. Decision-making models comprised the dimensions of rational, intuitive, dependent, avoidant, and spontaneous. The results show that the leadership styles are related to the decision-making models. As the East Asia market has grown, its influence on the world economy has become more significant (Yi, 1998). There are two important economic powers in this market: China and Taiwan. Many economic scholars believe that Mainland China will become the biggest market in the 21st century and have the most economic power in the world by 2020; therefore, more and more industries are investing in Mainland China (Liu & Li, 1998). To do this successfully, businesses must have accurate information so that they understand this market environment. This information may influence a company's decisions and strategies, such as enabling a company to determine the proper timeframe to enter the market. Recently, the business environment in Taiwan has undergone enormous changes, such as the awakening of labor force issues, the rising cost of labor, the variation in the exchange rate, the competition for property, and the market saturation of products. These factors have caused many small and middle-sized enterprises to lose their cost-benefit competitive advantage (Lee, 1999). Moving operations to Mainland China is widely viewed as a way for these companies to restore their competitiveness. 
Attracted by the cheap raw materials and labor force in China, Taiwanese companies, which share the same culture and language with the people of the mainland, have been enthusiastic about investing in Mainland China. According to Erven (2001), “Every business needs leadership. Leadership is one of the ways that managers affect the behavior of people in the business. Most successful managers are also successful leaders. They get people to work to accomplish the organization’s goals” (p. 2). Leadership refers to a person's ability to guide, modify, and direct the actions of others in such a way as to gain their cooperation in doing a job; it is the ability of a person to facilitate the problem-solving processes of others. Essentially, leadership is a process of influence; personal traits, attitudes, values, and past experience influence leadership styles and performance. Situational leadership is designed to help managers at all levels become more effective in their daily interactions with others. Effective leaders utilize participative decision making as a mechanism of supervision and control; by involving everyone in both the problem analysis and the derived solution, better information is generated and everyone is more committed to implementing the decision and solving the problem. Influence is a function of commitment and involvement (Hill, 1977). Decision makers face a significant problem related to their positions in the organization; that is, their subjective point of view and preferences might influence the decision process (Montgomery & Svenson, 1989). Consequently, decision-making is a difficult and challenging job for leaders at all levels of a corporation, large or small. Although an information system can help corporate leaders communicate and distribute information, it has a limited role in the decision-making process. Therefore, executives should understand how to select an appropriate decision-making model for their organization. 
Although the environment has some influence on the enterprise, to some degree management still has an effect on the range of possible outcomes. The executive is the person with a full view of the organization who can integrate ability and decision-making talent to influence it as a whole. Therefore, the executive's leadership style and decision-making ability influence the company's development and future. The purpose of this study was to investigate the possible relationship between the leadership styles and preferred decision-making models used by executives of Taiwanese investment companies in Mainland China.
Exploring Customer Repeat Patronage in Tourism: The Influence of Marketing Culture, Relational Selling, and Sales Expertise
Cheng-Yuan Hsu, Chungyu Institute of Technology, Taiwan
Dr. Chou-Kang Chiu, Ching Kuo Institute of Management & Health, Taiwan
This study examines the formation of customer satisfaction and repeat patronage towards a travel agent in tourism. In the proposed model, marketing culture, relational selling behavior and sales expertise indirectly influence customer repeat patronage through the mediation of satisfaction with a travel agent; marketing culture also has a direct influence on customer repeat patronage. Several propositions related to the model are stated as an important reference for management in tourism. Finally, discussion and limitations of this study are also provided. A sound culture that facilitates successful marketing practice in tourism, including customer satisfaction and repeat patronage, has been advocated for the last decade (Appiah-Adu, Fyall and Singh, 2000). Today customer repeat patronage is crucial to tourist services. Proponents of this view assert that the internal culture of a travel agent plays a substantial role in service marketing and may impact customer satisfaction and repeat patronage (Appiah-Adu et al., 2000). There is no doubt that travel agents are now seeking customers by offering highly competitive services in order to obtain their satisfaction and repeat patronage in the long run, but such satisfaction and repeat patronage depend on the successful service marketing of the agent. However, previous research has shown that service marketing is more difficult to manage than product marketing due to its intangibility, variability, inseparability and perishability (Appiah-Adu et al., 2000). To provide management with strategies for overcoming this difficulty, this study identifies the critical determinants of repeat patronage from the perspective of intangible service factors such as marketing culture and relational selling behaviors, in order to shed light on how management can achieve the ultimate goal of customer satisfaction and repeat patronage. 
This research differs from previous works in a principal area: the applicability of marketing culture to strengthening customer satisfaction and repeat patronage has been extensively studied for tangible goods in general, whereas the highly intangible services of travel agents, viewed through the lens of marketing culture, have attracted little attention. Therefore, this work explores customer satisfaction and repeat patronage from the aspect of the intangible services of travel agents and draws useful inferences for management in tourism. To sum up, since marketing culture and the other factors proposed in this study are generally acknowledged to profoundly affect a customer's response to a travel agent's service, specifying the impacts of these factors can guide agents to design different strategies for different potential customers and consequently achieve high customer repeat patronage. The conceptual model proposed in this study is displayed in Figure 1. In the proposed model, marketing culture has a direct influence on customer repeat patronage. In addition, marketing culture, relational selling behavior and sales expertise indirectly influence customer repeat patronage through the mediation of satisfaction with a travel agent. Through such a model, not only can managerial discussion be provided, but the most appropriate model for exploring customer repeat patronage in the service contexts of tourism can also be specifically emphasized. (Insert Figure 1 About Here) Satisfaction with the relationship between customers and their travel agent is considered not only an important outcome of the customer-agent relationship (Smith and Barclay, 1997), but also an emotional state that occurs in response to an assessment of customer-agent interaction experiences (Lin and Ding, 2005; Westbrook, 1981). 
In other words, satisfaction with a travel agent may be defined as a potential tourist's affective state resulting from an overall appraisal of his or her relationship with an agent (Lin and Ding, 2005); it is a cumulative effect over the course of a relationship, as opposed to satisfaction that is specific to each service offered (Anderson, Fornell and Rust, 1997). Previous studies have stated that the effectiveness of satisfaction may be evaluated in terms of the behavioral changes customers exhibit (Sharp and Sharp, 1997), so this study proposes the construct of customer repeat patronage as the outcome of satisfaction. Customer repeat patronage may be reflected in the relationship between relative attitude toward a travel agent and repeat patronage of the service provided by the agent. Empirical evidence has been found for the relationships between satisfaction and customer repeat patronage (or loyalty) to an ISP (Lin and Ding, 2005, 2006). Notably, positive paths from relationship satisfaction with a travel agent to both relationship duration and purchase intentions toward the agent (Bolton, 1998) are indicators of customer repeat patronage (Lin and Ding, 2005, 2006; De Wulf, Odekerken-Schröder and Iacobucci, 2001).
Effects of System Trial on Consumer Beliefs in Marketing Software Products
Dr. Yao-kuei Lee, Tajen University, Pingtung, Taiwan, ROC
Marketers are constantly faced with designing an appropriate level of marketing stimulus at which a potential consumer can experience a sensation. Potential consumers generally have limited time and energy to explore new products and services. This study investigates the effects of a system trial, as a marketing stimulus, on potential consumers' beliefs. Using a web-based e-learning system, the examination reveals that targeted consumers with and without system trial experience differ in the determinants of their behavioral intentions and in their beliefs. Potential consumers with system trial experience form a higher perception of usefulness, their intention to use e-learning for distance education purposes is more strongly affected by system functionality, their intention to use e-learning as a supplementary learning tool is more strongly affected by perceived usefulness, and their perception of ease of use is more strongly influenced by system response. Without a system trial, potential consumers' self-efficacy plays a more important role, but only to the extent of forming a higher perception of ease of use. These findings enhance our understanding of marketing information goods to potential consumers. The information service and software market in Taiwan has experienced steady growth over the past several years, and the trend will continue for the foreseeable future. According to a report by the Institute for Information Industry (Taiwan), the market growth is driven mostly by the following five factors: (1) an optimistic economy, (2) enriched digital content, (3) business globalization, (4) the e-Taiwan plan, and (5) e-learning needs. Cross-strait software markets show a similar growth trend as well (Taiwan Economic Daily News, 2004-02-07). Traditionally, marketers, as agents of change, attempt to find ways to influence consumers' use or purchasing behavior. 
One such method is to let consumers experiment with the product on a limited basis and make an evaluation before use or purchase. For example, some common practices are: (1) giving out trial-size samples of consumer goods, (2) providing limited trials of durable goods, and (3) offering test drives of the latest auto models. These practices are designed to make consumers aware of new products and at the same time reduce the risk perceived by prospective buyers. The costs associated with these practices, such as producing and delivering trial samples, are sometimes significant, as in the case of cosmetics. Information service and software marketing is similar but differs in some respects due to so-called information economics. For example, to market a new software product, companies may provide customers with a test version having a three-month expiration date, a free version handling only a limited number of variables, or a guest account for a web-based system with limited functionality. Furthermore, the Internet extends the time and place convenience for consumers to try out the new product. Although additional development costs may be required, the marginal costs of distribution or delivery are trivial or near zero (Nejmeh 1994) in comparison with traditional consumer goods. Software companies will more than likely leverage this cost advantage to a greater extent in accelerating new product introduction in order to capitalize on growth markets. Therefore, it becomes vital to investigate the effects of the trial. This study used a web-based system to evaluate the effects of a system trial on consumers’ beliefs and intentions regarding e-learning use. In particular, the following research questions guided the study: (1) Do the potential consumer groups with and without system trial experience have similar beliefs and use intentions regarding the software product?
(2) Do the relationships between consumers’ behavioral intentions to use an e-learning system and their determinant factors differ between trial and non-trial groups? In understanding consumer behavior, the theory of reasoned action (TRA) has been used to explain the relationships among attitude, intention, and actual behavior (Ajzen and Fishbein 1980). Based on the TRA, the technology acceptance model (TAM) was developed as a parsimonious model for understanding information technology acceptance (Davis et al. 1989). The belief-attitude-intention structure has since been widely applied in IT adoption studies. In addition, diffusion of innovations (Rogers 1985) has also been an important perspective in explaining consumer behavior (Hanna and Wozniak 2001). Derived from the above theories, an e-learning acceptance model, as shown in Figure 1, was proposed and empirically supported (Pituch and Lee, 2004). Beyond the core constructs of perceived usefulness (PU) and perceived ease of use (PEOU), the model modified and extended TAM as follows: (1) the acceptance criteria were categorized into behavioral intentions to use the e-learning system as a supplementary learning tool (IU1) and as a distance education method (IU2); (2) external variables were identified, namely system characteristics and individual factors. The system characteristics comprise functionality, interactivity, and response, while the individual factors include self-efficacy and Internet experience. System functionality (SF) is a consumer’s opinion or perception of the system functions related to learning and of the relative advantage as to time and place in learning. System interactivity (SI) is a consumer’s opinion or perception of the e-learning system’s ability to enable interactions between teacher and students, and among the students themselves. System response (SR) is the degree to which a consumer perceives the system’s response as fast or slow, consistent, and reasonable when requesting a system service (Bailey and Pearson 1983).
For the individual factors, self-efficacy (SE) is defined as one’s self-confidence in his or her ability to perform certain learning tasks using an e-learning system (Bandura 1977). Internet experience (IE) is the extent to which a prospective consumer uses the Internet (Tan and Teo 2000). The e-learning acceptance model accounted for approximately 65.3%, 63.8%, 47.9%, and 60.3% of the outcome variances of IU2, IU1, PU, and PEOU, respectively. In the context of e-learning, the potential consumers are the students or learners.
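The reported group differences are comparisons of path strengths across groups. As a minimal illustration of the idea only (not the study's actual analysis, which used a structural model), the sketch below fits the same least-squares path, intention regressed on system functionality (SF) and perceived usefulness (PU), separately for a "trial" and a "no-trial" group. All data, weights, and variable names here are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_path(X, y):
    # Ordinary least squares with an intercept; returns the slope coefficients.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[1:]

# Synthetic data in which the trial group's intention depends more
# strongly on system functionality, mimicking the reported pattern.
n = 200
results = {}
for label, sf_weight in [("trial", 0.8), ("no_trial", 0.3)]:
    sf = rng.normal(size=n)   # system functionality (placeholder measure)
    pu = rng.normal(size=n)   # perceived usefulness (placeholder measure)
    iu = sf_weight * sf + 0.5 * pu + rng.normal(scale=0.3, size=n)
    b_sf, b_pu = fit_path(np.column_stack([sf, pu]), iu)
    results[label] = {"SF": float(b_sf), "PU": float(b_pu)}

print(results)
```

A formal comparison would instead fit a multi-group structural equation model with equality constraints on the paths, rather than contrasting raw OLS coefficients.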
Electronic-Commerce Market Entry and Exit Decisions with Jump Risk under Uncertainty
Ching-hsien Chiu, National Sun Yat-sen University, and Lecturer, Far East College, Taiwan
This paper develops an entry and exit decision model for the e-commerce market with jump risk for a company under uncertainty. Value-matching and smooth-pasting conditions are used to assess the entry and exit thresholds for the two decisions, and a sensitivity analysis is used to simulate the effect of the parameters on those thresholds. For the entry threshold, the positively correlated factors are price volatility, the risk-free interest rate, operation cost, and investment cost; the negatively correlated factor is jump risk. For the exit threshold, the positively correlated factors are operation cost and the risk-free interest rate; the negatively correlated factors are exit cost and price volatility. Some important implications for investment and disinvestment in an e-commerce project are provided. Electronic commerce is trade based upon products and services that are marketed, contracted, and paid for over the web. Consequently, electronic commerce requires investment in computer systems, marketing, logistics, and payments (Bergendahl, 2005). Electronic-commerce (EC) investment has been growing rapidly since 1995. The Organization for Economic Co-operation and Development reported that total EC sales in the United States would reach US$1,000 billion per annum between 2003 and 2005, an increase of 3,854% from a predicted US$26 billion per annum in 1996–1997 (National Office for the Information Economy, 2000). Concerning global e-commerce statistics, at the end of 2004 the United States had almost 186 million web users, and worldwide the number of users exceeded 945 million (ET Forecasts, 2005). Furthermore, corresponding sales revenues in the Asia-Pacific were estimated to grow from US$6.8 billion in 2000 to US$14 billion in 2001 (Boston Consulting Group, 2001). In recent studies of e-commerce investment, the real options approach has often been used, for the following reasons.
(1) The timing of an investment in e-commerce is flexible: the project can be delayed, shut down, restarted, or abandoned. A delay in investment gives an opportunity to obtain a more developed information technology system as well as more information about market conditions; on the other hand, a delay may result in a loss of cash flows to competitors. (2) The expected stream of net profits for an e-commerce project has a high degree of uncertainty. (3) E-commerce investments are irreversible or partly irreversible. (4) The firm can wait for new information regarding prices, costs, and other market conditions before investing (Dixit and Pindyck, 1995; Bergendahl, 2005). Several studies have examined e-commerce investment. Whelan and McGrath (2002) present the total life-cycle cost of an e-commerce investment and argue that an EC cost taxonomy covering both tangible and intangible costs should be applied throughout the various phases of the project cycle. Ferguson et al. (2005) divide electronic commerce investment project announcements into innovative and noninnovative to determine whether there are excess returns associated with these types of announcements. They find that noninnovative investments appear more valuable to the firm than innovative investments; on average, the market expects innovative investments to earn a return commensurate with their risk. Some important questions of e-commerce evaluation were noted by Doherty and McAulay (2002): (1) Is there a simple EC evaluation framework? (2) What are the costs, benefits, risks, and flexibility of an EC investment? (3) Are there specific tools, techniques, or approaches for evaluating such investments? Bergendahl (2005) views electronic commerce as an investment opportunity and develops conditions for beneficial investments in e-commerce, suggesting that a multi-period investment model be developed in order to handle future risk.
One new way of thinking is to formulate the uncertain sales growth in terms of a binomial lattice and reformulate the model as one of real options. Conventional discounted cash flow (DCF) techniques, including net present value (NPV) and internal rate of return (IRR), are not widely used in IT investment decision making, partly due to their failure to capture management flexibility (Bacon, 1992; Weill, 1993; Dixit and Pindyck, 1995; Abel et al., 1996). Bicher and Ahnefeld (2002) applied the real option method to private equity investments. Li and Johnstone (2002) applied real options theory to strategic IT investment. Entry and exit decisions were introduced in the Marshallian framework (Oi, 1962). However, the Marshallian approach ignores the uncertainty of investment and so cannot help managers handle that risk. Entry into and exit from a market can now be modeled using real options: borrowing from financial option theory, the entry decision can be modeled as a call option and the exit decision as a put option (McDonald and Siegel, 1985, 1986; Dixit and Pindyck, 1994). The aim of this paper is to derive the optimal entry and exit strategy for e-commerce in the presence of jump risk. We incorporate a Poisson process into the optimal entry and exit decisions in the e-commerce market. The optimal solution consists of two thresholds that trigger the e-commerce investment decisions: when the price hits the entry threshold it becomes optimal to invest, while at the exit threshold it becomes optimal to disinvest from the e-commerce project. The thresholds cannot be solved in closed form, but we obtain equations that can be used to solve for them numerically.
The structure of the paper is as follows. Section 2 establishes the entry and exit decision model with jump risk for an e-commerce investment project. In Section 3, a sensitivity analysis is used to simulate the entry and exit decisions with jump risk for a company entering and exiting the e-commerce market under uncertainty. In Section 4, some important implications for investment and disinvestment in an e-commerce project are provided. We assume that an idle firm will enter the e-commerce market, that an active firm will exit from it, and that the e-commerce investments are irreversible.
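The paper's jump-risk thresholds have no closed form, but the comparative statics can be illustrated with the standard no-jump benchmark: the Dixit and Pindyck (1994) perpetual investment option, whose entry threshold is V* = β₁/(β₁−1)·I. The sketch below assumes a geometric Brownian motion with risk-free rate r, payout rate δ, and volatility σ; the parameter values are illustrative only and are not taken from the paper.

```python
import math

def entry_threshold(r, delta, sigma, investment_cost):
    # beta1 is the positive root of the fundamental quadratic
    # 0.5*sigma^2*b*(b-1) + (r - delta)*b - r = 0  (Dixit and Pindyck, 1994).
    a = 0.5 - (r - delta) / sigma**2
    beta1 = a + math.sqrt(a**2 + 2.0 * r / sigma**2)
    # Invest when project value hits V* = beta1/(beta1-1) * I, with V* > I.
    return beta1 / (beta1 - 1.0) * investment_cost

# Higher price volatility raises the entry threshold, consistent with
# volatility being a positive correlation factor for entry.
low_vol = entry_threshold(0.05, 0.04, 0.2, 1.0)
high_vol = entry_threshold(0.05, 0.04, 0.4, 1.0)
print(low_vol, high_vol)
```

Adding a Poisson jump component changes the quadratic and, as the paper argues, forces a numerical solution of the value-matching and smooth-pasting conditions instead.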
4C Diamond Model: Performance Appraisal System Mechanism
Yi-che Chen, Yuan-Ze University, Taiwan, R.O.C.
Pi-feng Hsieh, Tak-Ming College and Yuan-Ze University, Taiwan, R.O.C.
This study focuses on performance appraisal for S&T development. Themes regarding organizational performance appraisal have been examined via literature review. The concept of the 4C Diamond Model was used to conduct a diagnosis and comparison between the major systems in Taiwan and the U.S., namely the Accreditation System for R&D Organizations from the National Science Council in Taiwan, and the Government Performance and Results Act in the U.S. This study concluded that an effective appraisal system should include a clear reason for the appraisal, a feedback linkage, and a functional operation process incorporated within a comprehensive index system. Meanwhile, appraisal results must be disclosed to ensure fairness and credibility, especially when public funds are being used. Since the 20th century, science and technology have advanced rapidly, and all countries now regard scientific and technological developments as major sources of national competitiveness. In the course of these developments, all countries are making strong efforts to establish technology policies and to allocate research resources in the hope of promoting economic development and social progress. This is to be achieved through effective planning and execution of scientific and technological development strategies, and countries all over the world attach enormous importance to the dynamic performance and execution of those strategies. Taiwan has invested considerable R&D funds in scientific and technological development (see Fig. 1). Taking public agencies as an example, the government has set a scientific and technological annual development budget of tens of billions of dollars (in 2001, excluding the budget for defense technology development, the total budget for all central government agencies and councils amounted to $51,647 million, of which the National Science Council (NSC) contributed $18,709 million) (NSC, 2002).
Through compiling, entrusting, subsidizing, and cooperating during the preliminary stages of creating annual budgets, funds are provided to governmental research organizations, consortia, academic organizations, and industrial R&D departments. All of these organizations are encouraged to engage in technological innovation and applied research. Consortia received the most funds, followed by academic research units such as Academia Sinica, and universities. The funds are sourced from the annual governmental budget or from the National Science Council. With budgets now limited, the R&D resources of the government are being stretched to their limits. The performance and capabilities of R&D institutions are therefore a key concern in the allocation of these resources, and the technological policies of the government are expected to clarify the required resource inputs and associated benefit outputs, thus enabling more rational and efficient resource allocation. Furthermore, with Taiwan’s entry into the WTO, foreign enterprises can undertake S&T projects in Taiwan under the Government Procurement Law. Consequently, the Executive Yuan established the “Operational Guidelines for Governmental R&D Procurement” (NSC, 2001) in accordance with the decision that “R&D should establish appropriate procurement rules” made by the Sixth National Science and Technology Conference. Consistent with Item 1, Article 6 of the Science and Technology Basic Law (1999), which provides that “science and technology R&D which is subsidized, entrusted or financed by the government should identify the research objects using a selection or investigation procedure, which should enclose main content,” every governmental organization must establish a science and technology performance appraisal discussion committee to conduct an objective and scrupulous appraisal.
Therefore, a clear performance appraisal model that presents the state of scientific and technological development in Taiwan in a specific, definite, and objective way is essential. Performance appraisal is part of the control function in management activities. This function has both passive and positive meanings: the passive meaning is to understand the progress and condition of the appraised objective and to take corrective measures in cases of divergence; the positive meaning is to influence and guide decisions and actions so that both are consistent with the organization’s targets, a practice known as “goal congruence.” Moreover, methods of measuring organizational performance can be based on the principles of effectiveness and efficiency. Effectiveness is the degree to which the set target is fulfilled; efficiency describes the proper utilization of production resources, ensuring that inputs and outputs reach Pareto optimality. Transparency and accountability are the main factors considered by governmental administration in relation to S&T. S&T policy is a link in the government administration chain and cannot be separated from the categories of transparency and accountability. Recently, the main direction in improving accountability has been the disclosure of service efforts and accomplishments (SEA) reports. SEA reports help the public understand government achievements and promote the establishment of an accountability system (McTavish, 1999). Since the end of the 1990s, SEA has been instrumental in promoting government accountability with information technology (Welth, 2001).
Regarding the increasing use of outsourcing by government, Robert and Laura (1998) believed that, to improve administrative efficiency and avoid failures of accountability, four steps should be taken: (1) affirming the actors; (2) affirming the functions; (3) affirming the projects; and (4) affirming the guarantee clauses. What are the connotations of accountability?
Parametric and Nonparametric Evaluation of USDA Hog Price Forecast
Dr. Sung Chul No, Southern University and A&M, Baton Rouge, LA
This study provides a comprehensive evaluation of USDA hog price forecasts and proposes a comparable time-series forecasting model for US hog prices based on an input-output price relationship. Having conducted various parametric and nonparametric forecast evaluation tests, the study found a trivariate VAR model consisting of corn, soybean, and hog prices comparable to the USDA forecasting model. The empirical evidence thus suggests that market participants in general, and hog producers in particular, who seek accurate forecasts and at the same time a prediction of general hog price movements would be better off supplementing the USDA hog price forecasts with time-series forecasts. As the structure of agricultural industries continues to change, the unwelcome reality of record low hog prices has forced many small farmers out of business. The cyclical nature of agricultural production, sometimes caused by uncontrollable factors, has been minimized to some degree via the confined, climate-controlled, airtight facilities used by most vertically coordinated firms. However, in the past five years, droughts in the Midwestern and Southeastern regions of the country have driven corn prices up, and many small non-contract producers who did not have resource-providing agreements found it difficult to survive, receiving less than 30 dollars per hundred pounds for hogs during 1998, 1999, and 2002. To minimize this problem, several marketing strategies, such as forward contracting, revenue insurance, hedging, options, diversified enterprises, and extensive research, have been proposed and implemented by economists. Another pivotal component in improving marketing decisions for future hog production is the use of accurate public hog price forecasts.
The Economic Research Service of the USDA provides a quarterly forecast in its monthly “Livestock, Dairy and Poultry Situation and Outlook.” However, it is critically important that market participants in general, and hog producers in particular, understand the uncertainty surrounding USDA hog price forecasts as well as any systematic biases they might contain. A preliminary study indicated that the USDA one-quarter-ahead price forecasts are inefficient in that they are not minimum variance forecasts. Thus, this paper proposes a simple yet comparable time-series forecasting model for quarterly USDA hog prices based on an input-output price relationship, after first examining the USDA price forecasts using parametric and nonparametric evaluation criteria. The quarterly hog prices ($/cwt) were collected from the “Red Meats Yearbook,” which reports the U.S. average market prices of gilts and barrows from 1979 to 2004. U.S. average corn and soybean prices ($/bushel) were collected from the National Agricultural Statistics Service. The USDA’s hog price forecasts were obtained from various monthly issues of “Livestock, Dairy and Poultry Situation and Outlook” (ERS/USDA); the midpoints of the USDA’s hog price forecast ranges are used for evaluating the quarterly forecasts. The USDA’s LDPSO report is published monthly, released between the 22nd and the 30th of each month from January 2000 to January 2005 and between the 15th and the 19th of each month from February 2005 to June 2005. The USDA price forecasts are collected from the February, May, August, and November reports for each calendar quarter. For instance, the forecasted price for the first calendar quarter is collected from the November report of the previous year to obtain a complete one-quarter-ahead forecast.
The ex-post sample period for the forecasting comparison is from the first quarter of 2000 to the second quarter of 2005, resulting in 22 quarterly observations of one-step-ahead price forecasts and realized values. The first objective of this study is to provide a comprehensive evaluation of the USDA hog price forecasts. To achieve this, the USDA’s forecasts are compared to those of vector autoregressive (VAR) models. Feed, consisting mostly of corn and some soybeans, accounts for approximately 66% of the total production costs of hogs sold to market weighing 220 to 250 pounds (ERS/USDA, 2004). Martin (1994) suggested that these input prices are major factors contributing to changes in the output prices of hogs and that quality information on input prices is useful for predicting changes in hog prices. Thus, corn and soybean prices are used in this research to construct two bivariate VARs: one consisting of corn and hog prices and the other of soybean and hog prices. In addition, a trivariate VAR of corn, soybean, and hog prices is constructed for forecasting comparisons. To evaluate the performance of the USDA forecasting model for hog prices, we adopt the various parametric and nonparametric validation techniques that Sanders and Manfredo (2003) utilized in their recent publication. The parametric validation methods are based on certain assumptions regarding the probability distribution of the estimators. The RMSE provides a measure of the average error in the same units as the actual observations, whereas the MAPE is a unit-invariant measure; for both, smaller values indicate better forecasting ability. The Theil inequality coefficient (TIC) is an extended version of the RMSE: it normalizes the RMSE by dividing it by the volatility of the forecast and actual prices, and it lies between zero and one, where zero indicates a perfect fit. Like the MAPE, the TIC is a unit-free measure.
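The three accuracy measures described above can be stated compactly. The sketch below is a generic implementation (not the study's code); the TIC is written in the common Theil U₁ form, with the RMSE in the numerator and the sum of the root mean squares of the actual and forecast series in the denominator, which keeps it between zero and one.

```python
import numpy as np

def rmse(actual, forecast):
    # Root mean squared error, in the same units as the series.
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((a - f) ** 2)))

def mape(actual, forecast):
    # Mean absolute percentage error, unit invariant.
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((a - f) / a)) * 100.0)

def tic(actual, forecast):
    # Theil inequality coefficient: 0 = perfect fit, 1 = worst case.
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    num = np.sqrt(np.mean((a - f) ** 2))
    den = np.sqrt(np.mean(a ** 2)) + np.sqrt(np.mean(f ** 2))
    return float(num / den)
```

In a comparison like the one described, each measure would be computed for the USDA forecasts and for each VAR's one-step-ahead forecasts over the same 22 quarters.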
Diebold and Lopez (1998) defined an optimal forecast as one that is unbiased and efficient. The third type of parametric testing procedure considered in this paper is a forecast encompassing test.
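One common form of the encompassing test, used by Sanders and Manfredo (2003) following Harvey, Leybourne, and Newbold, regresses the preferred forecast's error on the difference between the two competing errors, e₁ₜ = α + λ(e₁ₜ − e₂ₜ) + εₜ; a λ near zero means the preferred forecast encompasses the rival. The sketch below is an illustrative least-squares version that omits the inference step (standard errors and the t-test on λ), and the example data are hypothetical.

```python
import numpy as np

def encompassing_lambda(actual, preferred, rival):
    # Regress e1 on (e1 - e2) with an intercept; lambda near 0 means the
    # preferred forecast already encompasses the rival forecast.
    e1 = np.asarray(actual, float) - np.asarray(preferred, float)
    e2 = np.asarray(actual, float) - np.asarray(rival, float)
    X = np.column_stack([np.ones_like(e1), e1 - e2])
    coef, *_ = np.linalg.lstsq(X, e1, rcond=None)
    return float(coef[1])

# A perfect preferred forecast yields lambda = 0: nothing to gain from the rival.
actual = np.array([1.0, 2.0, 3.0, 4.0])
rival = actual + np.array([0.5, -0.3, 0.2, -0.1])
print(encompassing_lambda(actual, actual, rival))
```

In practice the test statistic on λ would be computed with heteroskedasticity-robust or HAC standard errors, since forecast errors are often serially correlated.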
Understanding the Violent Offender in the Workplace
Dr. Bella L. Galperin, The University of Tampa, Tampa, FL
Dr. Joanne D. Leck, University of Ottawa, Ottawa, Canada
Violence in the workplace is increasing at an alarming rate. Despite the growing prevalence of workplace violence, little is known about the perpetrators of these violent acts. This exploratory study attempts to assess how many potential perpetrators are in our workforce and what organizational characteristics, such as human resources practices and procedures, foster the creation of the violent offender. Future research directions and practical implications are also discussed. Workplace violence has become an important issue for organizations today. According to a report published by the U.S. Department of Justice, approximately one thousand employees are murdered yearly while performing their work duties (Bureau of Labor Statistics, 1999). Homicide has become the leading cause of on-the-job death for women, and the second leading cause for men. Although the great majority of workers who are killed on the job die during the commission of another offense, such as robbery, death resulting from violent acts perpetrated by a fellow colleague is the next most common cause. Other studies on workplace violence yield similarly alarming findings (see Atkinson, 2000). For instance, the Northwestern National Life Insurance Company found that 2.5 percent of respondents had been physically attacked on the job at least once. The American Management Association found that 52 percent of respondents reported experiencing at least one incident or threat of violence in the workplace over a three-year period. The Society for Human Resource Management found that 48 percent of surveyed employees experienced a violent incident in the workplace over a two-year period. These incidents included verbal threats (39 percent), pushing and shoving (22 percent), and fist fights (14 percent). Despite the prevalence of violence in the workplace, little is known about the perpetrators of these violent acts.
While a number of books and articles on workplace violence have profiled the violent offender as “a Caucasian male, in his 40s or 50s, divorced or separated, a loner with an interest in guns” (Braverman, 1999), these profiles provide only a limited understanding of the factors that encourage employees to engage in violent behaviors. Environmental factors, such as the organizational context, can also contribute to violence. The primary objective of this exploratory study is to develop a greater understanding of the potentially violent offender. Unlike the majority of the literature on workplace violence, which focuses on the personal characteristics of the offender, this study also examines the organizational context that fosters the creation of the violent offender. It is argued that organizational factors, such as human resource practices and procedures, can play an important role in contributing to the creation of the violent offender. Future research directions are also highlighted. Finally, practical implications are discussed. According to the Occupational Safety and Health Administration (OSHA) and the Long Island Coalition for Workplace Violence (1996, p. 1), workplace violence includes “the commission of proscribed criminal acts of coercive behavior which occurs in the work setting. It includes, but is not limited to homicides, forcible sex offences, kidnapping, assault, robbery, menacing, reckless endangerment, harassment, and disorderly conduct.” There are many other terms used to describe worker mistreatment, such as bullying (Leck, 2003), aggression (Neuman & Baron, 1998), abuse (Keashly, Trott & MacLean, 1994), incivility (Cortina, Magley, Williams & Langhout), deviance (Galperin, 2005; Robinson & Bennett, 1995), antisocial behavior (Giacalone & Greenberg, 1997), harassment (Einarsen, 1999), and dysfunctional behavior (Griffin, O’Leary-Kelly & Collins, 1998). Violence differs from these other forms of mistreatment in two important ways.
First, an employee usually commits a violent act only once (e.g., shoots the boss). Second, the act is physical in nature. Because of the severity of violence, it is important that ‘loose cannons’ be identified (and managed) before the act occurs. The literature on profiling the violent offender is discussed below. Many researchers and practitioners have attempted to profile likely offenders based on the characteristics of past offenders and retrospective analysis (Braverman, 1999; DiLorenzo & Carroll, 1995; Paetzold, 1998). Based on an extensive literature review, the profile of the “typical” violent offender is male, Caucasian, between 40 and 50 years old, and divorced or separated (Braverman, 1999). Empirical research supports some of these contentions. Meta-analytic results suggest that men are more likely than women to engage in overt physical aggression (Eagly & Steffen, 1986). Similarly, a number of more recent studies on violence have found that men both instigate and receive more physical aggression than women (Baron & Richardson, 1994; Bjorkqvist, 1994). The incidence of violent behavior also decreases with age (Stets & Straus, 1989). Sugarman and Hotaling (1989) reported higher rates of involvement in interpersonal violence among younger individuals than among older individuals. More recently, Harris (1996) found that age was negatively correlated with aggressiveness. These findings also suggest that older people are less aggressive than younger people. With respect to psychological and attitudinal factors, the violent offender is anxious, depressed, irrational, suffering from bipolar disorder, paranoid, stressed, and delusional.
Studies in Strategic Management in Taiwan
Dr. Chiou-Hua Lin, Ming Chuan University, Taiwan
Yuan-Kai Chi, Ming Chuan University, Taiwan
Competition among corporations is increasingly complicated and multi-faceted, which, together with the impact of drastic changes in the macro environment, is forcing companies to focus on being able not only to foresee such changes but also to respond to them very quickly. This process is the core of strategic management, and it goes beyond merely managing a company’s internal efficiency. Markets today are complicated and fast changing, as both globalization and high-tech development have a growing impact on industry. Corporations can no longer rely on one or two specialists to formulate strategy and implement tactics. Instead, comprehensive strategic planning and its execution drive the long-term success of the corporation. No corporation can any longer rely on a single competitive strength to maintain its lead in the market in perpetuity, as its competitors will continue to seek new ways to overtake it. Instead, by using the concept of hyper-competition, a corporation can apply different strategies in response to different competitive environments to gain competitive strength. Strategic management can be divided into four major areas, as shown in Fig 1-1: strategic management factors (strategist, mission, objective), competitive strength analysis, strategy layout, and strategy implementation and control. Each area, in turn, involves a series of steps, starting with a strategist defining a corporation’s mission and main objective, followed by external and internal analysis, then by formulating corporate strategy, business strategy, and functional strategy, and finally by the design of an implementation and control system. This paper takes the case of Hon Hai Precision Industry Co., Ltd. ("Foxconn"), a famous multinational corporation in Taiwan, to analyze its strategic management process. The key factors of strategic management are the strategist, the mission, and the objective. A strategist is the strategy formulator of a corporation.
While in small companies the strategist may be a single individual, in most companies there will be several members on a strategic management team, the major members being the board chairman, high-level managers, and key planning personnel. In most cases, the high-level managers formulate the strategy, while the board and planning staff function as advisors. A corporation’s mission (or mission statement) proclaims the reason for its existence, its role in society, and the industrial environment within which it chooses to operate. An objective clearly outlines those things the corporation must do to fulfill that mission. A corporation’s mission and objective are closely related, with both providing guidance as to the corporation’s overall direction. The strategist, however, is the decision maker. Remember that the concept of a strategist may involve a team of individuals, each of whom plays a different role in the strategic management process. Internal analysis outlines both the strengths of a corporation and its weaknesses. A well-managed company will provide value to customers and gain in strength against its competitors. Such analysis is of particular importance to resource-based and competition-based views of the corporation, and to research and development. Ways to analyze the internal strengths and weaknesses of a corporation include: Critical success factors: First, each industry has its own success factors, determined by such elements as the product portfolio, inventory turnover, sales promotion, and pricing. Next, external macro-environmental factors influence a company’s success. Lastly, internal organizational development may impact other success factors, such as human resource management. Value chain: Value chain analysis observes the individual activities inside a business that contribute to the entire value generated by a company and to its final financial performance.
This analysis includes such primary activities as inbound logistics, operations, outbound logistics, and services, and such supportive activities as human resources management, technology development, and the firm’s infrastructure. External analysis: External environment analysis attempts to understand potential threats and opportunities, in order to facilitate a corporation’s ongoing response to changing conditions. External analysis includes: Industrial competition analysis: M.E. Porter described a “Five Forces Model” (1985), in which the dimensions of industrial competition include existing competitors, potential entrants, substitutes, buyers, and suppliers, as shown in Fig 2-1. Market competition analysis: including consumer market and consumer behavior analysis, corporate market and corporate behavior analysis, market information analysis, and competitor analysis. Supply dimension analysis: each corporation lies somewhere along a value chain for its industry. There are segments both upstream and downstream of that position that can affect a corporation either positively or negatively. At one extreme is Toyota of Japan’s lean production method, which includes joint improvement of technology, joint regulation of a win-win labor-capital relationship, joint formation of JIT manufacturing, and joint upgrading of manufacturing capacity, all focusing on consumer-oriented cost saving, quality assurance, and customized design, with flexible value-adding opportunities available and a global supply chain objective. Strategy deployment includes both developing a strategy and selecting a strategy. Overall strategy includes corporate strategy, business strategy, and functional strategy.
The Mediating Effects of Leader-member Exchange Quality to Influence the Relationships between Paternalistic Leadership and Organizational Citizenship Behaviors
Dr. Shing-Ko Liang, National Chiao Tung University, Taiwan
Hsiao-Chi Ling, National Chiao Tung University, Taiwan
Sung-Yi Hsieh, National Chiao Tung University, Taiwan
Military leaders play a very important role in their organizations. However, past research on military leadership in Taiwan has mainly verified theories developed by western countries, without taking cultural differences into account. Our study uses the model of paternalistic leadership, an indigenous Chinese leadership style, to examine the relationships between Taiwan’s military leadership behaviors and organizational citizenship behaviors (OCBs). Additionally, this study adopts leader-member exchange quality to explore the mediating effect between leadership behavior and leadership effectiveness. Our study took 215 military leaders and 430 subordinates from 21 military units in Taiwan as subjects and found that: (1) benevolent leadership and moral leadership have a positive effect on OCBs, whereas authoritarian leadership has a negative effect on OCBs; and (2) the relationship between leaders’ benevolent and authoritarian leadership and OCBs is mediated by leader-member exchange quality. Directions for further research and implications are discussed in the conclusion. For a long time, in both academic circles and industry, “leadership” has been an important topic. But since leadership is a very complicated phenomenon, we cannot grasp its nature through conjecture. In the 20th century, leadership finally became a subject of scientific research (Cheng & Huang, 2000). However, despite the abundance of research on the theory of business leadership, there is a lack of research on the effectiveness of military leadership, even though there are vast differences between the two in terms of structure, nature, content, and culture (Cheng & Zhuang, 1981; Cheng, 1985a; Ling, 2001). Therefore, this research aims to construct a fitting leadership model for Taiwan’s military organizations and proposes a model of effective military leadership. Taiwan’s military leadership is different from that of western countries. 
Leaders’ behaviors that are effective in some cultures, regions, or countries are not effective in others (Hofstede, 1980). In order to understand the leadership of Chinese organizations, some researchers, following Silin (1976), have adopted an indigenous approach to the study of middle- and top-rank leaders of Chinese family-owned businesses in Hong Kong, Indonesia, Singapore, and Taiwan (such as Cheng, 1995a; Redding, 1990). They pointed out that the leaders of Chinese business organizations had distinctive characteristics: they exercised strong authority, but also cared about and understood their employees and displayed highly individual moral principles. Such a leadership style is found not only in family businesses but also in the public sector and in other organizations. Researchers call this “paternalistic leadership” (PL) (Westwood & Chan, 1992; Farh & Cheng, 2000; Pye, 1985; Cheng, Chou, & Fang, 2000). The researchers therefore planned to explore the effectiveness of paternalistic leadership. What is effective leadership? In this research, organizational citizenship behavior (OCB) will act as an index. Organizational citizenship behavior refers to voluntary and self-motivated actions by members. Such actions are not formal or necessary behaviors directly requested by the organization, but they can help an organization reach its goals. If a military organization wants to enhance administrative efficiency and strengthen training effectiveness, it is important that the members not only do their best to complete their “on-duty” training but also show “altruistic” behaviors when they are “off-duty.” Therefore, the research will explore the effects of the paternalistic leadership model on the organizational citizenship behaviors of subordinates. However, through what kind of mechanism do leadership behaviors affect the organizational citizenship behaviors of subordinates? 
In this research, leader-member exchange quality serves as the mediating variable between leadership behaviors and organizational citizenship behaviors. The study will observe and compare the mediating effects of the variables used to measure leader-member exchange quality, such as trust and satisfaction, on the relationship between leadership behaviors and organizational citizenship behaviors, and will then describe the process by which leadership exerts its effects. This research has three goals: 1) to understand the effects of paternalistic leadership on the organizational citizenship behaviors of subordinates in military organizations; 2) to explore the relationship between paternalistic leadership and leader-member exchange quality; and 3) to observe the mediating effects of leadership relationship quality on paternalistic leadership and organizational citizenship behaviors. Paternalistic leadership is a uniquely Chinese mode of leadership. It is characterized by fatherly benevolence, authority, and moral selflessness in a patriarchal environment. There is scant research on paternalistic leadership and its effectiveness. However, all of the research on paternalistic leadership contrasts the Oriental culture, which emphasizes clans and privileges, with the Western culture, which emphasizes individualism and universalism. The conception of leadership constructed by Western scholars is therefore not directly applicable to China, so it is necessary to find another conception of leadership that fits traditional Oriental culture. The model of paternalistic leadership can meet this requirement (Cheng, 1995a; Redding, 1990; Silin, 1976). There are three types of paternalistic leadership: “authoritarian,” “benevolent,” and “moral.” These leadership styles originated in Chinese traditional culture. The authority to control subordinates originated from laws, art and power, official Confucianism, and three thousand years of imperial history. 
Benevolent leadership, caring for subordinates, is traceable to the monarchy and to fatherly obligations,
The Influence of Need-for-Uniqueness on Loss Aversion and Framing Effect
Dr. Chien-Huang Lin, National Central University, Taiwan
Li-Huei Wang, National Central University & Chin Min Institute of Technology, Taiwan
This research explores the influence of need for uniqueness (NFU) on loss aversion and the framing effect. Two studies were done to verify that high-NFU people are less likely to choose popular options and are less susceptible to loss aversion and the framing effect. In addition, we also support the finding of Simonson and Nowlis (2000) that the combination of explanations and high NFU generates more unconventional choices. Three factors (careless, anticonformity, and defense) were extracted from the NFU scale developed by Snyder and Fromkin (1977). The empirical work presented here shows that, without providing reasons, high-careless people are less likely to exhibit loss aversion and are less susceptible to the framing effect than high-anticonformity people. Conversely, when people were asked to explain their decisions, high-anticonformity people were less likely to exhibit loss aversion and less susceptible to the framing effect than high-careless people. Prior work on the effect of the social environment on consumer decision making has emphasized the tendency of consumers to conform to norms (Burnkrant and Cousineau, 1975; Cialdini and Trost, 1998; Schiffman and Kanuk, 1994). However, consumers considering unconventional, minority options recognize that such choices, and the reasons that can support them, deviate from the norms. Many social psychologists (Brewer, 1991; Snyder and Fromkin, 1977) have demonstrated people’s desire for distinctiveness and uniqueness, but few studies have explored the significant effects that the need for uniqueness has on choice behavior. Simonson and Nowlis (2000) indicated that when consumers are encouraged to explain their decisions and are not concerned about others’ evaluations, the need for uniqueness can play an important role and a minority option might become the majority choice. 
According to the theory of uniqueness, high-uniqueness consumers, as compared to low-uniqueness consumers, should be less responsive to conformity pressures (Pepinsky, 1961). Furthermore, high-uniqueness consumers are more willing to express their uniqueness behaviorally and to risk social disapproval. The Need for Uniqueness (NFU) scale developed by Snyder and Fromkin (1977) has been used widely. In Snyder and Fromkin’s study, “a lack of concern regarding others’ reactions to one’s different ideas, actions, etc.” and “a desire to not always follow the rules” are two constructs of NFU. Both factors indicate a high need for uniqueness, but for different reasons. The former type of person is confident in his or her own beliefs or preferences and thus does not care about others’ criticisms. The latter intends to act against norms and likes to break the rules. This implies that anticonformity people anticipate being evaluated by others and then act against norms on purpose. The differences between the two characteristics are seldom mentioned in the decision-making field. Simonson and Nowlis (2000) showed that when high-NFU people need to explain their decisions, they make more unconventional choices. They demonstrated that when consumers need to explain their decisions, they might be biased in favor of unconventional reasons, particularly those consumers who are predisposed to express their uniqueness. However, they did not further distinguish the two types of need for uniqueness. Since people of the two types, those lacking concern for others’ reactions and those desiring to disobey the rules, are usually driven by different motivations, we believe that clarifying the mechanism will be very useful to marketers. Independence is a quality that can be perceived as a sign of strong character, convictions, and autonomy. People also appear to derive intrinsic satisfaction from the perception that they are unique, special, and separable from “the masses,” which is referred to as the “need for uniqueness” (Fromkin and Snyder, 1980; Snyder, 1992). 
The assumption of uniqueness theory is that although people at times conform, they do not favor similarity relative to others (Snyder and Fromkin, 1977). People who do not want to conform may exhibit a stronger need to maintain their uniqueness (Pepinsky, 1961). On the one hand, people try to conform to social norms, please others, and avoid criticism and rejection. On the other hand, social interactions may encourage dissension and deviation from norms, which fulfill a positive function in one’s self-image and public image (Brewer, 1991). Simonson and Nowlis (2000) demonstrated that anticipated evaluation by others tends to promote conformity and has a pro-norm effect on preferences and judgments (Zajonc, 1965). To measure the degree of individuals’ need for uniqueness, Snyder and Fromkin (1977) developed a “Need for Uniqueness” (NFU) scale. Snyder’s (1992) research showed that people with high NFU are more sensitive to the degree to which they are seen as similar to others and are more likely to display their special sense of self. Snyder (1977) described high-NFU individuals as characterized by independence, anticonformity, inventiveness, achievement, and self-esteem. Freedman and Doob (1968) proposed two kinds of deviance: independence and anticonformity. To further explore the internal structure of NFU, three factors were extracted through factor analysis (Snyder and Fromkin, 1977). One factor is defined as “a lack of concern regarding others’ reactions to one’s different ideas, actions, and so on” (hereafter, “careless”). Another is defined as “a person’s desire to not always follow rules” (hereafter, “anticonformity”). The last is defined as “a person’s willingness to publicly defend his or her beliefs” (hereafter, “defense”). We will not discuss the third factor at length here, since its construct is very clear. What we are interested in, and try to clarify, are the careless and anticonformity factors.
The Economics of Managing Information Networks
Dr. Mona Yousry, National University, CA
This paper develops the economics and the business model for a specialized wireless network solution for intelligent devices (IDs). This network utilizes software and hardware that must be put together so as to provide users a flexible, user-friendly environment. The research investigates a Network Applications Service Operator (NASO) that concentrates on interconnected IDs. Intelligent devices (IDs) include sensors, switches, appliances, etc., from a large variety of manufacturers. IDs are capable of receiving instructions and returning information. In some cases they may be able to process data and issue commands via intelligent agents. Each type and brand of ID can be characterized by its parameter categories and formats, supported communications protocols, medium, security support, and billing methods. In order to access the NASO, a user would enter through a server portal that is intelligent, secure, and provides the applications and network connections. We call this an ISAP, or Intelligent, Secure, Applications Portal. The research covers these areas: Tiered Intelligent Link Environment (“TILE”) software, Software-Defined Radio links (SDR), Orthogonal Frequency Division Multiplexing (OFDM), data management services via our Intelligent Secure Application Portal (ISAP), and network connections. The market for IDs is growing at an incredible rate, with 10 times as many IDs in use today as desktop PCs. This solution overcomes many problems in the embedded technology marketplace, namely the difficulty of networking and controlling IDs remotely, and of upgrades and maintenance. The technology in our research consists of tiered billing and a next-generation OSS suite, all of which are network and application aware. A critical factor in the production of our “TILE” OSS suite is Quality of Service (QoS), which includes security. Our “TILE” OSS is unique because there are currently no NASOs available for converged ID networks. 
This is perhaps the single most critical area of technical focus. Intelligent devices (IDs) include sensors, switches, appliances, etc., from a large variety of manufacturers. IDs are capable of receiving instructions and returning information, and in some cases they may be able to process data and issue commands via intelligent agents. Each type and brand of ID can be characterized by its parameter categories and formats, supported communications protocols, medium, and security support. At the simplest level, an ID network comprises a sensor attached to a low-cost, robust radio transmitter that sends signals to a local receiver or gateway connected to an outside network server. The server in this case runs our “TILE” software and controls the information coming from the sensor. This basic model can be expanded to include bi-directional communications if necessary. Once the basic components are in place, they can be utilized in any situation that requires a wireless ID solution. Utility sub-stations provide a good application for testing. Utilities are seeking to automate their sub-stations, allowing them to replace old manual relays with automated ones. They are also interested in monitoring various points, such as dissolved gases and “hot spots” within a power transformer. The research will supply them with the basic wireless network, software, secure server, and applications that will allow them to extend into any type of monitoring, control, data processing, or security that might be required in the future, all on a unified open platform. The important point to note here is that the same basic configuration applied to the utilities can be utilized in all types of ID networks, from medical to home. The system has been designed so that 90% of the functionality remains consistent and reusable. Only slight modifications to the user interface will be required to extend into additional markets. 
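The sensor-to-gateway-to-server architecture described above can be sketched in a few lines of code. The sketch below is purely illustrative: the class names, the threshold logic, and the transformer hot-spot example are our own assumptions, not part of the actual "TILE" software or any vendor API.

```python
# Hypothetical sketch of the minimal ID network described above:
# sensor -> radio/gateway -> server. All names are illustrative.

class Sensor:
    """An ID that reports a single measured value."""
    def __init__(self, sensor_id, read_fn):
        self.sensor_id = sensor_id
        self.read_fn = read_fn

    def transmit(self):
        # In a real deployment this message would travel over a
        # low-cost radio link rather than a direct function call.
        return {"id": self.sensor_id, "value": self.read_fn()}

class Gateway:
    """Local receiver that forwards sensor messages to the server."""
    def __init__(self, server):
        self.server = server

    def receive(self, message):
        self.server.ingest(message)

class Server:
    """Runs the monitoring logic: flags readings above a safe threshold."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.alerts = []

    def ingest(self, message):
        if message["value"] > self.threshold:
            self.alerts.append(message)

# Example: monitoring a transformer hot-spot temperature.
server = Server(threshold=90.0)
gateway = Gateway(server)
sensor = Sensor("transformer-7-hotspot", read_fn=lambda: 95.2)
gateway.receive(sensor.transmit())
print(server.alerts)  # one alert, since 95.2 exceeds the 90.0 threshold
```

Extending this toward the bi-directional model mentioned above would mean letting the server return commands through the gateway to the device; the one-way flow here is only the "simplest level" configuration.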
An altogether new application for this research would be tapping emerging markets such as remote control of oil and gas fields, micro-medical or implanted devices, automotive maintenance, and extensions of fault-tolerant telecommunication networks. Sensor devices represent a relatively well-established market, supported by large players whose products can be utilized by our research to address the entire range of ID needs. The research needs to develop and write the software code for TILE. In order to do so, we would need access to comprehensive methods for web enablement, client/server solutions, computer security, network engineering, wireless systems, power-line systems, and systems integration. The final aspect is to integrate the complete system so that it will function as the next-generation ASP, or what we call the ISAP. The ISAP allows users to design, build, control, monitor, and maintain all the applications, software, hardware, gateways, and data management required to support the ID network. The ISAP will offer the first OSS suite designed for ID networking. The ISAP ties the system together, linking sensors to radio to gateway to server to user. A. Create a solid foundation of specific, measurable, achievable, relevant, and time-based goals and objectives. B. Identify risk areas early in the process. Provide the basis for balanced, integrated, realistic, risk-adjusted plans and strategies that fulfill the client’s business objectives. The framework comprises the following process areas and activities: Requirements Management (analysis phase): System requirements are identified, allocated, and controlled to establish a baseline for software engineering and management use. Software plans, products, and activities are kept consistent with the system requirements allocated to software. Software Project Planning (analysis phase): Software estimates are documented for use in planning and tracking the software project.
Establishment and Termination of Unincorporated Partnership in the Turkish Law
Dr. Mustafa Can, Ankara, Turkey
Firms have to use their resources and tools effectively and productively and measure their performance correctly in order to face globalization in an increasingly competitive environment. In addition, they have to meet the requirements of rapidly changing technology, globalization, and marketing in order to gain a share of the world market and to satisfy the demands and profiles of their customers. Therefore, the law must be reconstructed on the basis of the definition of law; the characteristics of an unincorporated partnership; the rights of partners; the purposes of an unincorporated partnership; the firm name; who may be partners; the creation of an unincorporated partnership; the unincorporated partnership agreement; determining the existence of an unincorporated partnership; control; sharing profits and losses; sharing profits; contribution of property; unincorporated partnership property; assignment of a partner’s interest; the effect of dissolution; dissolution by act of the parties (agreement, expulsion, alienation of interest, withdrawal); dissolution by operation of law (death, bankruptcy, illegality); dissolution by decree of court (insanity, incapacity, impracticability, equitable circumstances); winding up the unincorporated partnership’s affairs; and distribution of assets, in all their aspects in Turkish law. This study is important for the European Union in understanding the Turkish partnership law system during the period of Turkey’s entry into the European Union. The simplest form of business enterprise is the unincorporated partnership. However, for numerous reasons, including the complexity of business transactions, the demands of competition, the need for capital and skill, and the desire for limitation of liability, persons cooperate in the form of associations (Ansay, 2002, 80). When two or more persons combine their capital and skills to achieve a common economic purpose, there exists an unincorporated partnership. Unincorporated partnerships are established by agreement. 
A codification of unincorporated partnership law is found in the Turkish Obligation Code, which is in effect in the Turkish Republic. After the proclamation of the Republic in 1923, radical reforms were introduced in legal matters, as in other spheres of social life in Turkey. For example, the adoption in 1926, with some minor alterations, of the Swiss Code of Obligations, which contains the law of contracts, torts, unjust enrichment, and unincorporated partnership, represented a profound change in the social life of Turkey (Guriz, 1987, 10). The unincorporated partnership today is a relatively common form of business organization. Legal historians tell us that partnerships, or organizations having some of their characteristics, existed as early as 2000 B.C. In England, where partnerships were particularly prevalent, the common law of partnerships was well developed by the time the American colonies were founded (Kogan, 1977, 12). Today, most countries have adopted this form of business organization. Businesspersons take into account the advantages and disadvantages of establishing and operating different types of business associations in choosing the most suitable form for their purposes. For example, a small group of persons can choose to form an unincorporated partnership rather than a corporation, or they can prefer to establish a general partnership. But, unlike an unincorporated partnership, a general partnership has legal personality under Turkish law. Business associations are of various types and can be classified by differing characteristics. They either have or do not have legal personality. An unincorporated partnership has no separate legal personality, whereas the general partnership, the limited partnership, the limited partnership in which capital is divided into shares, the corporation, and the partnership with limited liability (limited liability company) all have legal personality. 
The other type of business association that has legal personality is the cooperative. Those business associations or companies, such as the corporation and the limited company, differ significantly from unincorporated partnerships and are not discussed in this chapter; they are governed by the Turkish Commercial Code. Among these types of partnerships, only the unincorporated partnership is used frequently. Foundations and clubs have legal personality, but they cannot operate as business enterprises; in other words, they cannot operate a business enterprise as their main purpose in the Turkish legal system. An unincorporated partnership is regulated by the Code of Obligations (C.O. Art. 520-541). Legal provisions that govern unincorporated partnerships may also be applied to other business associations in situations where there is no particular code provision applicable to them. Similarly, if a business association does not have the characteristic elements of one of the associations described in the Commercial Code, it will be subject to the provisions governing the unincorporated partnership. An unincorporated partnership is a relationship established by the voluntary “association of two or more persons to carry on as co-owners a business for profit” (The Turkish Obligation Code (TOC) 520).
Guanxi with Government as a Source of Competitive Advantage in Mainland China
Fangtao Zou, Huazhong University of Science & Technology, Wuhan city, China
Dr. Yongqiang Gao, Huazhong University of Science & Technology, Wuhan city, China
Guanxi is considered a unique Chinese phenomenon and a product of China’s contemporary political and socio-economic systems. The resource-based view of the firm argues that the distinct resources or capabilities held by a company are the source of competitive advantage. Guanxi with government acts as a source of corporate competitive advantage in China because it is valuable, rare, nonsubstitutable, and imperfectly imitable. In order to build and maintain close guanxi with governments, three steps should be taken, namely finding a guanxi base or intermediary, taking actions to build guanxi, and taking actions to maintain guanxi. Guanxi is considered a unique Chinese phenomenon (The Economist, 8/4/2000) and a product of China’s contemporary political and socio-economic systems. Guanxi plays an important role in Chinese society. It may serve as a means of signaling trust and integrity in a system that lacks a strong background (Lovett et al., 1999). It may also constitute an informal network allowing individuals to bypass the inefficiencies inherent in a communist bureaucracy (Xin and Pearce, 1996). To date, the popular and academic literature has focused on the descriptive and instrumental (Xin and Pearce, 1996; Leung et al., 1996) and ethical dimensions (Dunfee and Warren, 2001; Fan, 2002) of guanxi. Guanxi has also been identified as one of the most important success factors in doing business in China (Yeung and Tung, 1996; Abramson and Ai, 1999); regarded as a source of sustainable competitive advantage (Tsang, 1998; Fock and Woo, 1998) and the glue that holds Chinese society together (Lovett et al., 1999); and linked with some western concepts such as relationship marketing (Ambler, 1994; Simmons and Munch, 1996). However, when we take guanxi as a source of competitive advantage, there are still some questions that need to be answered. For example, can guanxi in general act as a source of competitive advantage? Further, is guanxi in general rare and imperfectly imitable? 
Business guanxi is very popular in China and is very easily imitable, because it is built simply by giving money or quasi-money; therefore, this kind of guanxi cannot act as a source of competitive advantage. This paper does not take guanxi in general as a source of competitive advantage; instead, we discuss only the special guanxi with governments as a source of competitive advantage. Guanxi generally refers to relationships or social connections based on mutual interests and benefits (Yang, 1994). Specifically, it refers to a special type of relationship that bonds the exchange partners through reciprocal exchange of favors and mutual obligations (Alston, 1989; Luo, 1997). The concept of guanxi is tacitly embedded within Confucian philosophy, and it subtly defines the Chinese moral code (Fock and Woo, 1998). The Confucian social hierarchical theory, i.e., the five relationships of emperor-subject, father-son, husband-wife, brother-brother, and friend-friend, perpetuates its influence in modern China (Yau, 1994; King, 1993; Buttery and Leung, 1998). As a result of Confucian culture, the legal system of China is relatively weak while “guanxi” between individuals is very strong. In China, guanxi begins with a “guanxi base” (Tsang, 1998) entailing either a blood relationship or some social interconnection. The latter may involve having gone to the same school, lived in the same neighborhood, belonged to the same organization, and so on. The former is reflected in the tendency of Chinese in many countries to organize around family firms. Connections based on blood or kinship represent “ascribed” or inherited guanxi, while other connections must be cultivated or “achieved” (Yeung and Tung, 1996). However, a guanxi base alone is insufficient to establish strong guanxi. The individuals must interact, exchange favors, build trust and credibility, and work over time to establish and maintain the relationship. 
There are three different types of guanxi relationships: expressive ties, mixed ties, and instrumental ties (Hwang, 1987). Expressive ties are permanent and stable relationships based on egalitarian (i.e., need-based) norms. Yet these very personalized and affective ties are fixed and limited in scope (e.g., family and relatives). Instrumental ties are unstable and temporary. These relationships are based on the norm of equity (Hwang, 1987), and are impersonal and utilitarian. Mixed ties are in between, being somewhat permanent and stable (e.g., friends, same home town, same area, same school). They are personal and affective relationships between exchange partners. The major norms in mixed ties are the reciprocity of favors (renqing) and face saving (mianzi). It should be noted that the boundaries between mixed ties and instrumental ties are rather permeable (Hwang, 1987). That is, mixed ties can turn into instrumental ties, and vice versa. Similarly, Yin (2002) distinguished three types of guanxi: family guanxi, helper guanxi, and business guanxi. Family guanxi and helper guanxi are similar to what Hwang (1987) termed “expressive ties” and “instrumental ties,” respectively. Business guanxi refers to the process of finding a solution to a business, rather than a personal, problem by using “personal” connections. Although the title of this paper does not distinguish among types of guanxi, the guanxi with governments discussed here is mainly family guanxi and helper guanxi, especially the latter. Governments are generally considered the most important and powerful stakeholder of businesses.
Teaching and Learning in Internationalized MBA Programs
Dr. Linda E. Parry, Western Kentucky University, Bowling Green, KY
Dr. Robert Wharton, University of South Dakota, Vermillion, SD
Many U.S. MBA programs now recruit heavily from overseas, and international students are now a major part of the student body at several schools. These students bring with them many intellectual and creative assets, along with the financial support they bring their institutions. However, they also arrive with cultural predispositions that are often quite different from those of their U.S. schoolmates and instructors. This study focuses specifically on possible differences in locus of control, tolerance for ambiguity, and work motivation. Surveying seventy-one MBA students, we find that there are differences in tolerance for ambiguity among Asian students. International students also reported less satisfaction with their MBA experience. In the early 1990s the number of graduate business schools across the U.S. grew. There were several reasons for this growth: MBA students brought in more tuition; MBA programs attracted the attention of the business and academic community; and MBA programs attracted faculty. However, as popular as the MBA became with administrators, the number of student applications began to decline. This forced universities to become more creative in their methods of attracting MBA students. At the same time there was a growing interest in globalization and in preparing students for a global marketplace. Many universities began to look outside their national or regional territories and concentrated on encouraging students from other countries to come to their institutions to pursue an MBA degree. As a result, most MBA programs in the U.S. recruited a substantial number of international students. Culture is the context within which we live and work. It extends beyond individual differences, beyond family patterns, beyond organizational climate. Culture is “a set of attitudes, values, beliefs, and behaviors shared by a group of people, but different for each individual, communicated from one generation to the next” (Matsumoto, 1996, p. 16). 
Although modern cultures are not monolithic, numerous studies have shown that culture influences individual behavior and beliefs. As international students began enrolling in MBA programs in the U.S., they brought with them their diverse cultural heritage. As a result, one would anticipate that there might be learning differences among the groups. However, there have been few empirical studies that concentrate on the differences between international and U.S. students, even though these differences could be important for student success. In this study we focus on three factors that have been shown to impact student learning: locus of control, tolerance for ambiguity, and work motivation. Our goal is to observe whether there are differences between international and U.S. graduate students and whether these differences influence performance and satisfaction with their MBA program. Locus of control is a psychological construct used to identify whether a person feels in control of the external environment. Many studies have been conducted to measure locus of control (James, 1957; Rotter, Liverant, & Crowe, 1961). However, these measures were closely related to a need for social desirability (James, 1957). In 1966 Rotter posited that individuals have a strong internal locus of control when they perceive that a specific event has occurred as a direct result of their personal actions. Individuals with a strong external locus of control perceive that events are the result of luck, powerful others, or some other forces that have nothing to do with their own personal actions. Numerous studies have focused on the impact of locus of control on student learning. Research suggests that locus of control is an inherited trait (Miller and Rose, 1982) and is likely to be linked to differences in cerebral functioning (DeBrabander, Boone, & Gertis, 1992). As a result, internally and externally oriented people will pursue different strategies to acquire knowledge. 
Evidence suggests that internally oriented people learn more from past experiences than their external counterparts. Cassidy and Eachus (2000) found that external locus of control was associated with apathetic learning approaches, whereas internal locus of control was associated with the adoption of strategic approaches. They concluded that academic self-efficacy is positively correlated with an internal locus of control. Other studies suggest that students with an internal locus of control are more likely to pursue successful study strategies and achieve higher grades than their externally oriented classmates (Grime, Millea & Woodruff, 2004). Thomas and Mueller (2000) examined the relationship between culture and four personality characteristics commonly associated with entrepreneurship motivation. Two of those characteristics were risk propensity and locus of control. Using Hofstede’s (1980) seminal work on culture, they posited that there would be differences in locus of control among students from different countries. Conducting a study of third- and fourth-year university students in nine different countries, they found that there were differences in locus of control based on culture. The results indicated that the likelihood of an internal locus of control decreased as a culture’s distance from the U.S. increased, and that the degree to which a person felt in control of his or her destiny diminished accordingly. These results led them to the conclusion that locus of control may be a culture-specific quality related to Hofstede’s individualism dimension.
Field Research on Impacts of Some Organizational Factors on Corporate Entrepreneurship and Business Performance in the Turkish Automotive Industry
Dr. Cemal Zehir, Gebze Institute of Technology, Turkey
Dr. M. Sule Eren, Kocaeli University, Turkey
This research investigates the relationships between organizational factors (customer orientation and learning orientation), corporate entrepreneurship and business performance. Data were collected from 90 manufacturing firms operating in the automotive and automotive parts and components industry in Turkey with more than 50 employees (medium- and large-sized firms). The research was conducted among senior and middle-level managers and white-collar employees of these firms. Results indicate that learning orientation and customer orientation have positive effects on the new business venturing, self-renewal, and proactiveness dimensions of corporate entrepreneurship. On the other hand, the innovativeness and new business venturing dimensions have a positive effect on business performance. In addition to these results, there is a positive relationship between customer orientation and business performance. Since the beginning of the 1980s, due to its beneficial effect on the performance of firms, the corporate entrepreneurship concept has gained great interest from both academicians and practitioners. This interest has arisen in response to a number of pressing problems such as an increasing number of competitors, international competition and an overall desire to improve efficiency and productivity (Kuratko and Hodgetts, 1998, p. 56). Rising global and domestic competition has increased the importance of corporate entrepreneurship for successful firm performance. Corporate entrepreneurship, the sum of a company’s venturing and innovation activities, can help the firm acquire new capabilities, improve its performance, enter new businesses and develop new revenue streams in both domestic and foreign markets (Zahra et al., 2000, p. 947). In this research, in order to examine the relationships between customer orientation, learning orientation, corporate entrepreneurship and business performance, correlation and regression analyses are conducted and evaluated. 
Corporate entrepreneurship (entrepreneurship within existing organizations) has been of interest to scholars and practitioners for the past two decades. Corporate entrepreneurship is viewed as beneficial for the revitalization and performance of corporations, as well as for small and medium-sized enterprises (Antoncic and Hisrich, 2001, p. 495). Corporate entrepreneurship refers to ‘‘the process by which firms notice opportunities and act to creatively organize transactions between factors of production so as to create surplus value’’ (Jones and Butler, 1992, p. 735). Corporate entrepreneurial processes may exist in established organizations at any level and within any area of an organization (Liu et al., 2002, p. 370). Previous views of corporate entrepreneurship can be classified into four dimensions: (1) new business venturing, (2) innovativeness, (3) self-renewal, and (4) proactiveness. New business venturing is the most salient characteristic of corporate entrepreneurship because it can result in new business creation within an existing organization by redefining the company’s products (or services) and/or by developing new markets (Stopford and Baden-Fuller, 1994). For the new ventures dimension, emphasis is on the formation of new autonomous or semi-autonomous entities, such as units and firms (Antoncic and Hisrich, 2003, p. 18). The innovativeness dimension refers to product and service innovation with an emphasis on development and innovation in technology (Schollhammer, 1982). This dimension emphasizes the creation of new products and services (Antoncic and Hisrich, 2003). The self-renewal dimension reflects the transformation of organizations through the renewal of key ideas on which they are built (Guth and Ginsberg, 1990). This dimension emphasizes strategy reformulation, reorganization and organizational change (Antoncic and Hisrich, 2003, p.18). 
The fourth dimension, proactiveness, is related to aggressive posturing relative to competitors (Knight, 1997). A proactive firm is inclined to take risks by conducting experiments. These firms take the initiative and are bold and aggressive in pursuing opportunities (Covin and Slevin, 1991). This dimension reflects top management orientation for pioneering and initiative taking (Antoncic and Hisrich, 2003, p.18). Narver and Slater (1990) defined customer orientation as ‘‘the sufficient understanding of one’s target buyers to be able to create superior value for them continuously’’ (p. 21). Deshpande et al. (1993) defined this concept as ‘‘the set of beliefs that puts the customer’s interest first’’ (p. 27). Customer orientation is the firm’s ability and will to identify, analyze, understand, and answer user needs (Gatignon and Xuereb, 1997, p. 78). Learning orientation refers to the organization-wide activity of creating and using knowledge to enhance competitive advantage. This includes obtaining and sharing information about customer needs, market changes, and competitor actions, as well as developing new technologies to create new products that are superior to those of competitors (Calantone et al., 2002, p. 516).
Influencers of Exam Performance: An Empirical Replication in the Middle East
Dr. Ravi Chinta, American University of Sharjah, UAE
Evaluating and grading student performance in many collegiate business courses is done through exams. However, exam anxiety experienced by students impacts the ability of testing to measure students’ learning of course material. By replicating and expanding the scope of Burns’s (2004) research, this study examines the relationships between exam anxiety experienced by students at the time of the final exam and students’ performance expectations, actual performances, and the level of preparation for the final exam. Evidence was observed supporting the relationship hypothesized between test anxiety and performance expectations at the time of the final exam. Implications for managing the learning processes and future research are discussed. Based on replicating and expanding prior research done by Burns (2004), this study further explores the relationship between anxiety and performance in the Middle Eastern context. Examinations frequently represent the primary tool for evaluating students’ learning of course material in many collegiate business courses, especially those at the undergraduate level. Although other assignments, such as in-class discussions, case analyses, discussion boards on electronic Blackboard, exercises involving the Internet, service learning opportunities, and project presentations appear to be growing in use (Bacon, 2003), grades in many introductory or “principles” courses are still often based on students’ performances on a limited number of exams. With so much emphasis on the results of two to three performances, a poor grade on a single exam can materially affect a student’s final course grade in these courses. Indeed, so important is the examination process in these courses that it is considered to be an integral component in the determination of teaching quality (Kelley, Conant & Smart, 1989). 
Although the reliability and validity of the examination process have received substantial research attention, correspondingly little research has examined the problems faced by the students who take the exams (Anderson & Sauser, 1995). Perhaps one of the most troubling problems reported by instructors relates to students who report or exhibit problems with the testing process – students who report or exhibit abnormal levels of anxiety over their performance on tests. The purpose of this study is to increase understanding of this occurrence through replication and expansion of Burns’s (2004) study. First, exam anxiety, or “the set of phenomenological, physiological and behavioral responses that accompany concern about possible negative consequences or failure on an exam or similar evaluative situation” (Zeidner, 1998, p. 17), will be explored. Second, anxiety experienced by students at the time of the final exam will be compared with performance expectations (expected course grade at the beginning of the course, and expected final exam performance and expected course grade at the time of the final), actual performances (actual grades on midterm exams and the final exam, and actual final course grade), and the level of preparation for the final exam (time spent studying and the number of absences from the course). Finally, implications of the findings for enhancing learning processes and for future research will be discussed. Exam anxiety, like other forms of anxiety, is one of the most pervasive reactions which individuals experience (Sarason & Sarason, 1990). The transactional model of stress (the most influential contemporary stress model (Zeidner, 1998)) depicts stress as “a relationship between the person and the environment that is appraised by the person as taxing or exceeding his or her resources and endangering his or her well-being” (Lazarus & Folkman, 1984, p. 21). 
The extent to which stress is encountered, and likewise the extent to which stress is experienced, depends on the degree to which the situation is viewed as emotionally threatening (Zeidner, 1998). Specifically, the threat value of a situation is determined by the personal salience of the situation, the subjective probability of negative outcomes, the imminence of the event, the perceived aversiveness of the event, and the perceived unavailability of coping strategies and skills (Eysenck, 1992). The academic environment appears to possess several of these qualities. In an exam-conscious environment, such as that which is so pervasive in academia, individuals are greatly affected by their exam performance (Keogh & French, 2001; Spielberger & Vagg, 1995). One’s performance on exams affects whether the student will need to retake courses and whether the student will ultimately graduate. Furthermore, upon graduation, one’s performance on exams, and the effect that performance has on grades, will affect graduate school admittance and future employment. As a result, anxiety about and during examinations (exam anxiety) is viewed by many as a pervasive problem (e.g., Hembree, 1988; Keogh & French, 2001; Pekrun, 1992; Schwarzer & Jerusalem, 1992; Tobias, 1992). Academic performance is a significant determinant of subsequent success in one’s (advanced) academic and professional pursuits. Therefore, it is not surprising that most students experience anxiety both before and during examinations. As a consequence of their emotional reactions during tests, the level of achievement of many of these students is substantially lower than would be expected on the basis of their intellectual aptitude (Gonzalez, 1995, p. 117). McKeachie (1951) personifies this anxiety phenomenon as follows:
Kostas Zotos, University of Macedonia, Greece
Sports arbitrage is the best method of making money from home on your computer without selling, marketing or recruiting. Arbitrage is using the difference in markets in such a way that a risk-free profit can be guaranteed whatever the outcome of an event. In sports betting arbitrage we take advantage of bookmakers’ differing opinions about the outcome of a sporting event to ensure a certain profit. In the financial markets this may involve buying a commodity or financial instrument in one market and simultaneously selling the same commodity or financial instrument at a higher price on a different market to ensure a risk-free profit. In sports we profit from bookmakers having different opinions about the outcome of a sporting event. This paper is a step-by-step sports arbitrage guide for anyone. The word arbitrage is defined by the Compact Oxford English Dictionary as "the simultaneous buying and selling of assets in different markets or in derivative forms, taking advantage of the differing prices". The simultaneous nature of the exchange is not as important as the concept of buying with the prior knowledge of being able to sell at a higher price. An example of this comes from people buying second-hand goods cheaply from garage sales and markets because they know that the same goods are being sold for more on the Internet. This kind of arbitrage is actually quite common on the Internet. With the expansion of the Internet, betting on sports has globalized and the number of registered bookmakers has multiplied (Table 1 lists the well-known sports betting Internet sites). In many situations the experts who decide the odds on a particular sporting event are not in agreement. In other instances they offer better odds in order to attract bettors to their books. For whatever reason, disparities in odds are produced, which then allow the bettor to make a risk-free bet. 
Each day dozens of these arbs (arbitrages) are produced between the different bookmakers, with a benefit of between 1% and 20%. Sports betting arbitrage is not to be considered a get-rich-quick scheme. Sports arbitrage is unlike traditional betting. In other words, it is nominally risk-free betting, but an element of risk is involved in every bet no matter how carefully you place your odds. This paper is to be used for informational purposes only, and any investment decisions you make in the future are entirely at your own risk. Our advice is to invest only money that you can afford to lose or play with. There are tons of different sportsbooks in different countries, and each specializes in particular sports and is more familiar with competitors from its own local area. To compete for global business they must offer a wide range of sports from all around the world, often well outside their areas of expertise. As a result many bookmakers overstretch themselves when offering odds – they try to cover every possible market to get as many customers as possible. This is a mistake, because in doing so they will sometimes offer odds on events in which they have little or no expert knowledge. For example, a bookmaker in the USA may know very little about English Division 2 football games, and yet offer odds on them. In contrast, an English bookmaker will be much more “clued up” as to the likely outcomes of the same games. The result? Wildly different odds on the same games, and a feast of arbitrage opportunities and free money. Also, bookmakers are busy people – and when they are forced to offer odds they can sometimes make mistakes! Prices may also be based on the anticipated flow of bets rather than the probabilities of the outcomes. For example, when England play soccer, most bets with UK bookmakers will be supporting England. Bookmakers may offer the opposition at an inflated price to create a balanced book. Bookmakers don't create arbitrage situations with their own prices. 
If this does ever happen, it is because of a mistake. You can't go to a single bookmaker and bet on all outcomes without losing money. From a business perspective bookmakers are only interested in making money. An arbitrageur's bet is still a good bet for the bookmaker because, in the long run, the odds are still in the sportsbook's favor. There is a misconception concerning a bookmaker's need to balance his book. It is believed that with a balanced book, the bookmaker can make a risk-free profit. This is true. But even if a book isn't balanced and a bookmaker is "short", exposing himself to a possible loss, he still makes money in the long run because of the overall diversification of all of his bets on all different games. Even taking this into consideration, some bookmakers may be opposed to clients making money from dealing with them without incurring risk. Slightly different from the buying and selling of a single product, sports arbitrage trading relies on the possibility of backing both competing sides of a sporting event at different bookmakers in such a way that a profit is guaranteed. This possibility arises from a difference in opinion between the two bookmakers about the fair odds of the match, where one bookmaker has given the favorite higher than usual odds while the other bookmaker has given the underdog higher than usual odds. Individually neither bookmaker will make a loss; however, if you take the combination of the two higher odds, it may be possible to bet on both sides so that no matter who wins the sporting event, your winnings will cover the two bets completely and return some profit.
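The two-sided bet just described can be checked and sized with a short calculation. A minimal sketch, assuming decimal odds and a simple two-outcome event (the odds and stake below are hypothetical, chosen only for illustration): an arb exists when the implied probabilities sum to less than 1, and each side is staked in proportion to its implied probability so the payout is identical either way.

```python
def arbitrage(odds_a: float, odds_b: float, total_stake: float):
    """Return (stake_a, stake_b, guaranteed_profit), or None if no arb exists."""
    # Sum of implied probabilities; a risk-free opportunity exists only when < 1.
    margin = 1 / odds_a + 1 / odds_b
    if margin >= 1:
        return None  # the combined book still favors the bookmakers
    # Stake each side in proportion to its implied probability so that
    # stake_a * odds_a == stake_b * odds_b (equal payout on either outcome).
    stake_a = total_stake * (1 / odds_a) / margin
    stake_b = total_stake - stake_a
    payout = stake_a * odds_a  # equals stake_b * odds_b by construction
    return stake_a, stake_b, payout - total_stake

# Hypothetical example: one bookmaker prices the favorite generously (2.10),
# another prices the underdog generously (2.05); split a 1,000-unit bankroll.
result = arbitrage(2.10, 2.05, 1000.0)
```

With these odds the implied probabilities sum to roughly 0.964, so both bets can be covered with a few percent left over regardless of the result; with identical odds of 1.90 on both sides the function returns None, since no split of the stake avoids a loss.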
Moderating Effects of Age, Gender, Income and Education on Consumer’s Response to Corporate Reputation
Dr. Wei-Ming Ou, Shih Chien University, Taiwan
Demographic variables are important factors that influence shoppers’ perceptions and consumer behavior. The principal objective of this research is to determine whether demographic segments, such as age, gender, income, and education, moderate the effect of corporate reputation on consumers’ store patronage behavior. An empirical study of 356 qualified supermarket consumers was conducted in the United States to investigate the moderating effects of demographic variables. The results obtained in this study indicate that shoppers’ demographic characteristics, namely age, gender, income, and education, influence the relationship between retailer reputation and shopping expenditure and frequency of store patronage. Corporate reputation is a relatively stable, long-term, collective judgment by outsiders of an organization’s actions and achievements. It implies a long-lasting, cumulative assessment rendered over a long time period (Gioia, Schultz, & Corley, 2000; Ou, Abratt, & Dion, 2006). Both academics and practitioners propose that positive corporate reputation results in business survival and profitability (Balmer, 2001; Roberts & Dowling, 2002; Van Riel & Balmer, 1997), and is an effective mechanism to preserve or accomplish competitive advantage (Fombrun, Gardberg, & Sever, 2000; Van Riel & Balmer, 1997). One of the critical decisions confronting the shopper in interacting with retail stores is store patronage (Nevin & Houston, 1980). Store patronage behavior strongly influences retail performance, including total number of shoppers, total store visits, and average spending per shopping trip (Tang, Bell, & Ho, 2001). 
One of the critical decisions confronting the consumer in interacting with stores concerns where to shop (Nevin & Houston, 1980). Numerous approaches have been used to determine consumer store choice behavior. Hackett, Foxall and Van Raaij (1993) reported that the principal determinants of shopping behavior are: general evaluation, including safety and quality of merchandise; physical environment; efficiency, including travel distance from home and accessibility; and the social environment, including store atmosphere. Bell, Ho and Tang (1998) developed a shopping destination choice model whose fundamental principle is that each shopper is more likely to patronize the store with the lowest total shopping cost. Moreover, Baker, Parasuraman, Grewal and Voss (2002) proposed a store choice model that includes three types of store environment cues as exogenous constructs, and various store choice criteria and store patronage intentions as endogenous constructs. Corporate reputation is a relatively stable, long-term, collective judgment by outsiders of an organization’s actions and achievements. It implies a lasting, cumulative assessment rendered over a long time period (Gioia, Schultz, & Corley, 2000; Ou, Abratt, & Dion, 2006). Shoppers are inclined to consume the products and services of businesses with good reputations (Balmer & Wilson, 1998), and are more loyal to those retailers whom they perceive as having favorable reputations (Nguyen & Leblanc, 2001). Previous research has indicated that the positive reputation associated with the retailer is one of the significant antecedents of shoppers’ intentions to purchase (Grewal, Krishnan, Baker & Borin, 1998). Demographic characteristics, such as age, gender, income, and education, may change the effect of retailer reputation on consumer behavior (Kim & Park, 1997). 
Age may serve as a proxy for many factors, including life experience and the socialization process, and it is usually robustly related to consumption patterns and preferences in patronage choice (Gonzalez-Benito, Greatorex & Munoz-Gallego, 2000; Joyce & Lambert, 1996). Given that older shoppers are assumed to exhibit shopping patterns that differ from those of younger shoppers, the shopper’s perceived reputation of the retailer may be examined as a function of age-related perceptions (Gonzalez-Benito, Greatorex & Munoz-Gallego, 2000; Joyce & Lambert, 1996). Thus, it is hypothesized that: H1: The effect of the shopper’s perceived reputation of the retailer on shopping expenditure, travel time, and patronage frequency is moderated by the age of the shopper. Previous research has identified gender differences in shopping behavior, travel time sensitivity, retail format choice, and household shopping responsibility (Kim & Park, 1997; Otnes & McGrath, 2001; Ou, 1999). It is likely that men and women have different perceptions of retailer reputation, and shop differently in terms of amount spent, travel time to the store, and shopping frequency. It is expected that the moderating effect of the shopper’s perceived reputation of a retailer on patronage behavior varies with the gender of the shopper. Hence, it is hypothesized that: H2: The effect of shoppers’ perceived reputation of a retailer on shopping expenditure, travel time, and patronage frequency varies with the gender of the shopper. Household income is an important variable robustly associated with several psychological factors. Preceding research indicates that sensitivity to travel time to the store differs across income segments, and that the shopping destination choice behavior of high-income shoppers differs from that of low-income ones (Gonzalez-Benito, Greatorex & Munoz-Gallego, 2000; Hoch, Kim, Montgomery & Rossi, 1995; Ou, 1999). 
Given that high-income shoppers tend to reveal shopping patterns that differ from those of low-income ones, the effect of shoppers’ perceived reputation of the retailer on consumer behavior may be investigated as a function of an income-related variable.
Comparison of Gearing Ratio and Earnings Per Share in Two Branches: A Statistical Investigation
Dr. Paraschos Maniatis, Athens University of Economics and Business, Athens, Greece
The scope of this study is to compare the gearing ratios and the earnings per share (EPS) in two related sectors: the food-processing industry and the food-retailing branch. For the study we have considered firms whose shares are quoted on the London Stock Exchange. Further, we investigate possible relationships of the above-mentioned financial ratios within and between the two branches. A set of data was obtained by taking randomly two sectors on the London Stock Exchange: one consisting of 21 food-processing companies (Group B) and the other consisting of 16 food-retailer firms (Group A). For the comparison we have employed the appropriate statistical tools: regression, correlation, and tests of means. In particular, we have adopted the following approach:
- Check of the normality of the variables under consideration. This is done by the construction of the P-P plots of the variables, which is the necessary condition for the application of the parametric tests and the analysis of variance (ANOVA) techniques (part II).
- Application of correlation and regression techniques in order to investigate relationships between the variables. The regressions are followed by tests of significance of the regression parameters (part III).
- Application of nonparametric techniques for testing equality of distributions and equality of means of the financial ratios (part IV).
There follows a discussion of the statistical findings (part V), bibliography and appendices including data, calculations and graphs. For the data presentation we have used the Excel program, the most appropriate for spreadsheet tasks. For all calculations and the graphs the SPSS program has been employed. All calculation details and the graphs are shown in the appendix to this study. However, for easy reference and comparisons we have inserted the main arithmetic results and some graphs into the text. 
So far as the size of the samples is concerned, we have used most of the firms included in each selected branch for comparison. The selection was random in the sense that each member of the population had the same calculable chance of being selected. In our case, in order to select the sample, a list was used in which each member of the branch was given a number. A series of random numbers was then used to select the firms to be included in the sample. The earnings per share ratio is defined as: profits after tax available for distribution / number of shares. The gearing ratio is defined as: long-term debt / (long-term debt + equity). Linear regression analysis assumes that the residual values (observed minus predicted values) are normally distributed, and that the regression function (the relationship between the independent and dependent variables) is linear in nature. If any of these assumptions is grossly violated, the regression coefficients (B coefficients) may be affected (inflated or deflated), and the statistical significance tests may likewise be distorted. If "all is well," one can expect the residual values to be normally distributed. Normal probability plots (P-P plots) provide a quick way to visually inspect the extent to which the pattern of residuals follows a normal distribution. If the residuals are not normally distributed, they will deviate from the line. Outliers may also become evident in this plot. If there is a general lack of fit, and the cluster seems to deviate from the principal diagonal and to form a pattern (e.g. an S shape) around the line, this is a sign of lack of normality in the data. Tables 1 and 2 below exhibit the normal P-P plots for the earnings per share variable for group B (variable B-EPS) and for group A (variable A-EPS), which are going to be the dependent variables in the regressions. In this part we investigate, in each branch, the correlation between the gearing ratio and the EPS. 
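The two ratio definitions above translate directly into code. A minimal sketch with purely illustrative figures (the company data are hypothetical, not drawn from the sampled firms):

```python
def earnings_per_share(profit_after_tax: float, n_shares: float) -> float:
    # EPS = profits after tax available for distribution / number of shares
    return profit_after_tax / n_shares

def gearing_ratio(long_term_debt: float, equity: float) -> float:
    # Gearing = long-term debt / (long-term debt + equity)
    return long_term_debt / (long_term_debt + equity)

# Hypothetical firm: 4.2m distributable profit, 10m shares,
# 30m long-term debt against 70m equity.
eps = earnings_per_share(4_200_000, 10_000_000)   # 0.42 per share
gearing = gearing_ratio(30_000_000, 70_000_000)   # 0.30
```

Note that the gearing ratio as defined is bounded between 0 and 1, so values from the two groups are directly comparable without further scaling.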
Values of correlation close to 1 or to −1 indicate a high degree of correlation (positive or negative, accordingly). However, a high degree of correlation does not necessarily imply that a cause-and-effect relationship exists between the two variables. The parametric statistic used to investigate the correlation is Pearson’s coefficient of correlation. It is defined as r = cov(x, y)/(sx·sy), where cov(x, y) is the covariance between x and y, and sx, sy are the standard deviations of x and y. The calculation of Pearson’s r gives r = 0.159677972 for group B and r = −0.091472059 for group A. Tables 3 and 4 give the values of Pearson’s coefficient as obtained by the SPSS program. It is interesting to investigate the relationship between the gearing ratio and the EPS in each branch using as correlation statistics Spearman’s rs (rho) and Kendall’s τ (tau) coefficients of rank correlation. Spearman’s coefficient of rank correlation is defined as: rs = 1 − 6Σdi²/(n³ − n), where di is the difference between the ranks of the ith pair of measurements and n is the number of measurement pairs. Spearman’s coefficient is the ordinary Pearson’s coefficient of correlation between the ranks of the variables, obtained when the measurements are replaced by their ranks. Kendall’s coefficient of rank correlation is defined as: τ = S/[n(n−1)/2], where S is the algebraic sum over pairs respecting the natural order of the ranking in one variable when the other variable is arranged in its natural order, and n is the number of measurement pairs. This statistic also varies in the interval [−1, 1]. We have submitted the variables gearing ratio and earnings per share in each branch to a simple regression analysis in order to investigate the relationship between the variables.
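The three correlation statistics defined above can be computed directly from their formulas. A minimal pure-Python sketch on hypothetical gearing-ratio and EPS values (the data are invented for illustration, and ties are ignored for simplicity):

```python
import math

def pearson(x, y):
    # r = cov(x, y) / (sx * sy)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

def ranks(v):
    # Rank each value 1..n (no tie handling, for simplicity).
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    # rs = 1 - 6 * sum(di^2) / (n^3 - n), di = rank difference of the ith pair
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n ** 3 - n)

def kendall(x, y):
    # tau = S / [n(n-1)/2], S = concordant pairs minus discordant pairs
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += 1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
    return s / (n * (n - 1) / 2)

# Hypothetical sample of five firms: gearing rises as EPS falls.
gearing_vals = [0.21, 0.35, 0.10, 0.44, 0.28]
eps_vals     = [0.42, 0.31, 0.55, 0.20, 0.38]
```

On this invented sample the relationship is perfectly monotone decreasing, so both rank coefficients equal −1 while Pearson’s r is strongly negative but not exactly −1, illustrating how the rank statistics respond to monotonicity rather than linearity.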
Financial Review of Taiwan Industrial Park Independent Operation
--Using Industrial Parks in Northern Taiwan as an Example
Dr. Li-Hsing Ho, Chung Hua University, Taiwan
Chao-Lung Hsieh, Chung Hua University, Taiwan
Industrial district development by the Taiwan government has lasted for half of the 20th century and was significant in creating the miracle of Taiwan's economic development. However, under circumstances in which political and economic conditions change rapidly and the government has adopted new thinking on public management, the traditional industrial district operational and managerial model, in which “the government operates and manages according to the related regulations,” faces significant challenges. In addition, because of the difficult national financial condition and the constant deficits of industrial districts, the problem of how to upgrade the financial operating performance of industrial district managerial organizations has drawn public attention. Since the financial executive units of the industrial districts generally divide labor service costs only into service center and industrial district sewage operational costs, and do not provide trial balance standards for the cost of each business, this research adopted an Activity-Based Costing system as the basis of analysis when preparing the trial balance of the industrial districts in the northern area of Taiwan; subsequently, the research complied with “the affairs, schedule and work divisions of institutions transformed into administrative corporations” of the Organizational Transformation Committee, Executive Yuan, to conduct a business review in order to guide the selection of each business and its sources of funds in the future. We plan future innovative service items and manage the financial trial balance according to the current benefit of upgraded business charges and the core competence of industrial district managerial organizations.
The synthetic financial effects are divided into three aspects: upgrading the charge revenues of the original business, increasing the revenues from innovative service charges, and striving for related fund subsidies from the government. After the trial balance, in the fifth year of the implementation of the administrative corporation system, the government should undertake about NT$150 million in agent business fees. The financial revenue structure is as follows: legal revenue (78%), governmental agent business revenue (13%) and innovative business revenue (9%). Since the government announced the “investment promotion regulation” in 1960, the large-scale development of industrial districts has helped create the miracle of Taiwan's economic development. In recent years, however, industrial district managerial organizations have faced heavy recurring cost expenditures; because the firms in the industrial districts were exposed to global competition and to product life-cycle effects, some companies left for other countries or gradually dissolved. This trend affected the income from industrial district management and maintenance charges. In addition, in the face of shrinking governmental finance and budgets, industrial district development and management funds could not provide additional budgets to meet the needs of industrial district managerial organizations, and the operational costs of these organizations were seriously affected. How to prevent the constantly expanding financial deficit of industrial district managerial organizations, calculate the gap between industrial districts' average managerial costs and their revenues or benefits, and plan a solution have become critical issues of territorial planning. According to the equivalent fund principle of economics, the minimum average operational and managerial cost of an industrial district should be calculable.
From the perspective of property rights, the services that fall outside the scale of the minimum average operational and managerial costs should be the service items for which charges apply; according to the entrepreneurial spirit of real estate management, there remain in the industrial districts much public real estate, and many new economic activities and company services, that have not yet been put to beneficial use. These represent potential opportunities for the industrial districts to achieve independent operation and development. This research follows the trend toward a knowledge economy and the commercialization and market characteristics of innovative industrial district operational knowledge; through market forces, the operational competitiveness of industrial districts can be upgraded. The research also complied with the organizational reengineering of the government to adjust the operational and managerial model of industrial district managerial organizations from “the government operates and manages according to the related regulations” to the developing strategy of “industrial district independent operation and management.” From an equivalent economy model, we see that the operational and financial problems of industrial districts are complicated; however, the essence of these problems can be described by an economic model figure. It is assumed that the price of services provided by industrial district operation (P) is on the Y axis, and the quantity of services (Q) is on the X axis. The resulting curves are shown in Fig. 1. It is also assumed that the average operational and managerial cost of an industrial district is AC, and the marginal cost is MC. The current operation and management of an industrial district corresponds to a monopoly market, and the lowest price the monopoly firm (the industrial district manager) can require is decided by the market demand curve.
Since the firm faces the whole market demand, the demand curve of the market equals the average revenue curve AR. The operational and managerial revenues of an industrial district are mainly the following: the general public facility maintenance fee, the sewage treatment fee and other income (national housing rent, premiums, etc.). The general public facility maintenance fee is calculated according to the area of the land purchased by the firms, with the calculation standard divided into six levels.
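The monopoly setup of Fig. 1 can be made concrete with a stylized numeric sketch. The demand intercept, slope, and cost figures below are hypothetical illustrations, not the paper's data: for linear demand P = a − bQ, marginal revenue is MR = a − 2bQ, and the profit-maximizing quantity solves MR = MC, with the price then read off the demand (AR) curve.

```python
# Stylized monopoly example: hypothetical linear demand and cost figures.
a, b = 100.0, 2.0        # demand P = a - b*Q (assumed values)
mc = 20.0                # constant marginal cost (assumed)
fixed_cost = 200.0       # fixed cost, so AC(Q) = mc + fixed_cost / Q

q_star = (a - mc) / (2 * b)          # solve MR = MC, i.e. a - 2*b*Q = mc
p_star = a - b * q_star              # price on the demand (AR) curve
ac_at_q = mc + fixed_cost / q_star   # average cost at the chosen quantity
profit = (p_star - ac_at_q) * q_star

print(f"Q*={q_star}, P*={p_star}, AC={ac_at_q}, profit={profit}")
# -> Q*=20.0, P*=60.0, AC=30.0, profit=600.0
```

The gap between P* and AC at Q* is what the paper's trial balance must cover through maintenance fees, sewage fees, and other income.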
Managing an Occupational Hygiene and Safety Administrative System in Taiwan
Dr. Chou-Kang Chiu, Ching Kuo Institute of Management & Health, Taiwan
Dr. Chun-Yu Chen, Ching Kuo Institute of Management & Health, Taiwan
Dr. Luan-Ying Wei, Ching Kuo Institute of Management & Health, Taiwan
Effectively managing occupational hygiene and safety in business organizations is important today, since harmful workplace agents and factors often lead to significant financial loss due to the burden on health and social security systems, the negative impact on production, and the associated environmental costs. To avoid the potential risks and negative impacts caused by inefficient and ineffective management of occupational hygiene and safety, it is critical to understand and explore business management from the perspective of an occupational hygiene and safety administrative system. This research discusses the management of occupational hygiene and safety in Taiwan from the following seven perspectives, namely: (1) structure and responsibility; (2) training, knowledge and ability; (3) consultation and communication; (4) evaluation and documentation; (5) documents and case control; (6) operational control; and (7) organizational preparation and response to dramatic change. Finally, this research finishes by presenting conclusions and limitations. Work represents an influential and rewarding aspect of human life, and is also critical and indispensable for individual employees, the community, and national development (Goelzer, 1996). Nevertheless, work can also be the source of considerable suffering when carelessness results in workplace accidents (Peter and Siegrist, 1999). For instance, work-related musculoskeletal disorders have been one of the growing issues among employees in industrialized countries during the last three decades (Guo, Chang, Yeh, Chen, and Guo, 2004; Lee, Yeh, Chen and Wang, 2005). These disorders are likely to generate considerable human suffering and to result in decreased production and lower work capacity (Lee et al., 2005; Theorell and Karasek, 1996).
In the meantime, harmful workplace agents and factors often lead to appreciable financial loss due to the burden on health and social security systems, the negative impact on production, and the associated environmental costs (Goelzer, 1996). Working professionals and employees should not have to endure workplace accidents, and countries cannot afford the associated damage. Thus, it is crucial to manage occupational hygiene and safety effectively from a business management perspective to prevent unnecessary occupational risks. Effective occupational hygiene and safety management represents a process of safeguarding employees by continuously making the right decisions (Toffel and Birkner, 2002). Although Taiwan has gone through a dramatic transformation over the last three decades, from an agriculture-based economy towards high-tech and service industries, the rate of injuries resulting in permanent disability is still high compared with other developed countries (Lee et al., 2005; Karasek, Brisson, Kawakami, Houtman, Bongers and Amick, 1998). Restated, Taiwan is facing challenges such as considerably high rates of occupational injuries and diseases. Hygiene and safety problems associated with traditional manufacturing industries continue to threaten employees' health (Shih, Chang, Yeh, Su, Huang, Chang, Ho and Guo, 2004). With the fast advance of information technology and changes in manufacturing processes, employees are encountering more complicated workplaces and environments than ever. Many organizations and governments do their best to take practical hygiene and safety measures to lower occupational accidents and diseases in the workplace (Siegrist, 1996; Su, Tsai and Yu, 2005). The current national policies of developed countries are to develop effective and efficient methods to protect employees from work-related injuries or illness (Su et al., 2005).
For instance, the Labor Standard Law states that employers are responsible for preventing occupational hazards and for building proper work and welfare facilities for their employees (Shih et al., 2004). In addition, the Labor Inspection Law was enacted to implement a labor inspection system, enforce labor laws, guard labor-management benefits and rights, and maintain social stability so as to preserve economic development (Shih et al., 2004). Meanwhile, governmental agencies concerned with labor affairs and international organizations throughout the world have made efforts to establish national Occupational Health and Safety Management System (OHSMS) guidelines or standards (Su et al., 2005). Later, the International Organization for Standardization (ISO) established the ISO 9000 series on Quality Management and the ISO 14000 series on Environmental Management, although the establishment of an OHSMS standard was suspended in 1996. The government in Taiwan has sought to change the way occupational hygiene and safety is enforced and to encourage enterprises to voluntarily develop OHSMSs based on risk assessments of their workplaces (Su et al., 2005). As a result, the workplace fatality rate has declined by 51.4% since the strict enforcement of labor laws by the government began in 1987 (Su et al., 2005).
Marketing and Marketing Managers in the New Era: A Relational Perspective
Dr. Osman Gok, Yasar University, Izmir, Turkey
Contemporary research in industrial and services marketing indicates that a relational approach to marketing is required. These two streams of contemporary research have resulted in the recognition of long-term relationships as key to competitive strength. Proposed new faces of marketing have changed its role and influenced the relationship between theory and marketing as a management task. Marketing managers in the new era must understand a broader perspective of marketing: the manager will be an integrator, an organizer, a master of information and, most importantly, a relationship manager. This paper's contribution to the field is its comprehensive, up-to-date review of the managerial issues in today's marketing domain. Since the 1960s, the marketing mix approach, with its 4P model, has dominated marketing theory and practice. The model has been so widely accepted among academics and practitioners that it has come to be regarded as the model of marketing. It has been assumed that the marketing lead taken by Fast Moving Consumer Goods (FMCG) manufacturers is relevant and wholly transferable to other industries and should set the precedent for those that follow in their wake (Denison and McDonald, 1995). This assumption is now being questioned, particularly in light of growing interest in long-term oriented buyer-seller relationships in industrial markets and the emerging unique aspects of services marketing. Proposed new faces of marketing have suggested a change in the role of marketing and influenced the relationship between theory and marketing as a management task. The functionalistic approach to marketing, a consequence of the 4P philosophy, is also under debate. Marketing has long been defined as the integrated analysis, planning, and control of the marketing mix variables (product, price, place, promotion) to create exchange and satisfy both individual and organizational objectives.
The traditional marketing approach, sometimes referred to as conventional or classical marketing (Ahmad and Buttle 2001, p.30), requires that firms first determine customers' needs and wants. Customers should be organized into market segments, for which firms should develop products; firms should then organize their functional activities in order to satisfy these targeted segments. Marketers, in turn, assume that they can exert unilateral control over customers through timely manipulations of the '4Ps' or other elements of the marketing mix, particularly by using financial rewards such as price discounts, gifts and promotions. However, a number of authors (e.g. Hakansson 1982; Gummesson 1997; Grönroos 1996, 1997, 2004; Payne 1995, p.30; Christopher et al. 1991, p.8; Bennett 1996; Aijo 1996; Brodie et al. 1997; and Ahmad and Buttle 2001) consider this definition, based on the marketing mix, to be irrelevant, not only for industrial and service markets but also for consumer markets. They argue that the generally accepted definition of marketing is largely based on short-term transactions and is product oriented. These debates on the traditional marketing approach emerged mainly from two streams of research. Contemporary research into industrial and services marketing suggested that a relational approach to marketing is required. The first stream came in the late 1970s with the “interaction/network approach” proposed by the Industrial Marketing and Purchasing (IMP) Group, which carried out co-operative research into the nature of the relationships between companies in industrial markets. The interaction approach marked the first real reaction against the existing research tradition in business markets, stating that the majority of business-to-business purchases are not individual events and hence cannot be fully understood if each is examined in isolation.
According to the interaction approach, both buyer and seller are active participants in the market, and the inter-organizational links become institutionalized into a set of roles that each company expects the other to perform. The interaction between companies is a dynamic process, varying in intensity, and may require significant adaptations by either or both parties (Paliwoda and Druce, 1987). In the network approach, the relationship between two companies is affected by the relationships each has with other companies; a business relationship between two companies therefore exists within the context of a wider network of relationships, and the other companies each partner works with have an indirect effect on the relationship. Another major breakthrough came with the concept of the marketing of services. In the early 1970s, the marketing of services started to emerge as a separate area of marketing, with concepts and models of its own, geared to the typical characteristics of services (Grönroos, 1997). Delivering quality service is now considered an essential strategy for success and survival in today's competitive environment (Gummesson 1998; Peck 1995, p.104; Grönroos 1996; and Caruana and Pitt 1997). Meanwhile, the move towards competition through superior service in the manufacturing sector is now clearly visible across a wide range of industries, and it is becoming harder and harder to compete on manufacturing excellence alone. As McKenna (1991) stated, the line between products and services is fast eroding: what once appeared to be a rigid polarity has now become a hybrid, the ‘servicization’ of products and the ‘productization’ of services.
Transnational Corporations’ R&D Localization in a Developing Nation – A Game Theory Analysis
Dr. Chen-kuo Lee, Ling Tung University, Taiwan, R.O.C.
Tzu-yun Chang, Ling Tung University, Taiwan, R.O.C.
In the traditional theoretical analysis of transnational corporations' R&D, developing nations are not considered an internal decision subject. Instead, developing nations' characteristics are predetermined and included in the framework as a restriction on the transnational corporation's investment decisions. Traditional theories thus concentrate on investors' behaviors and lack an analysis of developing nations' behavior mechanisms and characteristics; they are therefore not sufficient for the authors' theoretical requirements. Using game theory, this study develops a theoretical framework based upon the creation and distribution of the benefits derived from investment in transnational corporations' R&D localization. The framework interprets developing nations' foreign investment policies and their results, and formulates rules governing the behaviors of R&D localization investors, the investment behaviors of developing nations, and their interactions, thereby moving toward a comprehensive theory of transnational corporations' investment. As the globalization process accelerated and international competition grew fiercer than ever in the late 1990s, more and more transnational corporations adjusted their strategies accordingly, from worldwide technical resource allocation to global strategic administration. Transnational corporations have adjusted their global development strategies from market globalization and production globalization to technique globalization and R&D globalization (Reddy, 2000; Amsden, Tschang & Goto, 2001). International competition nowadays entails technical capabilities and technical innovation as core competencies.
In this connection, the ability to develop new techniques faster than competitors, and to apply new techniques to new products, are the highlights of competition (Barry, 2005). As R&D costs and risks increase and hi-tech products' life cycles shorten, more and more international corporations have come to understand that techniques are extremely important to international competitive advantage and, most importantly, that no corporation can obtain all techniques internally (Guellec et al., 2001). R&D localization refers to a transnational corporation's transfer of R&D activities to a subsidiary outside its home country and its participation in R&D activities drawing on the host country's resources. R&D is localized in two ways: (1) by establishing an R&D branch in the subsidiary's host country; or (2) by taking part in R&D activities in cooperation with universities or research institutes located in the subsidiary's host country. The first method works best and is the most important R&D localization method. In the last ten years, a number of transnational corporations have taken developing nations' manpower strength, technological capabilities, and scientific fundamentals into consideration and have established research institutes around the world to undertake R&D tasks related to new technology and new products, thereby accelerating the R&D globalization process (Reddy, 2000; Von Zedtwitz, Gassmann and Boutellier, 2004). As far as R&D globalization is concerned, transnational corporations' overseas R&D institutes play the most critical role (Barry, 2005). In the conventional sense, R&D is considered a core activity that is undertaken domestically; a transnational corporation's R&D activity in a host country therefore reflects the dramatic changes brought about by globalization strategy, not just a matter of R&D resource allocation.
Transnational corporations establish overseas R&D institutes via foreign direct investment (FDI), acquire overseas R&D institutes via stock control or merger and acquisition, or establish R&D institutes under joint ventures. In this way R&D localization has become an irresistible trend. The study of transnational corporations' overseas R&D began in the 1980s, and the research can be divided into three categories. The first category consists of detailed case studies (e.g. Behrman and Fischer 1980; Chen 2004). The second category consists of surveys (e.g. Mansfield, Teece, and Romeo 1979; Cantwell 1989, 1995, 1999, 2001; Kenney and Florida 1994; Florida 1997; Dalton and Serapio 1993, 1995, 1999; Barry 2005). The third category consists of large-sample studies (e.g. Hirschey and Caves 1981; Pearce 1989; Howells 1990; Kogut and Chang 1991; Patel and Pavitt 1991; Teece 1992; Westney 1993; Dunning 1994; Patel and Vega 1999; Kuemmerle 1998, 1999; Gassmann and Zedtwitz 1999; Cantwell and Piscitello 2002). Case studies concentrate on the motives and process of transnational corporations' FDI in R&D, while survey and large-sample studies focus on the structure of FDI, especially the direction of R&D investment. Most studies concentrate on developed nations, such as the United States, Japan, and European nations, and most researchers developed their theories from the standpoint of transnational corporations' R&D investors, especially individual transnational corporations. In their theories, developing nations are merely treated as background data, not included in the analysis.
Recent Lessons from Turkey’s Experiences with State Owned Banks After the Banking Crisis of 2001
Dr. Ilhan Uludag, Professor, Kadir Has University, Istanbul, Turkey
This article examines the state banking share at the regional and international levels as a result of privatization and nationalization within the banking system. The study also examines the changes the state's share of banking has undergone over time, as well as the relationship between financial crises and privatization in the banking sector, to determine whether it is beneficial to bring the share of state banks within the sector down to the world average level in order to ensure that the financial system works in a more efficient and competitive manner; it also discusses increasing the performance of the banks to be privatized, to ensure superior financial system performance. State banks, which for years successfully provided for consumer banking needs as well as development and industrialization requirements, have recently been supplanted by private sector banks as a result of the privatization that accompanied the free market economy and capitalism of the 1980s. Major factors driving the transition from state banks to private ones include the exhaustion of the state banks' role within the economy and the divergence of their activities from core banking. In addition, state banks have begun to declare large losses as a result of being used by governments to exercise political and populist power, which damages the financial system's reputation. In total, these factors combine to make state banking a burden on the overall financial sector. For developing countries in particular, despite the privatizations that have occurred over the last 25 years, state banks still play an important and dominant role within the banking sector; in these countries, the state banks' share of the overall sector remains relatively high. As a result of globalization and the crises associated with it, countries are obliged to follow the economic and financial developments of every other country.
Considering the matter from Turkey's point of view, one can see that the state banks, founded due to economic requirements similar to those of other countries and to the lack of expertise in the private sector, have contributed to the economic development of the country and have led its financial system for some time. In parallel with similar countries, problems occurred in Turkey's state banks. In response, a restructuring program was applied to the state banks, and by 2001 the positive outcomes of the program had led to increased performance and profitability. Despite privatizations over the last 25 years, state banks still dominate the banking sectors of developing countries. When we analyze state banks' market share in various regions worldwide, we can see large differences in their share of banking-sector assets. The worldwide asset shares by region and country, according to 2003 data, are stated in Table 1. Despite early privatizations in many Latin American developing countries and in a few African economies, state banks still maintain their dominant role in the banking sectors of most developing economies. However, the microeconomic performance of state banks is weak. International research has determined that economic and financial development in countries with large-scale state banking is slower than in others (Hanson, 2004). The weak financial performance of state banks is often due to various irrelevant government duties, government interference in lending and collection processes, and the common practice of carrying bad government debts. The majority of state bank restructuring programs worldwide have failed because they ignored the consequences of these factors. In some successful restructuring programs the state has set attainable goals for state banks to encourage reform.
These goals include privatization within a set, short period; working with well-paid professionals; development of IT systems to monitor the goals; and prevention of any kind of government intervention. In many countries, and especially in Latin America, privatization of state banks is the result of the high cost of state banking, as well as the transformation of state ideology into an open market model. Especially in countries that forbid foreigners to bid for a state bank, local private sector groups are at a distinct advantage, and via privatization they aim to lend freely to their own group companies. If the privatization of state banks is carried out properly, well-known international banks that apply professional risk management techniques and aim to implement effective problem-solving management offer the potential for a seriously positive outcome. Such an outcome would show that the alleged negative effects of privatization on small-scale lending are overestimated.
Governance Choices for External Technology Sourcing: Taiwanese Firms in Global High-Tech Industries
Wiboon Kittilaksanawong, National Taiwan University, Taipei, Taiwan
This paper discusses the governance choices of firms in high-tech industries that seek technological know-how externally. While much of the early research adopted transaction cost economics to determine governance choices, this study takes into account the importance of social relationships. Taiwanese firms entering into global high-tech strategic alliances between 2000 and 2006 were employed as the research context. The results indicate that these firms are likely to adopt governance choices consistent with the relational perspective. Specifically, Taiwanese firms possessing a higher degree of centrality, as measured by degree, closeness, and eigenvector centrality, and occupying more structural-hole positions, as measured by effective size and efficiency in their industry networks, are likely to form a less hierarchical form of governance such as a contractual agreement or other non-equity strategic alliance. There is also an indication that these governance choices are influenced by institutional factors carried into the alliances, which vary from country to country. These challenges are discussed and proposed as opportunities for future research. Firms in high-tech industries mostly compete with each other intensely, as they must constantly invest in the creation of new technological capabilities that differentiate them from their competitors in order to sustain performance (Leonard-Barton, 1994). Developing innovative capabilities internally, however, is barely sufficient, or even no longer sufficient, to cope with the increasing cost, speed, and complexity of technological development in the high-tech industries (Cohen and Levinthal, 1990; Gomes-Casseres, 1989; Harrison, Hitt, Hoskisson, and Ireland, 2001; Lambe and Spekman, 1997; Teece, 1992).
Importantly, because technological know-how is tacit and embedded in nature and cannot be inspected without the risk of attenuating property rights (Das, Sen, and Sengupta, 1998) or transmitted easily from one firm to another (Larsson, Bengtsson, Henriksson, and Sparks, 1998), the market for it is inefficient. Therefore, a firm may decide to gain access to such technological know-how through an acquisition of another firm in which the technology is embedded, or through a strategic alliance in which the know-how and assets of both firms are combined to a certain degree (Doz, 1996; Hamel, 1991; Kogut, 1988; Mowery, Oxley, and Silverman, 1996; Stuart, 2000; Young-Ybarra and Wiersema, 1999). Earlier studies of governance choices were mostly informed by the logic of transaction cost economics (Hennart, 1988; Pisano, 1989; Pisano, Russo, and Teece, 1988), which suggests that transaction costs determine the structure of alliances. These studies, however, have been criticized because they did not take into account the importance of social relationships in mitigating transaction costs. In particular, they overlook how the embeddedness of firms in networks of relationships engenders trust that reduces transaction costs and, in turn, influences the choice among alternative forms of governance. This paper therefore contributes to the literature by considering such governance choices, specifically between equity and non-equity (or contractual) strategic alliances of firms in high-tech industries that seek technological know-how externally, concurrently from both relational and transactional perspectives. Adopting these two perspectives, the paper focuses on how the structural attributes of firms in the network of high-tech industries influence their decision between these two hierarchical modes of governance.
It is argued that the direct and indirect linkages available to firms through the network of strategic alliances are essentially sources of information that potentially alter the trade-offs in the choice between these two governance forms. Firms in high-tech industries can attain the required technological know-how through internal development or from external sources. Given that a firm has resolved to source technological know-how externally and is able to identify another firm that possesses the desired resources and capabilities, it needs to choose a mode for linking up with these resources and capabilities. As a dichotomous choice, such a firm would have to choose between equity and non-equity strategic alliances as alternative organizational forms of external technology sourcing. These organizational forms serve as control mechanisms that align the incentives of the parties involved. Classifying alliance governance structures in terms of equity has been the dominant perspective in distinguishing alliances. For example, allocating joint equity in equity alliances creates a mutual hostage situation that limits partners' incentives to behave opportunistically. Alliances involving equity are considered more hierarchical than non-equity alliances, and alliances with equity are generally thought to have many more control mechanisms (Hennart, 1988; Pisano, 1989; Pisano, Russo, and Teece, 1988; Teece, 1992). Transaction cost economics views the question of alliance governance as parallel to the question of vertical integration and the basic make-or-buy decision (Williamson, 1975).
Because alliances combine elements of both market and hierarchy, firms enter into such arrangements when the transaction costs associated with an exchange are intermediate but not high enough to justify vertical integration, or when full integration is not possible or desirable due to differences in size or corporate strategy (Eccles, 1981; Williamson, 1985). Scholars in transaction cost economics have argued that hierarchical control is justified when firms are faced with high transaction costs because it aligns the interests of the parties by creating a mutual hostage situation that curbs potential opportunistic behavior (Pisano, Russo, and Teece, 1988).
Analytic Hierarchy Process: An Approach for Determining the Collection Strategy
Yuh-Neu Chiang, Hsing Wu College, Taiwan
Yung-Chun Lin, Hsing Wu College, Taiwan
Su-Hsiu Lin, Hsing Wu College, Taiwan
Taiwan is an island with scarce resources; it depends heavily on international trade. Among international trade practices, accounts settlement is the most important one. This paper introduces the analytic hierarchy process to solve this decision-making problem, with two advantages: (1) it adopts a pair-wise comparison process, comparing two criteria at a time, to formulate the weights of the criteria; (2) it uses a consistency test to ensure logical judgments. Finally, we suggest a formula for SMEs in Taiwan to measure the scores of four payment-collection alternatives: letter of credit, collection (D/P, D/A), T/T, and open account. The international economic environment is changing all the time. Since Taiwan acceded to the World Trade Organization (WTO) in 2002, Taiwanese industries have had to face increasingly fierce competition from global corporations. Globalization and liberalization are now at the heart of the global economy. The trend towards globalization has created major challenges for Taiwan’s small and medium enterprises (SMEs), making it more and more difficult for them to manage and grow their existing business. SMEs account for an average of over 97% of all businesses in Taiwan, with an annual export value of over NT$130 trillion (White Paper, 2005), and they account for over 77% of employment and over 68% of employees on payrolls. SMEs indeed contribute greatly to social stability and the protection of people’s livelihood. Though Taiwan is famous for its high-tech products, e.g. IT products, production in traditional industries (manufacturing) still accounts for about 70% of total production (Directorate-General of Budget, Accounting and Statistics, Executive Yuan).
In recent years, however, both the growth rate and the export share of traditional industries (especially labor-intensive manufacturers) have been declining relative to those of large enterprises, for several reasons. First, large enterprises have performed better than SMEs because many of them are multinational corporations. Second, many of Taiwan’s large manufacturers export electric and electronic products, which are high value-added products. Third, most SMEs belong to traditional industries, and many of them have established factories overseas due to high costs. There has been no slowdown in outward investment from Taiwan (especially to Mainland China), and it is not easy to keep track of every export order. In the past few years, Taiwan’s investment in Mainland China has surpassed its total investment in other countries, accounting for nearly 70% of total outward investment (White Paper, 2005). Most of these investing corporations are labor-intensive manufacturers. Though they have set up factories and manufacture products overseas, they keep their main functions, e.g. R&D, procurement, and payment collection, in Taiwan. As a result, the triangular trade between Taiwan and Mainland China is thriving more than ever. The international economic environment has gone through enormous change attributable to trade globalization and liberalization, the greater efficiency and facility of high technology, and the boundless and comprehensive accessibility of the Internet. This has made international trade more expedient. An international trade transaction usually begins with the establishment of a business relationship and ends with the settlement of payment. Among international trade practices, accounts settlement is the most important of all: it requires safety, liquidity, profitability, as well as expediency.
The advance of the Internet has mitigated the problem of asymmetric information in transactions, and the method of payment collection has evolved from what it was 20 years ago due to this changing environment. Since great distances separate buyer from seller in international trade, the seller cannot be absolutely certain that the buyer will pay for the cargo. The exporter therefore wishes to collect payment as soon and as safely as possible, while the importer would rather postpone paying for as long as possible in order to make good use of the funds. There are many methods of payment collection for international transactions: cash in advance, letter of credit, open account, documents against payment, documents against acceptance, consignment, installment, cash against documents, and cash on delivery (Tsai, 2006). More than a decade ago, many researchers praised the letter of credit.
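The pair-wise comparison and consistency test that the abstract describes can be sketched in a few lines. The comparison matrix, the per-criterion ratings of the four alternatives, and the random-index value below are illustrative assumptions for the four criteria the paper names (safety, liquidity, profitability, expediency), not figures from the study.

```python
import math

# Hypothetical 4x4 pair-wise comparison matrix for the criteria
# safety, liquidity, profitability, expediency (Saaty's 1-9 scale).
A = [
    [1,     3,   5,   3],
    [1/3,   1,   3,   2],
    [1/5, 1/3,   1, 1/2],
    [1/3, 1/2,   2,   1],
]
n = len(A)

# Approximate the principal eigenvector with the geometric-mean method.
gm = [math.prod(row) ** (1 / n) for row in A]
weights = [g / sum(gm) for g in gm]

# Consistency test: lambda_max, consistency index (CI), consistency ratio (CR).
Aw = [sum(A[i][j] * weights[j] for j in range(n)) for i in range(n)]
lambda_max = sum(Aw[i] / weights[i] for i in range(n)) / n
CI = (lambda_max - n) / (n - 1)
RI = 0.90                      # random index for n = 4
CR = CI / RI                   # judgments are acceptable when CR < 0.1

# Weighted scores for the four alternatives; the 0-1 ratings per
# criterion are again hypothetical.
alternatives = {
    "letter of credit":      [0.9, 0.5, 0.3, 0.4],
    "collection (D/P, D/A)": [0.6, 0.6, 0.5, 0.5],
    "T/T":                   [0.4, 0.9, 0.7, 0.9],
    "open account":          [0.2, 0.8, 0.9, 0.8],
}
scores = {name: sum(w * r for w, r in zip(weights, ratings))
          for name, ratings in alternatives.items()}
```

With these illustrative judgments the safety criterion receives the largest weight and the consistency ratio falls below the usual 0.1 threshold, so the judgments would be accepted and the alternative with the highest weighted score preferred.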
Assessing Knowledge Management System Success: An Empirical Study in Taiwan’s High-Tech Industry
Chung-Hung Tsai, Tzu Chi College of Technology, Taiwan
Hwang-Yeh Chen, National Dong Hwa University, Taiwan
Effective knowledge management is the foundation for organizations to stay competitive. The knowledge management system (KMS) plays a key role in facilitating the knowledge management process in terms of knowledge creation, storage, transfer, and application, particularly in high-tech industry. The purpose of this study is to develop a KMS success model, built primarily on DeLone and McLean’s updated information systems success model (2003) and trust theory, by assessing the impact of system quality, information quality, service quality, and trust on system use and user satisfaction, leading in turn to individual performance change. An empirical study will be conducted to examine the proposed hypotheses and model using SEM on a selection of Taiwan’s high-tech companies. By including the factor of trust in the KMS success model, the proposed model extends beyond the traditional IT-based view of knowledge management systems and can provide academics and practitioners with a better assessment of KMS success. As society enters the knowledge-based economy, effective knowledge management is essential for organizations to stay competitive. Knowledge management (KM) is widely recognized by both academics and practitioners for its increasing importance in gaining organizational competitive advantage (Sambamurthy and Subramani, 2005). Knowledge is considered an extension of information in that knowledge is embedded with context (Gallupe, 2001). Because knowledge-based resources are embedded in multiple entities of the organization, i.e. organizational culture, routines, policies, systems, and documents, as well as employees, and their social complexity makes them difficult to imitate, these knowledge assets may generate long-term sustainable competitive advantage (Alavi and Leidner, 2001). Specifically, high-tech companies are characterized by a high level of intellectual work and have a majority of their assets linked to intellectual human assets (Rogers, 2001).
The key to knowledge management is to capture intellectual assets and help employees better perform their work for the benefit of the organization. IT-enabled knowledge management systems (KMS) can play a key role in helping organizations manage knowledge in a more effective and efficient way. Although early work in information systems research focused primarily on the design of KMS, the ‘soft’ or ‘social’ aspect demands equal attention. Sambamurthy and Subramani (2005) indicate an increasing realization that technical and social processes may interact as complements to shape knowledge management efforts. Structuring people, technology, and knowledge content is required for KM projects or initiatives to achieve organizational objectives (Davenport, Long and Beers, 1998). Organizations must take social factors into consideration to ensure success when designing and implementing KMS. Alavi and Leidner (2001) raise the research question of how trust can be developed to enhance individuals’ use of knowledge in a KMS. However, social factors in determining KMS success have not been fully explored and examined in information systems and knowledge management research. Since knowledge management is still a young discipline, there is a lack of an accepted integrative model that considers both technological and social elements in measuring the success of KMS implementation. The purpose of this study is to develop a KMS success model built mainly on DeLone and McLean’s updated information systems success model (2003) and trust theory, by assessing the impact of system quality, information quality, service quality, and trust on system use and user satisfaction, leading in turn to individual performance change. An empirical study will be conducted to examine the proposed hypotheses and model using SEM on a selection of Taiwan’s high-tech companies. The literature relevant to the development of the proposed hypotheses and model is reviewed and discussed in the next section.
Knowledge is at the center stage of knowledge management activities. Knowledge can be viewed as information combined with experience, context, interpretation, and reflection (Davenport et al., 1998), or as personalized information related to facts, procedures, concepts, interpretations, ideas, observations, and judgments (Alavi and Leidner, 2001). Thus, knowledge may be considered a key organizational resource (Huber, 2001) or an asset embedded with context (Gallupe, 2001), and must be managed effectively. Given the importance of knowledge as an asset, KM activities designed to create, transfer, and apply knowledge are crucial for firms to gain competitive advantage, and IT can play an important role in facilitating and enhancing firms’ KM efforts. The research on knowledge management, knowledge management systems, information systems, and trust is reviewed to develop a KMS success model in the following section. It is becoming accepted that knowledge management is required for modern organizations seeking to stay competitive in an increasingly dynamic and competitive world. Sustainable advantage can be achieved by a set of knowledge management activities within an organization, that is, effective and efficient generation, distribution, and application of knowledge (Gallupe, 2001).
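The hypothesized path structure of the proposed success model (quality and trust antecedents driving use and satisfaction, which in turn drive individual performance) can be illustrated with simulated data. The sketch below is not the study’s SEM analysis; it uses ordinary least squares on synthetic survey scores as a simplified stand-in, and every coefficient, sample size, and noise level is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated (hypothetical) respondent scores for the four antecedents.
system_q, info_q, service_q, trust = rng.normal(size=(4, n))

# Assumed structural relations mirroring the model's hypothesized paths.
use = (0.3 * system_q + 0.2 * info_q + 0.2 * service_q + 0.3 * trust
       + rng.normal(0, 0.3, n))
satisfaction = (0.25 * system_q + 0.3 * info_q + 0.2 * service_q
                + 0.25 * trust + 0.2 * use + rng.normal(0, 0.3, n))
performance = 0.5 * use + 0.4 * satisfaction + rng.normal(0, 0.3, n)

def ols(y, *xs):
    # Least-squares path estimates (a simplified stand-in for full SEM).
    X = np.column_stack(xs)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

path_use = ols(use, system_q, info_q, service_q, trust)   # antecedents -> use
path_perf = ols(performance, use, satisfaction)           # use, satisfaction -> performance
```

Because the data are generated with positive path coefficients, the recovered estimates are positive as well; the point of the sketch is only to show how the model’s chain of hypotheses maps onto a set of estimable equations.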
Transforming Perspectives on Health Care: Outcomes of a Management Education Program for Physicians
Dr. John P. Conbere, University of St. Thomas, Minneapolis, MN
Dr. Sharon K. Gibson, University of St. Thomas, Minneapolis, MN
This article reports on the outcomes of a management education (Mini MBA®) program for physicians in a large health care system based in South Dakota. The organization’s objective for this education program was to assist its physicians in becoming better leaders in health care. A total of 103 physicians were enrolled in the program in three cohort groups. Notably, the instructional methods employed in this program differed significantly from the traditional continuing medical education that physicians generally attend. In a qualitative case study, three major themes emerged with respect to the physicians’ perceived outcomes. First, physicians reported that the development of business acumen influenced their ability to participate in health care decisions. Second, physicians perceived that their increased openness to new ideas affected the way in which they interacted with hospital administrators, peers, and staff. Third, a sense of intimacy among physicians as they shared professional and personal experiences served to alleviate a sense of isolation. Moreover, these outcomes were seen in the context of their work, families, and communities. Avera Health, a large Catholic health care system based in Sioux Falls, South Dakota, was interested in providing its physicians with continuing education on the various factors that influence the success of health care. Its strategy was to use continuing medical education (CME) in the form of a Mini MBA® program created by the Center for Health and Medical Affairs at the University of St. Thomas, Minneapolis, Minnesota. This management education program differed significantly in content structure and learning process from the traditional didactic lecture format of most CME programs.
This article reports on the individual and organizational outcomes of the Mini MBA® as perceived by the physicians who attended this program, and lends insight into the impact of this type of education on physicians’ work, families, and communities. Generally, most continuing education for physicians is focused on maintaining their ability to practice in their area of medical specialty or on keeping current with new medical developments. The basis for most education taken by physicians is therefore content-focused. For example, Holmer (2001) found that 81% of physicians identified meeting state-based licensure requirements as their reason for participating in CME, while 41% of physicians stated that keeping up-to-date on medical developments was their objective. In addition, the typical CME involves a didactic lecture format, and active physician involvement is kept to a minimum (Felch & Scanlon, 1997). This tends to parallel the instructional methods traditionally found in medical schools. Thus, traditional CME, like physicians’ prior educational experiences, has tended to be a rather passive event. However, some have argued that the goal of CME is changing physician performance or outcomes and that the traditional CME lecture fails to lead to change (O’Brien, 1999; Richards, 1988). Researchers have had limited success in demonstrating that the standard didactic approach works (Felch & Scanlon, 1997). In a study of the effectiveness of didactic, interactive, and mixed educational styles, the results showed that didactic CME had no effect on physician performance, while other approaches, which included case discussion, role-play, or hands-on practice sessions, had a significant impact on physician behavior (Davis, O’Brien, Freemantle, Wolf, Mazmanian, & Taylor-Vaisey, 1999). Three developments are changing the standard style of CME (Felch & Scanlon, 1997).
One is the use of experts in adult education, computer science, communications, and continuous quality improvement. The second is the growth of CME being provided by regional and local providers, such as community hospitals and group partnerships, which allows for CME professionals to be better able to accommodate physicians’ individual learning needs. The third is a change in emphasis from the teacher to the learner, so that any experience that leads physicians to change is understood to be a learning experience. This change from education by the expert to being self-directed and learner-focused allows for peer communication to be as legitimate as “expert” instruction (Felch & Scanlon, 1997). In addition, there have been efforts to use business concepts as the framework for a CME program. For example, the Geisinger Clinic in Danville, PA worked with the Sigmund Weis School of Business at Susquehanna University to develop a continuing education program that would teach physicians the “how’s and why’s of management decision making” (Radecki, 1986, p. 14). In summary, much of CME has used a didactic approach, which appears to have had limited success in changing physicians’ behaviors in their practice. CME is in a time of change, with a growing emphasis on the use of adult learning theory to shape the pedagogy of the education, an increased use of local and regional health care groups providing CME for the physicians in their systems, and a shift from expert-driven to learner-driven instructional methods. As health care continues to change and the need for physicians to assume a leadership role in health care increases, this type of educational approach has become even more critical. However, although interest in physician leadership is increasing, there is little empirical research on physician leader competencies or on the design of education to develop these competencies. 
As stated by McKenna and Pugno (2006), “Despite rapidly escalating interest in the topic of physician leadership, not much has been published regarding the competencies associated with physician leadership, or how those competencies are developed” (p. 57). Given the changing nature of CME, the purpose of this study was to assess the perceived outcomes of a unique educational program for physicians that incorporated a variety of adult learning strategies to achieve its learning objectives. In addition to covering content characteristic of management education, this program, customized specifically to the health care context, employed instructional methods that were a distinct departure from the didactic approach to which physicians were accustomed, and it was perceived to contribute to some unintended outcomes for physicians. Given that these outcomes had not been previously identified, a qualitative case study using an emergent data collection approach was determined to be the best method to gain knowledge of the learning outcomes perceived by the physicians who attended the program. Avera Health Systems, through its collaboration with the University of St. Thomas, began offering the Mini MBA® in Health Care Management in 2001.
Using Assessments Tasks to Shift Focus to Learning Rather than Evaluating Students
Shameem Ali, Victoria University, Melbourne, Victoria, Australia
Dr. Henry W. L. Ho, Victoria University, Melbourne, Victoria, Australia
This paper investigates students working on case analysis as part of the summative assessment in an undergraduate marketing subject, with the aim of developing students’ analytical, strategic-thinking, and problem-solving skills and their knowledge of marketing principles. The results indicated that a large majority of students preferred working on the analysis of the cases given as their assignments and perceived that they would get a high grade on them. Overall, students found case analysis to be more rewarding than other forms of assessment. These findings have several implications for the design of assessable assignments that take into account the general student tendency to avoid deeper learning in favor of surface learning that only reaches minimum standards. The paper suggests that a combination of varied forms of assessment is appropriate if greater and deeper learning is to be achieved. Educators have said that they never really learned a subject until they had to teach it themselves (Carlson and Schodt, 1995). This refers to understanding of the content, which is the prime focus of many teachers in both the secondary school and university systems. Today’s students have unlimited access to information, and the modern challenge facing teachers is motivating students to engage with the subject. A frequently heard complaint about education today is that it does not teach students to think. The purpose of tertiary education should be to provide an opportunity for students to acquire and process knowledge. Unfortunately, all too often the emphasis is upon memorization of, rather than processing of, information (Bolton, 1996). According to Drea, Tripp and Stuenkel (2005), students learn best when educators successfully create an active learning environment. To help marketing students learn to “think like marketers”, marketing educators need to consider seriously ways of moving beyond the traditional modes of instruction.
The case study analysis method is one technique for doing this. Although case studies are tools widely used in tertiary education, it takes time for both educators and their students to learn how to get the most out of them. Ideally, case studies are intended to help students make relevant connections among course materials, transforming those materials from opaque language or ideas into something students can integrate into their own long-term memory and knowledge bank (Grosse, 1988; Hancock, 1993). This paper investigates students working on case analysis as part of the summative assessment in an undergraduate marketing subject, with the aim of developing students’ analytical, strategic-thinking, and problem-solving skills and their knowledge of marketing principles. Making judgments of student competency and evaluating their learning are complex matters, especially in a semester timeframe. This paper examines students’ perceptions of whether the analysis and problem solving of the case study contributed to their learning, and evaluates students’ perceptions of the value of case study analysis and problem solving as part of the assessment. The results show that students perceived the case study analysis and problem-solving exercise as contributing to their learning and that they benefited from seeing a problem examined from different perspectives. Summative assessment is comprehensive in nature, provides accountability, and is used to check the level of learning at the end of the program (Angelo and Cross, 1993). It requires conflating the marks given for each task to produce the subject’s final outcome (Ritter and Wilson, 2001). Ideally, summative assessment is designed to measure student understanding following a sustained period of instruction, with the focus on identifying the level of student mastery and the effectiveness of instruction.
As such, summative assessments are outcome measures that emphasise student achievement rather than aptitude or effort. Summative assessment methods are the most traditional way of evaluating student work. From classroom tests to high-stakes testing, summative assessments are used in universities and colleges across Australia. From a student perspective, summative assessments are primarily utilised to determine final course grades; from an instructor perspective, they are a means of accountability (Barrows, 1986). Most educators (Angelo and Cross, 1993; Ritter and Wilson, 2001; Scott, 2001) believe that summative assessments are a vital part of the educational process due to the wealth of information they provide. In other words, it is important that educators invest the necessary time and resources to develop quality summative assessments. To achieve successful outcomes in subjects such as marketing strategy, where the focus is on teaching students to think strategically and analytically, deeper learning is essential. The nature of strategy is such that actions and activities are not necessarily sequential, and evaluation has to be made over a series of overlapping actions, reactions, and outcomes. One of the objectives of the subject is to develop critical thinking and analysis, and previous forms of assessment using group tasks, essays, and presentations were generally unable to achieve some of the expected outcomes. Students were unable to devote adequate time to assessment pieces and tended to rewrite answers from readily available sources. In order to encourage students to develop deeper learning, case analysis assessments were introduced. The expectation was that students would be forced to think through a small problem for which answers were unlikely to be found through the usual search mechanisms.
It was expected that students would work through a range of sources and draw on their experiences and material learnt in this and other related areas of study to derive possible solutions. This can be contrasted with surface learning, which depends on memory and the ability to reproduce material deemed relevant to a particular question (Boyce et al., 2001). Case study analysis removes the tendency to reproduce existing material, encourages critical thinking, and requires the application of skills and knowledge, thereby motivating students to seek that knowledge.
Congruence/Incongruence Perception of Product Strategy and Business Performance: By Contrasting Organizations and Consumers from Taiwanese Telecommunications Industry
Chi-Jyun Cheng, The University of Birmingham, UK
Dr. Shuling Liao, Yuan-Ze University, Taiwan
Within the competitive telecommunications industry, organizations usually have to come up with their product strategies in a short time, and these strategies are mostly based on their working experience. However, organizations may recognize their product strategies in one way while customers perceive them in another. With data collected from the Taiwanese telecommunications industry and its consumers, namely 132 practitioners and 554 consumers, this study contributes to the body of knowledge by empirically contrasting the perception of product strategy between organizations and consumers and its effect on business performance. The results of the study reveal that organizations whose product-strategy perceptions are more consistent with customers’ should benefit from a higher level of business performance. The implications of the study are presented from both academic and managerial perspectives. Against the background of a global economy, programs of privatization have been widely employed by governments in developed and developing countries alike. Among privatized cases, telecommunications is a typical one (Wymbs 2002). In the last decade, the telecommunications industry has seen some of the most spectacular growth under globalization (Sarkar et al. 1999), while it has also become among the most competitive in the world (Turnbull and Leek 2000). Within such a competitive environment, distinctive product strategies have to emerge in a short time, and practitioners are forced to devise them quickly. Under this circumstance, practitioners encounter a problem: they see their own product strategies in one way, but customers perceive them in another. For example, organizations may assume their product strategies meet customers’ needs when customers do not agree. As a result, the organizations might face a major problem of losing customers.
Even worse, this inconsistent perception between organizations and customers might lead to poor business performance (Langerak 2001) if organizations ignore it or do not act upon it. On the other hand, if the two groups have similar perceptions, this congruence can be regarded as a strength for organizations. Previous research has suggested that such inconsistent perceptions can be examined by different gap analyses (Brown and Swartz 1989; Brennan and Gallagher 2002; Headley and Choi 1992; Kwan and Hee 1994; Min and Min 1996), but it is surprising that most studies of gap analysis deal with the same group’s perceptions (Clow and Vorhies 1993; Gagliano and Hathcote 1994; Kwan and Hee 1994; Sultan and Simpson 2000), and little research has addressed the problem of inconsistent perceptions between organizations and customers (Bitner et al. 1990; Callan and Lefebve 1997; Samli et al. 1998). In addition, research on gap analysis focuses mostly on how to discover the gaps but rarely on how to deal with them if they exist. What is more, research on gap analysis has done little to reveal the effect of product strategy on business performance by comparing the perceptions of organizations and customers. Therefore, this research has three objectives. First, within the telecommunications industry, the consistent/inconsistent perceptions of product strategy between organizations and consumers will be identified through a gap analysis. Second, the relationships between these consistent/inconsistent perceptions and the level of business performance will be tested with empirical data. Third, based on the results, relevant corrective actions are suggested so as to improve either product strategies or business performance. To gain a product advantage, the product strategies of organizations must be equal or superior to those of competitors. In addition, it is increasingly difficult to maintain an advantage through product innovation (Vandenbosch and Dawar 2002).
Also, unlike mobile phone handset makers, mobile network operators cannot offer a physical product to customers (apart from a SIM card). Therefore, one possible product strategy is product differentiation (David et al. 2002). A differentiation strategy can emphasize customer requests, such as differentiation in range of coverage, international roaming service, and so on. Another possibility is product bundling (Gal-Or 2004; Peel et al. 2000; Stremersch and Tellis 2002). Product bundling (e.g., a SIM card with a mobile phone handset) not only benefits network operators and handset makers but also spares customers from buying each item separately. In addition, a mobile network operator can charge a higher price by providing extra information in a bundle (Sinha 2000), such as information on movies or music. Moreover, bundling with a handset is financially attractive because it is less expensive for a new customer (Schiesel 2001). Product expansion can also be a good strategy (Mishina et al. 2004). According to Porter (1987), product expansion can lower costs by sharing similar services and thereby achieving economies of scale.
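The gap analysis this paper proposes, contrasting organizations’ and consumers’ perceptions of the same product-strategy items, can be sketched as a simple mean-difference test. The Likert ratings below are hypothetical values for a single illustrative item, not data from the study’s 132 practitioners and 554 consumers.

```python
import math
import statistics as st

# Hypothetical 5-point Likert ratings of one product-strategy item,
# e.g. "our products meet customer needs", from the two groups.
org_ratings = [5, 4, 5, 4, 4, 5, 3, 4, 5, 4]        # practitioners
consumer_ratings = [3, 2, 4, 3, 3, 2, 4, 3, 3, 2]   # customers

def perception_gap(a, b):
    """Mean perception gap and a Welch-style t statistic for the gap."""
    gap = st.mean(a) - st.mean(b)
    se = math.sqrt(st.variance(a) / len(a) + st.variance(b) / len(b))
    return gap, gap / se

gap, t = perception_gap(org_ratings, consumer_ratings)
# A large positive gap (the organization rating itself higher than its
# customers do) flags an incongruent perception calling for corrective action.
```

A gap near zero would indicate the congruence the study associates with stronger business performance, while a statistically significant gap would point to an item where corrective action is needed.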
Pre-Assessment of Data Collection Procedures: Planning to Fail by Failing to Plan
Dr. John Knight, University of Tennessee at Martin, Martin, TN
Dr. Daniel Tracy, University of Tennessee at Martin, Martin, TN
Prior to making major decisions, managers sometimes instruct analysts to search readily available databases for clues as to how relevant statistical data might impact the decisions to be made. Without a readily available database, managers might authorize the collection and analysis of data that could provide insight into a more probabilistically sound solution to the problem. In either case, management should be intricately involved in the pre-assessment of the data variables so that the analysis will have a positive impact on decision-making outcomes. Management must ensure that pre-assessment of data is thorough. Why? Management must recognize that potential problems originate in different places, and the omission of any significant causes and effects will obviously affect the analysis. Further, management should understand that good statistical analysis is expensive in both time and actual costs. Spending excessive time and money on the collection of poor or inappropriate data is a fruitless, and sometimes even deceptive, problem often seen in numerical analysis. Finally, management must emphasize that without their prior agreement on appropriate operational variable definitions and suitable measurement techniques and methodologies, conclusions may be dismissed if they are contrary to any preconceived plans or agendas. Management must insist that important technical pre-assessment activities are incorporated into the process. Any data collected need to have appropriate accuracy and precision, and methods for selecting the appropriate levels of accuracy and precision for different stimulus versus response variables need to be incorporated. The data collection method should be pre-tested and evaluated for problems, including difficulty of collection and measurement, inability to obtain representative data, and potential sampling error.
Finally, appropriate sample collection procedures need to be established, individuals trained, and sample sizes determined. Without appropriate pre-assessment of the collection of statistical data for managerial decision-making, the potential rewards of such analysis will seldom be fully realized; this is demonstrated through an illustrative case study. When management attempts to employ statistics to define and analyze a problem prior to making a major managerial policy decision, it simultaneously makes a significant commitment to the development of a relevant database. The collection of data often takes an extensive amount of time and effort, so data collection costs have a significant impact on the analysis. In some cases, an existing database can be referenced and analyzed at a relatively minor cost. In most cases, however, the existing database is only tangentially related to the specific problem at hand and will thus provide only tangentially related answers to the problem (Knight, 1999). In most cases where statistics are to be utilized, a decision to collect data will involve a commitment of significant employee data collection time, actual costs of collecting data (destructive testing, for example, even involves the cost of lost material), and decreased productivity due to experimental runs being interlaced with normal production procedures. In addition to these costs, significant amounts of statistical analysis time, effort, and report writing are implied when utilizing statistics in the decision-making process. In these cases management must be more than vaguely familiar with the concept of "garbage in, garbage out" when data are to be analyzed. Although the concept is theoretically easy to perceive and accept, managers' technical knowledge of how to ensure good data rather than garbage is often scant.
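The sample-size step mentioned above can be illustrated with a standard textbook calculation; the process sigma, confidence level, and margin of error below are hypothetical choices for illustration, not figures from the case study.

```python
import math

def sample_size_for_mean(sigma, margin_of_error, z=1.96):
    """Minimum n so that a z-based confidence interval for the mean
    has half-width no larger than margin_of_error."""
    n = (z * sigma / margin_of_error) ** 2
    return math.ceil(n)

# Hypothetical example: process sigma estimated at 4.0 units,
# management wants the mean estimated to within +/- 1.0 unit
# at roughly 95% confidence (z = 1.96).
n = sample_size_for_mean(sigma=4.0, margin_of_error=1.0)
```

Agreeing on such numbers before collection begins is exactly the kind of pre-assessment the authors argue management must insist on, since the required n drives both cost and feasibility.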
Many examples exist in practice where tremendous expense has been incurred in the collection of statistical data while the results have been non-existent. For example, many cases exist where management has decided to implement statistical process control at the urging of a customer when the measured variation is primarily measurement error rather than product variation (Knight, 2000). In such cases, the process appears to be in control, but in reality the analysis is simply verifying that the data are a sequence of random numbers.
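The measurement-error pitfall described above can be made concrete with a small simulation; the standard deviations below are invented for illustration and assume a simple additive gauge-error model, not data from Knight (2000).

```python
import random

random.seed(0)

# Hypothetical "garbage in" scenario: the gauge's measurement error
# (sd 1.0) dwarfs the true product variation (sd 0.1), so recorded
# values mostly track random noise rather than the process.
true_sd, gauge_sd = 0.1, 1.0
measurements = [random.gauss(10.0, true_sd) + random.gauss(0.0, gauge_sd)
                for _ in range(500)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

total_var = variance(measurements)
# Share of observed variance attributable to the gauge alone
# under the additive model: sd_g^2 / (sd_p^2 + sd_g^2).
gauge_share = gauge_sd ** 2 / (true_sd ** 2 + gauge_sd ** 2)
```

With roughly 99 percent of the observed variance coming from the gauge, a control chart on these measurements would indeed be charting little more than random numbers, which is the failure mode the authors warn against.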
Market Reaction to Accounting Regulatory Changes: Adoption of SFAS 142
Dr. Stephen C. Gara, Drake University, Des Moines, IA
This study examines the market reaction following announcements from the Financial Accounting Standards Board (FASB) that it intended to eliminate the requirement to amortize purchased goodwill (SFAS 142). SFAS 142 altered the long-standing treatment for purchased goodwill, ratable amortization, and replaced it with impairment testing. Consequently, reported earnings are no longer subject to the drag of amortization expense. While goodwill is still recognized and capitalized, it is now subject to annual impairment testing, potentially requiring large, but infrequent, write-downs of goodwill value. The rationale for the change was three-fold: to improve the quality of reported earnings, to increase the comparability of U.S. accounting principles with those of other industrialized nations, and to mitigate the elimination of the pooling method for acquisition reporting. However, criticisms of SFAS 142 have been raised as well. The overall research question to be answered by this study is whether investors and other market participants consider the elimination of goodwill amortization to be a positive event, as evidenced by their changing assessment of firm value. An event study methodology is used to examine the market reaction associated with the adoption of SFAS 142 by the FASB, including the association between market reaction and the magnitude of reported goodwill. Additionally, consistent with Myring et al. (2003), the debt contracting and political cost hypotheses are examined as contributors to the market's reaction to SFAS 142. A negative reaction is found surrounding the FASB's original proposal shortening the goodwill amortization period from 40 to 20 years. The overall results, however, indicate a positive market reaction for the initial event dates surrounding the FASB's decision to eliminate amortization and impose impairment testing instead, though no significant reaction was found for the final vote by the FASB implementing SFAS 142.
Finally, the reaction was generally positively associated with the reported level of goodwill. The Financial Accounting Standards Board (FASB) issued Statement of Financial Accounting Standards (SFAS) 142 in July 2001, drastically changing the financial reporting of goodwill. Goodwill is measured as the excess of the business acquisition price over the fair market value of a target firm's identifiable net assets, and often comprises the single largest portion of the purchase price. Essentially, goodwill is based on the premise that the whole is greater than the sum of a firm's parts (assets). It represents target value not otherwise disclosed on its balance sheet. For example, 90 percent of Philip Morris' $13 billion acquisition of Kraft was allocated to goodwill (Gara and Karim 2000). As of 2003, Standard and Poor's 500 firms had $1.3 trillion in goodwill reported on their books, making it the single largest recorded intangible asset (Churyk 2004). Furthermore, the transitioning of the U.S. economic base from manufacturing to services has only increased the role and significance of intangibles such as goodwill (Henning et al. 2000). The accounting treatment of goodwill has experienced a long, turbulent history. The very nature of goodwill as a residual intangible asset makes the accounting for it difficult. Prior to 2001, the governing rules for goodwill were provided by Accounting Principles Board Opinion No. 17, Intangible Assets (APB 17) (APB 1970). APB 17 provided that acquired goodwill was initially recognized as an asset on the acquirer's books. Subsequently, goodwill was systematically amortized over its estimated useful life, up to a maximum of 40 years. Consequently, an acquirer's future reported earnings were dragged down by amortization expense. In July 2001, the FASB dramatically altered the reporting treatment for goodwill with the issuance of SFAS 142, Goodwill and Other Intangible Assets (FASB 2001).
SFAS 142 prohibits firms from systematically amortizing acquired goodwill. The new standard requires instead that goodwill be tested annually for impairment. If impairment is found, reported goodwill is reduced and a write-off against earnings is reported. Consequently, acquirers no longer face a systematic drag on their earnings due to amortization. However, the required annual impairment testing does force acquirers to disclose unwise acquisitions and overpayments through reported goodwill write-offs (Henning and Shaw 2004). Additionally, impairment-based write-downs may potentially lead to erratic earnings reports. The objective of this study is to examine the market reaction to events leading up to the issuance of SFAS 142. While the new standard is generally income increasing, any subsequent write-off of goodwill is likely to adversely impact reported earnings (Moehrle et al. 2001; Hirschey and Richardson 2002). Additionally, since the standard has no real cash flow effect for the firm, the market should not exhibit a significant reaction unless the new standard increases the quality of reported earnings by reducing the noise surrounding the treatment of goodwill. However, positive accounting theory posits that changes in required accounting treatment, despite a lack of a cash-flow effect, affect firm value through their impact on debt-contracting and political costs (Myring et al. 2003). As a result, this study will contribute to the goodwill accounting literature by revealing the market's perception, if any, of the change in reporting treatment. The next section discusses the history of accounting for goodwill and the reasons for the recent change. A review of the relevant goodwill literature follows, succeeded by a discussion of the tested hypotheses, sample selection and model development. The results are then presented, with a summary and conclusion to follow.
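The event study mechanics referred to above, estimating a market model over a clean window and cumulating abnormal returns over the event window, can be sketched as follows; the returns, alpha, beta, and window dates here are all simulated for illustration and are not the study's data.

```python
import random

random.seed(1)

# Simulated daily returns: a market index and one firm whose returns
# follow the market model r_firm = alpha + beta * r_market + noise.
alpha, beta = 0.0005, 1.2
market = [random.gauss(0.0004, 0.01) for _ in range(260)]
firm = [alpha + beta * rm + random.gauss(0.0, 0.008) for rm in market]

# Estimation window: days 0-199 (before the event);
# hypothetical event window: days 250-254.
est_m, est_f = market[:200], firm[:200]

def ols(x, y):
    """Simple-regression intercept and slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

a_hat, b_hat = ols(est_m, est_f)

# Abnormal return = actual return minus the market-model expectation;
# the cumulative abnormal return (CAR) sums these over the event window.
car = sum(firm[t] - (a_hat + b_hat * market[t]) for t in range(250, 255))
```

In the study itself the sign and significance of such CARs around each FASB announcement date, and their association with reported goodwill, carry the inference.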
The Relationship between Electronic Business Process Reengineering and Organizational Performance in Taiwan
David W-S. Tai, National Changhua University of Education, Taiwan
C-E Huang, National Changhua University of Education, Taiwan
With the advent of free, open and globalized economic development, industries are confronted by more competitors in both domestic and international markets. Under this dynamic environment, the traditional labor-intensive industries in Taiwan are losing their global competitive advantage. This research surveyed the top 850 of Taiwan's enterprises in order to examine the relationship between internal and external environmental changes in the industry. Data from 103 firms revealed that the more flexible a company is in its industrial environment, the better its chances of reconstructing itself by applying information technology and operational procedures; when a corporation's internal and external environments change, the company must also adjust its process activities to improve its organizational performance. Finally, this research found that manufacturing industries are more focused on the application of information technology than are other industries because of the need to reduce reaction time to customer complaints, the response time to mistakes, and the manufacturing life-cycle of products and services. Business reengineering has been a central issue among corporations since Hammer (1990) first presented the concept. Business reengineering provides a way for manufacturers to respond effectively to the challenge of changing environments. Since its objectives and methods are applicable to a wide range of industries, many companies consider business reengineering the means by which to achieve industrial competitiveness (Bradley & Rosenzweig, 1992). Information Technology (IT) applications play a crucial role in the process of business reengineering; computers, applied software, and Internet technology help organizations become more flexible and responsive and provide quality products or services.
IT can decrease or substitute for human resources, thus increasing production efficiency and promoting sounder internal working processes. It can also combine with other functions to make a succession of processes work well. Since IT has contributed so strongly to business effectiveness, electronic business process reengineering has become an important new trend in the face of increasing competitive pressures. This research includes literature reviews and empirical studies with the purpose of examining how organizations adjust electronic business process reengineering when faced with the pressure of changing environments and industrial upgrades. The research objectively summarizes concrete suggestions that will help business process reengineering work. The objectives of this research are: (1) to examine business process reengineering in Taiwan, (2) to examine the obstacles confronted during the process of business reengineering, and (3) to determine the relationship among the administrative environment, business reengineering and organizational performance. Xie (1980) considered the industrial environment to be an uncontrollable factor for a company, whether it comes from inside or outside the organization. Duncan et al. (1972) suggested that the internal environment includes individual personnel, functions, staff and organizational structure, while the external environment includes customers, suppliers, competitors, social politics and technologies. Jackson and Schuler (1995) proposed instead that internal environmental factors are technologies, structures, scales, industrial life cycle, competing strategy and industrial culture, and that external environmental factors include laws, regulations, national culture, trade unions, the labor market and the service or manufacturing industries the company relies upon. This research separates the factors which influence the industrial environment into the external environment and internal administration.
The internal administration has four parts—administrative management, production and sales, financial structure, and marketing—while the external environment consists of social, economic, technological, legal/political and industrial factors. Davenport and Short (1990) defined business process reengineering as a working process to design and analyze an organization internally and cross-organizationally. Hammer and Champy (1993) contended that the objective of business reengineering is to focus on "process," which they defined as a series of activities through which an organization gathers raw material and produces products to meet customer needs. The current research defines business process reengineering as a way to improve performance by reconsidering the procedures of business administration, and by reconstructing operating steps, organizational structure and IT from top to bottom. Hammer (1990) considered IT to be the key to comprehensive business process reengineering, a way to rethink the drawbacks of, and the possibility of improvement in, present operating steps. Business process reengineering requires support from IT in order to increase efficiency by continuously applying the latest technology. Marchand et al. (2000) proposed that an information-oriented industry must acquire the ability to execute IT, information administration, and information behavior and values effectively in order to increase enterprise performance.
The Market Innovative Acceptance Framework for High-Tech Firms: An Example of Ultrasonic Cleaning Equipment
Jung Huang, Minghsin University of Science and Technology, Taiwan
Dr. Chih-Hung Wu, Takming College, Taiwan
Wen-Ta Hsu, Chung Hua University, Taiwan
The purpose of this study is to propose a market innovative acceptance framework for high-tech firms accepting the innovative technology of ultrasonic cleaning equipment. The study focuses on identifying the local businesses using the equipment, and a survey was conducted to investigate how the equipment is accepted in the market. We target local hi-tech businesses having clean rooms (class ≦10000), extract the constructs influencing market acceptance by the Delphi method, and, through the Analytic Hierarchy Process (AHP), calculate the relative weighting of each construct and principle to explore the future development of ultrasonic cleaning equipment. From our research, we found that two elements—government policies and publicity—are the keys for local hi-tech businesses in selecting ultrasonic cleaning equipment. It is therefore suggested that manufacturers, when promoting ultrasonic cleaning equipment, align with government publicity and regulations while enhancing the stability and quality of the equipment so that consumers can identify with the cleaning level after using it. The study also predicts the future requirements and trends of ultrasonic cleaning equipment, and we believe there will be great market demand and business opportunities. In hi-tech industries, the cost of the filter nets used in clean rooms is disproportionately low compared to total operating costs. Filter nets, however, play a crucial role in determining whether a plant can achieve a successful process and in affecting the defect rate. The IC fabs of the semiconductor industry, for example, place high demands on the capacity of clean-room filter nets to remove contamination particles, so that the clean rooms can achieve a prescribed standard and attain high-quality process control.
In Taiwan, the filter nets of clean rooms are currently all of the disposable type, discarded and replaced by new ones after use. Although this practice assures product quality, the ever-increasing volume of waste filter nets cannot be decomposed in landfills on one hand and, when incinerated, blocks fire grids and generates dioxin on the other, consequently increasing social costs. Owing to the rising tide of environmental protection, green consumption has become the norm. Enterprises make the goal of sustainable development attainable both by decreasing environmental damage and by taking on the image of a green enterprise. A Japanese company recently used ultrasound techniques to devise cleaning equipment for filter nets, which can be applied to the recycling of clean-room filter nets in hi-tech plants. Cleaned by this ultrasound equipment, a filter net can restore its function without sacrificing the requirements of the clean room. Accordingly, in addition to reducing procurement costs, this equipment lessens the amount of waste as well as social costs, making it environment-friendly and cost-effective. This research principally probes the acceptability of ultrasound cleaning devices in Taiwan's hi-tech industries; meanwhile, by employing the Analytic Hierarchy Process to assess the relative weighting of each principle, the future development of such instruments in Taiwan is also scrutinized. With hi-tech industries having clean rooms (class ≦10000) as its subjects, this research explores the acceptability of ultrasound cleaning devices in Taiwan. Extracting the constructs influencing market acceptance by the Delphi method and calculating the relative weighting of each construct and principle through the Analytic Hierarchy Process (AHP), we clarify the potential development of ultrasonic cleaning equipment.
In sum, the research intends to achieve three aims: (1) extracting the factors that influence market acceptability through interviews with experts and the Delphi method, and setting up the hierarchical structure of this research; (2) appraising the key factors that affect acceptability by using the Analytic Hierarchy Process to assess the relative weighting of each construct and principle; and (3) presenting conclusions for reference on importing ultrasound cleaning devices, popularizing them, and even undertaking local production or transferring the relevant techniques. First, the literature related to ultrasound cleaning techniques is briefly introduced. In general, chemical cleaning and physical cleaning are the two principal methods of cleaning surface pollutants. The former applies detergents directly to the object's surface, either carrying pollutants away from the surface by dissolving them, or separating them from the surface through the differing cohesion of detergents and pollutants.
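The AHP weighting step described above can be sketched with a small pairwise-comparison matrix; the criteria names and matrix entries below are invented for illustration, not the study's survey data, and the weights are approximated by normalized row geometric means, a common stand-in for the principal eigenvector.

```python
import math

# Hypothetical 3x3 pairwise-comparison matrix over three criteria
# (say, cleaning quality, cost, and regulatory fit); entry A[i][j]
# expresses how strongly criterion i is preferred to criterion j
# on the standard 1-9 AHP scale. A reciprocal matrix: A[j][i] = 1/A[i][j].
A = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

def ahp_weights(matrix):
    """Approximate AHP priority weights via normalized row geometric means."""
    geo = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

weights = ahp_weights(A)
```

In the actual study, matrices like this would be elicited from the Delphi panel for each construct and principle, and a consistency ratio would normally be checked before the weights are used.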
Comprehensive Income and Holding Gains and Losses: Evidence from a Pilot Empirical Research on Italian Corporations
Dr. Marco Maffei, Università di Napoli “Federico II”, Italy
The starting point of this research is the decision of the European Union to compel listed companies to prepare their financial statements in accordance with the IFRS. Consequently, there is a need to understand the present and potential ability of international accounting standards to effectively modify and improve the true and fair representation of the financial position and performance of domestic corporations. Thus, the aim of the paper is to test a statistical methodology capable of assessing the level of diversity between the Italian and international measures of income. Specifically, the work focuses on the concept of comprehensive income (comprising items included both in net income and in equity) and investigates whether the accounting treatment of holding gains and losses under IFRS involves a substantial modification of income measurement with respect to Italian practice. This analysis is accompanied by a pilot empirical study conducted on a sample of Italian listed companies. It was verified that the examined companies presented just a few items in equity. Moreover, the holding gains and losses comprised in net income (within the limits related to IFRS first-time adoption) are not substantial. Effective for fiscal year 2004, two different accounting standards coexist in Italy: the OIC standards (OIC is the acronym of Organismo Italiano di Contabilità, the Italian standards setter) and the International Financial Reporting Standards. The OIC allows the determination of the traditional net income, inspired by the transaction-based model, despite being affected by a rigid and slow-to-change civil law. Nevertheless, Italian listed corporations are obliged to adopt the IFRS rules as a consequence of European Union enforcement (Regulation 1606/2002).
Moreover, there is an amendment to the fourth directive, leading to the possibility of preparing a statement of performance instead of the traditional profit and loss account (EU directive n° 51/2003, article 1). The IFRS lead to the measurement of a comprehensive income, and the IASB is currently dealing with a project related to the opportunity of recommending a statement of performance (exposure draft of proposed amendments to IAS 1 "Presentation of financial statements", 2006). Since accounting models have to compete in the marketplace (Watts & Zimmerman, 1986), it is believed to be extremely important to verify whether cohabitation between the OIC standards and IFRS is possible, or whether their differences may be reduced in the long run. This effort has to take into account that the underlying approaches are strongly influenced by different cultural conditions and economic environments in response to specific needs, and that such dissimilarities affect book-keeping and disclosure behavior (Hofstede, 1980), leading to differences in accounting systems (Caldarelli, 1997), carrying values (Gray, 1988) and financial reporting (Nobes, 1980). Because of the increasing use of fair value, the paper focuses especially on the accounting treatment of changes in the value of assets and liabilities. On this topic, it is necessary to consider at least two aspects. As far as the first issue is concerned, it is known that, consistent with Italian practice, gains are recognized in net profit only when they result in the receipt of cash or the acquisition of assets that are reasonably certain to be turned into cash; due to the conservatism concept, anticipated gains do not enter the measurement of income. On the contrary, anticipated losses are generally taken to profit and loss. According to the IFRS, holding gains and losses are recognized even though unrealized (IASB, 1989).
Specifically, fair value appears in the revaluation of property, plant and equipment; in the actuarial gains and losses of employee benefits; in the exchange differences on monetary items and on net investments in foreign operations; in the revaluation of intangible assets; in the holding gains and losses of financial instruments; in the gains and losses of investment properties; and in the gains and losses of agriculture items. As far as the second issue is concerned, some inconsistency can be found in the accounting treatment of changes in the value of assets and liabilities. The framework states that income and expenses are, respectively, a form of inflow or enhancement of assets or decrease of liabilities, and a form of outflow or depletion of assets or incurrence of liabilities. Nevertheless, some standards require that the changes be included in equity, while other standards require that they be included in the income statement. The items recognized in equity are classified as other recognized income and expense, as suggested in the ED of proposed amendments to IAS 1. This category covers changes in revaluation surplus, gains and losses arising from translating the financial statements of a foreign operation, gains and losses on remeasuring available-for-sale financial assets, the effective portion of gains and losses on hedging instruments in a cash flow hedge and, finally, actuarial gains and losses on defined benefit plans.
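The relationship between net income and the equity-recognized items listed above can be sketched as a simple aggregation; the figures below are hypothetical illustrations, not data from the pilot study.

```python
# Hypothetical figures (in millions) showing how items recognized
# directly in equity (other recognized income and expense) combine
# with net income into comprehensive income.
net_income = 120.0
other_recognized_income_and_expense = {
    "change in revaluation surplus": 15.0,
    "foreign operation translation differences": -4.0,
    "available-for-sale remeasurement": 6.5,
    "cash flow hedge (effective portion)": -2.0,
    "actuarial gains/losses on defined benefit plans": -3.5,
}

comprehensive_income = net_income + sum(
    other_recognized_income_and_expense.values()
)
```

The paper's comparison of Italian and IFRS income measures turns on how large this equity-recognized component is relative to net income; for the sampled companies it proved to be small.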
Using an Online Store to Augment the Learning of Leadership Fundamentals
Dr. Robert L. McKeage, University of Scranton, Scranton, PA
Dr. Len Tischler, University of Scranton, Scranton, PA
Dr. Cynthia W. Cann, University of Scranton, Scranton, PA
Students in our Business Leadership Program began an online retail store for the Alumni Society. This paper describes how operating the online store has been helping students learn leadership fundamentals more effectively than traditional course work alone. The paper discusses the challenges the students have faced and the learning they have gained in setting up and operating the business. Questions and suggestions for future directions are discussed. Many schools try to create leadership learning opportunities. Our Business Leadership Program, established in 1991, has proven to be a success: our students earn approximately $10,000 more in starting salary than the average graduate at our university. In the spirit of continuous improvement, we recently had our students develop and implement an online retail business: a store that sells university-related goods for the Alumni Society. We are finding that this experience serves as an effective vehicle for learning about both business and leadership. This paper focuses on how running this online store has enhanced students' learning about leadership, and is organized along the lines of selected leadership skills. To determine the appropriate skills to focus on, we analyzed three leadership textbooks (Daft, 2002; DuBrin, 2004; Yukl, 2006). We found that they present common leadership concepts and issues (see Table I below). Since these leadership concepts and issues generally form the foundation for teaching leadership, we have used them as a framework to explain how running the online store has led to learning gains by our students on the same concepts and issues. Leadership educators over the years have dealt with many challenging questions, including whether leadership is a skill, trait or behavior (Doh, 2003).
Beyond this are the questions of "can leadership be taught?", "are great leaders born or made?", and "is leadership the same as management?" Terry Pearce, instructor at the Haas School of Business at the University of California, Berkeley, states "that true leadership must be experienced not taught" (Bisoux, 2005, p. 40). Paula Hill Strasser, director of the Business Leadership Center at Southern Methodist University, recently stated, "We don't believe leaders are born, but that people are born with different potentials to lead…leadership can't be taught, but it can be learned through facilitation, simulation, and one-on-one coaching. It's a process of self-discovery" (Bisoux, 2005, p. 42). Today, most educators agree that leadership includes both skills and behaviors (Doh, 2003). That being the case, educators need to provide students with various learning opportunities that will sharpen both. The development and operation of our online store provides our students opportunities to try out skills and behaviors, to reflect on their behaviors both in isolation and with their cohort group, to receive feedback from faculty, alumni and fellow participants, and to reflect on their and others' behaviors in light of management and leadership theories. Altogether, we believe that this online store has been helping our students to gain more than they could without it. The student store began with three groups who had different strategic purposes: the University's Alumni Society wanted to make money for the Alumni Society and to connect alumni back to the University. The Business Leadership Program Director wanted to provide hands-on opportunities for the students to learn about business and leadership. The students wanted to do a good job and to learn. These purposes sometimes conflicted.
A number of issues needed to be settled before the store could operate effectively: Because students are not on campus throughout the year, a vendor would have to be found who could provide full service to customers year round. The vendor would order goods from manufacturers, warehouse them, take orders from the web site, ship the orders, and keep records of transactions and monies. The vendor would need to conform to social justice, social responsibility, ethical, and environmental issues. This stipulation arose from the university’s larger student body. A web site would have to be developed that would handle the store’s business through the vendor in a professional and customer-friendly manner. Students needed to organize themselves to get the decisions made and work accomplished. Clarity about the different roles, responsibilities, and authority was needed among the students, Alumni Society, Alumni Office, vendor, and other university administrative units. For example, students would need to make marketing decisions: whom to target, with what products, and how to merchandise and market the products, yet the Alumni Society wanted to maintain control because it was their money and reputation at stake. Agreements would need to be reached with the campus administration and the campus bookstore so that the online store would not violate any legal contracts. The students were involved in decision making on all of these issues. Each cohort of students formed its own organizational structure to run the company. So far, almost all work has been done in self-organized teams. There has been one overall student leader (who emerged each year) and leaders have emerged from some of the teams, but not necessarily from all. At times, the students have had difficulty working in their designated teams and coordinating across teams. They have also found it challenging at times to work effectively with outside people and organizations (vendor, Alumni Society, etc.).
The Choice of Entry Mode Strategies and Decisions for International Market Expansion
Dr. Lisa Y. Chen, I-Shou University, Kaohsiung, Taiwan
Dr. Bahaudin Mujtaba, Nova Southeastern University, Ft. Lauderdale, FL
Previous research on entry mode strategies has identified numerous factors that influence firms' strategic decisions for selecting foreign market entry modes. The decisions involved in foreign expansion are complex, requiring consideration of many factors. This study investigates the factors that comprise multinational firms' decisions to operate in foreign markets in one of four entry modes. The study utilizes a transaction cost approach and synthesizes non-TCE (Transaction Cost Economics) approaches as a conceptual basis to develop a framework, by investigating the findings of previous studies on the effect of several governance structures on the choice of foreign market entry modes, and by exploring key factors that may influence that choice. Firms attempting to seize new business opportunities for growth or cost reduction through foreign market investments often face complex option decisions (Osland, Taylor, & Zou, 2001). As a method for decision-making, entry mode selection provides a range of options from which firms can choose in order to begin business in a foreign market. Because each entry mode offers specific benefits and risks (Chang & Rosenzweig, 2001), the issue of foreign entry mode choice is one of the most important aspects of international marketing management (Bradley & Gannon, 2000). When a firm seeks to enter a foreign market, it must make the important strategic decision of the most appropriate entry mode to use for that market (Agarwal & Ramaswami, 1992). Of additional importance, the choice of entry mode defines the strategic flexibility with which the firm will be able to identify and adjust its resources in the long run as it attempts to generate a sustainable competitive advantage (Domke-Damonte, 2000).
Entry modes are understood to vary in three major respects: (a) cost, as resource commitment; (b) control, as level of ownership; and (c) risk, related to the level of resources committed and the complexity of the environment entered. Greater control requires higher resource commitment and may raise the level of risk associated with operating in a foreign environment with which the investing firm is potentially unfamiliar (Rhoades & Rechner, 2001). Control generally refers to a firm’s need to influence the systems, methods, and decisions in the foreign market. Control is highly desirable for improving a firm’s competitive position and maximizing its returns on assets and skills; the greater the ownership in the foreign venture, the higher the resulting operational control (Taylor, Zou, & Osland, 2000). Risk is also likely to rise in proportion to the firm’s assumption of decision-making responsibility and its commitment of resources (Agarwal & Ramaswami, 1992). Thus, the choice of entry mode has important and sometimes subtle implications for the level of resource commitment, and it may significantly affect the foreign venture’s performance and survival potential (Bradley & Gannon, 2000). Although previous studies have made substantial contributions to the understanding of firms’ entry mode behavior, an important gap remains in the empirical literature: how the interrelationships among determinant factors influence firms’ entry choices (Agarwal & Ramaswami, 1992). As noted above, although much prior research has focused on multinational corporations’ (MNCs) marketing strategies, little is known about the factors that influence MNCs’ choice of foreign market entry mode (Taylor et al., 2000). Nor have existing studies assessed how certain factors that the firm can at least partially control would affect its entry strategy (Tse, Pan, & Au, 1997).
Whether a firm will decide to enter the foreign market independently or with partners and, if so, under what mode of association, depends not only on the intended transaction or the characteristics of knowledge within the firm but also on the broader structure of the firm and its specific industry (Contractor & Kundu, 1998). In the entry mode literature, transaction cost economics (TCE) has been used in many empirical studies, leading to the identification of several crucial factors. The cost of implementing a particular mode of entry is an important consideration in the choice of entry mode (Rajan & Pangarkar, 2000). TCE posits that a firm’s choice of organizational structure, including its mode of foreign entry, is based on efficiency criteria: firms select organizational structures that economize on transaction costs (Yiu & Makino, 2002). TCE-related variables have been recognized as major determinants of the entry mode decision (Zhao, Luo, & Suh, 2004). The literature suggests that TCE is concerned with the costs of negotiating with partners and of monitoring their actions and the venture’s performance (Taylor et al., 1998). Beyond reducing transaction costs, firms often have numerous non-TCE motives, suggesting that the TCE model can be extended to account for a firm’s ability to integrate (Erramilli & Rao, 1993).