Most Trusted. Most Cited. Most Read.
The Journal of American Academy of Business, Cambridge
Vol. 11 * Num. 1 * March 2007
ISSN: 1540-7780 * The Library of Congress, Washington, DC
All submissions are subject to a double blind peer review process.
The primary goal of the journal is to give academicians and professionals from business-related fields around the globe the opportunity to publish their work in one source. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides researchers with opportunities to publish their papers as well as to view the work of others. All submissions are subject to a two-person blind peer review process.
The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. E-mail: email@example.com; website: www.jaabc.com. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright 2000-2016. All Rights Reserved
The Crucial First Three Months: An Analysis of Leadership Transition Traps and Successes
Dr. Steven H. Appelbaum, Professor of Management and Concordia University Research Chair
John Molson School of Business, Concordia University, Montreal, Quebec, Canada
Miguel Valero, MBA, General Manager, UniFirst Canada Ltd., Montreal, Quebec, Canada
Transitions are critical times when small differences in a manager's actions can have a disproportionate impact on results. Leaders, regardless of their level, are most vulnerable in their first few months in a new position. Failure to create momentum during the first few months virtually guarantees an uphill battle for the rest of the manager's tenure in the job. A survey was conducted among 175 managers. The respondents were asked to rank Watkins's seven common traps and seven principles for success in order of importance. The results were analyzed not only for the overall group of respondents but also by position (manager vs. executive), by sex (male vs. female), and by years of experience (5 years and less vs. 6 years and more). The authors obtained a 38% response rate. Sixty-eight percent of the respondents were at the manager level, while 32% were executives. Overall and in every category of respondents, the mistake that ranked as the most important was "Being Isolated". "Coming with the answer" ranked #2 in every category of respondents, with the exception of the executives, who ranked this mistake equally #1 with "Being Isolated". Most executive failures are not the result of commonly cited causes such as insufficient intelligence, questionable motivation, dishonesty, or even lack of leadership capabilities. Most top executives actually have the intellect, skill, and experience to lead their companies through the inevitable challenges they encounter. It turns out that "softer" issues, such as communication mishaps, misaligned expectations, and the notion that you have to be the savior, are more often than not the real culprits, especially in the early days. In their research, Neff and Citrin (2005) listed ten traps for new leaders: 1) Setting unrealistic expectations. The most universal trap: a new leader wants to do so much so fast that he or she overpromises and overcommits.
Setting unrealistic or unsustainable expectations is one of the most seductive and common pitfalls for new leaders. Real pressures, such as the all-too-human need to impress one's higher authority, lead to this. 2) Either making rash decisions or suffering from analysis paralysis. While listening and seeking input are encouraged, the new executive can't prolong a study indefinitely or postpone a tough decision forever. 3) Being a know-it-all. Another serious pitfall is believing that you have all the answers. By not recognizing or admitting that you don't have, and can't possibly have, all the answers, you shut out new perspectives, as well as the possibility of getting the valuable information and input that may lead to new discoveries and answers. 4) Failing to let go of your past identity. Sometimes without even realizing it, new leaders simply talk about their former company or past successes too frequently, creating the impression that the former employer is better than the new one, or that they have remorse. In doing so, they disenfranchise their new organization or simply annoy people, undermining their own ability to be effective in the first hundred days. 5) Sporting "the emperor's new clothes". If new managers don't get accurate feedback or honest advice because the people around them don't feel comfortable giving it, they will not be able to develop the best strategy. 6) Stifling dissent. Executives who smother dissent cut themselves off from the chance to see and correct problems as they arise. They create an environment of fear and control that turns off the most talented employees and eventually drives them out the door. Hesitant employees are given a draconian choice: "It's my way or the highway." Usually only mediocre talent ends up submitting to work in such an atmosphere. 7) Succumbing to the savior syndrome. Trying to do it all alone is a serious trap.
If you operate as a lone wolf who refuses to ask for help or involve others, you will cut yourself off from valuable input and feedback. Even if you are on the right track, you will invariably burn out, which will only further hurt the organization. 8) Misreading the true sources of power. One common attribute of the most successful business leaders is sensitivity to the unwritten rules of an organization, or empathy; Daniel Goleman popularized this attribute with the term "emotional intelligence". 9) Picking the wrong battles. New leaders tend to want to focus on problem areas and figure out how to solve them. That's commendable, but not if it comes at the expense of sustaining success in existing areas of strength. There is a tendency in the beginning to think that it's more important to be visible and out at functions rather than taking care of business. This can be a black hole for a person who's new. 10) "Dissing" your predecessor. Under all circumstances, the new executive has to be respectful of and sensitive to the predecessor's position and tenure, regardless of how he or she feels. Almost everyone who is there when the new manager arrives has worked for the old manager and probably has some degree of loyalty to him or her (Neff and Citrin, 2005).
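The ranking analysis described above, in which respondents order the traps by importance and results are aggregated overall and by subgroup, can be sketched as follows. The data values here are illustrative only, not the study's actual responses.

```python
# Minimal sketch of a rank-aggregation analysis: each respondent ranks
# the traps (1 = most important); we average ranks overall and by group.
# Responses below are hypothetical, invented for illustration.
from statistics import mean

# (position, {trap: rank assigned by this respondent})
responses = [
    ("manager",   {"Being Isolated": 1, "Coming with the answer": 2}),
    ("executive", {"Being Isolated": 1, "Coming with the answer": 1}),
    ("manager",   {"Being Isolated": 1, "Coming with the answer": 2}),
    ("executive", {"Being Isolated": 1, "Coming with the answer": 1}),
]

def mean_ranks(subset):
    """Average the rank each trap received across a group of respondents."""
    traps = subset[0][1].keys()
    return {t: mean(r[t] for _, r in subset) for t in traps}

overall = mean_ranks(responses)
executives = mean_ranks([r for r in responses if r[0] == "executive"])
# With these illustrative numbers, "Being Isolated" averages rank 1 overall,
# and executives tie both traps at rank 1, mirroring the reported pattern.
```

The same `mean_ranks` helper applied to subsets by sex or years of experience would reproduce the other subgroup comparisons the survey reports.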
Balancing the Hybrid Self in the Competing Landscapes of Consumption
Kritsadarat Wattanasuwan, Ph.D., Thammasat University, Bangkok, Thailand
The paper explores how a group of provincial women use consumption to balance their hybrid identities when they move to study in the capital. Ethnographic fieldwork is employed to gain insight into the group's consumer acculturation processes. The interpretations reveal the complexly dynamic and paradoxical selves of these informants. Although they aspire to urbanise themselves in order to assimilate properly into the new consumption space, they still wish to preserve their ties with their provincial roots. Evidently, they seem to emerge in the third space, where they can metaphorically be on both sides at once through everyday consumption. As migration fabricates hybridity of cultures and identities (Hall 1990), the self needs consumption practices tailored to the third space (1) (Bhabha 1990) in order to balance the hybrid self. Indeed, the relationship between place, identity and everyday consumption is profoundly intertwined (Penaloza 1994; McDowell 1999). The term 'place' which I discuss here does not refer to just a physical area; rather, it embraces local ways of life such as customs, values and certainly consumption practices. The notion of place also comprises symbolic meanings that we often incorporate into our identities. Thus, changing place (e.g. migration or even moving home) can frustrate and relocate our sense of identity. In order to understand this complex relationship, I employ interpretive research via ethnographic fieldwork. Specifically, I examined a group of six female students from rural areas who came to study at a university in Bangkok. I explore how these informants employ everyday consumption to re-negotiate and re-settle themselves in a new spatiality, in this case, cosmopolitan Bangkok. The interpretations aim to convey insightful understandings of the interplay between the self, geographical identity and consumption symbolism that emerged from the fieldwork.
The interpretations reveal the complexly dynamic and paradoxical selves of these informants. Although they aspire to urbanise these selves in order to assimilate properly into the capital's way of life, they still wish to preserve their ties with their provincial roots. Accordingly, they engage in various symbolic consumptions to create, express, negotiate, and balance their hybridity. The primary aim of my fieldwork is to explore the interplay between the attempt to negotiate the sense of self in a new cultural space and everyday consumption practices. Principally, the data collection methods are observations, both non-participant and participant, and a series of 'long interviews' (McCracken 1988). Auto-driving techniques such as collages, as well as diaries, are also used as supplementary methods. Deliberately, I employ triangulation across methods not only to enhance the research's credibility, but also to generate a multiplicity of perspectives on the behaviour and contexts of the phenomena (Elliott 1999). The research informants were recruited from a friendship group of six female students, Bird, Nat, Da, Auan, Win and Nud (2), all of whom are about twenty years old. Their majors are in business-related fields. All of them are from a rural region approximately two hundred kilometres away from Bangkok. Before attending university, they had never lived in the capital. Altogether the fieldwork was conducted over sixty weeks. Bangkok, the capital of Thailand, is not only the ultimate example of the nation's consumer culture, but also the national centre of everything. Consequently, each year large numbers of people come to Bangkok for employment and education. Essentially, they need to acculturate to Bangkok's ways of life in order to settle down comfortably.
I use the term 'to acculturate', which generally refers to the process of movement and adaptation to the cultural environment in one nation by persons from another nation (Penaloza 1994), in order to convey that moving from other provinces to Bangkok may be roughly equated to migrating to another nation. As Bangkok is viewed as a first-world city in a developing nation (i.e. Thailand), its social life is very different from the lifestyle found outside the capital. Influenced intensively by multi-national capitalism, Bangkok has become a cosmopolitan city bound up with globalisation and mediaisation. While ways of life in many provincial areas are still simple, social life in Bangkok is complex, as postmodern conditions loom large over it. To acculturate successfully into Bangkok culture, provincial consumers need to acquire cultural capital and skills not only to urbanise themselves but also to cope with the threats posed by postmodernity.
Are Customers’ Dissatisfaction and Complaint Behaviors Positively Related? Empirical Tests
Dr. Godwin Onyeaso, Concordia College-University System, Selma, AL
A large number of studies on customer dissatisfaction and complaint behavior, as well as other related consumer behavior studies, are predicated on the belief that there is a statistically significant positive relationship between customer dissatisfaction and complaint behavior. Using time-series panel data, this study tested this assumption and found that dissatisfaction and complaints have a stable long-run equilibrium relationship which permits them to positively influence each other, that past dissatisfaction explains current changes in complaints, that past complaints explain current changes in complaints, and that current changes in dissatisfaction explain current changes in complaints. Finally, the strategic services-management implications of these results, and how service managers can use them to leverage organizational performance through superior complaints-management programs, are briefly discussed. Empirical work on customer dissatisfaction and complaint behaviors has been growing (Crosby & Stephen, 1987; Goodman & Ward, 1993; TARP, 1981, 1986; Richins, 1980; Singh, 1990a,b; Bearden & Mason, 1984; Folkes, 1984; Singh & Wilkes, 1996; Peters, 1988; Granbois et al., 1987; Kim et al., 2003; Davidow, 2003; and Dellande, for a review), but shrinking recently (Lemmink, 2005). The major force behind the renewed interest in this area is the proposition that strategic market-feedback benefits (Fornell & Wernerfelt, 1987, 1988) accrue to organizations when their managers maximize customer retention by minimizing customer dissatisfaction with better-quality offerings (Kim et al., 2003) and encouraging customers to voice their complaints rather than exit (Peters, 1988), as prescribed by the defensive marketing strategy concept (Fornell & Wernerfelt, 1988). A common assumption underlying these works is that a positive relationship exists between customer dissatisfaction and complaints. The empirical validity of this assumption has not been tested.
The purpose of this paper is to test this assumption. The bulk of the work in service management rooted in customer dissatisfaction and complaint behavior is predicated on the belief that there is a statistically significant positive relationship between customer dissatisfaction and complaint behaviors. The genesis of this assumption can be traced to early research hypothesizing that the intensity of complaint behavior is directly proportional to the degree of customer dissatisfaction (Bearden & Teel, 1983): the greater the dissatisfaction, the more the complaint behavior. Consistent with this line of logic, there must be a positive relationship between customer dissatisfaction and complaints. In line with this logic, some studies have been motivated by the presumption that customer dissatisfaction is the fundamental cause of customer defection (Crosby & Stephen, 1987; Goodman & Ward, 1993). Under the service failure-recovery paradigm, reducing customer dissatisfaction will therefore reduce customer exit, so that boosting customer retention rates can leverage organizational revenue and profits in many firms in diverse industries (Hogan, Lemon & Libai, 2003; Reichheld, Markey & Hopton, 2000; Reichheld, 1996; Reichheld & Sasser, 1990). As a consequence, several studies on organizational complaints management are increasingly emerging (Andreassen, 2001; Davidow, 2000; Liu, Sudharshan & Hamer, 2000; Maxham, 2001; Walsh, 1996; Tax & Chandrashekara, 1992; Smith & Bolton, 1998), more so because managers believe that the genesis of complaints is customer dissatisfaction, on the presumption that customer defections are caused by dissatisfaction (Keaveney, 1995; Stewart, 1998). This line of reasoning would suggest a positive association between customer dissatisfaction and customer complaint behaviors. However, dissenting opinions on the positive relationship are emerging, as discussed below.
The validity of the assumption of a positive relationship between dissatisfaction and complaints has been challenged on the grounds that it is not merely the intensity of dissatisfaction that proportionately translates into complaint behaviors, because other factors beyond the intensity of dissatisfaction influence complaint behaviors (Best & Andreasen, 1977; Day, 1984). Second, consistent with this reasoning, research found that dissatisfied consumers do not usually complain even when they suffer huge losses in time and money (Andreason, 1988), which suggests that the relationship between dissatisfaction and complaints should be weak at best. Corroborating Andreason (1988), research found that "no action or no complaint" is one of several options dissatisfied consumers take (Day, 1980; Krapfel, 1985). Third, in line with the above evidence, work by Andreasen and Best (1977:98) appears to suggest that the correlation between dissatisfaction and complaints is statistically insignificant. Finally, evidence suggests that only about 5 to 10 percent of dissatisfied consumers decide to complain after service failures (Tax & Brown, 1998). Again, this suggests a weak statistical relationship between dissatisfaction and complaint behaviors. Therefore, consistent with the preceding accounts, dissatisfaction has been called a trigger of complaints: not sufficient in itself to cause complaints, but necessarily present for complaints to occur (Volkov, 2003:52). Clearly, the trigger concept suggests that the relationship between dissatisfaction and complaints will be minimal at best. But how minimal is minimal? Answering this empirical question is the purpose of the present study.
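The combination of findings the study reports, a stable long-run equilibrium between dissatisfaction and complaints plus short-run effects of current and past changes, is the kind of structure an error-correction regression captures. A minimal sketch on synthetic data follows; the series, the cointegrating coefficient of 0.8, and all other parameters are invented for illustration and are not the study's actual estimation.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
# Synthetic "dissatisfaction" index: a random walk (stochastic trend)
d = np.cumsum(rng.normal(0, 1, T))
# "Complaints" share the same trend (cointegration) plus stationary noise;
# the 0.8 long-run coefficient is an assumption of this sketch
c = 0.8 * d + rng.normal(0, 0.5, T)

dc = np.diff(c)            # current changes in complaints
dd = np.diff(d)            # current changes in dissatisfaction
ecm = (c - 0.8 * d)[:-1]   # lagged deviation from long-run equilibrium

# Error-correction regression: dc_t = a + b*dd_t + g*ecm_{t-1} + noise
X = np.column_stack([np.ones(T - 1), dd, ecm])
beta = np.linalg.lstsq(X, dc, rcond=None)[0]
# beta[1]: short-run effect of dissatisfaction changes (expected positive)
# beta[2]: error-correction speed (expected negative, pulling the series
#          back toward the long-run equilibrium)
```

A positive short-run coefficient together with a negative error-correction coefficient is what "a stable long-run equilibrium that permits positive mutual influence" looks like in regression form.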
Real Estate Investments with Stochastic Cash Flows
Riaz Hussain, Ph.D., University of Scranton, Scranton, PA
This paper examines the ownership of real estate as a long-term, risky investment. Using stochastic calculus, the risk is analyzed by assuming that the cash flows from a property investment grow as an arithmetic Brownian motion, with the possibility of becoming negative, while the value of the property grows as a geometric Brownian motion. The analysis takes into account depreciation and taxes. The results are useful for a corporation or a long-term individual investor interested in real-estate investments. Both individuals and corporations invest in real estate. A family may invest in a home to live in. A landlord will invest in a rental property to earn a living. A corporation can invest in a shopping mall on behalf of its stockholders. A university has to invest in a parking garage to alleviate the parking problem on campus. A real estate investment is usually considered to be a safe investment. A bank may lend up to eighty percent of the value of a house to a homeowner, but stockbrokers can lend only up to fifty percent of the value of stock purchased on margin. The stockbroker can monitor the price of a stock every day and send a margin call as soon as the value of the stock drops below a certain level. A real estate holding should be a long-term investment for a firm or an individual. Brown and Geurts (1) investigate the holding period of real estate properties by individual investors. By analyzing real estate transactions in San Diego, they find that the average holding period is somewhat less than five years. This investor behavior is contrary to the theoretical calculations in this paper, which demonstrate that the optimal holding period is much longer. In another paper, Brown (2) explores the reasons private investors own real estate. In particular, he examines the risk peculiar to real estate investments, including the entrepreneurial abilities of the owner.
Geltner and Miller (3) look at the risk in real estate investments for individuals and try to measure it in terms of the CAPM. Their conclusion is that one cannot really do so. Some researchers focus on institutional investors and their investments in real estate. Chun et al. (4) report that institutional investors hold a surprisingly small fraction, perhaps 2 or 3%, of their investments in real estate assets. In their estimation, if the CAPM is the proper model to assess risk, this fraction should be about 12%. French and Gabrielli (5) look at the overall concept of risk as applied to real estate investments. They consider the uncertainty in the valuation of British real estate. One simple way to assess the risk of real estate investments is to look at the beta of REITs. According to Corgel and Djoganopoulos (6), who examine the betas of 60 REITs, the mean beta is about 0.36, implying the relative safety of such investments. They also stress the special characteristics of REITs that contribute to their low betas. It is also possible to treat the uncertainty in real estate valuation in terms of stochastic variables. For example, Buttimer and Ott (7) assume that the spot lease price follows a geometric mean-reversion stochastic process. The aim of this paper is to consider real estate as a risky investment. The risk is due to two factors: the uncertainty of the operating cash flows and the unknown rate of growth in the value of the property. The analysis leads to an optimal holding period for a real estate investment, including the possibility of abandoning the property. To start, section (2) of the paper looks at a real estate investment with certain cash flows. Section (3) develops a theoretical framework to evaluate real estate investments under conditions of uncertainty, including the abandonment option. Finally, section (4) discusses the principal results of this investigation and offers some conclusions.
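The two stochastic assumptions of the model, arithmetic Brownian motion for operating cash flows (which may turn negative) and geometric Brownian motion for property value (which stays positive), can be simulated directly. The sketch below uses hypothetical drift and volatility parameters chosen only to illustrate the two processes, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(42)
T, steps = 10.0, 120                 # a 10-year horizon, monthly steps
dt = T / steps

mu_c, sigma_c, c0 = 2.0, 8.0, 50.0      # ABM drift, volatility, initial cash flow
mu_v, sigma_v, v0 = 0.05, 0.15, 1000.0  # GBM drift, volatility, initial value

z1 = rng.normal(size=steps)
z2 = rng.normal(size=steps)

# Arithmetic Brownian motion: additive increments, so the path can dip
# below zero (negative operating cash flow, as the model allows)
cash = c0 + np.cumsum(mu_c * dt + sigma_c * np.sqrt(dt) * z1)

# Geometric Brownian motion: multiplicative increments via the exact
# log-normal solution, so the property value remains strictly positive
value = v0 * np.exp(np.cumsum((mu_v - 0.5 * sigma_v**2) * dt
                              + sigma_v * np.sqrt(dt) * z2))
```

Simulating many such paths and comparing the discounted cash flows against the terminal property value at candidate sale dates is one Monte Carlo route to the optimal holding and abandonment questions the paper treats analytically.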
Mutual Fund Acquisitions and the Wealth of Target Shareholders
Dr. Xiyu (Thomas) Zhou, University of Alaska, Fairbanks, AK
Dr. Kevin C. H. Chiang, Northern Arizona University, Flagstaff, AZ
Dr. Craig H. Wisen, University of Alaska, Fairbanks, AK
The impact of mutual fund acquisitions on target shareholders' wealth is an important topic considering the predominant role that mutual funds play as financial intermediaries. The results of the present study indicate that while target funds experience lower distribution and operating costs in the post-acquisition period, the overall impact of acquisition on target funds' performance is negative. These results suggest that mutual fund acquisitions destroy value in the long run. This phenomenon is partially driven by an implicit desire to achieve diversification on the part of some bidders whose main businesses are not in the asset management industry. The notion of shareholder wealth maximization is often advocated in the corporate governance and control literature (see Jensen and Meckling 1976). Because of this emphasis, shareholder wealth has been extensively examined in the corporate merger and acquisition literature (see Becht, Bolton, and Röell 2002; Gondhalekar and Bhagwat 2003; Knapp, Gart, and Becher 2005). Relatively few studies, though, have focused on target shareholders' wealth within the mutual fund industry. Jayaraman, Khorana and Nelling (2002) studied mutual fund mergers that involve the combination of two funds across fund families. (2) They found that this type of cross-family merger results in a reduction in expense ratios for target shareholders, but that the restructuring does not lead to significant changes in post-merger performance. Khorana, Tufano, and Wedge (2005) found that, regardless of board structure, post-merger fund performance and fees revert to the mean of the investment objective. An unanswered question within the mutual fund industry is whether fund acquisition in general improves the wealth of target shareholders. Unlike conventional companies, a mutual fund is often externally managed. In this case the fund and its investment adviser operate as separate units.
The sponsor/advisor has a controlling role in fund operation, while shareholders are usually passive in corporate governance. The fund and its sponsor/advisor are linked through service contract(s). These unique features make the acquisition of service contracts a key to improving our understanding of how target shareholders' wealth is affected during and after consolidation. This study examines the effects of fund acquisitions on target shareholders' wealth. Specifically, this study focuses on the following type of event: fund company A acquires the right to provide advisory service to mutual fund i of fund company B, and fund i is kept as an independent entity, i.e., not merged with another fund of fund company A. This framework allows for an incremental understanding of acquisition effects, since this paper does not study mutual fund mergers and has a distinct sample from that of Jayaraman et al. (2002). This distinction is necessary because when a fund company acquires a fund, the company in essence controls the fund through selecting the manager and managing the underlying assets. This control remains regardless of whether there is a subsequent merger. The distinction also provides a clean setting in which one is able to measure the post-event performance of the target fund without an infusion of the acquiring fund's assets. We believe that this paper helps readers gain a holistic view of mergers and acquisitions in the world of mutual funds. Examining mutual fund acquisitions is important for several reasons. First, pension funds and other institutional investors are playing an increasingly active role in corporate governance. This monitoring system is largely absent in mutual fund governance because mutual fund investors are mainly small investors. This makes mutual fund acquisition a particularly fruitful area for examining the issue of moral hazard.
Second, a mutual fund acquisition is different from a conventional corporate acquisition in that the bidder is after the contract rights to provide investment advice to the target fund's shareholders for a management fee. The bidder does not need to own or control the majority of fund shares. The negotiation is between the bidder and the sponsor/advisor of the target fund. This process does not involve the shareholders of the target fund. Furthermore, consideration is paid by the bidder to the sponsor/advisor of the target fund. The shareholders of the target fund do not receive any premium in the form of a higher mutual fund share price. It is therefore not obvious whether the desire for shareholder wealth maximization plays a role in fund acquisitions. Finally, an understanding of the impacts of mutual fund acquisitions on shareholders' wealth should help regulatory agencies such as the Securities and Exchange Commission (SEC) formulate their policies toward mutual fund governance and control.
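The study's pre/post comparison of target funds' costs can be sketched as a simple paired difference across a sample of target funds. The expense-ratio figures below are invented purely for illustration; the study's actual sample and measures differ.

```python
# Hypothetical pre/post-acquisition comparison for a set of target funds.
# Values are illustrative expense ratios (%), not data from the study.
pre  = [1.20, 0.95, 1.40, 1.10, 1.05]   # expense ratios before acquisition
post = [1.05, 0.90, 1.25, 1.00, 1.02]   # expense ratios after acquisition

diffs = [b - a for a, b in zip(pre, post)]   # per-fund change, post minus pre
mean_change = sum(diffs) / len(diffs)
# A negative mean change is consistent with the lower distribution and
# operating costs reported for the post-acquisition period; a full event
# study would also test performance, where the reported net effect is negative.
```

In practice one would pair this with a significance test and a performance benchmark, since lower costs alone do not establish a wealth gain for target shareholders.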
The Iran - China Alliance
Kamrouz Pirouz, Ph.D. and Farahmand Rezvani, Ph.D., Montclair State University, Upper Montclair, NJ
China's rapid economic growth over the last fifteen years has led to a substantial increase in its demand for energy. Iran's ample supply of oil and gas and its need to counterbalance pressure from Europe and especially the US have combined to initiate an alliance between the two countries. Our position in this paper is that this partnership is extremely beneficial to both countries and that it should be strengthened in the years to come. The recent rise in the price of oil, which broke the $60-a-barrel mark in late August of 2005, is due to underlying fundamentals. The combination of strong global demand, especially from China and India, with tightness in supply and a lack of excess refinery capacity worldwide has been the main cause of the surge in oil prices. But political instability in the oil-producing regions, especially the Middle East and Venezuela, and the resulting fear of supply disruptions have also added an element of unpredictability which is reflected in the price of oil. This paper, after considering the causes of the rise in the price of oil since 2005, seeks to examine the impact of China's rapid economic growth on total world energy demand and the possibility of its alliance with Iran, as a rich energy source, to provide China with an ample supply of oil as well as natural gas. A look at the history of oil price movements: Crude oil prices behave very much like those of any other commodity. They respond to shifts in demand as well as supply, which currently has two components, OPEC and non-OPEC production. In this section we look at a brief history of oil price changes in international markets since World War II. Crude oil prices ranged between $2.50 and $3.00 per barrel from 1948 through the end of the 1960s. However, when viewed in 2004 prices, crude oil prices fluctuated between $15 and $17 during the same period.
From 1958 to 1970 prices were stable at about $3 per barrel, but in real terms the price of crude oil declined from about $16 to below $13 per barrel. This decline in the real price of oil was amplified for international oil producers in 1971 and 1972 by the depreciation of the U.S. dollar. Throughout the post-war period oil-exporting countries found increasing demand for their crude oil but a 40% decline in the purchasing power of a barrel of crude oil. In 1960 OPEC came into existence with five founding members: Iran, Iraq, Kuwait, Saudi Arabia, and Venezuela. By the end of 1971 six other nations had joined the group: Qatar, Indonesia, Libya, United Arab Emirates, Algeria and Nigeria. (1) In 1972 the price of crude oil was about $3 per barrel. After the historic meeting of OPEC in Tehran in January 1974, following the Arab-Israeli Yom Kippur war of the previous year, the price of oil quadrupled to over $12. During that war the U.S. and many Western countries showed strong support for Israel. In retaliation, many Arab countries imposed an embargo on nations supporting Israel. The embargo reduced the oil supply in international markets by 5 million barrels per day. About one million barrels were made up by increased production in other countries. The resulting shortage of 4 million barrels a day in international markets was a clear indication that the ability to control oil prices had largely passed from the United States to OPEC, even though OPEC did not have a monopoly. From 1974 to 1978 world crude oil prices were relatively flat, ranging from $12.21 per barrel to $13.55 per barrel. But events in Iran and Iraq led to another round of crude oil price increases in 1979 and 1980. The Iranian revolution resulted in the loss of 2 to 2.5 million barrels of oil per day between November 1978 and June 1979. In September 1980 Iraq invaded Iran. By November the combined production of both countries was 6.5 million barrels per day less than a year before.
Worldwide crude oil production was now 10 percent less than it had been in 1979. This shortage resulted in crude oil prices more than doubling, from $14 in 1978 to $35 per barrel in 1981. (2) The rising oil prices of the early 1980s caused several reactions among consumers: more insulation of new as well as old homes, greater energy efficiency in industry, and automobiles with higher mileage. These factors, along with a global recession, led to falling demand. The higher prices of oil also resulted in increased exploration and production outside OPEC. From 1980 to 1986 non-OPEC production increased by 10 million barrels per day. These factors, as well as a shift to alternative energy sources (e.g. coal and nuclear energy), led to falling demand and increased supply, and consequently a fall in the international price of oil. Between 1982 and 1985 Saudi Arabia acted as a swing producer, cutting its production to prevent a further fall in oil prices. But by mid-1985 the Saudis had departed from this policy, and by early 1986 they had increased production from 2 to 5 million barrels a day. The result was a drastic reduction in crude oil prices, to $10 per barrel by mid-1986.
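The distinction drawn throughout this history between nominal prices and prices "viewed in 2004 dollars" is a standard deflation by a price index. The index levels used below are hypothetical placeholders chosen only to show the arithmetic, not actual CPI values.

```python
def real_price(nominal, index_then, index_base):
    """Restate a nominal price in base-year dollars using a price index.

    nominal     -- price in dollars of its own year
    index_then  -- price index level in that year (hypothetical here)
    index_base  -- price index level in the base year (hypothetical here)
    """
    return nominal * index_base / index_then

# e.g. a $3 barrel when the index stood at 30, restated against a base-year
# index of 170 (illustrative index levels), lands in the $15-$17 real range
# the text describes for the 1948-1960s period
restated = real_price(3.00, 30.0, 170.0)
```

The same function run in reverse (swapping the index arguments) converts a base-year real price back into the nominal dollars of an earlier year.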
The Capital Allocation in Developing Economies and Their Vulnerability to External Shocks
Dr. Vefa Tarhan, Loyola University Chicago, IL
This paper investigates emerging-market characteristics and whether or not emerging markets allocate scarce capital resources efficiently. One finding of the paper is that while there has been steady improvement in these markets, some distortions remain. One significant problem is that governments dominate the bond markets in these countries, to the extent of completely crowding out the private sector. At the macroeconomic level, this simultaneously encourages inflation and curtails economic growth. At the corporate finance level, it forces firms to rely on short-term bank debt, exposing them to maturity risk, and to borrow foreign-currency-denominated debt, which exposes them to exchange rate risk. Additionally, due to the volatility of these economies, firms have higher betas and a higher cost of equity than in developed economies. Using the events experienced during May-June 2006, the paper argues that these markets are susceptible to financial shocks created by hedge fund activities. Finally, the paper provides a recipe of measures that governments and other institutions can take to further develop these markets by broadening and deepening them. This paper discusses the general characteristics of developing-country capital markets and compares their workings against the mature capital markets of developed countries. A partial list of developing-country capital markets' deficiencies includes the "crowding out" of the private sector from public debt markets, a lack of financial instruments with long-term maturities, and the inability of firms to hedge their interest rate and foreign exchange rate exposures due to the absence of liquid derivatives markets. Other country-specific capital market imperfections are caused by asymmetric factors that limit the depth and breadth of these markets.
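The claim that higher betas plus additional risk premiums raise emerging-market firms' cost of equity can be made concrete with a CAPM-style build-up. The function and every parameter value below are hypothetical illustrations, not figures from the paper.

```python
def cost_of_equity(rf, beta, erp, country_premium=0.0):
    """CAPM-style cost of equity with an additive country risk premium.

    rf              -- risk-free rate
    beta            -- the firm's equity beta
    erp             -- equity risk premium of the market
    country_premium -- extra premium for emerging-market risk (assumption)
    """
    return rf + beta * erp + country_premium

# Illustrative comparison: same risk-free rate and market premium, but the
# emerging-market firm carries a higher beta and a country risk premium
developed = cost_of_equity(rf=0.04, beta=1.0, erp=0.05)
emerging  = cost_of_equity(rf=0.04, beta=1.4, erp=0.05, country_premium=0.03)
# emerging exceeds developed, which is the channel through which the paper
# argues capital allocation gets distorted: projects are discounted harder.
```

A higher discount rate mechanically shrinks the set of projects with positive net present value, which is the underinvestment link the paper draws between capital-market imperfections and low growth.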
At the outset it can be said that, one way or another, emerging markets create additional risk premiums for investors that are typically not faced by investors in developed countries. Thus, from the firms' standpoint, the capital market imperfections discussed above translate into a higher cost of capital, which obviously adversely affects their investment decisions. Firm investment decisions made in a high-cost-of-capital environment, in turn, cause problems at the macro level by creating a potential underinvestment (low growth rate) problem. It is to be expected that when the cost of capital is contaminated by imperfections caused by risk premiums, there will be distortions in the allocation of capital in developing countries. The recent dramatic negative developments in the capital and foreign exchange markets of these economies showed that the relatively small size of their capital markets makes them vulnerable to the actions of hedge funds. Furthermore, as expected, the transmission of these financial shocks to the real sector produces unfavorable macroeconomic conditions. This paper addresses both the reasons behind the financial shocks in question and the transmission mechanism through which they create adverse macroeconomic conditions. Some of these developing-country-specific issues are analyzed using stylized facts obtained from the Turkish capital markets. Additionally, the paper introduces a corporate finance perspective for understanding the macroeconomic implications of the "crowding-out" phenomenon observed in many developing-country capital markets. Finally, the paper also addresses the structural reforms that are necessary for an improved capital allocation process. In the next section, data from the Turkish capital markets are used to highlight some of the typical characteristics of developing-country capital markets and the problems they face. 
Table 1 shows the distribution of financial products available to investors in Turkey during the recent past (2002-2005 and the first quarter of 2006), as well as the market capitalization of the equity market. Among the important facts that emerge from this table: first, even though its share has shown some decline over the years (65% in 2002 versus 57% in 2005), bank deposits continue to be the most preferred form of investment for Turkish investors. Fixed income securities constitute about a third of the financial holdings of investors during this period, while equity investments' share increased from 7% to 19% between 2002 and 2006 (first quarter). As will be shown below, some of this increase has been due to high equity returns in recent years and not just to an increased infusion of capital into common stocks. Another fact that emerges from Table 1 is the relatively large size of the estimated gold holdings of investors. In fact, even though the ratio of gold holdings to financial investments shows a steady decline over the four years in question, it still represents a significant portion of investor portfolios. Gold hoarding is, of course, a common phenomenon in developing countries. Investments in precious metals can be thought of as "unproductive investments" from the macroeconomic perspective, since the funds allocated to purchases of gold do not represent the channeling of investor savings toward real-sector investments. Finally, another interesting fact that emerges from Table 1 is that the size of capital markets (including bank deposits and excluding gold holdings) increased from $123.3 billion in 2002 to $296.9 billion in 2005. This represents a compounded annual growth rate of 34% over this period. At a compounded annual increase of 83%, the equity portion of capital markets has shown the largest growth rate for the time period in question. 
While this growth rate is impressive, the total size of the equities market is still relatively small at $98.1 billion (in 2005). This represented 27% of the GNP in 2005. (1)
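The growth rates quoted above follow from the standard compounded-annual-growth-rate calculation. A minimal sketch (the dollar figures are taken from the abstract; the function itself is purely illustrative):

```python
def cagr(begin_value, end_value, years):
    """Compounded annual growth rate over the given number of years."""
    return (end_value / begin_value) ** (1.0 / years) - 1.0

# Total capital markets (including bank deposits, excluding gold), USD billions
growth = cagr(123.3, 296.9, 3)  # 2002 -> 2005: three compounding periods
print(f"{growth:.0%}")          # ~34%, matching the rate quoted in the text
```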
Pricing American Options with Counterparty Risk
Lung-fu Chang, National Taiwan University, Taiwan
Dr. Mao-wei Hung, National Taiwan University, Taiwan
This article evaluates the impact of default risk on the prices of American options using the two-point Geske and Johnson method. With this method, we provide an analytical formula for valuing vulnerable American options by pricing vulnerable European and multi-exercisable options under risk-neutral measures and employing Richardson's extrapolation. To demonstrate the accuracy of our proposed method, we compare values of the vulnerable American option from our method with benchmark values from the least-squares Monte Carlo simulation method. Numerical evaluations illustrate the impact of counterparty risk on the prices of European and American options and demonstrate the accuracy of the proposed method. Over-the-counter (OTC) markets have come into the limelight in recent years. In the OTC market, financial derivatives of all types are widely traded. Financial institutions and corporate clients are the main participants in the OTC market. In contrast to exchange-listed options markets, there is no organizing exchange in OTC markets requiring options positions to be resettled daily and sufficient collateral posted. Therefore, the holders of OTC options are exposed to possible defaults by their counterparties. The traditional option pricing formulas suggested by Black and Scholes (1973) or Merton (1973) do not take counterparty risk into consideration. Several studies have concentrated on pricing vulnerable options, which are private contracts with the option payoff but without protection against counterparty risk, as described by Johnson and Stulz (1987), Hull and White (1995), Klein (1996), Klein and Inglis (1999, 2001) and Hung and Liu (2005). Johnson and Stulz (1987) assume that options are the unique liabilities in the option writer's capital structure. If the option writer cannot make the promised payment, option holders will receive all of the option writer's assets. 
Johnson and Stulz provide an analytical formula for pricing some vulnerable European options. By extending the Johnson and Stulz (1987) model, Hull and White (1995) allow the counterparty to have other equal-ranking liabilities and assume that option holders receive only a proportion of the no-default value when the counterparty defaults. They construct a three-dimensional lattice model to evaluate vulnerable European and American options. Klein (1996) argues that the assumptions of these papers are not realistic in many business situations. He assumes that the option writer has other obligations in his/her capital structure and that the proportional nominal claims paid out in default are endogenous, depending on the terminal value of the option writer's assets. Klein (1996) provides a closed-form solution for vulnerable European options. Klein and Inglis (1999) extend Klein's (1996) model to derive an analytical formula for European options subject to financial distress and interest rate risk. Klein and Inglis (2001) also extend the results on pricing vulnerable European options from Johnson and Stulz's (1987) and Klein's (1996) models. Because an increase in the option's obligation may result in financial distress, Klein and Inglis (2001) assume that the capital structure of the option writer also contains the obligation of the option. Hung and Liu (2005) provide a pricing formula for vulnerable options when the market is incomplete. The previous literature has focused on evaluating vulnerable European options. However, most financial derivatives in OTC markets have American-style properties. Generally speaking, three main numerical approaches to the valuation of American options have been suggested in the literature: the lattice model, the Monte Carlo simulation method and the finite difference method. 
In order to assess vulnerable American options, Hull and White (1995) orthogonalize the two stochastic processes of the stock price and the option writer's assets to eliminate the correlation between these state variables and then construct a three-dimensional recombining lattice model using the two new transformed state variables. The prices of vulnerable American options are computed via backward induction through the tree and obtained by converting the transformed state variables to their original form at each node of the lattice. Longstaff and Schwartz (2001) develop the least-squares Monte Carlo (hereafter, LSMC) simulation method to evaluate American options. They utilize least-squares regressions to estimate the holding value of an American option. The early exercise decision is then made by comparing the value of immediate exercise with the estimated holding value. In contrast to the numerical approaches, Geske and Johnson (1984) provide an analytical formula for the valuation of American options. Values of American options are approximated, via Richardson extrapolation, using the prices of a European option and multi-exercisable options. The Geske and Johnson method is attractive from a computational viewpoint and has been extended by Omberg (1987), Bunch and Johnson (1992), Ho, Stapleton and Subrahmanyam (1994, 1997), and Chung (2002), who use different extrapolation techniques to value American options. Ho, Stapleton and Subrahmanyam (1997) generalized the two-point Geske and Johnson method to evaluate American options in a stochastic interest rate economy. Chung (2002) also uses the two-point Geske and Johnson method to value American quanto options subject to interest rate risk.
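The two-point Geske and Johnson approximation reduces to a simple Richardson extrapolation once two inputs are in hand: the price P1 of an option exercisable only at maturity (European) and the price P2 of an option exercisable at T/2 and T. A minimal sketch, which ignores counterparty risk (the vulnerable versions in the paper would replace these inputs with default-adjusted prices); the Black-Scholes put below is the standard formula, while the twice-exercisable price P2 is left as a plain input because it requires a compound-option (bivariate normal) evaluation not reproduced here:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S, K, r, sigma, T):
    """Black-Scholes European put: the P1 input of the two-point method."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

def geske_johnson_two_point(p1, p2):
    """Two-point Richardson extrapolation: American price ~ 2*P2 - P1."""
    return 2.0 * p2 - p1

p1 = bs_put(100, 100, r=0.05, sigma=0.2, T=1.0)   # ~5.57
p2 = 5.80  # hypothetical twice-exercisable price (from a compound-option formula)
print(round(p1, 3), round(geske_johnson_two_point(p1, p2), 3))
```

Since an American option is worth at least its European counterpart and more exercise dates add value, P2 >= P1 and the extrapolated price is at least P1, as expected.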
A Structural Equation Model of Total Quality Management and Cleaner Production Implementation
Dr. Ming-Lang Tseng, Ming-Dao University, Taiwan
Dr. Yuan-Hsu Lin, Ming-Dao University, Taiwan
Dr. Anthony SF Chiu, De La Salle University, Philippines
Chi-Horng Liao, De La Salle University, Philippines
A review of the literature revealed gaps in the area of organizational factors and cleaner production implementation, particularly the inadequacy of empirical testing of organizational factors on cleaner production implementation. The aim of this study was to examine the total quality management elements and cleaner production implementation of a number of manufacturing firms in order to determine the relationships among these variables. The research used data from 267 Taiwanese manufacturing organizations. The reliability (construct and item) and validity (convergent and content) of the constructs were evaluated. The results showed that the total quality management (TQM) elements were significantly and positively related to each other and to the cleaner production construct, and a structural equation model was constructed. The National Center for Cleaner Production, Taiwan has promoted the cleaner production concept for several years. Cleaner production, a preventative, integrated, continuous strategy for modifying products, processes or services, has been considered the best technological strategy and good housekeeping toward sustainable development (Grutter, 2004). It embodies the more efficient use of natural resources and thereby minimizes waste and pollution as well as risks to human health and safety. Kjaerheim (2004) discussed the intangible benefits and human factors derived from cleaner production projects: cleaner production should no longer be viewed as a stand-alone option but should be integrated into all business development activities to improve quality of life, protect public health and improve safety. Stone (2006a) emphasized the importance of social factors in the implementation of cleaner production projects and identified a number of inadequacies in the change management processes used. 
Stone (2006b) identified "… a set of key internal organizational factors that strongly contributed towards the uptake of cleaner production and affected the potential for ongoing improvement." Yet those studies focus on qualitative methods and point out that the limitations of cleaner production implementation are commitment, ongoing improvement, leadership, support, communication, involvement and program design. TQM is a management style based upon producing quality service as defined by the customer. It is defined as a quality-centered, customer-focused, fact-based, team-driven, senior-management-led process for achieving an organization's strategic imperative through continuous process improvement. TQM is a systematic, integrated, organizational way of life directed at the continuous improvement of an organization (Cartin, 1993). It is a proven management style used successfully in organizations around the world. TQM may be a "profit generator": if implemented properly, it may identify costly processes and cost-saving measures. TQM has been a popular intervention throughout the industrialized countries (Garvin, 1991). Under the TQM banner, most manufacturing firms have tried working in some way on improving the following key components of TQM: leadership, customer focus, strategic planning and continuous improvement. However, there is inadequate empirical evidence, and no model, establishing the relationship between cleaner production and TQM. The need to address this gap in the literature arises from increasing concern over TQM and cleaner production initiatives. Our study is motivated by the lack of empirical evidence on the impact of TQM on cleaner production implementation. Using empirical data collected from Taiwanese electronic manufacturing firms, this study attempts to achieve a primary objective. 
This research seeks to address the inadequate empirical evidence in the literature concerning the relationship between cleaner production and TQM, and constructs an organizational theoretical model. The following research questions are empirically investigated: 1. Are the elements of TQM reliable and valid for cleaner production implementation? 2. Which elements of TQM are incorporated into cleaner production implementation? Answering these questions contributes to a deeper understanding of cleaner production implementation and of the strategic role of the elements of TQM practice. This paper is organized as follows. Section 2 discusses the literature review and research hypotheses. Section 3 addresses the research method, while Section 4 presents the empirical results. Section 5 presents the discussion and conclusion of the paper. A significant number of articles in the literature on cleaner production implementation have studied its limitations from different perspectives (Stone, 2006a, b). Moors et al. (2005) found that some firms regard the environment as a new strategic arena; firms are taking a proactive stance towards the environment to capture a competitive advantage. Process development is regarded as an important instrument for developing better and environmentally compatible products at lower costs. Environmentally conscious business practices and management have evolved under the influence of reactive and proactive activities and policies set forth by organizations. Thus far, very few researchers have discussed the relationship between TQM practices and cleaner production implementation.
Evaluating the Decision to Adopt RFID Systems Using Analytic Hierarchy Process
Koong Lin, Tainan National University of the Arts, Taiwan
Chad Lin, Edith Cowan University, Australia
Interest has continued to grow in recent years in the adoption of Radio Frequency IDentification (RFID) technologies due to their capability for real-time identification and tracking. A good example of this was when Wal-Mart asked its top 100 suppliers to use RFID tags in 2005. This had a profound effect on the projected growth of RFID technology as well as on potential applications in industries such as defense, wholesale, and retail. However, there are business and technical problems and issues with the use of RFID technologies (such as data accuracy, costs and benefits, and security and privacy), and these warrant further research. Thus, this research aims to establish a decision analysis mechanism that can assist organizations in judging whether they are suited to adopt RFID systems. The AHP (Analytic Hierarchy Process) methodology is employed to analyze the RFID adoption decision processes of both RFID expert and industry evaluators. Global spending on Radio Frequency Identification (RFID), according to Gartner, is likely to reach US$3 billion by 2010 (CNET, 2005). RFID can be used to integrate processes and technologies in order to conduct and manage global trade. Several major retailers such as Tesco in the UK and Metro in Germany are already rolling out large-scale RFID initiatives. For example, Wal-Mart asked its top 100 suppliers to use RFID tags by 2005 (EPCglobal, 2006). However, organizations often encounter challenges and problems when implementing new IT technologies (Lin et al., 2005b; Love et al., 2005). For instance, organizations are likely to face various costs, risks and uncertainties when assessing newly adopted IT technologies (Lin and Pervan, 2003; Tsao et al., 2004) such as RFID. Therefore, this research aims to develop a mechanism that can help organizations identify their risks and choose a suitable adoption option for the implementation of RFID. 
The Analytic Hierarchy Process (AHP) methodology is used to analyze the data, as it is useful for examining different RFID adoption options and can assist organizations in anticipating possible issues and challenges when adopting RFID. The AHP methodology was developed by Saaty (1980) to reflect the way people actually think, and it continues to be the most highly regarded and widely used decision-making theory (Lin et al., 2005). RFID uses a microchip in a tag to store and transmit data when it is exposed to radio waves of the correct frequency and communications protocols from an RFID reader. It can be used to capture accurate information about the location and status of products and to track them as they move from the assembly line to the retail store (Chappell et al., 2002). The three major components of RFID are tags, readers, and software systems. RFID tags consist of silicon chips and antennas. Each tag uses an ID coding system and contains a unique serial number for a product, enabling the tag to store some information about the product. An RFID reader, in turn, is used to communicate with RFID tags. In reading mode, active tags send out their signal continually; in interrogating mode, the reader sends a signal to the tags and listens for a response. The reader can also send radio waves to energize passive tags in order to receive their data. RFID software systems are the glue that integrates RFID systems: they manage the basic functions of the RFID reader and the other components that route information to servers. At present, the best-known ID coding system is the EPC (Electronic Product Code), which was formulated by MIT and is used by Wal-Mart. The RFID EPC Network is constructed from the ONS (Object Name Service), Savant (a middleware specific to RFID), and PML (Physical Markup Language) (AutoID, 2006; GS1, 2006; Lin et al., 2004; Yang and Jarvenpaa, 2005). 
RFID has many advantages and can be deployed to assist organizations in improving global integration, as well as serving as an effective tool in areas such as retail inventory tracking, customer relationship management, supply chain management, or any other situation where tracking the movement of goods or people is critical (Chappell et al., 2002). However, there are business and technical problems and issues with the use of RFID technology (such as data sharing, data quality, costs and benefits, security and privacy, and RFID standards), and these warrant further research. Successful adoption and functioning of RFID can be affected by divergent factors and by the perceptions of the internal and external stakeholders of an organization during the adoption process. For example, an organization must consider the potential costs of mastering collaborative planning and implementation with its partners before attempting to share and use RFID data (Hugl, 2006).
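The AHP mechanics behind such an adoption study are standard: evaluators supply pairwise comparison judgments on Saaty's 1-9 scale, priority weights are extracted from the comparison matrix, and a consistency ratio (CR) checks that the judgments are not self-contradictory. A minimal sketch using the common geometric-mean approximation to the principal eigenvector; the three criteria and the judgment values are hypothetical, chosen only to mirror the issues the abstract names:

```python
import numpy as np

def ahp_weights(A):
    """Approximate AHP priority weights via the geometric-mean (row) method."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return g / g.sum()

def consistency_ratio(A, w):
    """Saaty's CR: lambda_max from A @ w, CI = (lmax - n)/(n - 1), RI from Saaty's table."""
    n = A.shape[0]
    lmax = float(np.mean((A @ w) / w))
    ci = (lmax - n) / (n - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
    return ci / ri if ri else 0.0

# Hypothetical pairwise judgments for three RFID adoption criteria
# (costs/benefits, security & privacy, data accuracy) -- illustrative only.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])
w = ahp_weights(A)
print(w, consistency_ratio(A, w))  # CR < 0.10 indicates acceptable consistency
```

With these judgments, costs/benefits receives the largest weight; in practice each evaluator's matrix is checked against the CR < 0.10 threshold before the weights are aggregated across evaluators.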
Budgeting as a Competitive Advantage: Evidence from Sri Lanka
Dr. Siriyama Kanthi Herath, Clark Atlanta University, Atlanta, GA
M. W. Indrani, University of Ruhuna, Wellamadama, Matara, Sri Lanka
This study empirically explores the roles of Budgetary Control Systems (BCS) as a component of the Management Control System (MCS) in creating and sustaining Competitive Advantage (CA). More specifically, it attempts to reveal the existing accounting control practices in a manufacturing firm in Sri Lanka: Harischandra Mills Ltd (HML). The study examined how the BCS assisted in satisfying the demand for Coffee and Noodles at a competitive price, leading to higher sales volume and thereby creating and sustaining a CA. The study describes the budgeting process at HML, recognizes a number of roles performed by the BCS, and concludes that although a BCS can play a leading role in establishing an efficient MCS for creating a sustainable CA, budgeting does not function in isolation. Instead, it can be used more effectively by strategically joining it with emerging strategy-oriented knowledge enterprises. The management accounting literature proposes that effective planning and control are crucial for achieving organizational goals and objectives. Effective planning ensures that goals are selected with care, and effective control ensures that the selected plans are implemented appropriately. Budgets perform an important role in both planning and control in achieving organizational goals. With Anthony's (1965) pioneering work on management control systems, budgeting became an essential part of organizational management. According to Anthony, budget preparation is an inherent part of the management control process. The management control process starts with the establishment of organizational goals and the devising of strategies for attaining them. This process involves the preparation of a strategic plan and its conversion into an annual budget that focuses on the intended revenues and expenses for each responsibility centre. 
Budgeting is the cornerstone of the management control process in nearly all organizations, but despite its widespread use, it is far from perfect (Hansen et al., 2003, p. 95). Budgets are financial blueprints that quantify an organization's plans for future periods. They require management to detail anticipated sales, cash flows, and costs, and they provide an instrument for effective planning and control in organizations (Flamholtz, 1983). A proper budgeting system is more than just a process of collecting and accumulating numbers; it is a map that can guide an organization to competitive advantage (Jehle, 1999). The contemporary business world is extremely competitive, and firms must achieve competitive advantage to survive. The need to achieve a competitive advantage leads to the adoption of Porter's (1985) value chain and value system model, which highlights that competitive advantages are derived from the many discrete activities an enterprise performs in designing, producing, marketing, delivering and supporting its product (Koh and Tan, 2005, p. 187). As a result of the complexity of business, it is necessary that some powerful integrated control systems be used to achieve business goals and hence accomplish competitive advantage (Emmanuel et al., 1995). The survival of business firms depends on their ability to meet the needs of interested parties (Otley, 1995). Normally, businesses try to accomplish their goals while satisfying the needs of all interested parties, particularly customers, and this influences the success of the firm and hence its competitive advantage. Budgets play an important role in this regard as they bind all the components together, establishing goals and benchmarks against which to assess performance as defined by the strategic plan (Jehle, 1999). Under these circumstances, it is worthwhile to observe empirically the association between budgeting and competitive advantage. 
There is very little research evidence on the role of budgeting in creating and sustaining a competitive advantage in developing countries like Sri Lanka. Thus, the main purpose of this study is to examine the role of budgeting in creating and sustaining a competitive advantage in a public company in Sri Lanka, and the study is based on the following primary research question: What roles are performed by the budgetary control system in creating and sustaining a competitive advantage? The rest of the paper is structured as follows: Section II provides a review of the literature on budgeting and competitive advantage. Section III describes the methodology used for the study, while Section IV describes the research site. The budgetary control system is discussed in Section V. Section VI is devoted to an analysis of the role of budgets in creating competitive advantage. Section VII evaluates the role of budgeting as a competitive advantage. The conclusions of the study are given in Section VIII.
Education and Labor Market in Knowledge-Based Economy of Korea
Dr. Namchul Lee, Korea Research Institute for Vocational Education &Training, Dankook University, Korea
The principal objective of this paper is to explore the reasons for male-female differences in participation rates in higher education and the labor market in Korea, drawing on various aggregate data. We also examine the relationship between education and labor market participation and account for earnings inequality. More detailed attention is then given to issues concerning the changing composition of employment and unemployment in terms of occupation by gender. The results suggest that the overall differences in higher education at the college level, which has been a heavily male-dominated area, and in the labor market are primarily due to differences in observed socio-economic factors, such as wage differentials and cultural characteristics. Over the last forty years, education has been the main reason for the dramatic improvement in the status of professional women. Educational attainment of the work force is a key development strategy aimed at promoting economic growth, and gender differences in education may be viewed as an important indicator of gender inequality (Mankiw et al., 1992). However, technology has proven to be an especially difficult area for women to penetrate professionally, and they have been encountering very tough obstacles in the knowledge-based economy. The Korean economy has been transforming from a manufacturing economy to a knowledge-based economy, owing to the continuous development of new technology, especially information and communication technology (OECD, 1996). In this paper, I analyze changes in the structure of women's higher education, labor force participation, employment, unemployment, and earnings inequality in Korea. The analysis is based on micro-data from the economically active population survey, statistical yearbooks of education, and a survey report on wage structures. 
In this paper, we aim to contribute to the literature and statistical data on this topic by studying changes over time in industrial structure, labor force participation, and higher education programs. An understanding of labor market trends provides a context for analyzing trends in higher education. For example, if participation in higher education programs parallels changes in the economy, one would expect to see a decline in enrollments in trade and industry programs in recent years and an increase in enrollments in service and information-communication-technology-related programs. One contribution of this paper is to identify, on the basis of annual updates, trends in implemented policies in the field of higher education. This paper is also intended to serve as a working tool for analysis and policy-making in the fields of higher education and the labor market. The paper is organized as follows. Section II analyzes Korea's economy and the changes in its industrial structure under the knowledge-based economy. Section III investigates changes in indicators of human capital in terms of higher education for women relative to men over time. Historically, enrollment rates of women in higher education and their participation rates in the labor market have been below the average rates observed in Korea. Such differences may simply arise because women are at a disadvantage in the Korean labor market. Section IV analyzes the changes in labor force participation by females and males and discusses the relative employment and unemployment positions of Korean women in the labor market. It also examines the evolution of earnings inequality between men and women; wages are key to understanding changes in employment over time, particularly among female workers. Section V presents conclusions. The nation's successful industrial growth began in the early 1960s, when the government instituted sweeping economic reforms emphasizing exports and labor-intensive light industries. 
The government also carried out currency reform, strengthened financial institutions, and introduced flexible economic planning. Korea's rapid and sustained development can be ascribed to a particular combination of social and economic factors and a high level of industry. In Korea, GDP per capita increased more than fifteen-fold between 1970 and 1997, from US$650 to US$10,371. However, the Korean economy slipped after the financial crisis: GDP per capita dropped to US$6,864 in 1998. National competitiveness was seen to be slackening, and many concerned voices warned that the economic hardship might not be overcome within a short period of time. Nevertheless, GDP per capita increased to US$9,822, US$10,004, and US$14,148 in 2000, 2002, and 2004, respectively (see Table 1). One of the noticeable changes between the pre-crisis (1997) and post-crisis (1998) periods was a sharp increase in the unemployment rate, from 2.6 percent to 6.8 percent. This increase was more dramatic among males, from 2.8 percent to 7.6 percent, than among females, from 2.3 percent to 5.6 percent, during the same period. One can argue that although unemployment rates were higher among male workers, female workers also suffered considerably.
A Renewed Look at the Turnover Model for Accounting Knowledge Work Force
Dr. Yaying Mary Chou Yeh, CPA, Shih Chien University, Kaohsiung Campus, Taiwan, ROC
In an effort to understand the changing dynamics of the work attitudes and perceptions of the twenty-first-century knowledge work force, this study explores a turnover model including organizational commitment, job satisfaction and turnover intent, using accounting professionals as an example. Results from structural equation modeling (SEM) reveal that job satisfaction is the most important factor in determining accounting knowledge workers' propensity to leave. Organizational commitment is not critical in the turnover model for individuals with movement capital. The empirical test implies that firms should identify employees with different levels of movement capital in order to design suitable retention programs. The conclusions of this study are also applicable to other high-level, service-oriented knowledge employees in the age of the knowledge economy. The term "knowledge worker," first coined by Peter Drucker in 1959, refers to one who works primarily with information to develop and use knowledge in the workplace. The accounting professional is one such worker, accumulating intellectual capital and business intelligence through specialization and expertise. The management of accounting knowledge workers, especially in the aspect of human resource development, requires careful, contextual attention. The accounting profession has experienced profound changes in the past two decades. The globalization of the marketplace, the fast intrusion of information technology, mergers of firms and increased litigation activity are but some of the challenges facing the profession (Schuetze, 1993). The Enron implosion wreaked more havoc on the profession than any other case in U.S. history (Thomas, 2002). The accounting profession worries about not being able to recruit and retain enough professionals to fill its needs after the many scandals and the negative publicity of recent years (AOMAR, 2002). 
High employee turnover is a continuing problem, especially for accounting knowledge workers with vested capital in expertise and skills. The purpose of this study is to investigate work attitudes and perceptions in terms of the turnover model among accounting professionals, with the hope of making an incremental contribution by generalizing to other types of professionals. Committed people are more likely to remain with the organization and work toward organizational goal attainment (Mowday, Porter, & Steers, 1982). Early researchers viewed commitment as a side-bet (Becker, 1960) and described it as a function of the rewards and costs associated with organizational membership and the accumulated interest that binds one to a particular organization. Others view commitment as binding the individual to behavioral acts: it results when individuals attribute an attitude of commitment to themselves after engaging in behaviors that are volitional, explicit, and irrevocable (Kiesler & Sakumura, 1966; O'Reilly & Caldwell, 1980; Salancik, 1977). Porter, Steers, Mowday, and Boulian (1974) suggested that organizational commitment reflects an individual's willingness to work towards and accept organizational goals. In this context, commitment consists of: "(a) a belief in and acceptance of organizational goals and values, (b) the willingness to exert effort towards organizational goal accomplishment, and (c) a strong desire to maintain organizational membership." Meyer and Allen (1991) developed a three-dimensional model of organizational commitment by synthesizing common themes from prior research. They view affective, continuance and normative commitment as distinguishable components, rather than types, of attitudinal commitment.
They build on Mowday et al.'s (1982) work on employees' affective attachment to an organization (affective commitment) and on Becker's (1960) side-bet theory, which describes commitment as less affective and more concerned with the accumulated investments that bind employees to an organization. Meyer and Allen (1984) name the side-bet definition of commitment "continuance commitment" and identify normative commitment as a willingness to remain with an organization out of a sense of moral obligation. This third component derives from one's internalization of normative pressures from familial, cultural and/or organizational socialization to stay with an organization. Employees can experience each of these psychological states in varying degrees. The "net sum" of a person's commitment to the organization reflects each of these separable psychological states (Schappe & Doran, 1997). Many researchers have used this model to study the behavioral consequences of commitment (Schappe & Doran, 1997; Sims & Kroeck, 1994; Tepper, 2000; Yousef, 2000; Wahn, 1998). With the three components of commitment as independent variables, early research findings consistently confirm the negative correlation between organizational commitment and employee intent to leave/actual turnover (Allen & Meyer, 1996; Larson & Fukami, 1984; Mathieu & Zajac, 1990; Tett & Meyer, 1993). However, most researchers have focused on the work-related consequences of affective commitment, with very little emphasis on continuance and normative commitment. Allen and Meyer (1996) confirm that significant relations exist between the three dimensions of commitment and the intention to stay/leave, and that the correlation is strongest for affective commitment. One recent finding applying structural equation modeling to the overall causal relationships is contradictory: Stinglhamber and Vandenberghe (2003) demonstrated that affective organizational commitment did not influence actual employee turnover.
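The structural logic of such a turnover model — satisfaction and commitment as predictors of turnover intent — can be illustrated with a minimal sketch. This uses simulated data and ordinary least squares as a simple stand-in for full SEM estimation; the variable names and effect sizes are hypothetical, not the study's results.

```python
import numpy as np

# Simulated illustration (not the study's data): job satisfaction raises
# organizational commitment, and both reduce turnover intent.
rng = np.random.default_rng(0)
n = 2000
satisfaction = rng.standard_normal(n)
commitment = 0.6 * satisfaction + 0.8 * rng.standard_normal(n)
intent = -0.5 * satisfaction - 0.3 * commitment + 0.7 * rng.standard_normal(n)

# Estimate the structural paths into turnover intent by OLS.
X = np.column_stack([satisfaction, commitment, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, intent, rcond=None)
print(f"satisfaction -> intent path: {beta[0]:.2f}")
print(f"commitment   -> intent path: {beta[1]:.2f}")
```

With both paths recovered as negative, the sketch mirrors the sign pattern the turnover literature reports; a full SEM analysis would additionally model the latent constructs behind each measured scale.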
Market Controls on Corporate Social Responsibility: An Exploratory Study of Banking & Investment Policies (1)
Dr. Breena E. Coates, Chairman, San Diego State University, Calexico, CA
Corporate fraud, managed mendacity and other crimes necessitated passage of the Sarbanes-Oxley Act of 2002 (2). This conceptual paper resumes the discussion of corporate social responsibility by analyzing market controls that could facilitate business ethics, such as the "Equator Principles" in banking, grassroots participatory lending and other mechanisms. It explores these devices via a constructivist-interpretative methodology, using content analyses of scholarly and practitioner literature to deconstruct and synthesize business strengths, weaknesses, opportunities and threats over the last 20 years, and thus make policy-relevant recommendations for business and government. This exploratory study will be followed by an empirical analysis using survey research and statistical analyses of banking and investment loan evaluation procedures. This conceptual study examines how market controls could be an avenue for facilitating corporate social responsibility (CSR). CSR is the expectation that companies see their corporate strategies and decision-making within a framework that includes sustainability of social and environmental resources, and a keener understanding of the negative externalities generated through business actions, not just profit and loss considerations (Deresky, 2006). Our current system of business accounting harks back to the industrial revolution, which set forth business practices that were largely oblivious to their social and environmental impacts. One explanation given is that our gross domestic product, based upon the "national account" principles set forth by John Maynard Keynes, while useful in its exactitude for accounting for capital goods, is elusive and ambiguous in tracking the impact of business on social and natural resources. In business and economic thinking, these impacts are externalities, which may or may not be "internalized" to the organization via taxes and regulation through governmental instrumentalities.
In the past it was possible to view many of the costs of business as external, i.e., expenses generated by business but compensated for by the people (Gore and Blood, 2006). Today social and environmental resources are endangered, and business must take its "negative externalities" into account in its own dealings with society. A question to ponder as one thinks about corporate responsibility is: what will be the cost to the planet by the year 2025 if corporations continue to degrade these human and natural assets? This study takes the notion of CSR beyond the legislative mandates of statutory and case law, as discussed in an earlier analysis of the Sarbanes-Oxley Act, 2002 (Coates, 2004), and focuses on corporations and their dealings with human and environmental justice issues in doing business. Figure 1.1 develops the complex causal sequence. The methodology used in this study employs an inductive constructivist approach (Denzin and Lincoln, 1994; Guba and Lincoln, 1985; 1989; Creswell, 1994; Strauss and Corbin, 1994; Glaser and Strauss; Dukes, 1984) (3). It does so by critical content analyses of the relevant scholarly and professional literature. In analyzing the relevant literature on CSR from 1980 to 2006, one sees a pattern of business ideologies and perspectives that have been conducive to a climate of wrongdoing. Conversely, one also sees emergent forms of "rightdoing" being imposed by banking and investment policies, as well as by organizations' own internal CSR monitoring systems. On May 25, 2006, Kenneth Lay and Jeffrey Skilling, the top executive leaders of the Enron Corporation, were found guilty of massive fraud and corporate wrongdoing.
It took four years for the wheels of justice to shape this judicial decision, yet many of the employees of Enron who suffered from its failures (as well as those of other rogue corporations like WorldCom, Tyco and Adelphia) will have to endure the lifetime effects of these business crimes. In 2002, when these felonies came to light, Congress passed the Sarbanes-Oxley Act (SARBOX) (4). This sharp policy instrument was designed to assist in the deterrence of fraud. As stated by the President: "[to] adopt tough new provisions to deter and punish corporate and accounting fraud and corruption" (ibid, 1, White House News Release, 2002). (5) SARBOX is the most sweeping attempt to regulate public markets and make them ethical since the Securities and Exchange Act of 1934. While legislation such as this is a necessary condition for corporate responsibility, it is insufficient: public policy merely addresses the manifestations of corporate sociopathology. As noted by Shafritz and Madsen, the law is but a moral minimum (ibid, 1990). Minimizing wrongdoing reaches into the stickier region of institutional and individual visions, values and beliefs, as modeled by executives in the strategic apex of the organization and flowing down to all parts of the operating base. Addressing these issues entails a cultural change and shared meanings about profit and loss and social responsibility (Coates, 2004). It also entails relevant institutions encouraging such values in their clients.
A Factor Analytic Study of the Computer Anxiety Rating Scale: Evidence from an Egyptian University
Dr. Mansour Salman Mohamad A. M. Lotayif, Al Ghurair University (AGU), Dubai, United Arab Emirates
Dr. Ahmed El-Ragal, Arab Academy for Science and Technology, Alexandria, Egypt
The Computer Anxiety Rating Scale (CARS) has been a topic of interest in the computer literature for decades. The current study is an endeavor in this perspective. It attempts to determine the number of anxiety constructs within Woszczynski's 16-item CARS. The experience of 344 undergraduate students was utilized to achieve the study's aims. Through Exploratory Factor Analysis (EFA), CARS loaded on four factors representing four constructs: future ambition about computers, and technical, personal, and experience anxieties. "Technophobia" and "computerphobia" are two terms that have surfaced as consequences of the massive use of computer applications in most aspects of life. North and Noyes (2002) and McIlroy et al. (2001) have defined technophobia as anxiety about present or future interactions with computers or computer-related technology, and negative global attitudes about computers, their operation or their societal impact. Drawing from the psychology literature, anxiety is a disease for which there is no consensus about its causes. However, its debilitating symptoms are well defined. They range from trembling, facial strain and a high resting pulse to excessive rumination, dizziness, insomnia and impaired concentration in its most severe forms (Shrady, 1985). With regard to computer anxiety, students and adults suffer from the same fear. What complicates this matter is its ramification for the personality. More specifically, any failure by students or adults in a computer classroom is perceived as a reflection of their personal worth rather than simply failing at an academic exercise (Fisher, 1998). Completion of computer courses helps alleviate this kind of computer anxiety (Safford and Worthington, 1999; Gos, 1996). Fisher (1998, p. 14) suggests the following to avoid computer anxiety in any classroom for students or any training session for adults: Let there be light. Make the room setting as welcoming and nonacademic as possible; It's in the cards.
After initial introductions, pass around blank 3 x 5 index cards to all participants. Ask each to write just three things on the card: number of years out of school, greatest fear about entering training, and why he or she is taking the training; Tenure is tenuous. This exercise is to review the average number of years people stay in various occupations. Set it up as a fun quiz by having trainees guess about job tenure for eight to ten professions; Learning is fundamental. Show trainees how most adults engage in daily self-directed learning without realizing it; Tackle stress head-on. Discuss anxiety, stress, and change in open-forum fashion, and encourage participants to explore why those feelings are not only natural, but also beneficial; Old dogs can learn new tricks. Review in lay language what research shows about adults and their ability to learn; Left, right, left, right. Acquaint trainees with basic information about left- and right-brained thinking and basic learning styles; Address the brain drain. Spend the next block of time explaining study tips and showing participants how to compensate to achieve more balance in their thinking and learning modalities; and Sing a little song. Acquaint participants with mnemonic devices to help them remember new material. It's easy and fun, and it engages people's cognitive faculties in a non-threatening way. Determining computer anxiety has been a topic of interest for many scholars (e.g. Broos, 2005; Murthy, 2004; Hunt et al., 2002; Pratt et al., 2002; McInerney et al., 1999; Ropp, 1999; Ellis and Allaire, 1999; Cody et al., 1999; Hemby, 1998; King et al., 1998; Harris et al., 1998; Stone and Arunachalam, 1996; Harris and Grandgenett, 1996; Ayersman and Reed, 1995; Martocchio, 1994; and Harrison and Riner, 1992). For instance, King et al. (1998) attributed adults' computer anxiety to organizational and generational factors and to a lack of know-how (i.e. skill, ability, and experience).
On the other hand, Pratt et al.'s (2002) findings suggest that students enjoy learning using computers and feel they learn most when using them, although junior students were generally more positive about using computers than senior students. Broos (2005) found a significant effect of gender, computer use, and self-perceived computer experience on computer anxiety: males were found to have less computer anxiety than females, and computer experience had a positive impact on decreasing computer anxiety for men, but a similar effect was not found for women. In the same line of literature, Beentjes et al. (1999); Van et al. (1998); Comber et al. (1997); Bannert and Arbinger (1996); Clarke (1990); and Chen (1987) pointed out that boys use computers more than girls, have more computer experience, spend more time using the computer, and have more interest in computer-related activities. Other scholars (Murthy, 2004) tried to connect students' participation in and satisfaction with a computer-mediated collaborative learning system with both autonomy and anxiety about the use of computers. The results of that study indicated that although students varied significantly in their preference for autonomy and their anxiety about the use of computers, there was no significant difference in participation and satisfaction levels between low and high computer anxiety students, or between those with a low preference for autonomy and those with a high preference for autonomy. Compeau and Higgins (1995) adopted a 4-item scale to measure anxiety. However, this rating scale was not extensively tested (Woszczynski, 2001), and it is therefore recommended that this approach be explored in further research. In addition, Miller and Rainer (1995) developed a 19-item scale for measuring anxiety, and confirmatory factor analysis (CFA) findings revealed that only a 7-item scale was relevant.
However, their approach was strongly criticized on three counts: for using CFA, which the literature does not recommend in such cases; for the theoretical logic behind omitting 12 items from the rating scale; and for the unsuitability of the technique used to omit items, as they dropped the 12 items in two sequential steps without justifiable support from the anxiety literature (Woszczynski, 2001).
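The factor-extraction step behind such an EFA can be sketched as follows. This uses simulated item responses rather than the study's CARS data, and retains factors by the eigenvalue-greater-than-one rule (the Kaiser criterion) commonly used in exploratory work; the item structure here is an assumption for illustration.

```python
import numpy as np

# Simulated responses: four items driven by two latent anxiety factors.
rng = np.random.default_rng(1)
n = 500
f1, f2 = rng.standard_normal((2, n))        # two latent factors
items = np.column_stack([
    f1 + 0.5 * rng.standard_normal(n),       # items loading on factor 1
    f1 + 0.5 * rng.standard_normal(n),
    f2 + 0.5 * rng.standard_normal(n),       # items loading on factor 2
    f2 + 0.5 * rng.standard_normal(n),
])

# Eigendecompose the item correlation matrix and apply the Kaiser criterion.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
retained = int(np.sum(eigvals > 1.0))
print("eigenvalues:", np.round(eigvals, 2))
print("factors retained (Kaiser criterion):", retained)
```

On this simulated structure the first two eigenvalues exceed one and two factors are retained; applied to the 16-item CARS correlation matrix, the same procedure is the kind of analysis that yields the four-factor solution the abstract reports.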
Executing Strategies on Intellectual Capital: Case Study for Management and Corporate Governance
Jui-Chi Wang (Amanda), Hsing-Wu College, Taiwan
Skandia was the first company in the world to report its valuable intangible assets on its financial statements; it has made a distinguished contribution to intellectual capital (IC) development and has greatly influenced today's strategic management and business re-engineering practices. This case study first introduces the background of this Swedish company and its current financial performance. Jan R. Carendi, former CEO of Skandia, worked for Skandia for decades and contributed his initiative, knowledge and vision to build the foundation of intellectual capital at Skandia. The Skandia Navigator was introduced to help the company balance financial and intellectual capital evaluation, so that the company's success factors can be seen visually and evaluated by quantitative ratios throughout the business development process. Strategic development and value creation, the hidden values of the company, can be managed using the intellectual ratios and indicators. The strategies for Skandia's human capital, intellectual capital management and distribution channels are also discussed. Furthermore, Skandia initiated extensive work on reshaping the group from a federative structure into a highly cohesive and uniformly governed group that makes effective use of operating synergies. Finally, the key questions examined internally and externally provide detailed analysis and suggestions for Skandia in a changing external business environment and competitive reality. In 2002, America's Fortune magazine named Skandia one of the 10 best companies for employees to work for; it was the first time in 20 years that the magazine had ranked European companies, and Skandia was the only Swedish company to receive this acknowledgement. Skandia Germany was also ranked number two in the "Best Employers" survey conducted by the consulting firm Hewitt Associates in 2004. The Skandia insurance company was established in Stockholm, Sweden in 1855.
It is the oldest insurance company listed on the Stockholm Stock Exchange, and as of Jan. 12, 2005, Skandia had provided its services and products globally for over 150 years. The following statement is Skandia's company profile: "Skandia Insurance still sells insurance worldwide, but long-term savings is becoming its new target. Skandia offers life, health, and property/casualty insurance; banking; and investment management. The company derives about 80% of its sales from markets outside Sweden, targeting businesses with such products as commercial, industrial, marine, offshore, and aviation insurance. Its core competencies are fund selection, product development, marketing, and market support. Operating units include mutual insurance group Skandia Liv, Scandinavian banking group. Financial services group Old Mutual has made a takeover bid for Skandia" (Yahoo, 2005). Skandia's concept of intellectual capital was first presented in the 1992 annual report, and the Skandia Navigator was first introduced in the 1994 annual report to highlight the importance of intellectual capital. Skandia has been successful and has continually expanded worldwide, and a key reason is its concentration on intellectual capital management. Its innovative capability in product design and development has kept Skandia in a leading role in the industry. The following 5-year financial statement summary clearly indicates how Skandia has performed in recent years. However, in the past few years, the impact of corporate scandals, SARS and the Iraq war has affected Skandia's performance, especially the growth of new sales (see Table 1). Under Jan R. Carendi's radical and bold leadership, employees at Skandia have been supported and encouraged to improve their working skills and knowledge through various internal and external training courses and programs.
The former CEO of Skandia, who has been a member of the board of management of Allianz AG since May 2003, worked for Skandia for decades and contributed his initiative, knowledge and vision to build the foundation of intellectual capital at Skandia. He believed that the core resource for Skandia should be invisible intellectual capital, not the physical assets shown on the financial statements. The Skandia Navigator was introduced to help the company balance financial and intellectual capital (see Figure 1). It focuses on the following four categories: human focus, customer focus, process focus, and renewal and development focus. Financial-focus information represents historical performance, while customer-, human- and process-focus information determines the firm's performance today. How the firm can perform tomorrow is then based on information from the renewal and development focus.
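The Navigator's idea of tracking each focus area with quantitative ratios can be sketched as below. All figures and indicator names here are invented for illustration, not Skandia's actual Navigator metrics.

```python
# Hypothetical Navigator-style scorecard: one illustrative ratio per focus
# area (numbers are invented, not Skandia's reported figures).
indicators = {
    "financial": {"premium_income_per_employee": 2_450_000 / 12},
    "customer":  {"customer_retention_rate": 0.91},
    "human":     {"training_hours_per_employee": 38.0},
    "process":   {"it_expense_to_admin_expense": 0.27},
    "renewal":   {"new_product_share_of_sales": 0.18},
}

# Print the scorecard, one line per indicator.
for focus, ratios in indicators.items():
    for name, value in ratios.items():
        print(f"{focus:<10} {name}: {value:,.2f}")
```

The design point the Navigator embodies is that the non-financial rows are leading indicators (today's and tomorrow's performance) while the financial row is a lagging one, so all five are reported side by side rather than financials alone.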
An Empirical Analysis Concerning the User Acceptance of E-Learning
Dr. Salih Zeki Imamoglu, Gebze Institute of Technology, Turkey
With the rapid change of traditional practices, technologies and skills, and the accelerated rate of knowledge creation and dissemination, lifelong learning has become inevitable for individuals and organizations alike. An e-learning system is a promising alternative in the current educational revolution, which is taking us from a print to a digitized culture, with a corresponding demand to deliver knowledge to large numbers of people over vast areas without the boundaries of time and place. The increasing importance of e-learning has made user acceptance a critical subject for both academicians and practitioners. Accordingly, in this study the e-learning concept is described with a detailed literature review, and user acceptance, in terms of perceived usefulness, is empirically investigated. In the twenty-first century, a period of knowledge domination, technology is increasingly changing our lives. First, desktop computers appeared, followed by the Internet, a technology that enables any user to have access to unlimited quantities of information and knowledge. In the current educational revolution that is taking us from a print to a digitized culture, technology and the Internet are playing more active roles in the education and training processes. As a result, educators and academicians face an important challenge in how they define e-learning (LearnFrame, 2000; Cheng, 2006; Huynh, 2005). E-learning, a relatively new instructional computer technology, breaks the constraints of time and space while creating many benefits, including reduced costs, less regulation, improved ability to meet business needs, greater ability to retrain employees, lower recurring costs, and better customer support (Barron and Mayberry, 2000; Gordon, 2003; Burns, 2005). The impact of e-learning is increasingly visible, and it has attracted much attention from practitioners and researchers (Ravenscroft and Matheson, 2002).
Accordingly, the number of online education and training programs has continuously increased over the last decade (Chyung and Vachon, 2005). Despite the several benefits of e-learning, the level of user acceptance varies, reflecting the perceived usefulness of the e-learning activities that learners experience during e-learning programs. Several factors affect the level of user acceptance, such as perceived ease of use, intention to use, ability to use and commitment. As a result, it is necessary to manage the e-learning process in the new economy, which is marked by the increased size and variety of the learning network, affording a larger and more diverse set of information resources and enhanced opportunities for information sharing and idea generation. In this study, our goal is to examine the e-learning concept from both theoretical and empirical perspectives, with a detailed literature review and a survey study conducted with small and medium-sized enterprises (SMEs) located around the cities of Gebze and Kocaeli. In academic environments, it has been argued that computer-based communication is the most important transformation in communication technology in the last 150 years. This transformation has had tremendous effects on learning opportunities (Garrison and Anderson, 2003). In the distant past, learning took place within one's own family, society, clan and tribe. Today, our learning networks are more tentative and diffuse. The development of learning resources has always been vital for education and training. Several factors have led to the increasing importance of content development as a separate and more specific activity, which often involves a consultative approach or team effort, or is undertaken by people who may or may not be involved in the teaching (ANTA, 2003).
Today's educational enterprises increasingly need to deal with a growing demand for knowledge in varied and dissimilar social, economic, cultural and technological environments (Huynh, 2005). We participate in schools, places of work, and extended families that are much more diverse and distributed, and without the strong, old ties of traditional learning networks. Often the groups we join interact solely by means of technology; the participants may be colleagues, friends or even strangers (Desanctis et al., 2003). As a result of the Internet, which provides common broadband connectivity, a new term, "e-learning", has emerged and become a central theme for learning. In the context of e-learning, learners are given much flexibility in choosing their place and time of study (Fujii et al., 2004). The Internet has the potential to level the learning field, whether, for example, one is a high school student in quest of support with a geometry question or a broker in Los Angeles who wants to get an MBA from Duke University's Fuqua School of Management's distance education program. Individuals now have the choice to learn on their own time and in their own place. E-learning technologies enable real-time performance, thus allowing individuals to spend time on their areas of insufficiency, rather than wasting time on areas that they have already mastered (LearnFrame, 2000).
International Reality of Internet Use as a Marketing Tool
Dr. Maria Teresa Borges Tiago, University of the Azores
Dr. Joao Pedro Couto, University of the Azores
Dr. Maria Manuela Natario, Polytechnics Institute of Guarda
Dr. Ascensao Braga, Polytechnics Institute of Guarda
This paper analyzes the factors associated with success in using the Internet as a marketing tool. We administered a questionnaire to companies on three continents, considering six levels of benefit, ranging from financial results and customer relations to buying efficiency. We divided the companies into three groups using cluster analysis and examined the nature of the companies in each group on two levels, one regarding firm characteristics and the other regarding the approach that firms took in their use of Internet marketing tools. The results show that the benefits vary according to the way Internet applications are exploited, that the level of commitment can affect the results, and that the impact is greater in companies that explore the Internet's potential from a wider perspective. These results demonstrate that the Internet is not only a means of sales promotion but also a means of relationship management, and that companies that invest in an integrated perspective are more successful. As a limitation of the study, we note the need for more research into the types of company activities that use the Internet as a fundamental component of the business. This paper contributes to research on this topic with new evidence from a broad geographic sample. The accelerated development of the new information and communication technologies (NTIC) has promoted globalization and the opening of new markets that require new business structures, new mentalities and cultures, and new competencies. Thus, companies must be able to adapt and develop new capabilities. In the face of uncertain scenarios arising from the growth of market competition, the challenges to organizations have become more complex and demanding. Organizations increasingly apply NTIC to generate, process and distribute information in real time (Teo & Pian, 2003).
These technologies create new opportunities for sharing large amounts of information and allow new relationships between organizations and customers (Boyle & Alwitt, 1999; Boyle, 2001). In this manner, they have become a crucial element not only in internal activities, but also in external relationships: in the ability to communicate and process information, in the identification of new business opportunities, in advertising products and services, and in sales. In many cases the NTIC are already incorporated into the organization's processes and strategies. In effect, their adoption and use must be a strategic option for achieving competitive advantage (Porter, 2001). In the last few years the Web has become the most used Internet application, as a low-priced way of accessing information and communicating (Avlonitis & Karayanni, 2003; Dubois & Vernette, 2001). The purpose of this paper is to analyze the importance of the Internet as a marketing tool in the United States, Europe, and Asia. After a literature review, we present the hypotheses and methodologies used in this study, and the results are analyzed and discussed. The World Wide Web allows the creation of virtual communities, supported by a diverse set of tools whose specifications influence the character and structure of the communications they support (Rheingold, 1993; Long & Baecker, 1997). The diversity of online communication tools is growing. Some of these tools have specifications with significant effects on the synchronization between messages and receivers, the addressing style (person to person, person to forum), the supporting medium (text, graphics, audio, etc.), and the existence of dialogue. A few years back, enterprises questioned the role of the Internet in business performance, but today they cannot live without it, or outside of it (Sultan & Rohn, 2004).
So, if initially businesspeople saw the Internet as an opportunity to save money on information, customer support and transactions, nowadays the Internet is more focused on managing the process of online communication: improving transaction efficiency, delivering value, and increasing customers' involvement (Berton et al., 2003; Rao et al., 2003; Osmonbekov et al., 2002; Leek et al., 2002; Sharma, 2002). The Internet should be seen in a context of commercial and social progress. It should be used not only to sell, but also to build the marketing mix (Kambil, 1995; Schlosser, 1999). In this sense, it should be more than a "place" where the image of the company and its products is presented; it should turn occasional visitors into potential clients and, above all, transform them into active customers (Hoffman, Novak and Schlosser, 2000; Constantinides, 2002). So the NTIC are no longer used simply to send advertisements to passive audiences, but become related to the creation of, and participation in, virtual commercial or social communities. According to Hoffman and Novak (1996), the Internet is a marketing tool since it has distinctive marketing characteristics: many-to-many interactivity, flow, and exploratory/directed behaviors. Therefore, it is important to restructure marketing activities so that they can be applied in this new context and in a more appropriate form (Strauss & Frost, 2001).
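The grouping step the paper describes — splitting firms into three clusters by the benefits they report — can be sketched with a minimal k-means pass. The survey scores below are simulated (three bands of firms across six benefit dimensions), not the authors' data, and the deterministic initialization is an assumption for reproducibility.

```python
import numpy as np

# Simulated questionnaire scores: 120 firms, six benefit dimensions each,
# drawn from three bands (low / medium / high reported benefit).
rng = np.random.default_rng(2)
low  = rng.normal(2.0, 0.3, (40, 6))
mid  = rng.normal(3.5, 0.3, (40, 6))
high = rng.normal(5.0, 0.3, (40, 6))
scores = np.vstack([low, mid, high])

# Plain Lloyd's algorithm with one seed point taken from each band.
k = 3
centers = scores[[0, 40, 80]].copy()
for _ in range(50):
    labels = np.argmin(((scores[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([scores[labels == j].mean(axis=0) for j in range(k)])

print("cluster sizes:", np.bincount(labels, minlength=k))
```

With well-separated bands the three recovered clusters match the three groups exactly; in the study, the interesting step is what follows — profiling each cluster by firm characteristics and by how the firms use Internet marketing tools.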
Optimal Stochastic Production Entry and Exit Models
Chuan-Chuan Ko, Jin Wen Institute of Technology, Taipei, Taiwan
Dr. Tyrone T. Lin, National Dong Hwa University, Hualien, Taiwan
This study aims to evaluate firm value using a risk-adjusted discount factor under a no-leverage condition. The proposed model considers the market entry model with investment cost and the market exit model with exit cost simultaneously. The results reveal the hysteresis difference relative to models that consider production entry or exit alone, and show that the production threshold is more conservative than under the traditional net present value approach or the standard real options approach. The purpose of this work is to show that when the project investment avoids debt, the anticipated rate of return on investment required by stockholders, plus the risk premium, is the revised risk discount factor with which to measure firm value. Traditional financial management is based on the risk-neutral approach, converting the stock price into firm value; that is, the anticipated rate of return on investment is set equal to the risk-free interest rate, a convention widely accepted by finance scholars. However, when an enterprise is evaluating a real-asset project investment, using the risk-free interest rate of the risk-neutral approach as the anticipated rate of return on the project will not be accepted by practitioners. Therefore, in order to bridge the difference between the objects of real-asset investment evaluation and financial commodities, this work adopts the anticipated investment rate of return plus the risk premium as the revised risk discount factor, so as to evaluate the income of the project plan more accurately. In addition, the real options approach is used to evaluate the optimal thresholds for market entry and exit in the investment strategy and to measure the potential value of the investment plan.
Lin and Lo (2006) studied the optimal lending quality (debt risk coefficient) of a financial institution, established upper and lower limits for this lending quality, and explained the reasonable range of lending quality under the premise of maximizing the institution's total income. Lin et al. (2006) examined how a financial institution should evaluate the optimal capital-financing exit policy under transparent market information; their analysis explained how a leader and a follower select the optimal disinvestment threshold under first-in-first-out or last-in-last-out financing strategies. Keswani and Shackleton (2006) showed that when project disinvestment (capital withdrawal) carries option value, the net present value (NPV) can be used to measure the option value generated. Maurer and Sarkar (2005) used the real options approach to derive a firm's optimal capital structure, separately measuring the value of equity and the value of debt; in addition, they solved for the optimal investment timing and the default timing from the viewpoints of the stockholders and the management. If stockholders seek to maximize their own profit, they will normally invest early, which reduces firm value and departs from the optimal capital structure: early investment lowers the firm's uncertainty and expected liquidation cost, but increases the tax burden. This paper uses the real options approach to study project (dis)investment when no external financing is raised, adopting the revised risk-adjusted discount factor to measure firm value.
In addition, this work derives the optimal production entry rule when production is to be commenced and the optimal production suspension rule when market exit is being considered. The remainder of this paper is organized as follows. Section 2 establishes the model and derives sensitivity results for the relevant parameters. Section 3 reports the numerical analysis, sensitivity analysis is discussed in Section 4, and Section 5 concludes. Assume that, in an imperfectly competitive market, the firm sets the product price P. Apart from considering market supply and demand, the firm must also consider its capital source and decide whether to invest in production equipment. Moreover, assume the firm is a going concern, that the capital cost I of fixed production equipment and the variable production cost increase with the essential cost of production and the cost share required for production, and that the production volume per unit time follows a geometric Brownian motion:
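The process named at the end of the paragraph takes, in its standard form, the following shape; the symbols are the conventional ones and are assumed here rather than taken from the paper:

```latex
% Geometric Brownian motion (standard form, assumed notation):
% Q = the stochastic state variable, \mu = drift rate,
% \sigma = volatility, dz = increment of a standard Wiener process.
dQ = \mu\, Q\, dt + \sigma\, Q\, dz
```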
Doing More Harm than Good: Unraveling the Mystery of Frustration Effects
Dr. Michael P. Lillis, Medaille College, Buffalo, NY
Dr. Frank J. Krystofiak, University at Buffalo, Buffalo, NY
Dr. Jerry M. Newman, University at Buffalo, Buffalo, NY
In a management context, there is a strong belief that employees view outcomes more favorably when they result from fair procedures rather than unfair ones. Yet academic and popular accounts indicate that some procedural enhancements have the potential to backfire; that is, process improvements can unexpectedly bring about an increased sense of injustice, thereby doing more harm than good. This study attempts to provide an integrative framework for understanding this so-called "frustration effect": when does procedural justice enhance, and when does it diminish, distributive justice? To better understand the occurrence of frustration effects, the authors focus on Referent Cognitions Theory (RCT). Using structural equation modeling in a multi-sample framework, the evidence suggests that the trigger for the frustration effect is a belief in one's entitlement to a preferable referent outcome. If outcomes are bad enough, and fail to meet individual expectations of a more desirable alternative, procedural fairness does little to enhance perceptions of distributive justice. The results are discussed in connection with practices used to allocate scarce goods and resources. The research literature provides reasonably consistent information about two components that are crucial in deciding whether an allocation process is just or fair: distributive justice (which focuses on the perceived fairness of outcomes; Adams, 1965) and procedural justice (which concerns the justice of the process used to determine those outcomes; Greenberg, 1990). Additional research suggests not only that both process and outcome dimensions play a role in determining fairness perceptions, but that they play an interconnected role.
In other words, individual reactions to outcomes may be enhanced by the perceived fairness of the procedures used to distribute those outcomes (the fair process effect), and conversely, individual reactions to procedures depend on the perceived fairness of the outcomes obtained through those procedures (the fair outcome effect). There is evidence, however, that procedural and outcome enhancements do not always have positive implications for justice judgments. For example, within the relative deprivation literature (e.g., Cropanzano & Randall, 1995; Folger & Martin, 1986) it has been argued that an improved outcome may provide a basis for rising expectations. Under these circumstances, rising expectations create a new benchmark or reference point against which outcome allocations are measured; if improvements fail to meet these new benchmarks, individuals may feel "deprived" or dissatisfied with their improved outcomes. Other researchers have found situations in which a process improvement fails to have a positive impact on justice judgments. In particular, in a small but persistent number of studies, researchers have found that increasing procedural justice can actually yield a decline in perceptions of distributive fairness, suggesting that the fair process effect does not always hold. Folger and his colleagues labeled this finding the "frustration effect" (Folger, 1977; Folger, Rosenfield, Grove & Corkran, 1979; Folger, Rosenfield & Robinson, 1983). The research literature offers few insights into the causes of this phenomenon. Certainly a disproportionate rise in employee expectations would help to explain such a finding; this explanation is consistent with other research that has observed increased expectation levels following the introduction of a fairer procedure (see Harlos, 2001; Robinson & Rousseau, 1994).
From a process improvement perspective, as organizations become increasingly reliant on systems that allow employees to express their voice (open-door policies, grievance procedures, suggestion boxes, etc.), the question remains whether the introduction of a procedurally enhanced system will merely intensify employee perceptions of unfairness. This article adds to the literature by offering insights into how and why such procedural innovations could fail to produce their intended effects. By analyzing the conditions under which procedural fairness enhancements can actually decrease perceptions of distributive fairness, this study directs attention to a context that has historically been neglected in the research literature: a situation where process improvements unexpectedly bring about an increased sense of injustice. The findings reported here suggest that the frustration effect is a natural and theoretically expected outcome of a generalized model of justice, one that occurs predictably under specific conditions. In the current article we try to show that reactions to process enhancements depend in large part on an individual's frame of reference, and we conceptualize our understanding within the framework of Referent Cognitions Theory (RCT; Cropanzano & Folger, 1989; Folger, 1986, 1987, 1993; Folger & Cropanzano, 1998, 2001). We begin with a brief introduction to RCT, followed by an overview of relevant frustration effect research. After a test of our conceptual model, we discuss some practical implications along with avenues for future research.
Audience Attitudes Towards Product Placement in Movies: A Case from Turkey
Dr. Metin Argan, Anadolu University, Eskisehir, Turkey
Dr. Meltem Nurtanis Velioglu, Abant Izzet Baysal University, Bolu, Turkey
Mehpare Tokay Argan, Anadolu University, Eskisehir, Turkey
The practice of product (or brand) placement in movies and other multimedia has become an important emerging area of marketing and advertising communication in recent years. Marketers and movie producers now frequently use placement as the basis for multi-million-dollar promotional campaigns. This study describes the attitudes of a sample of Turkish moviegoers towards product placement in movies and analyzes the data to determine the effect of product placement on a Turkish moviegoing population. The findings indicate that while attitudes toward product placement are generally favorable, extensive commercial use of product placement in movies is perceived by moviegoers as ethically less acceptable. The findings also indicate that moviegoing frequency and the level of movie enjoyment affect the attention paid to product placement, whereas gender, age, education and income level do not affect attitudes towards product placement. The results of this research have significant implications both for marketing practice as a whole and for how movie audiences interpret product placement practices in Turkey. Product placement refers to the practice of including a brand-name product, package, signage or other trademark merchandise within a motion picture, television show or music video (Brennan et al., 1999), with the aim of influencing the audience (Balasubramanian, 1994). A placement may be made within a scene simply to add realism, but from the practitioners' point of view the desired influence is increased awareness of, and intention to purchase, the placed brand (Babin and Carder, 1996). Schudson (1984) points out the widespread practice of product placement by tobacco companies in the Hollywood movies of the 1920s.
Until the 1970s, however, product placement practices were poorly organized. Product sponsors at that time did not pay movie producers; they would simply donate or loan branded items to appear in scenes, only to take them back afterwards. Expenditures on product placement are now calculated in millions of dollars, with costs varying according to duration, the interaction between the product and the characters, and the prominence of the placement (Ferraro and Avery, 2000). In this paper, we examine a movie from Turkish cinema and its audience as a case for investigating audience attitudes towards product placement. Turkish cinema offers an interesting case because it is developing quickly in terms of growing audiences and numbers of theatres, while the European cinema industry is arguably experiencing a period of recession (European Audiovisual Observatory, 2006). The average increase in audience figures is 5% for 15 European Union countries, 16% for 10 European Union countries, and 6% for 25 European Union countries, while the average audience increase in Turkey is 20.6% (Baudais, 2004). These developments have led to wider and more professional use of product placement in Turkey, making Turkish cinema an attractive case for study. Accordingly, the scale items in the survey used in this study were developed specifically to explore audience perceptions of the marketing strategies used for product placement and to identify audience attitudes towards these strategies. Furthermore, the degree to which audience attitudes differ is examined across variables that may influence attitudes, such as moviegoing frequency, level of movie enjoyment, and certain demographic features. Product placement strategies can be categorized into three modes: (1) visual only, (2) audio only, and (3) combined audio-visual (Gupta and Lord, 1998).
The first mode involves showing a product, logo, billboard or other visual brand identifier without any accompanying message or sound (Smith, 1985). The second refers to audio placement, whereby the brand is not shown but is mentioned in the film dialogue (Russell, 2002). The third is a hybrid of the first two. The effectiveness of product placement may vary depending on who uses the product in the movie, the character portrayed, the extent of use, and the creativity involved. The most commonly used mode is visual placement, which carries the risk of going unnoticed or not being recalled by viewers; audio placements involve a similar risk. The combined audio-visual mode requires creativity so as not to interfere with the natural flow of the movie, and its cost is also generally higher than that of the other modes (DeLorme and Reid, 1999).
Management Education Reform in a Knowledge Management Environment
Dr. Satya P. Chattopadhyay, University of Scranton, Scranton, PA
This paper seeks to link knowledge management (KM) principles to the needs that businesses expect trained management graduates to fill. Knowledge management is defined, and business tasks are mapped onto a knowledge management scenario. The need to change the emphasis of the present management curriculum to reflect the new realities is substantiated. Specifically, the components of a knowledge management system are described: knowledge acquisition, strategic sense-making and communication. Ultimately, the goal of business education is to develop the decision-makers of the future, who will be able to make better-quality decisions more efficiently in what is, at its core, a problem-solving environment. The business world is the customer for the trained professionals that academic programs seek to deliver, and it is increasingly vocal about the needs it wants met. With ever-increasing sophistication in conceptualizing and articulating these needs, employers can clearly identify the specific areas in which they seek fulfillment. They look for individuals who can hit the ground running: employees who are competent, productive, innovative, and able to anticipate and respond to crises. Academic programs seek to prepare their students to be up to the task ahead. As students pass through the system, they are provided with a body of knowledge that is a varying blend of theoretical principles and practical applications; they learn how to conceptualize problem scenarios and develop solutions using the knowledge inputs they receive. This knowledge transfer is the key. Strictly speaking, the problems of improving the quality and effectiveness of business programs are no different from those faced in the business world. It all boils down to the transfer of knowledge!
The implicit problem, however, is that most of this knowledge is "tacit knowledge": knowledge that resides within individuals, groups and communities and is not directly accessible through non-personal means such as literature searches or database queries. The way such crucial knowledge is transferred is still through highly personal, idiosyncratic and inconsistent one-to-one interactions. Learning remains largely a function of whom one knows, what is in the head of that person, and his or her ability and willingness to impart that knowledge. That is where knowledge management (KM) steps in. Olson (1998) states that the key challenge of knowledge management is "to design and build solutions that don't just capture and distribute information, but that manage that information (as well as expertise) so that it is accessible and valued by the individuals who need it, when and where they need to apply it to improve performance." This paper identifies some of the key business functions that can be mapped onto the practical implications of integrating a KM framework into business education. The following sections discuss the needs of the business world, link them to key performance areas, and draw out the implications for a KM framework. Lotus Notes Corporation, in a white paper, identifies four major needs that it classifies as the business drivers: competency, productivity, innovation and responsiveness. Competency: While employers do expect that employees will learn on the job, they expect their recruits to arrive with a solid foundation in the basic pertinent body of knowledge, so that the learning curve is short and steep.
Employees coming into organizations must quickly learn the specifics of operating at the state of the art, and then maintain that edge through timely refresher training in the latest technological, managerial and professional developments pertinent to the organization. Faced with a likely information overload, individuals must be able to home in on key informants, quickly identify nuggets of useful information, and use them appropriately. Productivity: We all implicitly agree that most of the problems we face, both professional and personal, are not unique in most respects. Efforts to develop solutions from scratch are therefore, in most circumstances, unnecessary and redundant. Yet this happens all the time because, as the saying goes, "one hand has no knowledge of where the other has been": learning from a similar problem encountered and solved elsewhere is impossible for lack of access. Time is far better and more productively spent developing solutions for genuinely new problems than re-inventing solutions for already solved ones. Innovation: The ability to come up with solutions for new problems tends to be a function of thinking outside the box. The key is the ability to identify knowledge and expertise residing in non-traditional sources, including domains other than the one principally dealt with, and to apply them to create new solutions to the problem at hand.
An Evaluation of Time-series Operational Performance on the Non-profit Hospitals in Taiwan
Ching-Kuo Wei, Oriental Institute of Technology, Taiwan
The purpose of this research is to study the operational performance of non-profit hospitals in Taiwan from 2000 to 2004. The study adopts the Malmquist Productivity Index (MPI) and the Bilateral Model to analyze the operational performance of 72 non-profit hospitals. The results show that these hospitals, regardless of their attributes, share a common tendency: performance in 2001 and 2003 regressed relative to the previous year, while performance in 2004 improved relative to 2003. In addition, the implementation of Taiwan's Health Insurance Global Budget System has had a greater negative impact on public hospitals; accordingly, when hospitals are categorized by authorization and responsibility, the operational performance of proprietary hospitals is better than that of public hospitals. Non-profit hospitals are common in Taiwan and fall into two forms by their characteristics: public and proprietary. Public hospitals are established by the government to provide basic health care for the public and to serve the health needs of minority patients. Proprietary hospitals are established mostly for tax reasons and corporate image, and most major business groups have set up hospitals, successfully introducing business management models into hospital operation; proprietary hospitals also retain a public-welfare character. Past discussions of hospital management efficiency compared non-profit hospitals with for-profit ones; however, because the two have different attributes, it is more meaningful to study them separately. Moreover, in recent years the payment policy of Taiwan's health insurance system has changed, greatly altering hospital management.
Therefore, this study focuses on the operational performance of non-profit hospitals in recent years, seeking to understand how their performance has changed and how hospital efficiency differs across time and across attributes. Many studies have applied Data Envelopment Analysis (DEA) models to hospital efficiency (e.g., Sherman, 1984; Ferrier and Valdmanis, 1996; Chang, 1998; Puig-Junoy, 2000), showing DEA to be an excellent analytical tool for evaluating hospitals' operational efficiency. However, most of these studies focused on cross-sectional data and seldom examined the impact on hospital efficiency before and after the implementation of a major policy. In general, DEA studies consider performance at a given point in time; extensions to the standard DEA procedures, such as the Malmquist Productivity Index (MPI) approach, have been reported to provide performance analysis in a time-series setting (Charnes et al., 1994). This paper employs the MPI and the Bilateral model to analyze hospitals' efficiency and productivity change and to compare their discrepancies. DEA is a non-parametric linear programming model for frontier analysis of the multiple inputs and outputs of decision-making units (DMUs, e.g., hospitals), developed by Charnes et al. (1978) and extended by Banker et al. (1984); a detailed introduction to DEA theory is provided by Cooper et al. (2000). In this section, the change in the efficiencies of hospitals in Taiwan during the period 2000-2004, analyzed using the MPI approach, is presented.
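DEA itself can be illustrated in miniature. In general, an input-oriented efficiency score requires solving a small linear program for each DMU; in the special case of one input and one output, the score reduces to each DMU's output/input ratio relative to the best observed ratio. The sketch below uses that special case, with invented hospital data for illustration only:

```python
def ccr_efficiency_single(inputs, outputs):
    """Input-oriented CCR efficiency for the one-input, one-output case.

    In this special case the DEA linear program reduces to each DMU's
    output/input ratio divided by the best observed ratio, so the most
    productive unit(s) score 1.0 and all others score strictly less.
    """
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical hospitals: input = beds, output = treated patients (arbitrary units).
beds = [200, 400, 300]
patients = [180, 360, 135]
print(ccr_efficiency_single(beds, patients))  # the third hospital scores only 0.5
```

The Malmquist index discussed in the text compares such scores computed against the frontiers of two different periods, which is how it separates "catching up" from frontier shift.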
The framework employed in the current study can be illustrated by Figure 1, following Fare et al. (1990; 1993), Hjalmarsson and Veiderpass (1992), Berg, Forsund and Jansen (1992), and Price and Weyman-Jones (1996). In this diagram, a production frontier representing the efficient level of output (y) that can be produced from a given level of input (x) is constructed, under the assumption that this frontier can shift over time. The frontiers obtained in the current (t) and future (t+1) periods are labelled accordingly. When inefficiency is assumed to exist, the relative movement of any given unit over time depends both on its position relative to the corresponding frontier (technical efficiency) and on the position of the frontier itself (technical change). If inefficiency is ignored, productivity growth over time cannot distinguish between improvements that derive from a unit "catching up" to its own frontier and those that result from the frontier itself shifting up over time. For any given unit in period t, represented by the input/output bundle z(t), an input-based measure of efficiency can be deduced from the horizontal distance ratio 0N/0S: inputs can be reduced by this proportion to make production technically efficient in period t (i.e., to move onto the efficient frontier). By comparison, in period t+1 inputs should be multiplied by the horizontal distance ratio 0R/0Q to achieve technical efficiency comparable to that of period t. Since the frontier has shifted, 0R/0Q exceeds unity, even though the bundle is technically inefficient when compared with the period t+1 frontier. Using the Malmquist input-based productivity index, this total productivity change between the two periods can be decomposed into technical change and technical-efficiency change.
An input-based productivity index is used since it is generally argued that an input orientation is consistent with the notion that outputs are largely given and the focus is on reducing inputs (proportionately) as much as possible, given technology. Fare, Grosskopf, Lindgren and Roos (1993) calculated input-based Malmquist productivity measures for a sample of (government-controlled) Swedish pharmacies. Berg, Forsund and Jansen (1991) employed an input-orientated approach to analyze the effects of deregulation in Norwegian financial services, and Fare, Grosskopf, Yaisawarng, Li and Wang (1990) applied Malmquist input-based productivity measures to evaluate productivity growth in Illinois utilities. According to Fare, Grosskopf and Lovell (1994), the input-oriented Malmquist productivity change index can be written as:
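The index referred to above, in the standard Fare, Grosskopf and Lovell (1994) form, is conventionally written as follows, where D^t(x, y) denotes the period-t input distance function (notation assumed, as the original equation is not reproduced here):

```latex
M_i\left(x^{t+1}, y^{t+1}, x^{t}, y^{t}\right)
  = \left[
      \frac{D^{t}\left(x^{t+1}, y^{t+1}\right)}{D^{t}\left(x^{t}, y^{t}\right)}
      \cdot
      \frac{D^{t+1}\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\left(x^{t}, y^{t}\right)}
    \right]^{1/2}
```

An equivalent factorization separates efficiency change, the ratio D^{t+1}(x^{t+1}, y^{t+1}) / D^{t}(x^{t}, y^{t}), from technical change, the remaining square-root term: the decomposition into "catching up" and frontier shift.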
Entrepreneurial Cognition and its Linkage to Social Capital
Janusz K. Tanas, Swinburne University of Technology, Melbourne, Australia
Dr. John Saee, Swinburne University of Technology, Melbourne, Australia
The main aim of this paper is to examine the cognitive and behavioral aspects of social capital and their influence on entrepreneurial development and economic growth. The paper is conceptual in nature, with some support from secondary data. A review of the extant literature suggests that cognitive and behavioral aspects have received limited attention in work on social capital. The findings suggest that cognitive and behavioral aspects are among the main components of social capital, stimulating the development of trust, networking and relationships; further, it is argued that cognitive coherence is a necessary catalyst for entrepreneurial development and thus for economic growth. Entrepreneurship is central to the development of existing and transitional economies (Aldrich, 1999). Entrepreneurial activity serves as a vital component of national economic growth and development during transition: it encourages action and promotes job creation, consequently improving the well-being of the entire country (Bednarzik, 2000; Keister, 2000). Entrepreneurial businesses shape the nature of social and economic stratification in a transitional economy (Haltiwanger & Krizan, 1998). Thus, entrepreneurship enables individuals to accumulate wealth, to expand their social contacts, and to improve their social and economic standing (Bates, 1997a; Fischer & Massey, 2000; Haveman & Cohen, 1994; Keister, 2000; Nee & Sanders, 1985; Quadrini, 1999). This paper investigates the phenomenon of entrepreneurship within transitional economies, with a specific focus on Poland. Particular attention is paid to the nature of social capital and its influence on entrepreneurship; specifically, the cognitive and behavioral aspects of social capital that influence the development of entrepreneurship within a society are examined.
The findings suggest that the tumultuous social change experienced by Poland throughout its history has led to strong and resilient human capital, providing stability and progress during the transitional change; the populace was thus more willing to accept both the negative and the positive aspects of economic transition. Finally, a theoretical and conceptual framework is presented that may assist others in facilitating a smooth transition to a market-based economy. The main aim of this paper is to investigate the influence of the cognitive and behavioral aspects of social capital on entrepreneurial development in transitional economies. The research question raised is: how do the behavioral and cognitive aspects of social capital influence the development of entrepreneurship, and thus transition? While it is acknowledged that transition inherently brings a degree of chaos to some sectors of the economy, a smooth transition is largely aided by the nature of the social capital; more specifically, the intrinsic behavioral and cognitive strength of the human capital has a significant influence on entrepreneurial development during and after transition. This study is largely based on secondary data collected from various sources within and outside Poland. The main thrust of the analysis is to assess the extent to which the cognitive and behavioral aspects of social capital influence the development of entrepreneurship and economic advancement during and after transition. The historic politico-economic transformation of the 1990s gave rise to a considerable body of literature on entrepreneurship in Poland. Some scholars have attempted to list the qualities of entrepreneurs, including behavioral and cognitive aspects; however, little research appears to have focused on the social capital framework as a prime paradigm for entrepreneurship growth in Poland.
Several authors have provided an in-depth understanding of the value of social capital as one of the main foundations for economic development. Others have argued that the history and nature of a society form the basis of its social capital, which in turn is the main foundation for any economic system (Lipton & Sachs, 1990, 1992; Myant, 1993; Murrel, 1992). Social capital takes various forms, namely trust, norms and networks. Both trust and norms develop over time through a repeated series of interactions that motivates individuals to contribute productively to the natural sharing and exchange of ideas in order to realize an opportunity (Jacobs, 1965). Each individual within a society embarks on the journey of life on the basis of his or her competence; the cognitive and behavioral aspects develop through collective interaction, and the social context within which the individual matures comes from human interaction and socialization between people. Bourdieu (1986; 1992) further developed the concept of social capital by arguing that it is the sum of the resources, actual or virtual, accrued through durable networks of institutionalized relationships of mutual acquaintance and recognition. Social capital is the process that conditions and energizes both people and organizations to achieve mutual social benefit (Jackman & Miller, 1998; Pennings, Lee & van Witteloostuijn, 1998; Portes & Sensenbrenner, 1993; Miles, Miles, Perrone & Edvinsson, 1998; Paldram, 2000; Putnam, 1995, 2000; Woolcock, 1998). Such processes comprise four interrelated constructs, namely trust, social engagement, civic participation, and reciprocity. It is difficult to dispute that such interactions influence the cognitive and behavioral aspects of a society; foresight and a dynamic world view enable a society to become more resilient to both the negative and the positive aspects of transition.
Chinese Management Philosophy – A Study of Confucian Thought
Dr. Chiou-Hua Lin, Ming Chuan University
Yuan-Kai Chi, Ming Chuan University
China has more than 5,000 years of history. Although it has passed through many periods of disorder, these chaotic situations were eventually settled and an orderly state developed. What can chiefly be relied on is the continued contribution of wisdom and effort by the well-learned, creating the best management philosophy. Management is a complicated developmental process, with features such as targets, methods, inputs, time and space, and development. Chinese Confucianism has influenced cultural development, social progress, international relations and peace in China for several thousand years, and these thoughts have long been integrated into the process and meaning of management. The Confucian management philosophy starts from the management of oneself, develops toward the management of an organization, and further advances through the thoughts of "benevolence" and "loyalty and forgiveness" toward a management philosophy that concerns "life with a focus on service as the goal": 1. "Self Management": the moral management of an individual. 2. "Family Management": the life management of family members to form a good family. 3. "Organization Management": the management of an organization or a business. 4. "World Management": the management of the world as a unity. From ancient Greece onward, philosophy has been "the science of the love of wisdom" and "the science of the intellect", the thinking and knowledge concerned with human beings. The term "philosophy" came into East Asian use through the Japanese scholar Nishi Amane (西周), who employed it in connection with his 1877 translation of J. S. Mill's Utilitarianism, and was later introduced into China. The linguistic root of "philosophy" lies in the Greek philo- and sophia, indicating the "love of wisdom"; in Greek it referred to knowledge as a whole.
This included physics, which took nature as its object (the study of natural laws) and served as the theoretical science of human thought, together with ethics, the science of human behavior, making philosophy a science of extensive coverage. This "love" is precisely what Confucius pointed to in China: "one who knows is inferior to one who loves, and one who loves is inferior to one who delights in knowing". Chinese philosophy is concerned above all with life; it centers on politics and ethics and is always tied to morality. Confucius taught that "appointing the upright to govern the crooked can make the crooked upright". How, then, did management and philosophy combine into "management philosophy" in China? Management philosophy is a branch of practical philosophy, drawn from the whole of life experience across races and cultures, serving as the theory and evidence for interpreting and criticizing management activities. The earliest management philosophy in China began with the doctrine of the "rectification of names" announced by Confucius more than 2,000 years ago: a ruler should fulfill a ruler's obligations; a subordinate should fulfill a subordinate's obligations; in a family, a father should fulfill his fatherly obligations and a son his obligations of filial piety. When name and fact conform, everyone performs his role properly and receives what he deserves. This paper discusses the Confucian management philosophy of humanity, knowledge, morality, and social progress. Chinese philosophy takes a holistic view: individual, family, and community are all closely related. The saying in the Analects (Lun-yu) that "appointing the upright to govern the crooked can make the crooked upright" expresses this principle. Mou Tsung-san holds that Chinese philosophy is most concerned with "subjectivity" and "inner morality".
Chinese philosophy centers on "life", developing wisdom, knowledge, and practice from it to form the subjectivity of morality. Wu Yi holds that Chinese philosophy pursues happiness in life, especially inner peace, not through self-indulgence but through the fullest exercise of foresight toward potential hazards. Moreover, Chinese philosophy carries the sense of mission of the Tao tradition and pursues beauty in intellect, life, and art. Lo Kuang likewise holds that Chinese philosophy is most concerned with "life"; the view of life is its core. Ancient learning did not aim at the pure acquisition of knowledge but at how to be a human being; every profound law is related to life. Wang Yang-ming held that Chinese philosophy cares for the pursuit of truth, treating good conscience and conduct as equally important and inseparable from one's own behavior, as the whole performance of the life process: the "unity of knowledge and action". Good conscience is the idea behind action, and action is the effort of good conscience; knowledge is the beginning of action, and action is the completion of knowledge. But because Chinese philosophy concerns itself only with practical evidence in life, it lacks detailed examination, theory, and an orderly system. Everyday experience is thorough; the longer one perceives it, the more naturally one realizes it. It is not the performance of pure knowledge; it embraces wisdom, life, and art, and can merge with nature and objects. This differs from the Western philosophical concern with thinking and discrimination, which builds truth from reasoning and then turns back to guide the conduct of life.
A Methodological Classification in ES Implementation Research
Mei-Hsia Chiang, National Central University & Hsin Wu College, Taiwan, R.O.C.
This paper classifies enterprise systems (ES) implementation research into variance research, process research and conceptual research along a methodological dimension. Classification of ES implementation research can contribute to theory development and formulation in the ES implementation literature. In addition, an ES implementation conceptual model is developed based on a hybrid of variance and process theory and the incorporation of some existing ES implementation research. This conceptual model can guide future research by developing propositions to explain contradictory findings on business performance in ES implementation, to permit generalization of findings to related phenomena and to put forward a research agenda. Overall, this paper is unique in two ways: first, a methodological classification of ES implementation research is provided to facilitate a sound theory-building procedure and improve theoretical development; second, a conceptual model of ES implementation is proposed to guide future research and practically successful ES implementation. Our analysis can add to knowledge accumulation and creation in the MIS academic and practical discipline. Enterprise Resource Planning (ERP) systems are variously called enterprise-wide systems or enterprise systems (ES). Enterprise systems are commercial software packages that enable the integration of transaction-oriented data and business processes throughout an entire organization, eventually assisting the inter-organizational supply chain (Markus et al., 2000b). During the 1990s, enterprise resource planning systems became the de facto standard for replacement of legacy systems in large and, in particular, multinational companies (Holland et al., 1999). Ross (2000) noted that the six most common motivations for ES implementation are a common platform, process improvement, data visibility, cost reduction, strategic decision-making and customer responsiveness.
Year 2000 compliance was the driving concern but merely the catalyst for replacing an aging information technology (IT) infrastructure with one more manageable and better suited to new business processes (Ross, 2000). The impact of ES on industry is so large and growing that Davenport (1998) stated, "The business world's embrace of enterprise systems may in fact be the most important development in the corporate use of information technology in the 1990's". Moreover, ERP is a recent IT innovation (Rajagopal, 2002). Nowadays many corporations adopt ES to improve their business performance and create a competitive advantage in fluctuating global economic circumstances and a fully digital age. This blooming phenomenon has also attracted the interest of many academic researchers. Reflecting this growing interest, ES publications in the main IS journals and international conferences within the academic Information Systems (IS) community are now flourishing. Classification is an important foundation for theory development and verification, as has been shown in many fields. For example, biological taxonomy provides morphological evidence for evolutionary theory, and the periodic table of chemical elements demonstrates the patterns and regularities among different chemical characters. In the strategic management literature especially, many strategic classifications have been proposed to guide business management and facilitate theory formulation. Based on the above, this paper investigates the existing classifications in ES implementation research and then develops a more insightful one to contribute to the management information systems literature. The research questions explored are as follows. Are there any existing classifications in ES research or ES implementation research and, if so, what are they? Can a classification of ES implementation research that provides more insightful managerial implications be constructed along other dimensions?
Based on a survey of 189 ES publications in the main IS journals and international conferences during the 1997-2000 period, Esteves and Pastor (2001) proposed an ERP system lifecycle model consisting of the various phases through which an ERP project passes in implementing organizations. In their review, Esteves and Pastor found that publications related to the implementation phase are the most prevalent. This finding corresponds to articles in the trade press, which also predominantly focus on the implementation phase. Their study shows that ES researchers have mainly concentrated on ES implementation issues. Similar conclusions are drawn from Klaus and Rosemann's work (2000): they report that ES implementation publications account for about one third of the articles they reviewed. One reason is that an ES implementation is a complex, IT-related social phenomenon (Sarker, 2003) and the majority of organizations are in the implementation phase (Esteves and Pastor, 2001). This is also why this paper focuses on ES implementation research. Holland and Light (2001) proposed an ERP maturity model. Their research framework was composed of five theoretical constructs: strategic use of IT, organizational sophistication, penetration of the ERP system, vision, drivers and lessons. A scoring process was used to calculate a score for each construct in the research framework, to provide a comparative analysis of maturity and to identify organization maturity rankings. The organization scores, when plotted, showed an 'S'-shaped distribution. A closer analysis of the qualitative case data indicated three broad maturity stages. In stage one, organizations are managing legacy systems and starting their ERP projects. In stage two, implementation is complete and the functionality of the ERP system is being exploited across the organization.
In stage three, organizations have normalized the ERP system into the organization and are engaged in obtaining strategic value from the system by using additional systems such as customer relationship management, knowledge and supply chain planning. Jacobs and Bendoly (2003) reviewed ES research and divided it into two distinct research streams based on traditional OM/OR paradigms. Mabert et al. (2000) give a concept-based definition of ES as involving the seamless integration of processes across functional areas, with improved workflow, standardization of various business practices, improved order management, accurate accounting of inventory and better supply chain management.
The Influence of Dimensions of Corporate Governance on Firm Values Using an Applied Structural Equation Model
Chung-Cheng Hsu, Ling Tung Institute of Technology & Da-Yeh University, Taiwan
This research studies the influence of dimensions of corporate governance on firm value by applying a Structural Equation Model. The paper analyzes whether factors such as equity ownership structure, directors' and supervisors' structure, and information transparency are appropriate for studying the effects of corporate governance on firm value. Information transparency is the variable used to indicate the quality of a corporate governance mechanism. The study uses the index of the Securities and Futures Institute to divide the information transparency of corporate governance mechanisms into high and low groups. The results show that information transparency profoundly influences the efficiency of corporate governance and enhances firm value. The study also: (1) confirmed the validity of the overall structural model; (2) showed that, once information transparency within a firm is established, the equity ownership structure and the directors' and supervisors' structure have a significant influence on the firm's value; and (3) showed that a strong relationship exists between the composition of the board of directors and information transparency. Corporate governance has a relatively long history in the U.S. and Europe. The term first appeared in the 1960s but did not attract much attention in Asia at the time. In 1995, in conferences held by the Asian Development Bank and Asia-Pacific Economic Cooperation (APEC), the international community gave the concept serious thought and promoted the adoption of corporate governance mechanisms, particularly after the Asian financial crisis in 1997. For the years 1999-2001 inclusive, the Organisation for Economic Co-operation and Development (OECD) included Asian companies' corporate governance issues on its meeting agenda as a major topic of discussion; the OECD published its main principles of corporate governance in 2004.
After the Enron debacle in the U.S., "corporate governance" drew considerable attention from the public. Recent literature reveals that corporate governance encompasses three primary elements: (1) information transparency, i.e., the disclosure of financial information and information related to internal monitoring and control; (2) equal treatment and equal protection of all shareholders; and (3) providing incentives to, and imposing obligations on, the management level to encourage and oblige its members to pursue profits for the company and fulfill the obligation of full disclosure to shareholders (Huang, 1998). Corporate governance models cover a wide spectrum of issues, ranging from governing authorities, the accounting departments of listed and OTC companies, and CEOs to shareholders, institutional investors and the general public. From a broad perspective, corporate governance can affect a nation's financial stability and economic growth. Stakeholders around the world have therefore enthusiastically promoted corporate governance in order to combat financial fraud. These efforts should eventually reduce the number of investors who fall victim to false financial information and make incorrect investment decisions (Zhong, 2002). Establishing a sound corporate governance policy benefits both management and ownership. It can help firms avoid management-level corruption and enhance firm value. A positive investor response can help a firm raise the funds required to operate the business. In addition, sound corporate governance can ensure the maximization of shareholder interests and reduce investment risk. As for the supervision of financial markets, effectively implementing a corporate governance policy can reduce the risk of financial upheaval and help create a sound financial market. In today's global markets, funds circulate rapidly around the globe.
Therefore, a healthy corporate governance policy and high information transparency are important indices used by international investment institutions when considering investing in a company (Ralph, 2002). Some believe that the equity ownership of a public issuing company should be widely diversified to reduce corporate fraud by controlling shareholders. However, if a company is operated by controlling shareholders, a positive incentive effect and a negative entrenchment effect may both occur (La Porta, Lopez-de-Silanes & Vishny, 2002). Previous studies of Taiwanese companies support the existence of a positive incentive effect and, for the most part, a negative entrenchment effect as well (Yeh, Lee and Ke, 2002). The effect of such an equity ownership structure on corporate governance is that monitoring efficiency may be maximized, helping to ensure that management remains focused on maximizing shareholders' rights. Even so, having controlling shareholders manage the company may create agency problems, which could affect firm value. Most prior studies have focused mainly on information transparency, the directors' and supervisors' structure and the equity ownership structure.
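As a simplified illustration of the high/low-transparency comparison described above (not the paper's actual structural equation analysis), the following Python sketch splits hypothetical firms into high- and low-transparency groups at the median of an assumed disclosure score and compares mean firm value between the groups. All data, score ranges, and variable names here are invented for illustration.

```python
# Hedged sketch: does firm value differ between high- and low-transparency firms?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: a disclosure-index score (0-100) and a firm-value proxy
# (e.g., Tobin's Q) for 100 firms, simulated with a positive transparency effect.
transparency = rng.uniform(0, 100, size=100)
firm_value = 1.0 + 0.01 * transparency + rng.normal(0, 0.2, size=100)

# Divide firms into high/low transparency groups at the median score.
median = np.median(transparency)
high = firm_value[transparency >= median]
low = firm_value[transparency < median]

# Welch two-sample t-test on mean firm value between the groups.
t_stat, p_value = stats.ttest_ind(high, low, equal_var=False)
print(f"mean(high)={high.mean():.3f}, mean(low)={low.mean():.3f}, p={p_value:.4f}")
```

A significant difference in this simple split would motivate, but not replace, the full structural model the study estimates.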
Gender Differences in Burnout among Life Insurance Sales Representatives in Taiwan
Dr. Chiang Ku Fan, Shih Chien University, Taipei, Taiwan
Chen-Liang Cheng, Shih Chien University, Taipei, Taiwan
There is a paucity of studies examining work-related gender differences. Business competition has intensified, increasing the stress on workers. Life insurance is a highly competitive business, yet few studies have investigated burnout in this industry, particularly the relationship between burnout and the gender of sales representatives. Burnout was measured using the Maslach Burnout Inventory (MBI; Maslach & Jackson, 1986), the main measure of experienced job strain. The life insurance sales representative pool (N = 250) was selected using stratified sampling among employees of 29 life insurance companies in Taiwan. Mean scores were calculated for the three MBI subscales, and differences in mean scores were assessed using multivariate analysis of variance. Variables on which gender differences existed were selected as possible concomitant variables. Gender differences in burnout among Taiwanese life insurance sales representatives were found to exist. However, our results indicate that underlying factors, such as working hours, have a profound effect on these differences. In 2002, Taiwan became a member of the World Trade Organization (WTO). In response, domestic financial institutions, including life insurance companies, were expected to face strong competition from foreign financial institutions and, as a result, were destined for tremendous change (Chiu, 2002). While business competition was stimulated, a lack of necessary competencies increased the work stress of life insurance sales representatives (Myers & Torrington, 2001). From 1996 to 2003, the average retention ratio among Taiwanese life insurance sales representatives at the 13th month of employment was just 48.1%.
This means that more than 50% of life insurance sales representatives left their jobs within the first year, implying that burnout may be higher among those leaving their companies than among those who stay (Goodman & Boss, 2002). Half of new life insurance sales representatives may suffer from vocational burnout in their first career year. Sex differences in the manifestation of burnout have been reported for different occupational groups (Brake, Bloemendal, & Hoogstraten, 2003). Although some gender-specific explanations for these findings have been advocated, there is a paucity of studies in which the relation with other work-related gender differences is examined. Most research has focused on burnout in the human services. Unfortunately, few studies have investigated burnout in the life insurance industry, especially the relationship between burnout and sales representatives' gender. Burnout is considered a response to chronic work-related stress (Brake, Bloemendal, & Hoogstraten, 2003). Schaufeli and Greenglass (2001) noted that burnout can also be defined as a state of "physical, emotional and mental exhaustion that results from a long-term involvement in work situations that are emotionally demanding" (p. 501). Many researchers have devoted themselves to understanding the contributing factors (McGrath et al., 1989). Further research has found that burnout correlates with numerous self-reported measures of personal distress (Belcastro & Gold, 1983; Greenglass et al., 1991; Schaufeli & Enzmann, 1998). Moreover, burnout is a very painful experience for individuals, accompanied by an array of physical, emotional, and mental symptoms, and it is also a very costly phenomenon for organizations, manifested in such things as low morale, absenteeism, high job turnover, poor performance, vandalism, and lack of commitment to the organization (Pines, 2002).
Most studies on burnout have documented its existence in a wide range of professions as well as its symptoms and high costs for individuals, organizations, and society in general (e.g. Schaufeli et al., 1993). Burnout was measured using the Maslach Burnout Inventory (MBI, see Table 1), which is the main criterion for experienced job strain (Maslach & Jackson, 1986). In this sense, burnout refers to an individual’s maladaptive reactions to chronic occupational stress. The MBI consists of statements of job-related feelings divided among three subscales that aim to measure three distinct but related concepts: Emotional Exhaustion (E.E.), Depersonalization (D.P.), and Personal Accomplishment (P.A.). For each statement, respondents are asked to express how often they experience these feelings at work, ranging from never to every day. Emotional Exhaustion is characterized by feeling drained and a lack of energy or resources to meet job demands (Schaufeli & Greenglass, 2001). Depersonalization refers to an emotional distancing of individuals from the recipients of care or services (Goodman & Boss, 2002). Personal Accomplishment is regarded as one’s feelings of competence and successful achievement in one’s work with people (Brake, Bloemendal, & Hoogstraten, 2003). Emotional Exhaustion and Depersonalization are assumed to be related concepts, while Personal Accomplishment is relatively independent. Human service professionals have been described as particularly vulnerable to burnout (Freudenberger, 1997), so it is necessary to examine the characteristics of these occupations and their employees as well as the stressors involved. Most research conducted on burnout has been in the human services area. In reality, many professional human service workers tend to be female, though males can also be found in many service areas including social work, teaching and health care (Schaufeli & Enzmann, 1998). 
Many studies have found that females are more susceptible to burnout since they often have primary responsibility for children in addition to employment (Schaufeli & Greenglass, 2001).
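The scoring step described above, mean scores per MBI subscale compared between gender groups, can be sketched as follows. The data are simulated (group sizes, means, and spreads are all assumed for illustration), and per-subscale Welch t-tests stand in here for the study's multivariate analysis of variance.

```python
# Hedged sketch: group means on the three MBI subscales, compared by gender.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
subscales = ["Emotional Exhaustion", "Depersonalization", "Personal Accomplishment"]

# Simulated subscale scores for 120 female and 130 male respondents;
# the group means below are invented, not the study's results.
female = {s: rng.normal(loc=m, scale=1.0, size=120)
          for s, m in zip(subscales, [3.2, 2.1, 4.0])}
male = {s: rng.normal(loc=m, scale=1.0, size=130)
        for s, m in zip(subscales, [2.8, 2.4, 4.1])}

# Compare the gender groups on each subscale separately.
for s in subscales:
    t, p = stats.ttest_ind(female[s], male[s], equal_var=False)
    print(f"{s}: female mean={female[s].mean():.2f}, "
          f"male mean={male[s].mean():.2f}, p={p:.3f}")
```

A MANOVA additionally accounts for correlations among the subscales (Emotional Exhaustion and Depersonalization are assumed related), which separate t-tests ignore.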
Understanding E-learning Consumers: The Moderating Effects of Gender and Learner Diversity
Dr. Yao-kuei Lee, Tajen University, Pingtung, Taiwan, ROC
Understanding consumer behavior is vital in formulating marketing strategies, as consumers may form different perceptions of any given marketing stimulus due to individual differences. Using an e-learning acceptance model (Pituch and Lee 2006), a sample of 259 Taiwanese undergraduates was used to investigate the effects of gender and learner diversity on consumers' cognitive beliefs and intentions. The results revealed that the differences in construct means between males and females occurred at the front end of the path model, while those between nontraditional and traditional learners were at the opposite end. This implies that the differing needs of learner groups, rather than academic discipline or gender, drive the differences in intention to use e-learning for distance education and for supplementary learning. In addition, gender and learner diversity moderated some of the model relationships. In particular, women's adoption intention for distance education purposes was more strongly influenced by system interactivity, and women's perception of e-learning usefulness was negatively influenced by self-efficacy. System functionality predicted intention to use e-learning as a supplementary learning tool for traditional students, but not for nontraditional students, and perceived usefulness predicted intention to use e-learning for supplementary learning more strongly for nontraditional students than for traditional students. These findings help prioritize marketing efforts for different learner groups. E-learning has become an important educational and training method for corporate training, university education, government employee training, and K-12 education. In marketing this new information technology to potential users, it is important to explore the forces that drive consumers to use or accept it.
The first task in doing so is analyzing consumer-product relationships, which entails analysis of the psychological aspects and environments involved in the use/purchase process (Peter and Olson 2005). This analysis is essential for identifying bases for effective market segmentation, which in turn is pivotal for prioritizing marketing efforts. According to Kerin, Hartley, and Rudelius (2004), customer characteristics and buying situations are the two variables used to segment consumer markets. Customer characteristics include demographics (e.g., gender, age), geographic and socioeconomic variables, and psychographic characteristics (e.g., personality, lifestyle). Buying situations include benefits sought (e.g., quality, service) and usage. Among these variables, gender and learner diversity were investigated in this study, which focused on e-learning adoption in university settings. Examining the demographics of university students in the U.S. and Taiwan reveals two noticeable changes over the past two decades: (1) an increase in female students and (2) an increase in older, working students (Ministry of Education, Taiwan 2006; National Center for Educational Statistics 2006). In the adoption of an innovation (in this case, e-learning), the predictive factors might vary across demographic groups, so it became necessary to study the effects of these two changing factors. For example, gender has been reported to influence the adoption of e-learning (Gefen and Straub 1997). Appropriate actions can then be planned separately for the female or male group to improve acceptance. Older working students tend to be enrolled as nontraditional continuing education students.
Compared with traditional higher education students, these nontraditional students generally have busy schedules and will most likely seek technological help to satisfy their educational needs; they may therefore be more accepting of e-learning. In other words, they will perceive the benefits offered by e-learning technology as higher. The purpose of this study, then, was to investigate how gender and learner diversity (nontraditional versus traditional learners) influence the acceptance of an e-learning system. To explain the factors affecting learners' cognition, affect, and behavioral intention in adopting e-learning technology, Pituch and Lee (2006) proposed and empirically tested an e-learning acceptance model, shown in Figure 1. The model identifies three technology factors (system functionality, interactivity, and response) and two individual factors (self-efficacy and Internet experience) that influence two cognitive factors (usefulness and ease-of-use perceptions), which in turn affect two behavioral intentions (intention to use for distance education and for supplementary learning). It depicts the psychological aspects of consumers' use of the product.
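A moderating effect of the kind reported in this study is conventionally tested with an interaction term in a regression. The sketch below, on simulated data with hypothetical coefficients, illustrates this for gender moderating the effect of system interactivity on use intention; it is not the authors' actual path analysis, and every number in it is assumed.

```python
# Hedged sketch: testing moderation via an interaction term (OLS).
import numpy as np

rng = np.random.default_rng(2)
n = 259  # sample size matching the study; the data themselves are simulated

interactivity = rng.normal(0, 1, n)
female = rng.integers(0, 2, n).astype(float)  # 1 = female, 0 = male
# Simulate a stronger interactivity effect for women (a moderation effect).
intention = (0.3 * interactivity
             + 0.4 * female * interactivity
             + rng.normal(0, 0.5, n))

# Design matrix: intercept, interactivity, gender, gender x interactivity.
X = np.column_stack([np.ones(n), interactivity, female, female * interactivity])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)

# beta[3] estimates the moderation (interaction) effect.
print(f"interaction coefficient estimate: {beta[3]:.2f}")
```

A clearly nonzero interaction coefficient indicates that the interactivity-intention relationship differs by gender, which is what "moderated some of the model relationships" means operationally.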
Re-innovation: The Redefined Definition
Chi-Jyun Cheng, The University of Birmingham, UK
Dr. Eric Shiu, Lecturer, The University of Birmingham, UK
When redesigning a new product, it is not only the time and the cost that matter, but also its characterization. While innovation has been much researched, re-innovation remains essentially undefined and obscure. The authors attempt to illuminate the concept by reporting the insights obtained from their exploratory research, ultimately providing a rigorous definition of re-innovation. When facing either decreasing customer loyalty or falling market share caused by rapid environmental changes, companies often introduce new products to deal with these changes quickly (Lukas and Menon 2004). However, research has shown that it is not easy for some current customers to accept a new product based on breakthrough innovations (Treacy 2004). In addition, most new consumer products fail (over 90% each year), one reason being that they use radical technologies that do not meet consumers' requirements (Christensen et al. 2005). Conversely, companies can remain competitive by offering new products that are modified versions of existing products (Rothwell and Gardiner 1989). In fact, constantly improving an existing product is a requirement of continued success for an organization (Rothwell and Gardiner 1983; Randal et al. 2005). This is partly because such an approach may not only reduce the cost of developing a new model but also decrease the lead time in bringing it to market (Zangwill and Kantor 1998). Another possibility is that new product uncertainty could decrease (Song and Montoya-Weiss 2001). It is also possible that existing customers are more accepting of redesigned current products because of the use of incremental technologies (Treacy 2004). Finally, a new model with slight changes may fit a company's strategy regarding its competitors (Lin 2003).
For example, in order to maintain competitiveness in the short term, firms can react to their competitors' actions (e.g., launching new products) by redesigning existing models. This allows a company enough time to create new products or to build other strategies in the near future. An empirical example occurred in 2004-5 when, in order to compete with a newcomer in a short time, Toyota (Taiwan) claimed to have launched a new Corolla-style car onto the market. This car, however, was only slightly changed from an earlier model, and it therefore enabled the company to keep its market share at relatively little setup cost. Rothwell and Gardiner (1989) described this process of improving existing products as "re-innovation." Given its general advantages, one would expect the concept to have a clear definition, a set of determinants, or a body of empirical findings. Surprisingly, for nearly two decades almost no attention was paid to the factors underlying re-innovation, and almost no empirical data were gathered. Although Rothwell and Gardiner (1989) defined the concept about twenty years ago, re-innovation has not been broadly or deeply explored, and in an era of rapid change this dated definition may need to be refined. Furthermore, when conducting field research with participants (described later), we discovered a wide range of opinions about the meaning of re-innovation. Above all, the absence of a widely accepted definition of re-innovation has already led to failures in re-innovation performance, because practitioners confuse it with innovation (Cheng and Shiu 2006). This research therefore attempted to address these problems by providing a rigorous definition of re-innovation. Research on re-innovation has mainly grown out of the concept of innovation.
Researchers have used the terms 'disruptive', 'discontinuous', or 'breakthrough' to describe innovation (Freeman 1974; Garcia and Calantone 2002; Tushman and Anderson 1986). According to Crawford and Benedetto (2005), whatever the actual degree of newness, an innovation must be new not only to the market and the technology but also to the firm. This suggests that whether a new product can be considered an innovation depends on the degree of newness to the market, rather than the degree of newness to the organization alone. The degree of newness may also depend on the extent to which new technologies or new elements, such as ideas, concepts, or processes, have been used. In terms of the market, Robertson and Gatignon (1986) expressed the same idea: 'innovation' is a perception of the new product among its customers. In other words, if a new product is not perceived as an improvement over the previous model, it may not be thought of as an innovation, even if it uses advanced technologies. There is, however, a grey area in such perception. For example, some customers might think that equipping a car with High Intensity Discharge (HID) lighting increases safety, while others might disagree; equipping a car with airbags might cause less disagreement in terms of passenger safety. To the car industry both are innovations, but customers may disagree. Thus, in terms of function both are innovations, but in terms of customers' perceptions agreement may be difficult to find. With respect to technology, Christensen (1997) addressed the effects of sustaining and disruptive technologies on the emergence of innovations. Sustaining technologies continuously improve the performance of established products, while disruptive technologies initially underperform established products on mainstream measures but follow a different performance trajectory that can eventually displace them.
This concept is in line with the ideas of radical and incremental innovation put forward by Anderson and Tushman (1990) and Henderson and Clark (1990). A radical innovation, using a different core technology, provides significantly higher benefits than previous products (Chandy and Tellis 1998) and has often been considered a discontinuous innovation (Robertson 1971). In contrast, an incremental innovation is the logical outcome of the radical innovation process (Henderson 1993; Mitchell 1991; Mitchell and Singh 1993).
ABC Joint Products Decision with Multiple Resource Constraints
Li-Jung Tseng, Ling Tung University, Taiwan
Dr. Chien-Wen Lai, Asia University, Taiwan
Owing to capacity constraints, companies that produce joint products have to assess the economic desirability of processing joint products beyond the split-off point, especially when market demand exceeds the company's production capacity. To maximize total profits, these companies must learn to use their limited resources efficiently. The aim of this paper is to develop an ABC approach to the further-processing decision for joint products under multiple resource constraints. With the approach presented in this paper, companies producing joint products can consider process costs and limited resources simultaneously and determine which products provide the higher unit profit per constrained resource. Applying this approach lets such companies use constrained resources more efficiently, leading to an optimal further-processing decision for joint products with multiple resource constraints. In the global competitive environment, the key factor for a successful enterprise is ongoing performance improvement, a strategic target achieved by using constrained resources efficiently. Traditional accounting methods usually assign overhead costs to products using volume-related allocation bases such as direct labor hours, direct labor costs, direct material costs, or machine hours. This does not critically distort product costs when overheads are only a small portion of total production costs. But where there is a large diversity of products, or a high level of automation, the distortion from overhead allocation will be significant, as Brimson (1991) pointed out. 
To overcome the shortcomings of traditional cost accounting and to improve managerial decision making, the activity-based costing (ABC) approach developed by Cooper and Kaplan (1988) provides a more accurate measure of cost because it traces indirect costs more closely to the different types of activities consumed. Armed with knowledge of which activities each product consumes and the resource cost of each activity, managers can compute each product's cost more accurately and realistically. Since 1988, ABC has evolved beyond the concept stage and has been widely used. Applications range from manufacturing industries (Zhuang and Burns, 1992; Dhavale, 1993) to service industries (Carlson and Young, 1993), non-profit organizations (Antos, 1992), and government bodies (Harr, 1990). The information obtained through ABC cost assignment can support decisions concerning joint product costing (Tsai, 1996), quality improvement (Tsai, 1998), research and development (David and Li, 2003), performance measurement (Laitinen, 2002), and environmental cost identification (Jasch, 2003). All of these studies agree that ABC is a useful accounting model able to present more accurate information about the cost structure. "Joint production" is the term used in economics for situations where a single process yields two or more products; in other words, when two or more products are jointly produced in a common manufacturing process, they are called joint products. All costs incurred before the split-off point are referred to as joint costs, and costs incurred for further processing and disposal are referred to as separable costs. This situation is distinguished from the more common multiple production, in which a number of different products are made by different processes in the same facilities. 
Many companies, such as petroleum refiners, lumber mills, meat packers, or flour mills, produce a multitude of products simultaneously from a joint process or series of processes. The joint products can be sold either at the split-off point or after further separate processing. The further-processing decision for joint products involves the allocation of constrained resources. To maximize operating profit, the decision about further processing must consider relevant factors such as the contribution of the additional process, resource capacity, and market demand (Hartley, 1971). Many research articles have discussed product-mix decisions. Compared with regular product-mix decisions, joint-product mix problems have two special features. First, the joint products are produced jointly, at the same time and in the same process, so the quantity of each joint product cannot be decided individually. Second, each joint product and its further processing must be performed in sequence.
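The core decision rule the paper builds on — process a joint product further only if the incremental profit is positive, and rank products by incremental profit per unit of the constrained resource — can be sketched as follows. The product names, prices, and machine-hour figures below are hypothetical, invented purely to illustrate the computation; they are not from the paper.

```python
# Hypothetical joint products: each can be sold at the split-off point or
# processed further, which consumes scarce machine hours (the constraint).
products = {
    # name: (price at split-off, price after processing,
    #        separable cost, machine hours per unit of further processing)
    "A": (10.0, 18.0, 3.0, 2.0),
    "B": (8.0, 20.0, 5.0, 4.0),
    "C": (12.0, 15.0, 4.0, 1.0),
}

def profit_per_hour(split_price, processed_price, separable_cost, hours):
    """Incremental profit of further processing per unit of the constrained resource."""
    incremental_profit = processed_price - split_price - separable_cost
    return incremental_profit / hours

# Rank products by profit per constrained machine hour; a negative value
# means the product should simply be sold at the split-off point.
ranking = sorted(
    ((name, profit_per_hour(*data)) for name, data in products.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, rate in ranking:
    print(name, round(rate, 2))
```

With the invented figures, product A earns 2.5 per machine hour, B earns 1.75, and C loses money on further processing, so scarce hours would go to A first, then B, while C is sold at split-off. The paper's full approach additionally layers ABC process costs and multiple constraints onto this single-constraint logic.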
TCE Mode Selection Criteria and Performance
Dr. Lisa Y. Chen, I-Shou University, Kaohsiung, Taiwan
The concept of transaction cost economics (TCE) has frequently been used to analyze the determinants of entry mode choices, and many empirical studies have drawn on it to identify several crucial factors. Grounded in the theoretical basis of TCE, this study investigates the factors that shape multinational firms' decisions to operate in foreign markets, with a view toward predicting performance. Using a cross-sectional descriptive survey research design, a survey was developed to test the hypotheses and measure the variables, and was mailed to executive officers of U.S.-based small, mid-sized, and large multinational companies. The results demonstrate that the TCE variables were significant predictors of entry mode choice and that the degree of control afforded by each entry mode had a significant influence on mode performance. The cost of implementing a particular mode of entry is an important consideration in the choice of entry mode (Rajan & Pangarkar 2000). Recently, researchers have stressed the need to supplement the efficiency considerations of the transaction cost model with strategic issues concerning entry modes (Aulakh & Kotabe 1997). Firms are expected to choose the governance or entry mode that minimizes the costs of carrying out particular transactions. In application, TCE is concerned with comparing different institutional arrangements for carrying out economic activity (Burgel & Murray 2000; Williamson 1985). TCE posits that firms' choice of organizational structure, including mode of foreign entry, is based on efficiency criteria for organizational structures that economize on transaction costs (Yiu & Makino 2002). 
In addition, TCE is concerned with discovering the most efficient arrangement for an economic transaction, in which the basic choice for a firm consists of carrying out the transaction itself, engaging in an external transaction, or collaborating with a third party (Gemser, Brand, & Sorge 2004). However, a multinational corporation's (MNC's) particular entry mode choice is determined by numerous factors, including resource contribution, bargaining position, and organizational capabilities; moreover, each of these factors is interrelated with and has an impact on the others (Deng 2003). In the context of managing foreign market operations, much prior research has identified and assessed the most influential factors in foreign market entry decisions. These factors offer insight into how firms select among alternative mode-choice strategies to determine the most appropriate entry mode for foreign business activities (Baird, Lyles, & Orris 1994; Brown, Dev, & Zhou 2003; Palenzuela & Bobillo 1999). Consistent with the increasing importance of international business to firms, the market share performance and profitability of overseas business activities have long been important issues in international business studies. The literature shows that a firm's profitability and market share performance are determined by many different factors (Pan, Li, & Tse 1999). As concern with entry mode performance has grown, prior research has increasingly focused on identifying factors conducive to superior performance, since several factors show a consistent pattern of association with a firm's entry mode performance (Robson, Leonidou, & Katsikeas 2002). Although much literature exists on strategy, many researchers have focused on the factors that influence the entry mode decision and on the importance of entry mode selection to a firm's competitive advantage in a new foreign market. 
However, relatively little empirical work has addressed the performance implications of these decisions (Lieblein, Reuer, & Dalsace 2002). Thus, the literature emphasizes that the determinants of entry mode choice in foreign markets are vital strategic decisions that can give global managers critical insight into how these factors influence the choice of entry alternatives and performance in international operations (Erramilli 1991; Hill, Hwang, & Kim 1990; Taylor, Zou, & Osland 1998). Improving the competence and sustained performance of MNCs is the primary rationale underlying international expansion (Zhao & Luo 2002). Issues such as which factors determine a firm's mode performance and how a firm's performance in foreign markets can be improved have received considerable research attention in recent years (Zou, Taylor, & Osland 1998). It is important to substantiate the effect of international market strategies and decisions on a firm's performance in foreign market entry. Following previous research, the present study seeks to identify how entry mode choice, based on transaction cost criteria, is linked to mode performance in foreign markets. The conceptual framework of this study, discussed below, proposes a relationship between the entry mode selected and performance, based on transaction cost criteria. The results are expected to provide a more complete explanation of the outcomes of entry mode implementation decisions. In addition, this study aims to identify the influence of governance decisions on entry mode choice, given the important financial implications of mode performance. Thus, the study offers insight into the factors that drive entry mode choices, which are an important aspect of a firm's decision-making processes and may influence its capabilities for exploitation and foreign investment. 
The transaction cost economics (TCE) theoretical approach has been applied successfully in explaining entry mode choices such as export, licensing, joint venture, and wholly-owned operations (Baran, Pan, & Kaynak 1996; Gatignon & Anderson 1988; Hill, Hwang, & Kim 1990). Each mode involves different resource deployment patterns (Agarwal & Ramaswami 1992), levels of control and risk (Kim & Hwang, 1992), and political and cultural awareness (Dalli 1995). In seeking to penetrate a foreign market, MNCs may choose from various entry modes; entry mode choice is a complex arena in which managers must choose from a range of options (Rhoades & Rechner 2001).
Cross-Cultural Leadership Behavior Expectations: A Comparison Between United States Managers and Mexican Managers
Dr. Sergio Matviuk, Regent University, Virginia Beach, VA
In the present global market, cross-national operations are common, increasing the interaction and relationships between people from different national cultures. The success of these cross-cultural business operations depends on the parties' ability to understand and predict their counterparts' behaviors. This ability is affected by people's expectations about how their counterparts should behave: if behavior expectations do not match observed behavior, the probability of conflicts and misunderstandings increases sharply. This study focused on the cross-cultural business relationship between the United States and Mexico and investigated whether there were significant differences in leadership behavior expectations between a group of U.S. American managers and a group of Mexican managers. The study was cross-sectional and field-based, using a survey instrument to gather data. The Leadership Practices Inventory (LPI; Kouzes & Posner, 1997), adapted to describe an ideal leader, was used to determine the leadership behavior expectations of each group. The results indicated that the U.S. American group had significantly higher leadership behavior expectations than its Mexican counterpart for all assessed leadership behaviors, which helps anticipate potential sources of conflict when people from these countries interact. Results also suggested that variables such as education, gender, age, and their interaction had significant effects on participants' leadership behavior expectations. This study contributes to a better understanding of the dynamics of cross-cultural business teams by identifying cultural variations in leadership behavior expectations between the U.S. and Mexico, which may help managers, trainers, and consultants predict potential problems in cross-cultural interactions more accurately and develop strategies to improve the performance of cross-cultural business operations. 
As Adler (1983) and Doney, Cannon, and Mullen (1998) agreed, the removal of trade barriers and the growth of global markets have given rise to increased association and interaction between employees and managers of different cultures, creating several new issues in cross-border business. One example of this growing cross-cultural interaction is the commercial relationship between the United States and Mexico. In 1986 the United States signed the General Agreement on Tariffs and Trade (GATT) with Mexico to reduce tariff barriers. Later, in 1992, the North American Free Trade Agreement (NAFTA) was signed by Canada, Mexico, and the U.S. to create a regional market and to facilitate trade. According to Nicholls, Lane, and Brechu (1999), NAFTA has increased business operations between Mexico and the United States, establishing new demands and relationships between Mexican and U.S. American businesspeople. This cross-national association exposes managers and business leaders to cultures other than their own, making cross-cultural teams more commonplace and important (Brodbeck et al., 2000) and posing new management challenges. During intercultural exchanges, barriers to communication and other cultural factors that facilitate misunderstandings are more likely to arise (Graham, 1985; Varner & Beamer, 1995), affecting the performance of cross-cultural business. Research indicates that some companies lack properly trained personnel to perform efficiently in the global market. Gregersen, Morrison, and Black's (1998) research on global leadership reported that 85% of Fortune 500 companies did not think they had an adequate number of global leaders, and 67% of those firms thought their existing leaders needed additional skills and knowledge before they could meet or exceed needed capabilities. 
Black and Gregersen (1999) and Hopkins and Hopkins (1998) noted that the negative consequences of an inadequate understanding of cross-cultural management and business leadership include expatriates' premature termination of cross-border assignments, disappointing managerial performance, job dissatisfaction, and reduced organizational morale, cohesion, and performance. Given the present challenges of cross-cultural business interactions in the global market, this study aimed to advance empirical research and gain insight into the relationship between culture and leadership behavior expectations, focusing specifically on differences between a group of U.S. American managers and a group of Mexican managers. The study may also help managers, trainers, and consultants predict potential problems in cross-cultural interactions more accurately and develop strategies to improve the performance of cross-cultural business operations. The results also provide empirical evidence that can contribute to the development of a general theory of cross-cultural leadership. In their experimental study of leadership behavior expectations, Lord, Foti, and De Vader (1984) posited that people use categorization processes when forming leadership perceptions: they match a target person against a cognitive prototype that contains characteristic leader attributes (Phillips & Lord, 1981). Someone recognized as a leader is also perceived as someone who behaves in a particular way; usually this person is more powerful and influential than others (Cronshaw & Lord, 1987).
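Group-mean comparisons like the ones this study reports are typically tested with a two-sample t statistic. A minimal pure-Python sketch of Welch's version (which does not assume equal variances) is shown below; the score values are invented for illustration and are not LPI data from the study.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic: difference of means over its standard error."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
        return m, v
    ma, va = mean_var(sample_a)
    mb, vb = mean_var(sample_b)
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))
    return (ma - mb) / se

# Invented expectation scores on a 1-10 scale for two hypothetical groups.
us_scores = [8.1, 7.9, 8.4, 8.0, 8.3, 7.8, 8.2, 8.1]
mx_scores = [7.2, 7.0, 7.5, 7.1, 7.3, 6.9, 7.4, 7.2]
t = welch_t(us_scores, mx_scores)
# For large samples, |t| > 1.96 suggests a difference at the 5% level.
print(round(t, 2))
```

A real analysis would compare the statistic against the t distribution with Welch-adjusted degrees of freedom rather than the large-sample 1.96 cutoff.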
An Experimental Approach to Test Theories on Time Pressure in Online Time-Limited Promotions
Dr. Ching-I Teng, Chang Gung University, Taiwan
Li-Shia Huang, Fu-Jen Catholic University, Taiwan
Wen-Chun Yeh, Chang Gung University, Taiwan
Previous research on online time-limited promotions has rarely considered the influence of time pressure using an experimental approach (except for Lin & Wu, 2005). Using an experiment, this study examines several theories on time pressure and identifies limitations of previous theories: (1) time pressure increases the tendency to defer choice, contradicting the findings of Dhar and Nowlis (1999); (2) time pressure both directly and indirectly influences total purchase amount, contradicting the findings of Herrington and Capella (1995); and (3) the construct of 'perceived fulfillment of the shopping plan' both directly and indirectly influences revisit intention, extending the findings of Teng, Huang and Chang (2006). Finally, implications and future research opportunities are discussed. What are the benefits to stores of imposing time pressure on shoppers? How does time pressure influence shoppers? Do its effects vary with the products being purchased? Previous studies did not fully answer these questions. The findings of the literature include the following: time-pressured consumers tend to accelerate information processing and filtering, and thus focus on important attributes (Svenson & Edland, 1987; Wright, 1974; Zur & Breznitz, 1981); when facing two alternatives with high choice conflict, time pressure reduces the tendency to defer choice (Dhar & Nowlis, 1999); spending per unit of time increases under time pressure (Herrington & Capella, 1995); and time pressure and store knowledge exert a combined influence on purchase-volume deliberation, failure to make an intended purchase, brand switching, unplanned buying, and information processing (Park, Iyer, & Smith, 1989). This paper makes three contributions by remedying shortcomings of the literature. 
First, past work has rarely examined the influence of perceived time pressure on choice deferral in online time-limited promotions; only Lin and Wu (2005) pioneered exploration of the impact of moderate time pressure. To fill this gap, this study revises the time pressure manipulations of Dhar and Nowlis (1999) and investigates the influence of perceived time pressure on choice deferral in a simulated online time-limited promotion environment; filling this gap is the first contribution of this paper. Second, past work overlooked how to measure decision-making time and what its impact is. This study uses Visual Basic to write a program for collecting decision-making time and examines whether consumer decision-making time affects total purchase amount; the analytical results can improve managers' knowledge of consumer decision-making time, representing the second contribution of this paper. Third, repeated purchase is a behavioral indicator of customer loyalty (Bloemer & Kasper, 1995). In online shopping environments, revisit intention is an important antecedent of repeated purchase, but research on this construct in online shopping is lacking. Consequently, this study analyzes whether revisit intention is influenced by perceived time pressure and satisfaction, filling a void in the literature and comprising the third contribution of this study. To summarize, this study has three purposes. To examine the influence of product involvement and perceived time pressure on choice deferral in the context of online time-limited promotions, in other words, to test the theory of Dhar and Nowlis (1999). To analyze how perceived time pressure influences decision-making time, total purchase amount, and relative purchase amount; the theory of Herrington and Capella (1995) can be tested under this purpose. 
To examine the influence of perceived time pressure on perceived fulfillment of the shopping plan, satisfaction, and revisit intention; this purpose validates the conclusion of Teng et al. (2006) and extends it by additionally considering revisit intention. Time-limited promotions have purchase-acceleration effects (Aggarwal & Vaidyanath, 2003), but they also generate time pressure for consumers. Time pressure is the psychological urgency felt by consumers who perceive themselves as having only limited time to make a consumption decision. Extremely high or low time pressure leads consumers to take the no-choice option in a choice task (Lin & Wu, 2005). One way for consumers to deal with time pressure is to trade off speed against accuracy (Swensson, 1972). Firms can use short-term special offers, limited supplies of goods at special prices, or short-term promotions to create time pressure for consumers. These strategies all push consumers to decide faster and help retailers increase consumer traffic. Time pressure can even increase the amounts consumers purchase during a particular period (Herrington & Capella, 1995); thus, imposing time pressure on consumers seems beneficial to firms. However, time pressure also increases the chance that consumers will fail to achieve their purchase plans when grocery shopping (Park et al., 1989), so its effects are not all positive for firms. One of the negative effects is choice deferral, which has been discussed thoroughly in several studies (Dhar, 1997; Dhar & Nowlis, 1999; Tversky & Shafir, 1992).
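The abstract mentions instrumenting the experiment to record each participant's decision-making time (the authors wrote their collector in Visual Basic). The same idea can be sketched in Python with an injectable clock so the logic is testable deterministically; the class and method names here are ours, not the authors'.

```python
import time

class DecisionTimer:
    """Records the elapsed time between presenting a stimulus and the response."""
    def __init__(self, clock=time.perf_counter):
        self.clock = clock  # injectable for deterministic testing
        self._start = None

    def show_stimulus(self):
        self._start = self.clock()

    def record_response(self):
        if self._start is None:
            raise RuntimeError("stimulus was never shown")
        elapsed = self.clock() - self._start
        self._start = None
        return elapsed

# Deterministic demonstration with a fake clock that advances 0.5 s per call.
ticks = iter([0.0, 0.5])
timer = DecisionTimer(clock=lambda: next(ticks))
timer.show_stimulus()
decision_time = timer.record_response()
print(decision_time)  # 0.5
```

In a real experiment the default `time.perf_counter` clock would be used, and each recorded elapsed time stored alongside the participant's choice for the subsequent purchase-amount analysis.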
Is there a Dividend to an Institution for having an Accredited College of Business?
Dr. Antonina Espiritu, Hawaii Pacific University, Honolulu, Hawaii
The main purpose of this study is to determine whether there is a significant difference in full-time retention and graduation rates between institutions with and without accredited business schools. Using a sample of higher education institutions in the far west region of the United States, the empirical results of a cross-sectional regression analysis indicate that, on average, accredited institutions enjoy a 23% higher graduation rate and about a 15% higher full-time retention rate than non-accredited institutions. Even after controlling for other relevant institutional factors and student characteristics, the positive effect of holding accreditation from the Association to Advance Collegiate Schools of Business (AACSB) remains robust and statistically significant. Therefore, resources devoted by higher education institutions to achieving and maintaining high-quality academic standards in business and management education through AACSB accreditation pay off not only for the institutions but, more importantly, spill over to all major stakeholders. In the United States, the Council for Higher Education Accreditation (CHEA) is the national organization that coordinates the accreditation of universities and colleges, serving as the main voice for voluntary accreditation and quality assurance to Congress and the Department of Education (www.chea.org). Accreditation in higher education is a process of self- and peer review to ensure that an institution meets and maintains good academic standards in all areas, such as administration, instructional resources, faculty, physical facilities, student recruitment, and the curriculum. Six major regional accreditation associations normally handle primary, or institutional, accreditation. A secondary level of accreditation normally applies to schools, colleges, and specialized programs or departments that are part of an institution. 
In the field of business, accreditation by the Association to Advance Collegiate Schools of Business (AACSB International) represents the gold standard of achievement for business schools worldwide. As a specialized agency, AACSB International accredits undergraduate and graduate business administration and accounting programs. In general, accreditation cuts across state lines, assuring students, parents, and the public that a given school is focused on student achievement and on providing an efficient, effective, and enriching learning environment. Accreditation also assures the public that accredited schools adhere to high-quality standards based on the latest research and successful professional practice. Hence, attending an accredited school should help students be more competitive in the job market and enhance graduates' earning potential and prospects for promotion. Accreditation should also ease the transition of students as they transfer from one accredited school to another: it allows institutions to which students transfer, in the same or another state, to assess the quality of the students' academic training and accept credits from their former schools. Most AACSB-accredited schools will only recognize undergraduate credits toward first-year MBA core course requirements if those credits come from other AACSB-accredited schools. Accreditation also creates a gateway for students to apply for federal grants, scholarships, or state financial aid programs. AACSB International states that it can help assure major stakeholders that accredited business schools manage resources to achieve a vibrant and relevant mission; advance business and management knowledge through faculty scholarship; provide excellent teaching and current curricula; cultivate meaningful interaction between students and a qualified faculty; and produce graduates who have achieved specified learning goals. 
(www.aacsb.edu/accreditation) The main motivation of this study is to examine whether an institution earns a dividend from having AACSB accreditation for its college of business. The dividend, that is, value in excess of what is normally expected, is measured by examining whether there is a significant difference in full-time retention rate and/or graduation rate between institutions with accredited business schools and those without. The primary objective, therefore, is to test the hypothesis that institutions with accredited business schools enjoy higher retention and/or graduation rates than institutions without accredited business schools or programs, after controlling for relevant institutional and student characteristics. This study can also provide insights to institutions that are in the process of pursuing accreditation. There are numerous studies on the economics of higher education, many of which estimate rates of return to higher education. Some studies (Ehrenberg and Brewer, 1996; Brewer, Eide & Ehrenberg, 1999) attempted to control for selection by segmenting four-year institutions based on admissions policy and on whether the institution is private or public; they found that attendance at the most selective private institutions confers extra economic advantages on students, in terms of higher early-career earnings and better probabilities of being admitted to the best graduate and professional schools. Another major area of empirical research on the economics of higher education has been the role of various public policies in enhancing college enrollment rates, persistence in college, and graduation rates. Some studies (McPherson & Shapiro, 1991; Kane, 1994) have estimated the effects of various federal aid programs, as well as the levels of public and private tuition, on college enrollment and graduation rates. 
Seftor and Turner (2002) found that the Pell Grant program had sizable effects on the college enrollment rates of potential students. The modeling of a university's behavior as an organization that produces multiple outputs and is subject to several production constraints was first introduced in the works of Garvin (1980) and James (1990). Rothschild and White (1995) and Winston (1999) recognized that, in modeling a university's behavior, its customers (students) are vital inputs to its production function; as such, an institution may engage in an "arms race" of increased spending to distinguish itself from other educational providers and attract more potential students.
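The accreditation effect the study reports is the coefficient on an accreditation dummy in a cross-sectional regression. With the dummy as the sole regressor, the OLS slope reduces to the difference between accredited and non-accredited group means, a fact the sketch below verifies on invented data (the study's actual estimates also include institutional and student-characteristic controls, which would make this a multiple regression).

```python
# Invented graduation rates (%) for accredited (d=1) and non-accredited (d=0) schools.
data = [(1, 68.0), (1, 72.0), (1, 70.0), (0, 46.0), (0, 48.0), (0, 47.0)]

# OLS slope of y on dummy d: cov(d, y) / var(d).
n = len(data)
d_mean = sum(d for d, _ in data) / n
y_mean = sum(y for _, y in data) / n
cov = sum((d - d_mean) * (y - y_mean) for d, y in data) / n
var = sum((d - d_mean) ** 2 for d, _ in data) / n
slope = cov / var

# Equivalent computation: difference between group means.
acc = [y for d, y in data if d == 1]
non = [y for d, y in data if d == 0]
diff = sum(acc) / len(acc) - sum(non) / len(non)

print(slope, diff)  # both 23.0 on these invented numbers
```

The invented sample is constructed so the gap equals the paper's headline 23-point figure, purely to make the equivalence easy to check by hand.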
The Development of a Competency Ontology
Jui-Hung Ven, China Institute of Technology, Taiwan, R.O.C.
Chien-Pen Chuang, National Taiwan Normal University, Taiwan, R.O.C.
We first explore the competency standards systems of America, England, and Australia and find that all of them emphasize professional competencies rather than general competencies. Hence, we propose a competency ontology structure, from the viewpoint of competency standards, that uses the domain ontology as the stem and competencies or skills as the branches. In other words, all competencies or skills are linked directly to the concepts of the domain ontology and are described as instances with the format "action verb + object + condition." The competency- or skill-related knowledge, the performance criteria, and the needed abilities are attached to the skills. We also use slots to give synonymous meanings to each concept and skill, enriching their semantics. Based on the proposed structure, we construct a software competency ontology to be used in our future research. Ontology, with a big O, is the branch of philosophy that studies the nature of being; an ontology, with a small o, is an explicit specification of a conceptualization in a particular domain (Gruber, 1993). Hence, Ontology is the philosophical theory that allows us to construct an ontology or ontologies (Guarino & Giaretta, 1995; Guarino, 1997). In addition to concepts and relationships, ontologies use other terms to describe their properties, constraints, and instances. The terms most frequently used are concept, attribute, value, and instance in natural-language representation, while class, slot, facet, and instance are used in object-oriented environments and ontology tools. Ontology-based applications have expanded into many areas such as information extraction, virtual enterprise, semantic search, knowledge portals, e-learning, job recruitment, knowledge management, information exchange, and recommender systems (Staab & Studer, 2004). 
As for applications of competency ontologies, Hirata, Ikeda and Mizoguchi (2001) propose a total solution for human resource development. They use a competency ontology linked to an education-training system, personal profiles, organizational processes, career planning, job recruitment, and competency assessment to overcome the obstacles encountered in human resource work. Their competency ontology includes core competencies such as listening, speaking, reading, and writing; work competencies such as coordination, cooperation, and teamwork; and meta-competencies such as supervising, action, and thinking. These competencies, all general competencies, are needed in every workplace; the ontology does not include the professional competencies of a particular domain. From the point of view of human resource development, however, one must have the professional competencies that enable employment in a particular domain as well as the general competencies that support their development. Because it lacks professional competencies, the competency ontology may limit the effectiveness of the total solution for human resource development. Woelk (2002) proposes a competency-based, just-in-time learning system with four components: competency ontologies, which describe the tasks to be performed; learner profiles, which record individual abilities, preferences, and experiences; e-learning training courses; and enterprise processes. By matching competency ontologies against learner profiles, the learning management system identifies competency gaps and delivers adaptive course content to learners. CommOnCv is a multidisciplinary integrated project that combines competency ontology, competency management, and knowledge engineering to promote electronic job matching (Michel, Mounira, Michel, & Francky, 2003; Draganidis, Chamopoulou, & Mentzas, 2006). 
Mochol, Oldakowski and Heese (2004) construct a virtual employment market platform which uses semantic search to improve the effectiveness of job recruitment processes based on competency ontologies. A competency ontology enables an enterprise to manage employees' competencies, recruit the most suitable candidates, analyze competency gaps, develop appropriate training plans, and construct competency-based Web services. A competency ontology can also be used in information extraction, semantic matchmaking, and semantic search. However, most competency ontology-related research does not specify the structures, or focuses only on general competencies. In this paper, we present a class hierarchy structure for the competency ontology from the point of view of competency standards systems. In the next section, we review competency standards systems in order to form the hierarchical structure of the competency ontology. In Section Three, we give a description of the competency ontology and its architectural analysis. Based on the proposed structure, we construct a software competency ontology. In Section Four, we draw conclusions and describe our future work based on the created competency ontology. The HR-XML Consortium (2001) describes the appropriate scope for competency as a set of KASOC, an acronym for knowledge, ability, skill, and other characteristics. Rychen and Salganik (2003) merge other characteristics into ability. Hence, a competency is an identifiable and measurable piece of knowledge, skill, or ability which a person may possess and which is also necessary for the performance of a task within a specific workplace context. Competency is concerned with what people can do rather than what they know. This has several implications: competency is an outcome, competency must be clearly defined as standards, and competency is a measure of what someone can do. 
In the representation of competency, one term may represent several competencies, and different terms may be used for the same competency.
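The stem-and-branch structure proposed above can be sketched in code. The class and field names below are illustrative stand-ins, not the paper's actual implementation; the sketch simply shows a domain-concept stem carrying skill instances in the "action verb + object + condition" format, with synonym slots and attached knowledge, performance criteria, and abilities.

```python
# Minimal sketch of the proposed competency ontology structure.
# All class and attribute names are illustrative assumptions.

class Concept:
    """A node of the domain ontology (the 'stem')."""
    def __init__(self, name, synonyms=None):
        self.name = name
        self.synonyms = synonyms or []   # slot enriching the semantic meaning
        self.children = []               # sub-concepts of the domain
        self.skills = []                 # competency instances (the 'branches')

class Skill:
    """A competency instance in 'action verb + object + condition' format."""
    def __init__(self, verb, obj, condition, synonyms=None):
        self.verb, self.obj, self.condition = verb, obj, condition
        self.synonyms = synonyms or []   # slot for alternative phrasings
        self.knowledge = []              # related knowledge attached to the skill
        self.performance_criteria = []   # how mastery of the skill is judged
        self.abilities = []              # underlying abilities needed

    def label(self):
        return f"{self.verb} {self.obj} {self.condition}"

# Example fragment of a software competency ontology (contents made up)
testing = Concept("software testing", synonyms=["software verification"])
skill = Skill("write", "unit tests", "for a given module",
              synonyms=["author unit tests"])
skill.performance_criteria.append("all branches of the module are covered")
testing.skills.append(skill)

print(skill.label())   # "write unit tests for a given module"
```

Because every skill hangs off a domain concept, synonym slots on both levels let two differently worded skills be recognized as the same competency, which is exactly the representation problem noted above.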
Subsidiary Initiatives in Subsidiary Role Changing: The Case of the Bartlett and Ghoshal Typology
Tzu-En Lu, Yuan Ze University, Chungli, Taiwan (R.O.C.)
Lu-Jui Chen, Yuan Ze University, Chungli, Taiwan (R.O.C.)
Wen-Ruey Lee, National Taipei College of Business, Taipei City, Taiwan (R.O.C.)
In this study we develop a model of subsidiary evolution that examines the conditions under which subsidiary initiatives drive changes in the subsidiary's role. We see subsidiary initiatives as entrepreneurial processes that identify new ways for the subsidiary to expand resources and to cultivate corporate capabilities. Bartlett and Ghoshal's typology of subsidiaries is our basic frame of reference for inferring the effects that subsidiary initiatives cause. A change in a subsidiary's role is a function of subsidiary initiatives, and initiatives position the subsidiary for local learning and global integration. Our provisional conclusion is that MNE subsidiaries not only contribute to the creation of firm-specific advantages, but also drive the evolutionary process through their own distinct initiatives. Operating only in the home market may allow an enterprise to survive in a primarily domestic industry, but moderate international expansion often brings current benefits and provides a base for future success should the enterprise become more global.
The Effect of Business Cycles on Transition Probabilities in the Labor Market
Dr. Ben-David Nissim, The Max Stern Academic College & University of Haifa, Israel
In the theoretical model presented here, people choose between two economic states: being out of the labor force or searching for a job, i.e., being unemployed. Searching for a job enables them to move into a new economic state, that of being employed. Firms will offer vacancies as long as their economic value is positive. If they find a worker, they get the economic value of an occupied job. In this economic environment, the transition probabilities between these states play a central role in determining the economic value of each state for the firms and the agents. The wage rate is determined by comparing the economic value of being out of the labor force with the economic value of being unemployed. In equilibrium, the wage rate, the transition probabilities, and the unemployment rate are determined simultaneously. Changes in exogenous variables such as productivity, search cost, government benefits to people out of the labor force, or unemployment benefits would lead to a change in the transition probabilities, the unemployment rate, and the vacancy rate. The labor market is characterized by large flows of workers in and out of employment. The transition of workers in and out of employment is connected to the rates of job creation and job destruction. When more jobs are created, the flow into employment increases, and when jobs are destroyed, the flow out of employment increases. I present a model that emphasizes both changes in the separation rate and changes in the probability that the unemployed find a job as factors that determine changes in unemployment as well as in the number of vacancies. The transition probabilities between employment and unemployment are affected by changes that are held exogenous. The matching function with two-sided search (developed by Pissarides (1984), Mortensen (1982), Diamond (1982) and others) is central to the theoretical literature on labor market flows. 
The main innovation in those papers is that market frictions are modeled by an exogenously given matching function that relates the number of matches per unit of time to the stocks of workers and firms engaged in searching. The matching function thus captures the technology that brings agents together in the market. Wages are set by decentralized bargaining between the worker and the firm after they are matched. Since finding a new trading partner is a costly and time-consuming process for both workers and firms, there is a surplus associated with the match, and this surplus is split according to the (asymmetric) Nash sharing rule. The separation rate, as well as the growth rate of the labor force, is exogenously given, and together with the matching rate of unemployed workers it determines the unemployment rate. Many papers have used this basic framework for analyzing the labor market, although some of its basic assumptions contradict empirical findings. During the last 15 years, researchers have changed the basic assumptions of the model in order to make it conform to reality. The gap between the model and reality was large because of the assumption of a constant separation rate. This assumption does not fit key facts that have emerged regarding job flows. First, job destruction is relatively more important than job creation over time. That is, business cycles are driven primarily by large episodes of job destruction, with relatively stable levels of job creation (see Davis and Haltiwanger (1990, 1992, 1999), Faberman (2002) and others). Since job destruction is not constant, the separation rate should not be regarded as constant. Blanchard and Diamond (1989, 1990) have found empirical evidence that during recessions the flow out of employment is the main reason for the increase in the unemployment rate, while the decrease in the flow from unemployment into employment has secondary significance. 
The effects of the business cycle on unemployment were also studied by Pissarides (1987, 1990, 2000), who emphasized the importance of changes in the matching function as a major reason for changes in the unemployment rate. Dramatic changes in transition probabilities during the business cycle were also found empirically by Ben-David and Weiss (1995). Ben-David (2005) proved that in equilibrium any change that affected the economic value of a filled job would be followed by a change in the separation rate. In this paper, I relax the assumption of a constant separation rate and examine the reactions of the separation rates and transition probabilities of each kind of worker to various shocks, and the effect of these changes on the unemployment rate. The model describes the effect of exogenous changes in economic variables, such as productivity, search cost, government benefits to people out of the labor force, or unemployment benefits, on the flows and on the unemployment level of the workers. This paper is organized in the following manner. After the matching framework is laid down in section 2, the competitive search equilibrium is presented in section 3. Conclusions are given in section 4. We assume that there is a well-behaved matching function, which gives the number of jobs formed at any moment in time as a function of the number of workers looking for jobs and the number of firms looking for workers. Vacant jobs and unemployed workers become matched to each other according to the prevailing matching technology. Unemployment persists in steady state because, during the matching process and before all unmatched job-worker pairs meet, some of the existing jobs break up, providing a flow into unemployment, and the labor force increases, providing another flow into unemployment.
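The steady-state logic of this matching framework can be illustrated with a small numerical sketch. The Cobb-Douglas matching function and all parameter values below are assumptions chosen for illustration (the model itself leaves the matching function general); with labor-force growth set aside, the flow-balance condition s(1 − u) = f·u yields the steady-state unemployment rate u = s/(s + f).

```python
# Illustrative steady-state computation for a two-sided matching framework.
# The Cobb-Douglas form and all parameter values are assumptions for the sketch.

def job_finding_rate(u, v, A=0.6, alpha=0.5):
    """Matches per unemployed worker, m(u, v) / u, with a Cobb-Douglas
    matching function m = A * u**alpha * v**(1 - alpha)."""
    return A * u**alpha * v**(1 - alpha) / u

def steady_state_unemployment(s, f):
    """Flow balance s*(1 - u) = f*u  =>  u = s / (s + f)."""
    return s / (s + f)

# Hypothetical stocks: unemployment rate 6%, vacancy rate 3%
f = job_finding_rate(u=0.06, v=0.03)
u_star = steady_state_unemployment(s=0.02, f=f)
print(round(u_star, 4))

# A rise in the separation rate s (e.g., a job-destruction episode)
# raises steady-state unemployment, holding the finding rate fixed.
u_recession = steady_state_unemployment(s=0.04, f=f)
print(u_recession > u_star)
```

This is why relaxing the constant-separation-rate assumption matters: shocks that move s feed directly into u even when the matching rate f is unchanged.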
Market Orientation Strategies and Business Performance: Evidence from Taiwan’s Life Insurance Industry
Dr. Yuan-Hong Ho, Feng Chia University, Taichung, Taiwan
Dr. Chiung-Ju Huang, Feng Chia University, Taichung, Taiwan
A comprehensive measure of an insurance company's market orientation, which includes customer orientation, distributor orientation, competitor orientation, environment orientation, and inter-functional coordination, was developed. The relationship between the degree of market orientation and the objective business performance of insurance companies in Taiwan was examined. The results of this study indicate that the difference in the degree of market orientation between companies is insignificant, and there is no empirical support for the existence of a positive and significant relationship between a company's business performance and its degree of market orientation. Further study on different types of companies indicates that the relationship between market orientation and business performance is significant in the newly established branches of foreign companies. Ever since Taiwan's insurance market first opened its doors to American companies in 1987, the market has further expanded and encouraged the development of both domestic and international insurance companies. By the end of 2005, Taiwan's insurance market consisted of thirty independent insurance companies (domestic and international). More specifically, by the end of fiscal 2004, financial assets held by insurance companies represented 16.02% of the total value held by the nation's financial institutions. The strong growth in this sector has given the insurance market substantial influence over Taiwan's financial stability. However, because insurance coverage represents an intangible good, insurers are aware of the importance of differentiating on service, quality, and customer orientation. Currently, insurance companies in Taiwan enjoy high potential growth because the ratio of the number of life policies to total population is relatively lower than that in the United States and Japan. 
Despite the prospective growth, insurance companies in Taiwan work in a highly competitive environment where customers have constantly growing expectations and consequently low loyalty. Given the nature of Taiwan's insurance market, an insurance company's market orientation and management strategy become exceedingly important. Each insurance company should gather market information, encourage company-wide participation, and efficiently allocate company resources to form a competitive, adaptive, effective, and proactive strategy to locate market opportunities. In the 1990s, market orientation (MO) was one of the most popular topics discussed by various levels of management. Kohli and Jaworski (1990) first described and conceptualized MO in terms of three constructs: market intelligence generation, intelligence dissemination, and responsiveness to market intelligence. Narver and Slater (1990) developed a valid measure of responsiveness to market intelligence based on three behavioral components: customer orientation, competitor orientation, and inter-functional coordination in a business. Other researchers have attempted to approach MO from different perspectives. While Hunt and Morgan (1995) establish MO as a corporate resource, Deshpande et al. (1993) and Slater and Narver (1995) compare MO to a type of corporate culture where company-wide support can be achieved through a unified corporate culture that upholds the essence of MO. Another perspective, held by Jaworski and Kohli (1993), is that MO is a type of innovation. Consequently, although many unique studies have been conducted on this topic, the primary focus remains on the antecedents of MO (Jaworski and Kohli, 1993), the relationship between MO and business performance, and the impact of environmental factors on market orientation and performance (Jaworski and Kohli 1993; Slater and Narver 1994a; Narver and Slater 1990). 
In theory, one could imagine how MO would directly impact business performance. Nevertheless, this claim is not yet fully supported by the research conducted on this subject. In order to explore other factors behind this relationship, some researchers emphasize organizational learning mechanisms and claim that organizational learning orientation and MO together increase business performance (Slater and Narver 1995). In the same research, Slater and Narver (1995) further point out that organizational learning orientation is actually a subpart of MO. In other studies, researchers explain how innovation is the main determinant of MO and business performance (Deshpande et al. 1993; Slater and Narver 1994b). Although some studies point out the positive correlation between innovation and business performance, the field lacks sufficient research evidence supporting the existence of a causal relationship. Our study portrays MO as the entirety of business behavioral and procedural activities and, adopting the perspective held by Coustre and Martinez (1997), divides MO into five distinct components: (1) Customer orientation (CO): measures the level of importance insurance companies place on meeting client needs. (2) Sales force orientation (SFO): assesses the level of importance insurance companies place on their sales and field force, and whether the company understands and meets the needs of these employees. (3) Competitor orientation (COM): assesses the level of familiarity insurance companies have with current and anticipated competition, and their corresponding strategies and responses toward competition. (4) Environmental orientation (EO): measures the level of understanding insurance companies have of the political and social environment surrounding them, and how they respond and adjust to changes. 
(5) Inter-functional coordination (IFC): examines how insurance companies handle communication across departments and manage departmental issues and negotiations.
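As a purely hypothetical illustration of how the five components might be scored, the sketch below averages made-up Likert-scale item responses into component scores and an unweighted composite MO score; the study's actual survey instrument, items, and weighting are not reproduced here.

```python
# Hypothetical scoring sketch for the five market-orientation components.
# All item values are invented and the equal weighting is an assumption;
# this is not the study's actual instrument.

from statistics import mean

# Likert-scale (1-7) item responses per component for one insurer (made up)
responses = {
    "CO":  [6, 5, 7, 6],   # customer orientation
    "SFO": [5, 5, 6],      # sales force orientation
    "COM": [4, 5, 4],      # competitor orientation
    "EO":  [3, 4, 4],      # environmental orientation
    "IFC": [5, 6, 5],      # inter-functional coordination
}

# Average the items within each component, then average the components
component_scores = {k: mean(v) for k, v in responses.items()}
mo_score = mean(component_scores.values())   # unweighted composite MO score

print({k: round(s, 2) for k, s in component_scores.items()})
print(round(mo_score, 2))
```

A composite of this kind is what a correlation with an objective performance measure would then be run against, company by company.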
Impact of Cultural Barriers on Knowledge Management Implementation: Evidence from Thailand
Tanin Kaweevisultrakul, Ramkhamhaeng University (IIS), Bangkok, Thailand
Dr. Peng Chan, California State University, Fullerton, California
Today, knowledge management (KM) is widely regarded as an imperative tool to maintain and enhance a company's core competencies. Although many companies have begun initiating KM programs, little emphasis has been placed on addressing the cultural barriers that may hinder the effectiveness of such programs. Existing scholarly and professional works have pointed out that cultural barriers are among the major obstacles to the successful implementation of any KM program. The purpose of this research is to identify and examine the types of cultural barriers that affect the implementation of KM programs in Thailand. Presently, knowledge management is regarded by many as an important tool to maintain and enhance a company's core competencies and competitiveness. Disappearing boundaries, globalizing competition, and rapidly changing technology and business life – all these factors lead the economy in a knowledge-based direction. Keskin (2005) stated that "…firms have become much more interested in stimulating knowledge, which is considered as the greatest asset for their decision making and strategy formulation". In this sense, knowledge is a key resource bestowing a competitive advantage on entrepreneurial firms. In the new economy, effective knowledge management is vital because the achievement of a sustained competitive advantage depends on a firm's capacity to develop and deploy its knowledge-based resources (Perez & Pablos, 2003). In recent years, many Thai companies have started implementing KM programs in recognition that the knowledge possessed by an organization's employees is a highly valued, intangible, and strategic asset (DeTienne et al., 2004). While these companies have already initiated such programs, little emphasis has been placed on addressing the cultural barriers that hinder their effectiveness. Many scholars and professionals agree that cultural barriers are among the major obstacles that KM managers must encounter and resolve in order to successfully execute a program. 
KM implementation requires changes in an organization's culture, especially in employees' involvement and participation; hence, human issues must be considered a key factor (Moffett, McAdam & Parkinson, 2003). One of the core necessities for knowledge creation, transfer, and sharing is that employees contribute their knowledge or expertise to the company (DeTienne et al., 2004). The purpose of this paper is to clarify the prevailing values and beliefs within an organization's culture that challenge KM initiatives in Thailand. This paper focuses on three key elements that many scholars and practitioners have found to be essential components of effective KM initiatives, namely, collaborative involvement, trust, and incentives. Knowledge is an organized combination of data, assimilated with a set of rules, procedures, and operations learnt through experience and practice (Keskin, 2005). There are two critical dimensions to understanding knowledge in a practical, organizational context. First, knowledge exists at individual, group, and organizational levels. Second, knowledge is either explicit or tacit (De Long & Fahey, 2000). Explicit knowledge is the type of knowledge that can be easily documented and shared. It can be created, written down, transferred, or transmitted among organizational units verbally or through computer programs, patents, diagrams, and information technologies (Choi & Lee, 2003; Perez & Pablos, 2003). Firms using an explicit-oriented KM strategy can achieve scale economies and organizational efficiency through reusing codified knowledge. Tacit knowledge is what we know but cannot explain (De Long & Fahey, 2000). This form of knowledge: 1) is embodied in mental processes; 2) has its origins in practices and experiences; 3) is expressed through ability applications; and 4) is transferred in the form of learning by doing and learning by watching (Choi & Lee, 2003). 
Firms that focus on tacit knowledge (which is hard to imitate, creates competitive advantage, plays a key part in the innovation process, and leads to individual creativeness) can develop core processes, obtain new understandings, combine their abilities and experiences, and rapidly respond to new ideas, so that they can reap great advantages, especially in dynamic environments (Nonaka & Takeuchi, 1995). KM can be described as a set of systematic and organized approaches that ultimately lead organizations to create new knowledge, manipulating both tacit and explicit knowledge and exploiting the advantages of each. The introduction of KM comes from the need to capture, catalogue, and preserve the knowledge that is part of organizational memory and typically resides within the organization in an unstructured way. The objective of knowledge management is to support the creation, transfer, and application of knowledge in organizations, to convert tacit knowledge into explicit knowledge, and to transform individual knowledge into organizational knowledge (Wang, 2004). KM is the study of strategy, process, and technology to acquire, select, organize, share, and leverage business-critical information and expertise so as to improve company productivity and decision quality. Knowledge management embodies the synergistic integration of information processing capacity and the creative capacity of human beings in order to maximize the responsiveness and flexibility of organizations. A successful organization must be able to manage various types of knowledge and maximize its strategic value. Toward this end, there is an indisputable need to enable managers to promote knowledge sharing and facilitate the acquisition and retention of intellectual capital. Best practices in knowledge sharing have been gaining increased attention amongst researchers and business managers in recent years. 
This is because the commercial success and competitive advantage of companies seem to lie increasingly in the application of knowledge and in the location of those parts of the organization where knowledge-sharing practices can assist in optimizing business goals.
A Study of Human Resource Development and Organizational Change in Taiwan
Dr. Min-Huei Chien, The Overseas Chinese Institute of Technology, Taiwan
Management of change in organizations has been one of the most important concerns of professionals in recent times. This paper provides an understanding of human resource management (HRM) practices for organizational change, explores the development of HRM in the organizational culture context, and provides some disciplines for businesses that wish to develop an in-depth knowledge of organizational change. Mainland China's economy has developed very fast and has a huge domestic market. Many of Taiwan's companies have invested in China, which has caused a great deal of organizational change. With the rapid rise of organizational change in Taiwan, those seeking to understand the dynamics of change are most frequently confronted with questions such as: What is the concept of change? How does one decide what to change, and then how to change it? Is the implementation of change always painful? What does one need to keep in mind while implementing changes in an organization? The human resource development (HRD) issues and challenges for employers and their organizations, in the world and in Taiwan, play an important role in business success. HRD, in an integrated sense, also encompasses health care, nutrition, population policies, and employment. This paper covers the development of people through education and training in a national context as well as within enterprises, and concludes with a reiteration of the importance of HRD to enterprises and countries. Taiwan's remarkable economic transformation in the last 30 years has been, to a large extent, due to its capacity to leverage markets to achieve economic performance far beyond its production possibilities. Such sterling economic performance was the result of far-sighted policymakers managing and optimizing the emerging external environment and existing domestic resources. As a result, Taiwan has been one of the most favored areas in Asia for investment by transnational corporations. 
Several recent major events that occurred in the global and regional environment have had serious long-term implications for Taiwan's economic viability and performance. First, the Asian financial crisis in 1997 devastated the financial and real sectors of many Southeast Asian economies that serve as Taiwan's hinterland for resources and markets. As a regional hub, Taiwan cannot prosper as long as the regional hinterland remains economically weak and socially and politically unstable. Taiwan needs prosperous and dynamic Southeast Asian economies to complement it in an environment of competitive regional clustering. The second major external change is the accelerating market liberalization and borderless nature of the global economy. The global marketplace has become strikingly more competitive and more complex as a result of the relentless process of global production networking. The new global economic structure, widely known as the New Economy, is the product of three main elements: information technology, changes in government policy, and corporate restructuring. Information technology is an important element shaping the contours of the global economy and contributing to the shortening of the product cycle and rapid changes in the comparative and competitive advantages of trading nations. The shift to information technology as the basis for industrial production is forcing a change in industrial organization. The comparative advantage of many industrial economies, including that of Taiwan, is affected by this new configuration of industrial location. Differences in the adaptability of East Asian economies to these external changes are disturbing the pattern and distribution of industrial locations in East Asia. The third major external factor is the world's relationship with China. When China was accepted into the WTO, Asian countries were alarmed by the prospect of head-to-head competition for trade and investment with China. 
It is expected that China's economic competitiveness will continue to increase against Taiwan's export products, especially in labor-intensive and mid-range industrial outputs. This study could contribute to progress towards greater organizational effectiveness by making efforts to find answers to the HRD and organizational change questions in Taiwan that have resulted from these three major factors. It helps businesses to develop the understanding, knowledge, and skills necessary to build an effective HRD model, which is particularly appropriate for any organization in change. The paper also highlights some of the important issues involved in the management of organizational change, organizational culture change, and conflict management which warrant further empirical and theoretical development. According to Lee (2002), the development of Taiwan's human resources can be divided into four distinct periods. The first period can be traced back to the 1930s and 1940s, when Taiwan was still under Japanese occupation. During this period, the Taiwan government's economic development plan had already accomplished its aim of developing its human resources to a primary-school level, which is needed for the first stage of economic development. By the fourth period, Taiwan's workforce was still behind world-class levels, partly because of the small number of colleges and universities on the island and partly because a large share of college and university graduates went to American, Canadian, or European universities for their studies; then, once they completed their studies, a very large percentage of them remained and worked in the countries that had provided them with their advanced training. However, beginning in the mid-1980s, more and more members of this highly trained workforce started to return to Taiwan and helped to develop the high-tech industries.
Non-Parametric Versus Parametric Methods for Testing Means Equality: The Case of Stock Means
Dr. Paraschos Maniatis, Athens University of Economics and Business, Athens
The scope of this paper is twofold: a) to compare the stock closing prices on the London Stock Exchange of two related sectors – the food-processing industry and the food-retailing branch – and b) to investigate in each branch the possible existence of a strong correlation between closing price and the size of the firm. To this end we have employed the appropriate statistical tools – the test of means equality and the correlation coefficient. The first problem relates to the hypothesis that the stocks of homologous branches, such as those of food processors and food retailers, should behave in the same manner on the stock exchange. The second problem is of the same nature but concerns each sector separately, namely whether the closing prices are in any identifiable relationship with the particular firm's size. In order to research the relationship between stock market price and size, four separate measures of size were taken – total value (capitalization), total assets, turnover, and gross profit. A set of data was obtained by taking randomly two sectors on the London Stock Exchange: one consisting of 21 food-processing companies (Group B) and the other consisting of 16 food-retailer firms (Group A). The Money World Stock Sectors was used to obtain the data concerning gross profits, total assets, and turnover for both types of firms. Random selection sampling does not mean haphazard selection (Goldstein and Lewis, 1996). It means that each member of the population has some calculated chance of being selected. A random sample should give every member of the population an equal chance of selection. In our case, to select the random sample, a list was used where each member was given a number. Then a series of random numbers was used to select the sectors that take part in this analysis. For prices we used the closing prices of the stocks on a given date. 
In order to investigate the posed problems we have adopted the following approach: investigate the behaviour of each size variable and that of the closing prices, by constructing the relevant histograms; investigate normality in the distribution of the closing prices – an indispensable condition for the application of the parametric t-test and the analysis of variance (ANOVA) techniques; and apply nonparametric techniques for testing means equality and price-size correlation. For the auxiliary calculations we have used the Excel program, the most appropriate for spreadsheet tasks. For the graphs, the Mann-Whitney U test, and the rank correlation, the SPSS program has been employed. A parametric statistical test is a test whose model specifies certain conditions about the parameters of the population from which the research sample was drawn. Since these conditions are not ordinarily tested, they are assumed to hold. The meaningfulness of the results of a parametric test depends on the validity of these assumptions. Parametric tests also require that the scores under analysis result from measurement with the strength of at least an interval scale. A nonparametric statistical test is a test whose model does not specify conditions about the parameters of the population from which the sample was drawn. Certain assumptions are associated with most nonparametric statistical tests, i.e., that the observations are independent and that the variable under study has underlying continuity, but these assumptions are fewer and much weaker than those associated with parametric tests. Moreover, nonparametric tests do not require measurement as strong as that required for parametric tests; most nonparametric tests apply to data on an ordinal scale, and some apply also to data on a nominal scale. In this paper we discuss the various criteria which should be considered in the choice of a statistical test for use in making a decision about a research hypothesis. 
These criteria are (a) the power of the test, (b) the applicability of the statistical model on which the test is based to the data of the research, (c) power-efficiency, and (d) the level of measurement achieved in the research. It has been stated that a parametric statistical test is most powerful when all the assumptions of its statistical model are met and when the variables under analysis are measured on at least an interval scale. However, even when all the parametric test's assumptions about the population and requirements about strength of measurement are satisfied, we know from the concept of power-efficiency that by increasing the sample size by an appropriate amount we can use a nonparametric test rather than the parametric one and yet retain the same power to reject H0. Because the power of any nonparametric test may be increased by simply increasing the size of N, and because behavioral scientists rarely achieve the sort of measurement which permits the meaningful use of parametric tests, nonparametric statistical tests deserve an increasingly prominent role in research in the behavioral sciences. This paper presents a variety of nonparametric tests for the use of behavioral scientists. The use of parametric tests in research has been presented well in a variety of sources [among the many sources on parametric statistical tests, these are especially useful: Anderson and Bancroft (1952), Dixon and Massey (1951), Edwards (1954), Fisher (1934; 1935), McNemar (1955), Mood (1950), Snedecor (1946), Walker and Lev (1953)] and therefore we will not review those tests here. In many of the nonparametric statistical tests to be presented, the data are changed from scores to ranks or even to signs. Such methods may arouse the criticism that they "do not use all of the information in the sample" or that they "throw away information." 
The answer to this objection is contained in the answers to these questions: (a) Of the methods available, parametric and nonparametric, which uses the information in the sample most appropriately? (b) How important is it that the conclusions from the research apply generally rather than only to populations with normal distributions?
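The Mann-Whitney U test named above can be made concrete with a short sketch. The example below is illustrative only and does not use the paper's data; in practice a statistics package (the authors used SPSS) would also supply the p-value and large-sample corrections:

```python
# Pure-Python computation of the Mann-Whitney U statistic, the
# nonparametric rank test used in this paper. Illustrative sketch only.
def mann_whitney_u(x, y):
    """Return U = min(U1, U2) for two independent samples x and y."""
    combined = sorted(x + y)
    # Assign average ranks so that tied values share the same rank.
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2.0  # mean of ranks i+1 .. j
        i = j
    n1, n2 = len(x), len(y)
    r1 = sum(ranks[v] for v in x)  # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2.0
    u2 = n1 * n2 - u1
    return min(u1, u2)

# Completely separated samples give the extreme value U = 0.
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0
```

Because the statistic depends only on ranks, it requires only ordinal measurement, which is exactly the weaker assumption discussed above.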
An Examination of the Education Requirements to Become a CPA
Dr. Robyn Lawrence, University of Scranton, Scranton, PA
Dr. Ronald J. Grambo, University of Scranton, Scranton, PA
In 1988 the American Institute of Certified Public Accountants called for each state to require 150 credit hours of education, in addition to passing the national examination and meeting experience requirements, to become licensed as a certified public accountant (CPA). Over the ensuing years, a majority of the states passed some form of 150-hour requirement for licensure. The current study analyzed the diversity that exists in education requirements to sit for the CPA examination and become licensed as a CPA in each of the fifty states. Based upon this analysis, a minimum set of courses to meet the requirements of all of the states, as well as a minimum set of courses to meet regional requirements, were identified. This analysis is relevant given that the state education requirements ultimately affect the quality and quantity of CPA-related services available to users of such services. The variability in state requirements poses extra challenges for accounting educators, prospective students of accounting programs, accounting professionals, state accountancy boards, and clients of CPA-related services. In the United States, all certified public accountants (CPAs) are examined, licensed and regulated under individual state accountancy laws and regulations. However, consistency across the various states and other jurisdictions is enhanced through the Uniform Accountancy Act (UAA), which was first introduced in 1984 by the National Association of State Boards of Accountancy (NASBA) and the American Institute of Certified Public Accountants (AICPA). The NASBA is comprised of the boards of accountancy in each of the fifty states, the District of Columbia, Guam, Puerto Rico, the Virgin Islands and the Commonwealth of the Northern Mariana Islands. In 1988 the AICPA approved, effective beginning in 2001, a requirement of 150 credit hours of education, in addition to passing a national examination and meeting experience requirements, to become licensed as a CPA. 
In the ensuing years a majority of the states have passed some form of 150 hour requirement to become licensed as a CPA. However, much variability still exists from state to state regarding the specific education requirements for licensure. The purpose of this study was to analyze the diversity that exists in the education requirements to become eligible to sit for the CPA examination and ultimately become licensed as a CPA in each of the fifty states. Based upon this analysis, a minimum set of courses to meet the requirements of all of the states, as well as a minimum set of courses to meet regional requirements were identified. This article concludes by identifying the ramifications of the results for accounting educators, prospective accounting students, state accountancy boards and accounting professionals. Each state has its own accountancy board which has primary responsibility for the requirements to become licensed as a CPA in that state. As shown in Figure 1, the requirements in each state are influenced by many factors, including the business and legal environment prevailing in the state, the magnitude and diversity of the CPA services sought by the consumers in the state, various external accounting bodies, including the NASBA, the AICPA and the state’s organization of CPAs, the Securities and Exchange Commission (SEC), the Public Company Accounting Oversight Board (PCAOB) and the opinion of the general public. The licensure requirements in each state affect the number and characteristics of the persons aspiring to sit for the CPA exam, the number and characteristics of the persons licensed in the state, and ultimately the quality and availability of CPA services in the state. Most state accountancy boards describe their education requirements in terms of courses and credits to satisfy an accounting component and a business component. 
Thus, for example, CPA examination candidates in Vermont must complete a three-credit course in "computer science" rather than completing multiple courses with computer science topics embedded in them. This approach expedites the state board's ability to evaluate whether or not a candidate meets the education requirements to sit for the CPA examination and/or become licensed in that state. The NASBA is considering adopting a "subject" approach rather than the current "course" approach, especially with regard to professional ethics. Currently some states specify "coverage" of a particular subject rather than a minimum number of credits. An analysis was made of the published internet materials available through links from the NASBA website (www.nasba.org, through "Exam" and "Members"; see Appendix A for state accountancy board website links) for each of the fifty states. A database was constructed and analyzed by accounting requirements, business requirements and other requirements, which is consistent with the approach taken by the NASBA. Most states required the programs to be accredited either nationally (for example, by the Association to Advance Collegiate Schools of Business (AACSB)) or regionally (for example, by the Middle States Association of Colleges and Secondary Schools, the New England Association of Schools and Colleges, the North Central Association of Colleges and Secondary Schools, the Northwest Association of Schools and Colleges, the Southern Association of Colleges and Schools, or the Western States Association of Schools and Colleges). Some states specified different requirements depending on whether the program was regionally accredited, the business program was AACSB accredited, or the accounting program was AACSB accredited. Different requirements were sometimes articulated for candidates earning a master's degree rather than a baccalaureate degree.
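Deriving a minimum course set from per-state requirements, as the study does, amounts to taking the union of the individual requirement sets. In the sketch below the state names are real but the subject lists are hypothetical placeholders, not the boards' actual rules:

```python
# Hypothetical per-state subject requirements (placeholders only; real
# requirements come from each state board's published rules).
state_requirements = {
    "Vermont":      {"financial accounting", "auditing", "taxation", "computer science"},
    "Pennsylvania": {"financial accounting", "auditing", "taxation"},
    "New York":     {"financial accounting", "auditing", "business law"},
}

# A course plan meeting every state's requirements is the union of all sets.
nationwide_plan = set().union(*state_requirements.values())

# A regional plan is the union over only the states in that region.
northeast = {"Vermont", "New York"}
regional_plan = set().union(*(state_requirements[s] for s in northeast))

print(sorted(nationwide_plan))
```

A union is the natural choice here because a candidate who completes every course required by any state in the group necessarily satisfies each state individually.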
Descriptive Analysis of Social Standards for Suppliers in Top 100 Fortune Global 500 Companies
Dr. Deniz Kagnicioglu, Anadolu University, Eskisehir, Turkey
Dr. C. Hakan Kagnicioglu, Anadolu University, Eskisehir, Turkey
Nowadays, the social responsibilities of companies are gaining importance with globalization. Companies develop codes of conduct to address social factors in their national and international activities. In the international context, codes are an instrument companies can use to ensure the enforcement of minimum social standards within their area of influence. Codes of conduct also cover supplier practices. This can ultimately improve employees' working and living conditions as well as company success. In this study, the top 100 Fortune Global 500 companies are analyzed with respect to 8 social standards for suppliers. These social standards are based on ILO Conventions and Declarations. The 8 standards are examined according to the companies' region (America, Asia, and Europe) and sector (manufacturing, service, finance and technology), and the results are discussed. Social responsibility is one of the most popular concepts of today. As firms grow bigger, their areas of influence also widen. Competition and globalization push firms toward international investment, and it becomes ever more difficult for governments to intervene in these international firms. The economics of globalization emphasizes competition, capital investment, free trade, growth and the transformation of markets. Too much has been made of the phenomenon of globalization in its economic dimensions. These do not sit easily alongside the priorities of people, including women, minority groups, indigenous populations and children. The economic dimensions of globalization have acquired a status higher than human values or even fundamental human rights, which are seriously affected by current global trends (Welford, 2002). Moreover, growing firms, increasing trade and weakening government power bring new questions to the agenda. 
Who will control the growing firms, who will protect underdeveloped countries from the negative sides of international trade, and who will assume the duties of weakening governments? These questions revolve around how we can transform the process of globalization into one that enables us to engage more fully with issues of human rights. The concept that answers these questions is social responsibility. Since the 1990s, individual stakeholders, trade unions and non-governmental organizations (NGOs) have increasingly called for companies to act in a socially responsible way. Companies are no longer assessed solely on the financial gains achieved for shareholders, but also on the contributions they make to stakeholders and society (Schafer, 2005). The era in which companies were responsible only to shareholders is over. Firms affect first their workers, who are their most important resource, and then the society and environment around them (OECD, 2001). There are different definitions of social responsibility. The common idea put forward in these definitions is that companies should conduct their business in a manner that demonstrates consideration for the broader social environment, in order to serve constructively the needs of society, to the satisfaction of society. In so far as the business system as it exists today can only survive in an effectively functioning free society, the corporate social responsibility movement represents a broad concern with business's role in supporting and improving that social order (Graafland, Ven, 2006; Carroll, 1999). The people to whom the firm is responsible are defined as stakeholders in the concept of social responsibility. Workers, unions, civil society organizations, the public, suppliers, etc. are the stakeholders of firms. Businesses come into regular contact with customers, suppliers, government agencies, families of employees, and special interest groups. 
Decisions made by a business are likely to affect one or more of these "stakeholder groups" (Reich, 2005). The stakeholder concept suggests that the managers of a business should take into account their responsibilities to other groups - not just the shareholder group - when making decisions. The concept suggests that businesses can benefit significantly from cooperating with stakeholder groups, incorporating their needs in the decision-making process (http://www.tutor2u.net/business/accounts/stakeholder_theory.htm). As globalization progresses, interest is being drawn to production conditions in developing countries. More and more consumers express their support for companies that comply with environmental and social standards by buying these companies' products (www.coc.runder-tisch.de/coc-runder-tisch/inhalte/publikationen_rt/Guide_social_standards_2004engl.pdf). No longer is the protection of workers' rights considered solely a direct employer's or a government's responsibility. Companies operating in the global economy have been increasingly called upon to assume greater responsibility for social and environmental compliance in their supply chain operations. Transnational corporations that source products across the globe are now held accountable for promoting and protecting the rights of the workers that make their products, regardless of whether they are direct employers or not. Socially responsible business not only benefits workers; it also increases company profits. Examples include specific instances where implementing health and safety standards has increased company productivity, as well as more general cases of companies that have enhanced their public image as a result of their socially responsible business practices (Ferguson, 1998).
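The study's descriptive analysis, counting how often each supplier standard appears in company codes of conduct by region, can be sketched in a few lines. The three company records below are invented placeholders, not the study's actual data:

```python
from collections import defaultdict

# Hypothetical sample records: (company, region, standards adopted).
# The study coded 8 ILO-based standards for 100 companies; only three
# invented records are shown here for illustration.
companies = [
    ("Company A", "Europe",  {"child labour", "forced labour", "discrimination"}),
    ("Company B", "America", {"child labour", "freedom of association"}),
    ("Company C", "Asia",    {"child labour"}),
]

# Tally each standard's frequency, broken down by region.
counts = defaultdict(lambda: defaultdict(int))
for _name, region, standards in companies:
    for standard in standards:
        counts[region][standard] += 1

# Overall frequency of one standard across all regions.
total_child_labour = sum(r["child labour"] for r in counts.values())
print(total_child_labour)  # 3
```

The same region-by-standard table extends directly to a sector-by-standard breakdown by adding a sector field to each record.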
Analysis on the Evolutionary Game of Innovative Financial System
Chen-Kuo Lee, Ling Tung University, Taiwan
A financial system refers to a set of rules that human beings abide by in their interactions over the years. These rules are formed through a dynamic game, and their formation constitutes the process of financial system innovation. An innovative game is jointly created by innovative thoughts and game theory. Game theory is a critical analytical tool for evolutionary economics and is sufficient to interpret the process of financial system innovation. Therefore, this study discusses the innovation process of financial systems through evolutionary game theory. The research results indicate that, in the innovation process of financial systems, the trans-industry management system is an important innovation relative to the intra-industry management system. Financial liberalization has accelerated since the 1980s. As a result, information technology and new markets have developed rapidly, causing all economies to rely on one another more than ever. Consequently, financial globalization has become an irresistible trend for both developed and developing nations. Financial globalization refers to a process and status in which financial activities move across boundaries and combine with other nations' financial activities, including the internationalization of financial institutions, financial markets, financial tools, financial assets and revenue, together with the unification of financial enactment, trading patterns, and international practices (Niehans, 1983; Podolski, 1986; Miller, 1992; Merton, 1992; Levine, 1997; Mantel, 2000; Mantel & McHugh, 2001). Financial globalization refers to the expansion of financial activities, implying the unification of financial trading rules and the reduction of barriers, free movement of capital in international markets, an increasing connection between nations' interest rates and exchange rates, unrestricted financial activities, and fewer restrictions on admittance to financial markets. 
As a result, transnational banks continue to grow and more banks are operating multi-nationally; all financial institutions compete in the global market, and financial risks continue to increase (Girardone & Casu, 2004). Financial globalization triggers fierce competition in worldwide financial markets. Consequently, reorganizations, mergers, and acquisitions take place one after another. As a result, financial institutions no longer operate in the conventional manner, the border between banks and non-bank financial institutions gradually disappears, and financial institutions provide more services than ever (2005). Worldwide financial controls have been lifted by leaps and bounds since the 1990s, which gave more room to financial innovation and, as a result, financial innovation has spread throughout all areas (Van Horne, 1985). The globalization of financial innovation gave birth to financial competition across the world. All financial institutions adapted to financial globalization via financial innovation. Consequently, financial innovation has expanded from domestic to international markets, particularly international financial systems, international financial tools, and international financial institutions. Thanks to the rapid development and extensive application of electronic information technologies, financial techniques are nowadays not only a means of financial innovation but also a revolutionary development for financial innovation and information technologies, with an emphasis on Internet technologies. As a result, the financial service industry has significantly upgraded its innovation capabilities, such as product design and transaction abilities, and has dramatically reduced costs.
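The evolutionary-game perspective on the spread of an innovative system can be illustrated with replicator dynamics. The 2x2 payoff matrix below is hypothetical and does not reproduce the paper's model; it merely encodes a situation where trans-industry management strictly dominates the intra-industry system, so that its population share converges to one:

```python
# Minimal replicator-dynamics sketch (hypothetical payoffs, not the
# paper's model). Row 0: intra-industry strategy; row 1: trans-industry.
# Columns give the payoff against (intra, trans) opponents.
PAYOFF = [
    [2.0, 0.0],  # intra-industry player
    [3.0, 1.0],  # trans-industry player (strictly dominant here)
]

def step(x, dt=0.01):
    """One Euler step of the replicator equation for the share x of
    trans-industry adopters in the population."""
    f_intra = PAYOFF[0][0] * (1 - x) + PAYOFF[0][1] * x
    f_trans = PAYOFF[1][0] * (1 - x) + PAYOFF[1][1] * x
    avg = (1 - x) * f_intra + x * f_trans
    return x + dt * x * (f_trans - avg)

x = 0.1  # initial share of trans-industry adopters
for _ in range(5000):
    x = step(x)
# With these payoffs the dominant innovation takes over: x approaches 1.
```

With a different payoff matrix the same dynamics can instead produce coexistence or path dependence, which is what makes the evolutionary-game framing useful for studying institutional change.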