The Business Review, Cambridge

Vol. 7 * Number 2 * Summer 2007

The Library of Congress, Washington, DC   *   ISSN 1553-5827

Most Trusted.  Most Cited.  Most Read.

All submissions are subject to a double blind review process





The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various business fields around the world to publish their papers in one source. The Business Review, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own particular disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. All submissions are subject to a double-blind peer review process. The Business Review, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1553-5827, issued by the Library of Congress, Washington, DC. No manuscript will be accepted without the required format, and all manuscripts should be professionally proofread before submission. The journal will meet the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide our authors with publication venues that are recognized by their institutions for academic advancement and academically qualified status.

The Business Review, Cambridge is published twice a year, in Summer and December. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the journal's e-mail address. Manuscripts and other materials of an editorial nature should be directed to the same e-mail address. Address advertising inquiries to the Advertising Manager.

Copyright 2000-2017. All Rights Reserved

Hydrocarbons to Hydrogen: Toyota’s Long-term IT-based Smart Product Strategy

Dr. William V. Rapp, New Jersey Institute of Technology, Newark, NJ



The 2005 World Expo in Nagoya signaled Toyota Motor Corporation’s (TMC) emergence as the world’s leading automobile manufacturer and provided a strategic insight concerning its plans to move vehicle transportation from dependence on hydrocarbons to hydrogen. In addition, it showed that in pursuing this evolving strategy, the long-term role of hybrid vehicles and the use of embedded IT in combination with organizational IT will continue into the hydrogen era. There are important connections between the IT activities embedded in the automobile and TMC’s well-researched production system, smart design, and globally based automated consumer ordering systems. This is because the efficiencies of these latter systems will enable TMC to continuously reduce the cost of the hybrid engine, fuel cells, and embedded IT faster than its competitors, making it not only the industry’s technology innovation leader but also the continued cost leader. Toyota’s resulting control not only over the intellectual property related to hybrids, on which it will receive expanding revenues as demand for hybrid vehicles grows, but also over the global supply chain in areas such as hybrid engines will cause continuing problems for competitors. The corporate culture that has produced TMC’s ongoing and well-recognized leadership in quality, production efficiency, and rapid product development, combined with its deep financial resources, means these embedded IT product initiatives and their global impact need to be taken increasingly seriously. Over the last 30 years, certain countries have combined the Olympics and an Expo as a way to announce a change in their status on the world stage; examples include Japan’s spectacular economic recovery, Spain’s entry into the EU, and China’s emergence as a major global power. (2) So what was the purpose behind the 2005 World Expo in Nagoya?
It both confirmed TMC’s emergence as the world’s leading automobile manufacturer and provided a strategic insight concerning its plans to move vehicle transportation from dependence on hydrocarbons to hydrogen, both as a way to address global environmental concerns (the Expo’s theme) and as a way to alter the competitive playing field in automotive transportation. This paper focuses on one aspect of that strategy by showing the role TMC intends hybrid vehicles and IT to play in these developments and why this strategy will continue into the hydrogen era. Importantly from TMC’s perspective, as competitive pressures have mounted in Japan and global markets, global auto groupings such as GM, Ford, Daimler-Chrysler, and Renault have absorbed many Japanese firms, though there has been some reversal, such as GM’s sale of Suzuki and Fuji Heavy and Daimler-Chrysler’s refusal to rescue Mitsubishi. These expanding groups have aggressively challenged the two leading Japanese producers, Toyota and Honda, in their export and domestic markets. So it continues to be critical to TMC’s long-term strategy that it successfully maintain its position as the world’s most efficient vehicle producer while managing its planned transition to a new competitive model. For TMC there is no alternative business model, since vehicle production and related businesses, such as replacement parts and finance, represent most of its revenues, operating earnings, and invested capital. Furthermore, the adverse consequences of GM’s diversification acquisition binge are there for all to see. In this context, TMC’s organizational structure and product development choices enable one to understand how the company will use technological innovation, IT, and organizational evolution as strategic tools to maintain and extend its competitive advantage in producing, selling, and delivering vehicles while always relying on its core philosophy.
“Since its establishment, Toyota’s principle has been to strive constantly to build ‘better products at lesser costs.’ To this end, Toyota has developed its own unique production method. This system is based on the idea of ‘just in time’ (i.e. producing only the necessary amount of parts just at its needed time), the idea of Toyota Founder Kiichiro Toyoda. This system also seeks to thoroughly eliminate all sorts of waste in order to reduce prime costs. Toyota also places a maximum value on the human element, allowing an individual worker to employ his capabilities to the fullest through participation in the productive management, and improvement of his given job and its environment. With the motto ‘Good Thinking, Good Products,’ each individual worker is making his best effort to assure Toyota’s customers the highest quality product, with an understanding that it is in his work process that quality is built in.” (Quoted from Toyota's company booklet “Opening the Window”, p. 13.) For TMC, therefore, the objective is to make this already very productive approach even more productive in terms of output and product performance. By using these principles, TMC firmly established itself in the late 1970s and early 1980s as the world’s most efficient and lowest-cost producer of high-quality automobiles. This was explained during the late 1980s in a series of studies organized by MIT’s Auto Industry Center, culminating in 1990 in Womack, Jones, and Roos’ seminal work on lean production, The Machine That Changed The World. More recently, Jeffrey Liker and David Meier have amplified these concepts in The Toyota Way. As a result, other producers are well aware of TMC’s lean production principles and have spent years benchmarking them. But they have yet to catch up, as TMC has continued to evolve the production system globally and to change it to make it even more flexible and productive.
In addition, TMC has shifted the competitive model from one dependent primarily on superior organization and manufacturing to one that also incorporates significant product innovation and the extensive use of electronics, telecommunications, and IT. In this way it has developed a strategy that responds not only to industry changes but also to trends in the economic and regulatory environment. California, the world’s fifth-largest auto market, has increasingly tightened the emission regulations on cars sold in the state. Several US and Japanese cities have publicly undertaken aggressive programs to improve energy conservation and achieve a better environment through reduced fuel emissions. The rapidly developing potential mega-markets of China and India face mounting environmental issues as well as rising oil costs, partially driven by an increasing global demand that is influenced by their own rapid economic expansions. Furthermore, world oil production appears to be peaking and, simultaneously, aging populations are increasingly health- and environmentally conscious.


A Cross-sectional Analysis of Earnout Contracting in Acquisitions

Dr. David R. Beard, University of Arkansas at Little Rock, Little Rock, AR



This research examines a sample of acquisitions in which earnouts are used and contrasts it with a sample of “traditional” acquisitions to explore specific hypotheses concerning agency, informational asymmetry, and the use of an earnout as a means of financing.  Logit analysis is employed to model the choice between earnout and non-earnout transactions.  Tobit analysis is used to examine the proportion of the deal that is paid contingent upon future performance.  The paper ends with a post-acquisition analysis of these transactions. In this paper I analyze the determinants of the use of an earnout.  In addition, I look at the proportion of the contingent payments associated with the earnout contract relative to the total value of the deal.  Finally, post-merger retention of management and post-merger contingent payments are examined.  In an earnout, the bidder agrees to pay the target an initial amount for the acquisition plus predetermined future payments contingent on the target’s achievement of performance milestones within a specified time period.  The literature has identified various motives for the use of earnout contracting in the acquisition of target firms.  In particular, Kohers and Ang (2000) and Datar, Frankel, and Wolfson (2001) contend that earnouts are relegated to mergers where problems of informational asymmetry and agency are so detrimental that this costly type of contracting must be employed to protect the interests of bidder shareholders and target firms.  In this paper, I examine the use of earnouts through an empirical comparison of a sample of acquisitions involving earnouts to a control sample of traditional acquisitions.  Within each sample, I also examine differences that exist when the target is a private firm compared to a subsidiary of another firm.  I employ a logistic regression to determine the variables that contribute to the choice of an earnout in an acquisition.
Next, within the earnout sample I use a tobit regression analysis to examine the determinants of the proportion of the contingent payments relative to the total size of the acquisition.  Finally, I report a descriptive analysis of the post-merger payouts and the post-merger retention of the target firms’ management. To accomplish this analysis, I compile a sample of transactions involving earnouts as well as a sample of traditional merger transactions.  In doing so, I am able to separate the effects of earnout contracting in the acquisition.  This allows hypotheses with respect to the earnout to be tested with confidence that any effects are due to this type of contracting tool and are not a consequence of the merger sample.  Using an earnout contract in an acquisition serves many purposes.  First, earnouts may lower the problems associated with asymmetry of information between targets and bidders.  The earnout accomplishes this by allowing for differential valuations of the target by its own management and the management of the bidder.  It also allows the target to signal its quality to the bidder by the proportion of total deal value that is paid contingently, thereby shifting some of the risk of misvaluation from the bidder to the target.  Second, an earnout helps solve some of the problems associated with agency.  It does this through its forced retention of target management while providing these managers with the incentive to maximize their efforts to achieve a higher payout.  Third, it can be used as a financing vehicle for the acquisition by deferring some of the payment needed to acquire the target.  I expect that earnout contracts will be used in deals that involve targets that operate in multiple industries, have few assets in place, low information disclosure, high growth opportunities, and valuable human capital relative to firm assets.
Therefore, I use two dummy variables that take the value of one if the firm is in a hi-tech or service industry.  These types of firms have few assets in place, high growth opportunities, and valuable human capital.  Targets that operate in multiple industries will be more difficult for the bidder to value, so I use the number of target NAICS codes to measure this dimension of the deal.  Targets that have a low amount of information disclosure will also be more difficult for the bidder to value.  A dummy variable that takes the value of one is used to measure whether or not the firm is a privately held entity, since these firms have low amounts of information disclosure.  If the firms are in the same industry, the bidding firm will have less difficulty in valuing the target.  To capture this, I use a dummy variable that takes the value of one if the target and bidding firm have the same first two digits of their NAICS codes.  When a bidding firm has more experience in merger transactions, it should be more competent in information gathering and valuation with respect to a target firm.  To measure this, I use the number of prior acquisitions in the ten-year period preceding the announcement of the merger.  The variable that measures the value of the earnout relative to the value of the deal is used to test my hypothesis that relates to the target firm signaling its quality to the bidder. For the set of hypotheses related to agency, I use two dummy variables that take the value of one if the target is in a service or hi-tech industry.  These are the types of industries where human capital must be retained, and it is also necessary to align the interests of the human capital with the shareholders of the combined firm.  When a bidding firm has valuable growth opportunities, it will want to maintain its flexibility to fund these future investments.
Also, it will not want the target firm to be able to share in the gains from opportunities that were already in place prior to the acquisition, to the detriment of existing shareholders.  I use the bidding firm’s market-to-book ratio and three-year prior growth rate in sales to measure the bidding firm’s future investment opportunities.  To determine whether the earnout agreement is being used as a vehicle to finance the transaction, the bidding firm’s cash holdings plus marketable securities relative to the value of the deal, the bidding firm’s free cash flow (as measured in Lehn and Poulsen (1989)) relative to the value of the deal, and the bidding firm’s debt-to-capital ratio relative to the industry average are used in my analysis of earnouts.  The sample of merger transactions involving earnouts consists of 533 acquisitions of private and subsidiary targets by public firms completed during the period January 1, 1990 through May 31, 2001.  These observations were identified from Thompson Financial Securities Data Mergers and Acquisitions files (SDC).  The data were collected using Compact Disclosure, the SDC files, Standard and Poor’s Compustat files on Academic Universe, news releases found in Lexis/Nexis, and ValueLine Investment Surveys.  Observations involving the acquisition of public targets, acquisitions involving foreign entities, and acquisitions involving financial firms or holding companies were excluded, as were acquisitions for less than $1.0 million.  The logit regressions model the choice of using or not using an earnout as a mode of contracting in the acquisition.  The dependent variable takes the value of one if an earnout is the choice and zero if a traditional mode of acquisition is used.  The sample consists of the two sub-samples of earnout and traditional acquisitions discussed in detail in the last section.
The independent variables are used to test hypotheses concerning agency, asymmetry of information, and financing of the transaction, as outlined in the introduction.  The results for the logistic model of choice are reported in Table 1.  A logit model is used rather than a probit model because the errors are not normally distributed.  There are 8,037 observations in models one and two and 6,969 observations in models three and four.  The difference in the number of observations arises because certain independent variables were not available for all of the observations; if data were not present for all of the independent variables in the analysis, the observation was dropped.  There are 466 earnout transactions in models one and two and 395 earnout transactions in models three and four.  The control sample of traditional acquisitions consists of 7,571 observations in the first two models and 6,574 in the last two models.  Models one and three differ from models two and four in that models one and three use the natural log of bidder value and transaction value, while the other two use target value relative to the combined value of the firm.  The significant correlation among these variables precludes their being included in the same model.  Models three and four include proxies to test the hypotheses associated with the financing of the deal, along with two additional proxies used to test asymmetry of information between the target and bidder.  In each model, the likelihood ratio statistic and Wald statistic are significant at the one percent level, indicating that the explanatory variables have more power in determining the choice to use an earnout than the intercept alone.  A pseudo R-squared is calculated using two alternative definitions of McFadden’s formulation as a measure of the explanatory power of the model.


Small Business and Globalization

Judy Lee, Sam Houston State University, Huntsville, Texas

Dr. Balasundram Maniam, Sam Houston State University, Huntsville, Texas



The current era of globalization has posed major challenges for small businesses in the United States. The ease and relatively low cost of communications, transportation, and marketing worldwide have led to significant issues for the small business in the US. This paper investigates some of these issues and their impact on survival.  Consider a small business in the process of purchasing a new copy machine that has dealt for eight years with the same repairman who sold it its current machine. The business is happy with the old machine, and the repairman provides excellent service: if there is ever a problem with the machine, he is there to fix it, usually in a few hours and always within 24. So when it came time to purchase a new machine, the business went to its “copy guy” for quotes. He is very familiar with the business, its price constraints, and its usage. When the quotes came in and a decision was made on which machine to purchase, an on-line search was made to check pricing elsewhere and confirm that the best price had been obtained. The manager was surprised to find that the machine he was planning to purchase for $4,100 could be purchased for $2,300 on-line. A question arose: what is happening to the small businessman in America? Is he being inched out by the forces of international trade, globalization, technology, and the Internet? How can small business in America survive this kind of global pressure? In the instance of the “copy guy,” the manager wants to purchase a machine from him; as a small business, he needs these services. If the forces in the market make it impossible for the “copy guy” to compete in selling his machines, the business will be worse off because of its need for a dependable repair person who will be available within 24 hours. With these issues in mind, the research for this paper delves into the issue of globalization.
Slywotzky, Baumgartner, Alberts, and Mousanas (2006) argue that globalization is causing businesses to redesign their strategy, as can be seen in the above illustration. Globalization tends to make weak businesses weaker and strong designs stronger. The job falls to managers to do exceptionally well in customer connections in order to survive in the new business order. According to Slywotzky, Baumgartner, Alberts, and Mousanas (2006), globalization can be considered a movement in value from antiquated business designs to new, more cost-effective and efficient ones. Wagner (2005) discusses the effect of globalization on the role of central banks, monetary policy, and exchange rate systems: ongoing globalization increases the complexity inherent in the job of the central bank and complicates the economy. Giaburro and O’Boyle (2003) discuss two perspectives on globalization: mainstream economics and personalist economics. Personalist economics is a minority view considered to be more relevant to globalization. Economic globalization involves the international activities of economic agents, integrating the performance of banks, business enterprises, and finance companies without a prevailing national base. According to this article, companies grow by their own efforts, not because they are part of any nation.  Fooladi and Rumsey (2005) examined the role of the exchange rate in foreign investments, seeking to determine whether the exchange rate plays a role in the diversification of risk. The results of their study indicate that movements in the currency markets tend to offset equity market returns, so hedging the exchange rate may actually increase, not decrease, the overall risk of the portfolio. Piasecki and Wolnicki (2004) suggest implementing policies to improve developing economies through five areas: first, stimulation of local initiatives (i.e.,
responsibility for job creation, the environment, and education); second, local institutions’ sensitivity to market restrictions and support for legal and financial institutions; third, investment in human capital; fourth, an increased flow of information; and fifth, state intervention that is market- and competition-friendly and used only when necessary. Additionally, a culturally and politically neutral economic model that is neither Eastern nor Western is recommended.  As globalization enhances competitiveness, mid-market companies must embrace globalization, manage change, and use it to their advantage, or perish. This requires a flexible business model; outsourcing can extend networks of suppliers, resources, skills, and advanced capabilities. Effective use of technology favors those able to adapt. Typically, mid-market companies react more slowly because of fewer resources, slower response to market change, a risk-averse culture, and difficulty in making tough decisions, and controlling management often has little knowledge of global competition (Iyer 2005).  Allen and Rayner (2004) recommend applying a strategic flexibility approach to international trade and investment. Under this approach a firm holds options on international ventures that allow it to intensify, reduce, or abandon its positions if the global arena warrants it. In analyzing the sustainability of the US current account deficit, consideration should be given to two different viewpoints: (1) the domestic view regarding investment spending and consumption in the US, and (2) the international financial view of global investors’ wealth. The determining factors in the ability of the US economy to sustain the accounts are the amount of funds that can be borrowed affordably from foreign sources and the willingness of foreign investors to purchase and hold US dollars. US account deficits cannot continue indefinitely.
Structural change and policy design are the best way to avoid the negative effects of a rapid depreciation in the exchange rate and the resulting price of the US dollar (Mann 2002).  Globalization brings with it competition from cheap foreign labor and a tendency to depress domestic wages and degrade working conditions, while restructuring may require a revolution (Wellington and Zadvakali 2006).  Leichenko and Silva (2003) performed empirical studies investigating the effects of foreign goods on rural manufacturing jobs. Their findings suggest that cheaper US exports lead to increased wages in rural areas and that lower-cost imports reduce jobs and income in those same areas.  Feenstra (1998) states that globalization has resulted in an integration of the manufacturing process whereby production is outsourced to countries with lower unskilled wages; many companies have shifted the lowest-cost parts of the production process overseas.  Julius (1997) focuses on market-led globalization and how microeconomic pressures from globalization affect small firms.  These forces are driving a need to change how companies are organized, how goods and services are provided, and how they are brought to market.  Information technology is reducing storage costs through paperless offices, saving time through computer links to factories, and enabling just-in-time inventories. Businesses that offer “middleman” services between producers and consumers will be squeezed the most. New markets, growing demand, and lower costs are bringing companies into the developing world, and many industries are finding that global competition squeezes margins.  Small firms must overcome the pressures applied to them in order to survive in today’s global economy. The local community is connected globally through technology, and in order to survive the competition the firm must evolve.  Globalization, according to the International Monetary Fund, is a process developed over time through man’s innovation and technological advances.
It includes the integration of world economies, predominantly through financial and trade advancement. According to IMF Staff (2002), there are four aspects of globalization to consider: trade, capital movements, the spread of knowledge and technology, and international business employment.


A Research Framework in Banking Studies: “Researching and Writing Articles a Researcher’s Odyssey”

Dr. Hemant Deo, University of Wollongong, Australia

Dr. Kathy Rudkin, University of Wollongong, Australia



A research framework is an important feature of any academic research, since it provides a researcher with an avenue for filtering his or her data to tell a particular story.  Many studies in banking have been researched from data filtered through a mainstream, positivist, or scientific approach, and therefore these studies have not taken into account the social, economic, political, and historical factors that are an important component of any academic research.  The aim of this research paper is to bring out the importance of a research framework within case studies and the process the researcher undertakes in his or her journey to complete the researcher’s odyssey. Banking is considered to be a product of its social environment.  The overall interdependency of banking operations and their environment results in change, which is brought about by a process of mutual adaptation.  To capture this totality in a research study, there is a need to see the inter-connection of social factors (Blumer, 1978; Funnell, 1998; Hines, 1988).  It is maintained in this paper that the philosophical assumptions that underlie mainstream research can be questioned as to their ability to capture interactions with social environments.  For example, exposing the philosophical assumptions of scientific or mainstream approaches and seeking their implications for what understanding they can bring to real-world situations shows us a narrow interpretation of the consequences of banking social reforms.  Not only does such understanding have the potential to improve people’s welfare by making visible the inadequacies of mainstream investigations, it also highlights the issue of “how little we know about the actual functioning of accounting (banking) systems in organizations” (Hopwood 1979, p. 145).
In the past decade banking research has used a mainstream, positivist, or hypothetico-deductive approach, and this has led to an emphasis on cause-and-effect relationships between banking and the environment in which it operates.  Using a positivist research paradigm raises the question of whether such research can bring out or capture the social and political dimensions of banking.  It is argued that social and political dimensions have not been sufficiently explained by the banking literature using positivist approaches, which have dominated banking research (Koch & MacDonald, 2003; Moore et al., 1997; Saunders, 2001; Rose, 2002).  The purpose of this paper is to highlight the lack of banking research from alternative approaches.  It also seeks to describe the problems researchers face when undertaking an interpretive research study in a banking setting and to promote the richness of alternative research.  It will be shown in this paper that environmental demands require changes in how banking theory and practice are investigated, and that new understandings from these changes in ways of investigating can ultimately meet the demands of the banking environment.  Alternative approaches uniquely acknowledge, beyond a technical understanding, the needs, desires, and machinations of the humans who engage with banking organizations, and the impact of this on banking practice.  There is a need for humans to recognize themselves as members of a community that is sustained by a set of norms and values.  An alternative framework provides a researcher with guidance to conduct his or her research endeavors in a way that is receptive to such norms and values.  The alternative frameworks proposed for use in banking studies reflect the model of Burrell and Morgan (1979), who describe four distinct paradigms: the radical humanist, the radical structuralist, the interpretive, and the functionalist.
These distinctions are based on ontological, epistemological, human-nature, and methodological assumptions about the social sciences.  The Burrell and Morgan (1979) framework forms a map that can be used to find one’s way through developing banking research that is multi-disciplinary in nature.  It may point the way towards new areas of investigation and the discovery of previously uncharted territories in banking development.  Advocating alternative research frameworks in banking that are explicit in their ontological and epistemological stances addresses the criticism of Chua (1986) that practically oriented subjects such as finance (and by implication banking) have often embraced theories from other areas such as philosophy, science, law, or economics with little concern for their own discipline’s philosophical underpinnings, because they do not make these assumptions explicit.  There is a need for research in banking studies to document an awareness of its assumptions and the limitations of its research methods. The assumptions intrinsic in banking “facts” are taken for granted and not recognized by practitioners, and therefore their impacts are not fully assessed (Rose 2002; Koch and MacDonald 2003; Saunders 2001).  Alternative frameworks provide a mechanism to link inter-organizational and social conflict within an environment (Gaffikin, 1988; Laughlin, 1991; Laughlin & Lowe, 1990; Burchell & Hopwood, 1985).  The predominantly myopic view of research in banking considers quantitative techniques to be all that is available in the range of research styles.  This is questioned by Tomkins and Groves (1983, p. 361), who argue that “at least it often seems content to adopt one single stereotype of research style”.  The results are deficient, because these methods exclude or pre-frame social factors, making the results of such research questionable.
Such quantitative techniques view reality as an objective phenomenon, but this paper argues that there is a need to research the inter-subjective human component attached to banking, because banking practice in its various contexts interprets and influences the social and societal factors in its respective environments.  However, it is acknowledged that analyzing inter-subjective constructions of reality is a difficult task.  Different philosophical assumptions (ontological, epistemological and methodological) have implications for how inter-subjective knowledge is understood by a researcher.  The research methodology and methods must be appropriate to the knowledge being sought.  Furthermore, there is a relationship between the researcher and the research that has a duality of structure and is dialectic in nature.  The ability of research in banking practice to include social, economic and political factors within an organizational context may be limited by the challenge of incorporating subjective value judgments and their social consequences, which are traditionally overlooked by hypothetico-deductive research methods.  Such methods reduce banking research to a link between explanation, prediction and technical control, a form of laboratory-based testing of banking that excludes the social environment (Abdel-Khalik & Ajinkya, 1979).  Socially sensitive research gives an understanding of the implications of the power and knowledge relationships that conventional methods leave unexamined.  Alternative approaches, if used at a banking site, can base enquiries on both ‘exploratory’ and ‘inspection’ techniques of research (Tomkins and Groves, 1983, p. 363), thus revealing the social contexts of banking. 


Introducing New Technical Indicators for Financial Markets

Dr. Rajeshwar Prasad Srivastava, Towson University, Towson, MD



This paper introduces two technical indicators for chart analysis of financial markets. One, called the Gravity Indicator, gives buy and sell signals. The other is a volatility-based stop loss channel, which gives stop loss points to control risk and protect profits.  Indicators such as MACD, RSI, Stochastic and moving average crossovers each work best in a certain type of market.  For example, RSI and Stochastic work best in a choppy market with small oscillations, while MACD and moving average crossovers work best in trending markets. Oscillators (Carr, 2005) like RSI and Stochastic signal over-bought and over-sold conditions too quickly in a trending market; they generate too many false signals.  MACD and moving average crossovers are lagging indicators: by the time you get a signal in a choppy market, the market has turned to the opposite side.  MACD is a difference indicator, so it sometimes turns too quickly in a trending market.  Because of these limitations, we have a variety of indicators, but they give contradictory signals in the same timeframe.  This creates uncertainty and fear about taking a trade.  This paper alleviates the problem by introducing a single-indicator approach which cuts down risk and builds confidence.  The Gravity Indicator (GI), along with the stop loss channel, can help in becoming a successful trader. A one-indicator approach cuts down confusion and fear, and saves time in technical analysis.  This paper discusses when to buy and when to sell; where to place stop losses; how to manage money for maximum return; and how to manage the emotions of fear and greed. It presents two case studies, one for an oscillating market and the other for a trending market, and compares the performance of the Gravity Indicator with two of the most popular indicators, MACD and Stochastic, in these two cases. The ingredients needed to succeed in trading include: technical analysis of markets before making a trade. 
Risk management by daily monitoring and periodic review.  This paper focuses on technical analysis of price charts and on risk management.  It is based on the assumption that price is a true representation of the supply and demand which drive the market.  Fundamental analysis is important, but technical analysis is essential for success.  The advent of microcomputers has changed the way trading is done.  Real-time data and minute-to-minute information about equities are easily available.  Online electronic trading is normal.  Bid/ask spreads and commission rates have greatly improved.  Daily volume has improved significantly on all exchanges.  However, the ratio of winners to losers has not changed that much.  Twenty years ago 90% of individual investors were losing money.  In spite of all the progress and innovation in the financial markets, 90% of investors still lose money.  The main reason is that the level of fear and greed has not changed; a lack of suitable technical indicators also contributes to this situation. Moving average convergence-divergence (MACD) was developed by Gerald Appel (Appel, 1979); Stochastic was popularized by George Lane (Elder, 1993); Williams %R was invented by Larry Williams (Elder, 1993); the relative strength indicator (RSI) was developed by Welles Wilder (Wilder, 1978). All of these indicators were developed about 30 to 50 years ago, and since then not much progress has been made in this area. The purpose of this paper is to introduce new indicators which perform better than other known indicators and can be easily constructed by individual traders.  The Gravity Indicator is a moving average based indicator: a moving average is averaged, and then its average is averaged again, producing a smooth moving average. This average is used to construct a moving average pair. The pair shows how the gravity of the price line tilts from the bull side to the bear side and back again. That is why it is called the Gravity Indicator. 
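The construction described above, a short moving average smoothed twice more to form a fast/slow crossover pair, can be sketched in plain Python. The paper does not publish the exact formula, so the window lengths (2 and 5 days, drawn from the 2-to-5-day range the text mentions) and the crossover signal rule below are assumptions of this sketch, not the author's definition.

```python
# Illustrative sketch of the Gravity Indicator idea: window lengths and
# signal rules are assumptions, not the author's published formula.

def sma(values, n):
    """Simple n-period moving average (output is n-1 points shorter)."""
    return [sum(values[i:i + n]) / n for i in range(len(values) - n + 1)]

def gravity_pair(prices, fast=2, slow=5):
    """Triple-smoothed fast and slow averages, aligned on their common tail."""
    fast_line = sma(sma(sma(prices, fast), fast), fast)
    slow_line = sma(sma(sma(prices, slow), slow), slow)
    k = min(len(fast_line), len(slow_line))
    return fast_line[-k:], slow_line[-k:]

def signals(fast_line, slow_line):
    """'buy' when the fast line crosses above the slow line, 'sell' when below."""
    out = []
    for i in range(1, len(fast_line)):
        if fast_line[i - 1] <= slow_line[i - 1] and fast_line[i] > slow_line[i]:
            out.append((i, "buy"))
        elif fast_line[i - 1] >= slow_line[i - 1] and fast_line[i] < slow_line[i]:
            out.append((i, "sell"))
    return out
```

The repeated averaging keeps the pair smooth while the short windows keep it responsive, which matches the trade-off the author emphasizes.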
This indicator is based on a very simple idea but it is very powerful. The characteristics of this indicator are: it is not a lagging indicator, because the average is taken over only 2 to 5 days (a moving average crossover becomes a lagging indicator when price is averaged over more than 5 days; 10-day, 20-day, 50-day, and 200-day averages are common); it works in a trending market as well as in an oscillating market; it responds well to changes in the direction of the market; it filters noise and gives fewer signals without sacrificing responsiveness to price changes; and it performs better than known indicators such as the Commodity Channel Index (CCI), MACD, RSI and Stochastic.  A stop loss point is used to control risk and protect profits. As soon as a buy order is filled, a stop loss order is placed to sell the stock if it goes down. Similarly, as soon as a short order is filled, a stop loss order is placed to buy back the stock if it goes up. The stop loss point is moved as the trade progresses to protect the profits made. There are numerous ways to place stop loss points (Bulkowski, 2007). A volatility-based stop loss channel is introduced here to place stop loss points. The idea of a volatility-based stop loss channel is not new, but the way it is constructed here is new. It calculates 20-day volatility, which is then weighted by a 3-day EMA. This approach is more practical. It can be expressed in MetaStock Formula Language as:
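The MetaStock formula itself is not reproduced in this text. As one plausible reading of the description (rolling 20-day volatility, weighted by a 3-day EMA, subtracted from price to form a trailing stop line), here is a hedged Python sketch; using the standard deviation of closes as the volatility measure and the channel-width multiplier are assumptions of this sketch.

```python
import statistics

def ema(values, n):
    """Exponential moving average with smoothing factor 2 / (n + 1)."""
    alpha = 2 / (n + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def stop_channel(closes, vol_window=20, ema_window=3, mult=2.0):
    """Trailing stop for a long position: close minus a multiple of
    smoothed volatility. Volatility is the rolling standard deviation
    of closes (an assumption), then weighted by a 3-day EMA as described."""
    vols = [statistics.pstdev(closes[i - vol_window + 1:i + 1])
            for i in range(vol_window - 1, len(closes))]
    smoothed = ema(vols, ema_window)
    return [c - mult * v
            for c, v in zip(closes[vol_window - 1:], smoothed)]
```

For a short position the stop would sit above price (close plus the same multiple). Because the EMA damps day-to-day jumps in volatility, the stop line moves more smoothly than a raw-volatility band.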


Social Return on Investment: Applying Business Principles to Starting and Managing Charitable Organizations

Dr. Thomas Clark, Xavier University, Cincinnati

Dr. Cathy McDaniels-Wilson, Xavier University, Cincinnati



This paper reveals the strategy and leadership required to implement a vision of a major and unique 21st-century museum, the National Underground Railroad Freedom Center (NURFC), one which promotes dialogue between Blacks and Whites. Ed Rigaud, Founder of the National Underground Railroad Freedom Center, and Spencer Crew, Ph.D., its Executive Director and CEO, answer questions about the NURFC, a center which opened in Cincinnati in August 2004. It chronicles the struggles and challenges of slaves escaping to freedom, and the continuing impact of slavery in American life. In the course of the interview, they describe three key action principles adapted from business and applied to management in a nonprofit setting: Social Return on Investment; HOFF, a motivational principle; and Freedom’s Pyramid, an adaptation of Maslow’s hierarchy to racial reconciliation issues.  Race relations in America are more defined by day-to-day experience than by legal barriers. Forty years after legal discrimination ended, race still divides Americans. Controversies over various charges of police profiling, the celebration of Martin Luther King Day, the display of the Confederate flag on state property, the overturned convictions of those originally accused in the Central Park Jogger case, and the media’s showcasing of the differing reactions of Blacks and Whites to the outcome of the O. J. Simpson murder trial all highlight the deep divisions that exist between many Blacks and Whites.  This paper reports on an interview with Ed Rigaud, Founder, and Spencer Crew, CEO, of the National Underground Railroad Freedom Center. Rigaud and Crew see the Center as a healing institution which will help reverse the vicious cycle in which stereotypes inhibit honest exchanges, and instead promote dialogue so that Blacks and Whites experience more trust and mutual respect. 
Opened in August 2004, the Center is designed as a think tank on racial relations, where local, national, and world leaders meet to define and implement plans that will improve interracial dialogue.  In this interview they show 1) the role business executives have played in defining the mission of, raising money for, and planning and managing the National Underground Railroad Freedom Center and 2) how they plan to use the history of cooperation between Blacks and Whites in operating the NURFC. Q: Initially what were your thoughts when you were asked to assume the leadership role in creating the NURFC?  Rigaud: I took a few days to think about it; talked to my wife, told her what the implications were.  I said this means I give up my career at P&G, and she said why do you have to do that?  I said because that’s what it’s going to take.  I did research about how long it takes to plan and open a museum, and it averages nine years.  Holocaust Museum took seventeen.  The way I work is with real clear vision and whenever I’ve got clarity on the vision, I know I can do it.  The obstacles are issues to be dealt with and I know that if I don’t know how to do it, we can find people who do. I knew it was going to take many professionals, and here we are right on the cusp. Q:  How did you build community, business and legislative support for the museum?  Rigaud: Wore out a lot of shoes and a lot of overcoats.  And truthfully once people understood conceptually what we were trying to do, support was typically there, especially among donors.  They embraced the concept of the Freedom Center after we explained it to them.  On the community, government relations side, it took a lot of handholding, a lot of lobbying, a lot of educating of the key individuals.  I had already travelled in all those circles before so I didn’t have to start from scratch.  It was a matter of having already established strong relationships with a lot of key individuals. 
And I had served on numerous boards and had a lot of doors opened to me to present this concept.  On the fund raising side, two aspects are important.  During the start-up phase, it was important to have the connection to Procter & Gamble.  The most important thing I did was identify John Pepper as the target to go after to head our fund raising effort, and also to get Vernon Jordan involved in recruiting our national board. Getting John on board made it possible to go out and open big doors all around the country, big doors that I couldn’t open on my own, and to be credible nationally.  That was a huge step, and we wouldn’t have gotten to where we are without John.  Q:  How did you select your board?  What is the makeup of your board in terms of museum experts, business interests, arts interests, political figures, community activists, and others?  And what has been their role in shaping the mission and vision of the Freedom Center?  Rigaud: There are two boards.  One I alluded to earlier, the National Advisory Board, which in my way of thinking was the more important board initially.  Even though very few of them were active in their involvement with the Freedom Center, we got them to agree to allow us to use their names. Q:  So it’s initial credibility versus sustained credibility.  They gave you the initial credibility that gave you the momentum. Rigaud: At the outset, I had one person working for me.  What I did have was a prestigious national board of 40-50 people, prestigious folks in every field, Black and White. Vernon Jordan helped me to get those names on there.  I got some myself.  John Pepper got some, many of the corporate ones.  Dick Cheney was on our board. And that list went a long way on the fund raising end and on the general credibility of the project.  Everybody was always impressed when they saw that list.  They thought wow.  The local board was a different story.  
We needed a governing board, and we had a steering committee made up of the original NCCJ folks who worked on this along with Richard Rubinowitz, the American History Workshop consultant, and many of those folks transitioned to the board.  There were people like Chip Harrod on the steering committee, and we didn’t screen out their names.  We just said we’re going to offer you a seat on the board. Basically we brought those folks over and then began to add to those.


Measuring the Effects of Anxiety and Self-Efficacy on Finance Students

Robert Jozkowski, Eckerd College, St. Petersburg, FL

Dr. Steve Sizoo, Eckerd College, St. Petersburg, FL

Dr. Naveen Malhotra, Eckerd College, St. Petersburg, FL

Morris Shapero, Eckerd College, St. Petersburg, FL



Because of their quantitative content, Finance courses are particularly difficult for business majors.  Math-related material causes many students to become anxious, which can impede their learning and their performance.  Also, students who think they will perform poorly on a task do worse than those who think they will succeed.  The difference is the student’s self-efficacy.  This exploratory study attempted to examine “Finance anxiety” in business students by creating a Finance Anxiety Scale to measure that phenomenon, as well as its relation to the construct of self-efficacy.  The objective of this research was to begin examining whether the critical message of finance could be made more accessible to students without diluting the necessarily rigorous nature of the discipline. The results and implications of this preliminary study are discussed, as well as essential future research.  Even though knowledge of finance is considered critical to success in business, studies indicate that many business students have more difficulty with Finance courses than with any other business discipline.  The literature suggests that this is primarily due to the quantitative nature of Finance.  Quantitative material causes many students to become anxious.  Anxiety, in turn, is said to impede performance, causing students to fall behind in class and to postpone remedial action. At the same time, reports show that students who think they can perform well on a particular task do better than those who think they will fail.  This paper describes an attempt to measure and assess this Finance-course anxiety.  Furthermore, it examines the relationship between Finance anxiety and self-efficacy, an individual’s confidence in his or her ability to accomplish a task.  Finally, the paper discusses the implications of this exploratory study and suggests necessary future research.  
Financial performance is considered the chief indicator of the success of organizational decisions and activities (Thompson, Strickland, & Gamble, 2006).  As a consequence, a weak understanding of finance could have a very detrimental effect on a businessperson's decision-making ability and career prospects.  Yet according to the Educational Testing Service (2000), seniors at 388 colleges and universities taking the “Major Field Test in Business II” scored lower on the Finance section than on any other, at the 38th percentile.  Studies by individual undergraduate business programs have produced similar findings.   The literature indicates that most business students find Finance courses particularly difficult and challenging (Balachandran & Skully, 2004), and students with weaker quantitative skills delay taking the required math or statistics courses that are typically prerequisites for Finance.  This results in less well-prepared and poorer-performing students (Marcal & Roberts, 2001).  Studies attribute this largely to the quantitative nature of the Finance curriculum (Krishnan, Chenchuramarah, Bathala, Bhattacharya, & Ritchey, 1999).  That is, mathematics anxiety manifests itself in all quantitatively related environments (Baloglu, 2002). Math anxiety is defined as “any situation in which an individual experiences anxiety when confronted with mathematics in any way” (Byrd, 1982, p. 38).  There is also agreement that anxiety is related to performance (Balachandran & Skully, 2004; Tobias & Everson, 1997), and that anxiety has a debilitating effect on learning and achievement (Gaudry & Spielberger, 1971; Tobias, 1980).  What is more, this phenomenon is widespread.  In a study of the learning and study characteristics of business students, Sizoo, Malhotra, and Bearson (2002) found that of the 10 subscales on the Learning & Study Strategies Inventory (LASSI), respondents uniformly scored lower on the Anxiety subscale.  
That is, whether they were male or female, American or international, adult or traditional-aged business students, anxiety was their greatest obstacle to learning.  Furthermore, the anxiety was primarily math-related.  Research has also shown that successful efforts to reduce this anxiety can lead to dramatic improvements in academic success (Bogue, 1993).   According to Baloglu (2002), two major theories address the effects of anxiety: the Cognitive Interference Theory says that high levels of anxiety lead to poor performance (Wine, 1980), while the Deficit Theory assumes the opposite, that poor performance leads to increased anxiety (Tobias, 1986).  Extensive research by Sciutto (1996) supports the Cognitive Interference Theory, stating that “anxiety causes poor performance and not the contrary” (p. 30).   Regardless of the particular theory, it is widely believed that math-related anxiety clearly impedes academic performance (Baloglu, 2002).  Some people, when confronted with a quantitative challenge, figure out how to work their way through it, while other people resort to delay and denial (Scott, 2003).  Bandura (1986) attributed this behavior to the individual’s self-efficacy, the level of confidence individuals have in their ability to accomplish tasks.  Those with high self-efficacy tend to persist in exhibiting new behaviors and, therefore, have greater opportunities to receive feedback about their acquired skills than those with low self-efficacy.  Research shows that self-efficacy is significantly related to academic performance in general (Wood & Locke, 1987), and to mathematics performance in particular (Campbell & Hackett, 1986).  Still, Finance (like mathematics) is a difficult, challenging course of study.  Neither the student nor the discipline would be well served by diluting the course content or by consuming valuable course time with peripheral activities and exercises.  
At the same time, student anxiety impairs performance in the Finance classroom, and neither the student nor the discipline is well served if the content cannot be made more accessible.  Research shows that strategies are available both to enhance self-efficacy (Eisenberger, Conti-D’Antonio, & Bertrando, 2000) and to ameliorate math-related anxiety (Tobias, 1995).  These strategies may be useful for Finance students, faculty, mentors, and advisors.  To explore this issue more closely, the authors developed a measure of Finance anxiety and correlated its results with those of established self-efficacy measures.  The next section describes that process; the results and implications are discussed later in the paper. To measure Finance anxiety, the authors used Alexander and Martray’s (1989) 25-item Revised Mathematics Anxiety Rating Scale (RMARS) as a template.  The RMARS has been widely used in academic research, rigorously tested, and found to be psychometrically sound.  Also, its construct applies to the Finance discipline and is designed to facilitate remedial action.  Self-efficacy was measured by a 17-item general self-efficacy scale (Sherer, Maddux, Mercandante, Prentice-Dunn, Jacobs, & Rogers, 1982).  See the “Finance Anxiety Questionnaire” in Appendix 1.  Participants in this exploratory study were a "convenience sample" of students easily accessible to the authors.  Neither the institutions nor the students were selected randomly; as a consequence, the results cannot be generalized beyond the sample itself.  Still, exploratory research is useful in clarifying and defining the nature of a problem or opportunity (Zikmund, 2006).  The “Finance Anxiety Questionnaire” was distributed to 510 current or recent Finance students.  The questionnaires were passed out toward the end of the semester, but not on the day of an examination (to avoid situational anxiety); 501 usable questionnaires were returned.  Respondents were distributed as shown in Table 1:
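The core computation behind a study like this, scoring each respondent on the two Likert scales and correlating the totals, is simple to sketch. The 25-item and 17-item scale lengths come from the text; summing item responses into a scale score and using a Pearson correlation are conventional choices assumed here, not details reported by the authors.

```python
def scale_score(item_responses):
    """Total score for one respondent: the sum of Likert item responses
    (e.g., 25 anxiety items or 17 self-efficacy items)."""
    return sum(item_responses)

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scale scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)
```

If the Cognitive Interference account holds, anxiety totals should correlate negatively with self-efficacy totals, i.e., `pearson_r` over the respondents' scores would come out below zero.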


The Impact of Workplace Constraints on Organizational Change

Dr. Melissa L. Waite,  SUNY College at Brockport, New York



Theoretical support for relationships between situational workplace constraints, employees’ control over these constraints, perceptions of corporate goal attainment, trust in management, and procedural justice is proposed. Qualitative analysis of ten perceived workplace constraints, based on a taxonomy developed by Peters and colleagues (1985), demonstrates employees’ frustration with perceived workplace constraints. Analysis of survey data revealed no significant differences between employees in change-ready and change-advanced groups in their perceptions of, or control over, workplace constraints. However, perception of goal attainment was significantly related to lower levels of workplace constraints. Likewise, significant relationships were found between constraints and trust in management and perceptions of pay fairness. In a survey of more than 9,000 employees, Watson Wyatt Worldwide found that most employees lack the skills and authority to do their jobs. Employees know the organization’s goals and their own duties, but fail to get the information or performance feedback needed to perform their jobs, and lack the power to make decisions to satisfy customers (Wall Street Journal, 1997).  The findings of this survey are not entirely surprising. Dating back to the 1980s, Peters and colleagues studied situational constraints, defined as "a situational factor which acts as an obstacle to performance by preventing persons from fully utilizing their relevant abilities and motivation" (Peters, O'Connor and Eulberg, 1985: 106). From multiple studies, Peters et al. (1985) contend that 11 situational resource variables impact performance: (1) job-related information, (2) machinery and technology, (3) materials and supplies, (4) budgetary support, (5) required services and help from others, (6) task preparation, (7) time availability, (8) work environment, (9) scheduling of activities, (10) transportation, and (11) job-relevant authority. 
Early research, consisting primarily of laboratory studies and concerned with a direct link between constraints and performance, documented the deterioration of work performance due to situational constraints (Peters, O'Connor & Rudolph, 1980; Peters, Chassie, Lindholm, O'Connor & Kline, 1982). Specifically, researchers found that situational constraints affected subjects' levels of performance, such that highly constrained individuals performed at lower levels and were more negative about the task than less constrained individuals (Peters et al., 1980). The taxonomy was tested on a large sample of convenience store managers, assessing the impact of 22 workplace constraints relevant to the managers' jobs.  Results indicated that employees facing the highest level of situational constraints expressed the most frustration and had the highest level of turnover (O'Connor, Pooyan, Weekley, Peters, Frank and Erenkrantz, 1984). Only a very mild effect on performance was found, with no significant difference between low and medium constraint groups, but significantly better performance for both of these groups compared to highly constrained managers. Their research calls for further examination of the mechanism through which situational constraints impact performance. One potential explanation may be related to employees' perceptions of control over constraints. The impact of emotional intelligence on managing the work environment has recently drawn the attention of organizational scholars. Nikolaou and Tsaousis (2002) found that individuals with high emotional intelligence scores suffered less occupational stress. Similarly, Prati, Douglas, Ferris, Ammeter and Buckley (2003) posited a number of relationships between emotional intelligence and effective leadership and team performance. Thus, emotional intelligence may be a mechanism by which employees are able to gain control over workplace constraints.  
Employees’ perceptions of control over constraints have been the focus of myriad research endeavors. In a study of maintenance and construction road crews, Tesluk and Mathieu (1999) found that crews who employed strategies to manage performance barriers (e.g., equipment breakdown) were more effective and productive, better able to meet work deadlines, sustain higher-quality service, and work as a cohesive crew. Phillips and Freedman (1985) reported that constraints interacted with personal control, reducing motivation for individuals who reported a loss of personal control. Furthermore, they found that individuals with an external locus of control were more affected by constraints than those with an internal locus of control (Phillips & Freedman, 1985). In the face of such workplace pressures, some employees are likely to perceive greater personal control over these constraints. Based on their studies, Peters et al. (1980) conclude that some individuals may persevere in highly constrained environments, while others may develop expectancies that effort does not lead to performance, and thus lower their effort. Judge, Thoresen, Pucik and Welbourne (1999) found that locus of control, generalized self-efficacy and self-esteem were predictive of managers' ability to cope successfully with organizational change. Similar research with sales personnel has tested the relationships of locus of control and self-efficacy with a problem-focused coping style (Srivastava & Sager, 1999), further supporting the evidence that psychological dispositions help people deal with stressors in the workplace. Thus, not only do perceptions of work constraints vary by individual (Carson, Cardy & Dobbins, 1991; O'Connor et al., 1984; Pooyan, O'Connor, Peters, Quick, Jones & Kulisch, 1982), individuals may also be able to exert different levels of control over situations that constrain their performance. 
Simply put: "It is quite possible that it is not always just the person or the system; sometimes it is the particular person in the system" (Carson et al., 1991: 155). It is well known that system constraints are germane to the discussion of change management. For years, proponents of quality management systems have lamented the impact of situational variables on work performance (Deming, 1986; Juran, 1989; Walton, 1986), with Deming (1986) raising the loudest concerns in this discussion, describing work performance as emanating from system and person factors. According to Juran and Deming, only 15 to 20 percent of all work problems in an organization are worker-controllable (Bounds, Dobbins & Fowler, 1995). One of Deming's principles of transformation reads (1986:23-24), " ... the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the workforce."  Moreover, says Deming, systems are set up and controlled by managers, not employees, thereby limiting the influence employees alone can have on performance outcomes.  Scholarly discussion of situational constraints has resurfaced in the last decade or so. Waldman (1994) proposed a model of work performance that explicitly recognizes the influence of system factors. Other scholars have begun to untangle the relationships between workplace constraints and organizational variables. Gatewood and Riordan (1997) found organizational practices, specifically the internal support provided by other departments in an organization, affect employees’ perceptions of the organization’s embracing of change management. The level of internal support, they argue, "function to signal, develop, and reinforce the values, norms, and goals of the organization to employees" (Gatewood & Riordan, 1997: 45).  Irvine, Leatt, Evans and Baker (1999) considered a broader range of workplace constraints, more inclusive of the Peters et al. 
taxonomy, which loaded on two factors they labeled physical resources (e.g., the adequacy of materials, supplies, and equipment) and social encouragement (e.g., support and encouragement from coworkers). Irvine et al. (1999) found no relationship between social encouragement and behavioral outcomes such as organizational citizenship behaviors or quality-related job behaviors. However, physical resources were significantly related to both organizational citizenship behavior and quality behaviors. These two studies provide an interesting and somewhat contradictory picture of the impact of workplace constraints, one demonstrating internal support enhances perceptions of change management, and the other suggesting support and encouragement from coworkers bear no impact on job behaviors. Thus, internal support may affect employees’ perceptions but not behaviors.


Job Satisfaction and Knowledge Management: An Empirical Study of a Taiwanese Public Listed Electric Wire and Cable Group

Dr. Yuan-Duen Lee, Professor, Chang-Jung Christian University, Taiwan

Huan - Ming Chang, Chang-Jung Christian University, Taiwan



Within organizations, most discussion has focused on how to motivate employees and what they contribute in terms of job performance and efficiency, with much less attention paid to effects on knowledge management. This paper studied staff views of the relationship between job satisfaction and knowledge management at a leading electric wire and cable group that has globalized its operations and is a benchmark in Taiwan. Questionnaires were distributed to all 173 staff of the group in June 2006, and the study analyzed the 123 responses received. The study adopted ‘job satisfaction’ and ‘knowledge management’ as its two dimensions, used descriptive statistics and factor analysis to identify the major factors within each dimension, and then used canonical correlation analysis to discover the relationship between the dimensions. The study concludes that: (1) internal recognition within the organization is conducive to knowledge management; (2) self-recognition encourages knowledge transfer and sharing; and (3) knowledge transfer and sharing constitute a competitive advantage for the company’s operations and business. The paper concludes with management implications and suggestions for future study. Generally speaking, job recognition is a factor that drives the knowledge pool of an organization.  The Taiwanese wire and cable industry has existed for more than half a century. Job satisfaction in this business is often overlooked because the industry is technology- and knowledge-intensive rather than labor-intensive. The electric wire and cable business involves a great deal of basic technology and knowledge of materials and electricity. Employees routinely articulate related information and transfer it through speech and writing. The relationship between mentor and protégé is conspicuously important in knowledge transfer and sharing. 
Over the past two decades, rapid changes in globalization and information technology have driven top management teams in this industry to confront a question: what is the effect of job satisfaction on knowledge management? Employees, too, understand the importance of knowledge management in their working environment, but the issue we are interested in is whether their satisfaction on the job affects knowledge management. Top management teams and employees alike recognize the advent of knowledge management. The subject of this study is an electric wire and cable group listed on the Taiwan Stock Exchange, with subsidiaries in China and Vietnam. The study surveyed all general staff across Taiwan, China and Vietnam to understand their viewpoints on the effect of job satisfaction on knowledge management, and it offers suggestions and comments that disseminate the core concepts of knowledge management to traditional industries through an empirical process for their reference.  The major difference between information and knowledge is people (Cagna, 2001). Job satisfaction is a general attitude toward one’s job. One of the major contributors to the study of job satisfaction was Herzberg, who claimed that the factors leading to job satisfaction are separate and distinct from the factors leading to job dissatisfaction (Lyons, Lapin & Young, 2003). The concept of job satisfaction can be attributed to the psychological well-being of workers (Robbins, Peterson, Tedrick, & Carpenter, 2003).  Recognition and perceived organizational practices are the best predictors of overall job satisfaction and satisfaction with the organization (Leung, Sui & Spector, 2000). Interest in knowledge management stems from the widespread recognition that the knowledge possessed by an organization’s employees is a highly valued, intangible, strategic asset (DeTienne, Dyer, Hoopes, & Harries, 2004). 
A business like wire and cable is based on technology and facilities, and its employees rely on IT and experience to run operations. IT solutions can easily be copied by competitors; only a people-focused approach to knowledge management will remain competitive in the long term. The Job Satisfaction Survey (JSS) indices of Lyons, Lapin & Young (2003) are adopted for this study.  Knowledge is generally classified as tacit or explicit. Tacit knowledge is defined as “knowing more than we can tell”: “we recognize the moods of the human face, without being able to tell, except quite vaguely, by what signs we know of it” (Polanyi, 1966). Explicit knowledge is knowledge that can be expressed formally and systematically (Ropo & Parvianinen, 2001). Tacit and explicit knowledge can be expressed in terms of “knowing-how” and “knowing-that”. Knowledge management is aimed at getting people to innovate, to collaborate, and to make good decisions efficiently (Havens & Knapp, 1999). It provides the means for effective retrieval from the ever-growing load of knowledge and information, so that what is necessary at a given point in time can be identified. The main objective of knowledge management is to arrange, orchestrate and organize an environment in which people are invited and facilitated to apply, develop, share, combine and consolidate knowledge (Van der Spek & Kingma, 2000). For businesses and organizations, knowledge management helps discern important opportunities that lead to innovations in products, services, processes and marketing directions. Within an organization, knowledge is often fragmented (Zack, 1999). Knowledge management shifts the role of knowledge from “knowledge is power” to “knowledge is sharing”. On this point, the major difference between information and knowledge is people (Cagna, 2001): information can exist separately from people, but knowledge cannot. 
Knowledge creation, and not only sharing, is a key point of knowledge management. The world of the future will be one of harmonized knowledge creation, storage, access, and application (LaDuke, 2004). The major purpose of knowledge management is to create and accumulate knowledge: members of an organization exchange knowledge and share proposals and ideas, and the ability to transfer and share knowledge is a core competence of organizations. For the employees in this study, their ideas about information and knowledge sharing, transfer and creation are a vital element of competitive advantage. This study adopts these viewpoints to approach knowledge creation and sharing in the organization.  For standardization and consistency of the research objectives, this study drew on the full population of staff and sent out 170 questionnaires. Of these, 127 were returned and 123 were valid and available for further analysis. Table 1 provides data on the sample. Demographic data revealed the following: 77.2% of the staff have been with the company for more than 6 years; 67.5% are not supervisors or executive officers; 80.5% of the respondents are graduates of a university or college; 62.6% work in the sales, administration and information technology departments; 67.5% are male; and 85.0% are aged 31 to 50. The questionnaire required responses on a 5-point Likert-type scale in which ‘1’ represented ‘strongly disagree’ and ‘5’ represented ‘strongly agree’. Descriptive statistics and factor analysis were used to establish the major factors of the two dimensions (‘job satisfaction’ and ‘knowledge management’), and canonical correlation analysis was used to discover the relationships between these dimensions. 
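The analytic pipeline described above (factor scores for each dimension, followed by a canonical correlation between the two sets) can be sketched in a few lines of code. This is a minimal illustration on synthetic data: the respondent count matches the study (123), but the four job-satisfaction factors, the three knowledge-management factors, and all generated values are assumptions for demonstration only, not the study's data.

```python
import numpy as np

# Synthetic stand-in data (NOT the study's responses): 123 respondents,
# 4 hypothetical job-satisfaction factor scores (X) and
# 3 hypothetical knowledge-management factor scores (Y),
# linked through a shared latent trait so there is a correlation to find.
rng = np.random.default_rng(1)
shared = rng.normal(size=(123, 1))
X = shared @ rng.normal(size=(1, 4)) + rng.normal(scale=0.8, size=(123, 4))
Y = shared @ rng.normal(size=(1, 3)) + rng.normal(scale=0.8, size=(123, 3))

def canonical_correlations(X, Y):
    """Canonical correlations between two variable sets (whitened cross-covariance SVD)."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    Sxx, Syy, Sxy = Xc.T @ Xc, Yc.T @ Yc, Xc.T @ Yc
    Lx, Ly = np.linalg.cholesky(Sxx), np.linalg.cholesky(Syy)
    # Whiten each block; the singular values of the whitened cross-block
    # matrix are exactly the canonical correlations.
    M = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    return np.linalg.svd(M, compute_uv=False)

corrs = canonical_correlations(X, Y)
print("canonical correlations:", np.round(corrs, 2))
```

Because both sets load on the same latent trait here, the first canonical correlation comes out high while the remaining ones stay comparatively small; in the study itself, coefficients of this kind are what link the job-satisfaction factors to the knowledge-management factors.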
As for the reliability of the dimensions, Table 2 reports Cronbach’s α for the full sample. It is apparent that each of the two dimensions has good reliability.
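Cronbach's α itself is straightforward to compute from a response matrix, as the short sketch below shows; the six items and the randomly generated 5-point responses are hypothetical placeholders, not the instrument or data behind Table 2.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each individual item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (NOT the study's data): 123 respondents,
# 6 items driven by one shared trait so the scale is internally consistent.
rng = np.random.default_rng(0)
trait = rng.normal(loc=3.0, scale=1.0, size=(123, 1))
scores = np.clip(np.rint(trait + rng.normal(scale=0.7, size=(123, 6))), 1, 5)
alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Values above roughly 0.7 are conventionally read as acceptable reliability, which is the standard implied by the "good reliability" claim above.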


Characteristics of Professional Services and Managerial Approaches for Achieving Quality Excellence

Dr. Jukka Ojasalo, Professor, Laurea University of Applied Sciences, Espoo, Finland



The purpose of this article is to identify the special characteristics of professional services and to suggest methods for managing them in order to achieve quality excellence. The research methodology is based on an extensive literature analysis. Ten distinctive characteristics of professional services are identified, and a framework for managing these characteristics to achieve quality excellence in professional services is suggested.  Professional service quality is, above all, an approach to everything a professional service firm does with or for the client. Whatever a professional service firm does, or omits to do, has a quality dimension and a direct or indirect impact on the quality of the services provided to clients, as well as on client satisfaction (Kubr, 2002, 723-4). Thus, quality management in professional services includes the management of every aspect of professional service that directly or indirectly influences customer satisfaction. Professional services have certain special characteristics, and to achieve quality excellence these characteristics in particular should be considered in management. This article identifies the special characteristics of professional services and discusses the managerial approaches that relate to these characteristics and are relevant for achieving quality excellence in professional service.  This section discusses the special characteristics often associated with professional services in the literature. The literature does not suggest any exact or absolute definition of professional services that would draw a sharp line between them and other services. This conclusion is supported by Gummesson (1977), who refers to the difficulty of defining professional services and to the inconsistency among the various definitions. However, certain characteristics are often associated with these services in the literature (e.g. 
Ojasalo, 1999 and 2004), and these are summarized in Table 1. The literature suggests that professional services are provided by qualified persons with a substantial fund of specific knowledge. Professionals’ qualifications and high levels of knowledge are often based on education, experience and special skills, and their knowledge is often concentrated within a narrow area. Wilson (1972, 4), for instance, talks about intellectual bias, referring to “intellectual discipline capable of formulation on theoretical, if not academic, lines, requiring a good educational background and tested by examination”. Sarkar and Saleh (1974) refer to competence factors, Gummesson (1978) points out that professional services are provided by qualified personnel, and Gardner (1986) sees them as being provided by qualified persons known for specific skills. Payne (1986, 23) concludes that “Consultants allow managers to use highly skilled and high quality human resources from outside their organization to focus with intensity on their organizational problems”. Haywood-Farmer and Stuart (1988) argue that some standard of intellectual training is an essential element of professionalism.  The problem-solving approach has been found to be an essential characteristic of professional services. Recognizing the fundamental problem, designing the solution and implementing it are integrally associated with this approach. According to Wittreich (1966, 128), “A professional service must come directly to grips with a fundamental problem of the business purchasing that service”. Gummesson (1977; 1979b) also refers to the essential role of problem solving, and points out the advisory nature of professional services. Hill, Garner, and Hanna (1989) emphasize the importance of being interested in the customer’s problem, and of offering practical solutions. 
Mayère (1991, 61) considers that intellectual services “are based on a joint definition of the problem in question and the methods of solving it, and they basically consist of transformation of the methods of reasoning and know-how employed by the customer company, working in interaction with the service company”. Day and Barksdale (1992) point out that in professional services the marketer should take a problem-solving approach because it increases the probability of client satisfaction. One characteristic which is associated with professional services is that service providers typically operate in terms of assignments requested by the customer. This means that assignments are basic elements of the customer relationships. According to Gummesson (1978), a professional service is an assignment given by the buyer to the seller. Equally, according to Gardner (1986), professional services are centered on an assignment requested by the client.  A code of ethics is centrally connected to professional services. This regulates the service provider’s line of action. It may be official and written, or de facto and based on tradition. According to Greenwood (1957), professions are characterized by ethical codes. Similarly, Wilson (1972; see also Bennion 1969) considers the code of conduct or code of professional ethics one of the attributes of professionalism. According to Gummesson (1981a, 28), “The professionals involved have a common identity, as for example, management consultants or lawyers, and such professionals are regulated by tradition and a code of ethics”. Bloom (1984) sees ethical and legal constraints as part of professional services, and similarly, Maanen and Stephen (1984) suggest that, in the case of professionals, service orientation is articulated by a code of ethics.  Professionals in a certain area often form a professional association. Such associations typically certify the practitioners, supervise them and set the ethical rules. 
Wilson (1972), for example, refers to representative institutes which represent the members of the profession, particularly those in private practice, and which have the role of safeguarding and developing its expertise and standards. Bloom (1984) concludes that national, state and local professional societies, certification boards, government agencies and other bodies enforce the ethical rules. Maanen and Stephen (1984) also suggest that the formation of a professional association which certifies practitioners is a characteristic of professional service companies.  Societal acceptance is referred to as one of the basic characteristics of professional services. Swartz and Brown (1991) conclude that, because of a professional’s many years of training and special expertise, and an often favorable supply-demand situation, most clients have a tendency to perceive him or her as high in social standing. Maanen and Stephen (1984) similarly state that societal recognition of the status of the business is associated with professional services. Furthermore, Kleingartner (1967) suggests that, to be a profession, an occupation must be accepted as such by society, both subjectively and objectively. Many professional services require confidentiality. Customers’ problems are sometimes so delicate that they cannot be discussed with anyone other than an external professional who is governed by a code of ethics. According to Gummesson (1981a, 32), “The customer of the professional service enterprise is buying confidence”. Similarly, Wheiler (1987) concludes that most professionals are required to inspire a great deal of confidence and trust.  The literature suggests that professional services marketing has a special nature, and it is clearly different from the marketing of other kinds of services and goods. 
It has been argued that professional service providers have not historically perceived themselves to be sales- or market-oriented (Hill, Garner, and Hanna 1989; Hill and Neeley 1989). According to Gummesson (1981a, 33), “Professional service firms have, as compared to other types of firms, an obstacle to efficient marketing; in some professional groups, marketing is actively resisted; it is looked down upon and considered below the dignity of the professional man”. According to the literature, advertising in particular has low effectiveness and appreciation in the marketing of professional services. According to Harris (1981, 88), “…the role of advertising in the marketing and business strategies of professional service firms has not been widely accepted”. Hart, Schlesinger, and Maher (1992) conclude that advertising by professional service firms has sometimes even been considered unethical.


Feasibility of Taiwan’s Establishing an International Board Market

Dr. Chuang-Yuang Lin, National Taipei University

Dr. Jia-Jhen Liu, Taiwan Hospitality & Tourism College

Shang-Chieh Yang, National Taipei University



In hopes of pushing Taiwan’s capital market into the international arena and carrying out the policy of encouraging returning Taiwanese businesses to pursue main-board listings, the Executive Yuan’s Financial Supervisory Commission proposed the idea of an “international board” in 2005. The difference between an international board and the Taiwan Depository Receipts (TDRs) that some foreign companies have issued in Taiwan is that the former is traded in U.S. dollars. Although the topic of luring returning Taiwanese businesses onto the main board has been widely discussed in articles and academic papers, most of the literature has emphasized capital markets and regulations or discussed the matter through case studies, and has lacked a practical approach. This paper reports on in-depth interviews with foreign companies that have issued TDRs in Taiwan in order to analyze the feasibility of, and provide recommendations for, the establishment of the “international board.” It also explores the problems that are likely to occur with such a board by identifying the companies’ motivations regarding issuance, their performance, and their experience. As foreign companies’ TDRs (Taiwan Depository Receipts) have been issued in Taiwan, how to push Taiwan’s capital market into the international arena has become a hot issue for government authorities. The Executive Yuan’s Financial Supervisory Commission proposed the idea of an “international board” in 2005. The topic of luring Taiwanese businesses onto the main board has been widely discussed in articles and academic papers; however, most of the literature has emphasized capital markets and regulations and lacks a practical approach. This paper compares the advantages and disadvantages of the Taiwan and Hong Kong markets and evaluates the idea of an international board as a platform with which to pull investment capital back to Taiwan.  The primary method used in this project is in-depth interviews with directors of TDR companies. 
The paper also provides analysis and recommendations on the establishment of the international board and explores the problems that are likely to occur by identifying the companies’ motivations regarding issuance, their performance, and their experience. The current situation regarding investment flow between Taiwan and China is that real (physical) investment flows one way, as shown in Figure 1. Taiwan’s corporations have invested in China for quite a long time, but because of the political situation, Taiwan’s official policy forbids corporations from mainland China to invest in the Taiwan market, so the flow of Taiwan’s investment into China is like a river of no return. The flow of investment between Hong Kong and China is illustrated in Figure 2. Fifteen years ago (1990), Hong Kong set up an International Board Market (IBM), which is when these financial flows emerged.  The ideal flow of investment in the future is shown in Figure 3: Taiwan sets up an International Board Market (IBM) to attract companies that already have branches or subsidiaries in China to IPO and then trade their stocks on Taiwan’s IBM.  Taiwan 50 constituent stocks may be sold short below the last closing price.  There was US$0.85 trillion in margin trading in 1997, but this had dropped to US$0.27 trillion by 2005. Short-selling dropped from US$0.03 trillion in 1998 to US$0.028 trillion in 2005. The amount in individual trades was US$0.82 trillion in 2005, down from US$2.09 trillion in 2001. A comparison between the Taiwan market and the Hong Kong market is shown in Table 1.  In 2006, only five companies had issued TDRs in Taiwan, which the government authorities may take as motivation to implement an international board for TDRs. The objective of this paper is to understand the practical operating experience of TDR companies, so as to evaluate the feasibility of establishing an international board. 
Directors of three of the five TDR companies licensed in Taiwan were interviewed using in-depth interview techniques.  At present, the TDR companies already approved in Taiwan are ASE Singapore Pte., Ltd., Eastern Asia Technology, Ltd. of Singapore, MEDTECS International Corporation of Singapore, Mustek Company, Ltd. of South Africa, and Cal-Comp Electronics & Communication of Thailand.  ASE Singapore Pte., Ltd. was established on Dec. 1, 1995, was listed on the NASDAQ in the United States in June 1996, and issued TDRs on Jan. 8, 1998. Its primary business line is semiconductor testing. The company’s total distribution is US$100,000 with 120,000 units issued as TDRs.  Eastern Asia Technology Limited was established in Singapore on Dec. 17, 1997. It was listed on the Singapore Stock Exchange on Nov. 12, 1998, and issued TDRs on April 25, 2001. Its main business lines are loudspeaker boxes, monomer, and systems, as well as electronic and digital audio products. The company’s total distribution is US$17,777.78 with 20,000 units issued as TDRs. MEDTECS International Corporation, Ltd. was established on Nov. 26, 1997. It was listed on the Singapore Stock Exchange in Oct. 1999, and issued TDRs on Dec. 13, 2002. As a medical-product selling agency, its primary business line is products consumed by the medical industry and the medical back office. The company’s total distribution is US$12,222.22 with 22,000 units issued as TDRs.  MUSTEK Company, Ltd. was established on April 28, 1987. It was listed on the Johannesburg Stock Exchange in South Africa on April 3, 1997, and issued TDRs on Jan. 20, 2003. Its main business lines are personal computers, notebooks, computer services, printers and facsimile machines, Internet products and other related peripheral equipment. The company’s total distribution is US$11,111.11 with 20,000 units issued as TDRs.  Cal-Comp Electronics was established on Dec. 02, 1989, and was listed on the Thailand Stock Exchange on Sept. 22, 2003. 
Its main business lines are home computers, personal computers, microcomputers, servers and peripheral equipment; telecommunications, electronic installation, digital dissemination, and Web meeting server equipment; LCD monitors; and multi-media players. The company’s total distribution is US$37,500 with 30,000 units issued as TDRs.  To fully understand the problems TDR companies face in the listing process and the experience of overseas companies that have gone public in Taiwan, we interviewed executive directors from three of the five TDR companies. For these in-depth interviews with the directors, all of whom are upper managers in companies that have issued TDRs in Taiwan, we visited each company and spoke in person; all of the data collected are therefore first-hand material for research. We asked questions related to their experiences of going public, the problems inherent in being listed in Taiwan, and their suggestions.


Extolling the Virtues of Language Immersion in Whole-Family Camps

Dr. Brendan T. Chen, Asia University, Wufeng, Taiwan

Dr. Wenchi Vivian Wu, Chien-Kuo Technology University, Changhua, Taiwan



This study investigated the needs and characteristics of potential campers at whole-family summer language programs using phenomenological methodology. Ten subjects were interviewed to find the preferences of business- and recreation-oriented learners regarding languages, program length and location, cost, and learning styles. The researchers also investigated the skills potential campers expect to learn, their reasons for attending, and their desired recreational activities. The review of related literature provides an overview of immersion camps, family camps, and content-based teaching, and a model is described for creating whole-family vacation programming at American-style outdoor lodging camps and/or adding recreational activities at culture- and language-immersion camps.  Especially in today’s society, finding a way to successfully spend time relaxing together as a family is important. No matter the country, families in industrialized nations increasingly have trouble finding time to eat a meal together, much less spend quality time relaxing. In addition, most working adults are given only a limited amount of vacation time; this varies from Europeans’ and South Americans’ four weeks to Americans’ and Asians’ one to two weeks of paid time off. As a result, families have to squeeze as much into their vacations as possible.  Most parents want their trips to give their children the best experiences and learning adventures possible in an affordable and economical way. Globalization has created an expanding market for tourism that combines learning about other cultures and languages with adventure. As the world becomes more connected through technology and the media, people all over the world see the advantages of a greater understanding of other cultures’ food, traditions, holidays, and languages.  Parents also want to broaden their children’s and their own perspectives on the global society by learning about other cultures. 
As workplaces all over the world become more culturally, ethnically, and linguistically diverse, parents want to give themselves and their children every advantage. Globalization has made traveling abroad more comfortable for many families who want to gain more in-depth experience of a second language and culture. However, the limited time afforded for vacationing and tight budgets mean that families who want to learn actively about another culture have to combine their recreational travel with their desire to learn. This provides a basis for families who want to vacation either abroad or in their own country and to experience a foreign language and culture in the most efficient and frugal way possible. The purpose of the study was to determine the need for, and the best methods of offering and planning, summer language-immersion programs in a traditional camp setting. The Grand Tour question for this study was the following, with subquestions below: What are the general perceptions and needs of Americans and Asians seeking whole-family educational recreation opportunities?  1. What are learners’ expectations regarding languages, program length and location, cost, and learning styles? 2. What skills and knowledge would learners expect to leave the program with, what are their reasons for attending, and what are their desired recreational activities?  This study gives recreational and educational programmers direction in developing whole-family vacation programming at American-style outdoor and lodging camps and/or creating recreational activities at full-immersion culture and language camps. The ideas proposed, as recommended by the qualitative research, incorporate a variety of immersion levels and techniques, including participation in traditional outdoor recreational activities, in the target language and culture. 
This approach draws on content-based language learning theories, found to be among the most successful for second-language acquisition and retention. In content-based language learning, the subject matter and the meaning behind the communication are the focus, allowing for learning across a variety of learner backgrounds and interests. The family-oriented recreational camp model takes advantage of this to create genuine, real-world linguistic interaction.  Summer language-immersion programs designed for families can provide quality time together; meaningful learning; broadened perspectives; fun, outdoor, recreational experiences; and good memories. Family holiday camps and programs that introduce language and cultural learning give children and parents an all-in-one experience. Based on the traditional American model of summer youth residential camps, family summer language-immersion programs can be designed to fit a variety of locales, participants, languages, cultures, and activities. Since the 1960s, youth language- and culture-immersion camps have grown in popularity in the United States, but family vacation camps have fallen out of favor. Conversely, Canada and Great Britain have seen a rebirth of family vacations at American-model summer camps since the 1990s while offering reputable language camps aimed at integrating these ethnically diverse countries (De Vita, 2004), and a few innovative programs have begun to combine the two concepts in recent years.  The American model of the summer camp experience is familiar to most watchers of American movies. Children ages 8 to 18 are packed off to a residential, rustic camp for up to three months, engaging in structured outdoor recreation and arts activities under counselors’ supervision. These traditionally include campfires, swimming, crafts, canoeing, team sports, skits, sing-alongs, camping, and hiking. 
In addition, for a few weeks of the year, many traditional residential youth camps offer the same curriculum of activities for entire families to attend together. In the past few decades, residential and day summer camps with a greater focus on arts, music, theatre, writing, specific sports, and foreign languages have become increasingly popular alternatives for children less interested in outdoor recreation.  Worldwide, youth foreign-language immersion camps have become one of the most lucrative business sectors in tourism. Crompton (1979) stated that for a majority of parents investigating vacation destinations, the impact on their children’s education was an important consideration and, for some, the primary factor. There are two major forms of immersion camp. The first places youth in a re-created cultural environment where the target language and culture are practiced exclusively, in intensive, classroom-based residential programs in the learners’ home country. The best-known example of this is Concordia College of Minnesota’s Language Villages, where American students live in authentic-looking housing based on the architecture of their target country, eat traditional foods, and participate in culturally based recreation (Concordia, 2005). The second form of immersion camp is a more recreational and authentic residential camp environment located in the country of the target language. This type of camp draws students from abroad and is usually attended by more affluent participants. Some programs take the model a step further; an example is Croatia’s Camp California, which attracts youth from 20 countries to an English-as-a-Second-Language camp designed to look like southern California (Camp California, 2005).


E-Business Strategies and Models: An Exploratory Study in China’s Securities Industry

Dr. Dan-ming Lin, Shantou University, Shantou, China

Dr. Zongling Xu, Shantou University, Shantou, China

Bin Wang, Sun Yat-Sen (Zhongshan) University, Guangzhou, China



This paper examines strategic issues related to the emerging concept of the business model in the context of China’s securities industry. Through a brief literature review, theoretical connections between strategy and business model are proposed, and the key components of the business model are identified. A research hypothesis concerning the relationship between e-business strategy and e-business model is then developed by refining the relevant concepts. Accordingly, an empirical analysis is conducted, based on an online survey of 57 securities companies certified for online securities businesses in China. Overall, the research findings indicate tentative support for the research hypothesis. Theoretical and practical implications are also drawn.  The internet age has witnessed the emergence and increasingly broad diffusion of e-business over the last fifteen years. Among the many industrial sectors equipped with advanced information technologies (IT), the securities industry, with its early adoption of electronic networks and the digital nature of its transactions, has played a pioneering role in promoting e-business applications. A growing amount of securities trading is now conducted over the internet, mobile phones and cable TV. It is estimated that, from 1996 to 2001, the proportion of online securities trading volume increased from 7% to more than 20% in the United States (U.S. Securities and Exchange Commission, 2001). A similar trend has taken place in the Asia Pacific region. Taking Korea as an example, the proportion of off-site securities trading increased from virtually zero in 1998 to more than 50% in 2001 (OECD, 2001). In short, data on online securities trading in various countries indicate the significance of e-business channels in the future development of the securities industry.  In China, online securities businesses are also gaining momentum. 
A multimedia, public online trading system was launched at a trading branch in western Guangdong Province by Zhongxin Securities Co. Ltd. in March 1997. The event is widely acknowledged as the starting point of online securities trading in China. Since then, e-business applications have been increasingly adopted by domestic securities companies. According to statistics provided by the China Securities Regulatory Commission (CSRC), online securities trading accounted for 18.54% of total trading volume in 2004, while the corresponding ratios in the previous three years were 4.38%, 8.99% and 14.90%, respectively. Starting in 2001, the CSRC has conducted periodic reviews to certify domestic securities companies for online securities businesses, with 89 companies authorized so far. It can thus be expected that e-business will maintain strong growth in the near future. More detailed information is summarized in Table 1. The adoption of e-business applications is always accompanied by strategic change. From a strategic management point of view, the viability and sustainability of e-business strategies depend largely on their methods of value generation, i.e. their business models. This is especially worth noting in the setting of securities services, given the vast opportunities (and risks) accompanying the accelerated offline-to-online value migration of recent years (Barber and Odean, 2001). In-depth studies of business models are therefore of great value to the future development of e-business in the securities industry. To date, however, little research attention has been paid to such studies, and this paper aims to fill this knowledge gap.  
The remainder of this paper is organized as follows: In section 2 we undertake a comprehensive literature review to establish a theoretical background for studying business models and sort out the key components of e-business models with special reference to the securities industry in China; the research hypothesis is formulated accordingly. Section 3 deals with research methods, and section 4 presents an empirical analysis revolving around the research hypothesis, based on an online survey of securities companies operating in China. Then, in section 5, we elicit further implications from the research findings and comment on the limitations of the study. The business model has been a frequently quoted concept in recent years. In the United States, a number of noted business models, such as Amazon’s One-click Shopping Process Model and Priceline’s Reverse Auction Model, were even approved as patents, drawing intensive attention from both the academic community and business circles (Dickinson, 2000).  To date, most discussions of business models are conducted with reference to e-business applications in the emerging internet economy. The internet economy rests on four layers: infrastructure, application, intermediary, and business development (Barna et al, 1999). Extensive interactions with customers taking place on the intermediary and business development layers provide significant opportunities for creating innovative business models, which represent unique combinations of the three streams of value, income, and logistics (Mahadevan, 2000). In a broader sense, a business model can be conceptualized as an architecture comprising streams of products, services, and information; it also expresses, explicitly or implicitly, the roles of key players and their sources of income (Timmers, 1998). Strategic management scholars, from an academic point of view, describe strategy and business model as ‘closely related’ concepts. 
Specifically, strategy deals with a company’s competitive initiatives and business approaches, while the business model concerns whether the revenues and costs flowing from the strategy demonstrate that the business can be amply profitable and viable (Thompson and Strickland, 2003: 3). Similarly, Applegate et al (2003: 47) defined the business model as a combination in which an organization’s business concept defines its strategy, its capabilities define the resources needed to execute the strategy, and a high-performing organization returns value to all stakeholders. According to Afuah and Tucci (2001: 3-4), the relationship between strategy and business model can be stated explicitly, since the business model signals the approach by which a firm builds its resources to offer customers better value than its competitors and to make profits doing so; thus, the business model is what enables a firm to have a sustainable competitive advantage and to perform better than its rivals in the long term. In short, the business model has become a variable of strategic significance in understanding modern business operations, especially in the emerging internet economy. Although authors place various emphases in their discussions, it is obvious that the essence of the business model lies in its linkage with strategies to ensure sustainable operations and long-term profitability in competitive marketplaces. Put differently, a company’s business model is management’s model of how the strategies it pursues will allow the company to gain a competitive advantage and achieve superior profitability (Hill and Jones, 2004: 4). Therefore, the coherence between strategy and business model becomes a central concern.


How to Improve China’s Enterprise Internal Control System: Based on the Perspective of Corporate Governance

Dr. Jianguo Yuan, Huazhong University of Science and Technology, P.R. China

Chunsheng Yuan, Huazhong University of Science and Technology, P.R. China



Analyzing the relationship between corporate governance and internal control systems, this paper compares the effect of corporate governance on the operational efficiency of the enterprise with that of an internal control system. After a comparative analysis of the effects of corporate governance and internal control, and considering two factors, namely managers’ choices under asymmetric information and their path dependence when making decisions, we find that the primary cause of internal control invalidation is weak corporate governance. We maintain that, in order to improve the validity of enterprise internal control systems, corporations should strengthen the supervisory system of corporate governance to shield top managers from the temptation of moral hazard. In recent years, a series of important financial report fraud cases involving companies such as Enron Corporation, Global Crossing, and WorldCom has shaken investors’ confidence in the stock market, bringing substantial losses to investors. In the U.S., financial statement fraud (FSF) has cost market participants, including investors, creditors, pensioners, and employees, more than $500 billion during the past several years (Rezaee, 2005). In China, the circulation market value of A-shares fell from RMB 1,736.281 billion at the end of June 2001 to RMB 1,179.8 billion by July 1, 2005 (Wei, 2006). In recent years in China, top managers were responsible in almost all serious financial report fraud cases among listed companies.  What causes this phenomenon? Are companies’ internal control systems disabled, or is corporate governance so weak that it disables the function of internal control? To answer these questions, internal control research has gradually focused on the relationship and interaction mechanism between internal control systems and corporate governance. 
Investors’ premonitions about accounting problems may be the main reason for the stock market slump that followed these scandals, but weak corporate governance and the moral lapses of top managers may be the leading reasons for widespread failures in financial reporting. In research on the financial fraud of listed companies, much overseas empirical work has indicated that corporate governance has a significant influence on financial report fraud (e.g., Dennis Caplan, 1999; Beasley et al, 1996, 2000).  By analyzing the relationship between internal controls and the detection of management fraud, Dennis Caplan (1999) derived three main results: first, when managers with strong incentives to commit fraud prefer weak controls, the choice of controls is in line with the risk of fraud; second, auditors have incentives to make control recommendations that are not cost-beneficial to honest managers, and both results hold even when managers can override the controls; third, when internal controls are weak, auditors exert less effort investigating fraud, conditional on the audit evidence. As long as routine audit procedures do not distinguish between errors and fraud, a weak control system hides fraud: the auditor expects to find numerous errors, so the additional impact of fraud on the audit evidence, and the fraud itself, may go unnoticed. A related result is that, when some managers with strong incentives to commit fraud prefer weak controls, the audit failure rate is higher when internal controls are weak.  Beasley (1996) analyzed the relationship between the composition of the board of directors and financial statement fraud and found that no-fraud firms have boards with significantly higher percentages of outside members than do fraud firms; however, the presence of an audit committee does not significantly affect the likelihood of financial statement fraud. 
In addition, as outside directors’ ownership in the firm and their tenure on the board increase, and as the number of outside directorships they hold in other firms decreases, the likelihood of financial statement fraud decreases. Beasley et al (2000) provided insight into financial statement fraud instances investigated from the late 1980s through the 1990s in three volatile industries (technology, health care, and financial services) and highlighted important corporate governance differences between fraud companies and no-fraud benchmarks on an industry-by-industry basis. For each of these three industries, the sample fraud companies had very weak governance mechanisms relative to the no-fraud industry benchmarks: fraud companies in the technology and financial-services industries less often had audit committees, fraud companies in all three industries had less independent audit committees and less independent boards, fraud companies in the technology and health-care industries held fewer audit committee meetings, and fraud companies in all three industries had less internal audit support. In China, various factors contribute to financial fraud. One viewpoint holds that the nonstandard capital market is the primary factor. Another argues that the primary factor is an incomplete internal control system, because it cannot prevent top managers from committing fraud (e.g., Den, 2005; Wu, Chen, & Shao, 2000). Some scholars suggest that both corporate governance and internal control deserve blame (e.g., Shen, 2005). So is it corporate governance or the internal control system that matters more in determining a firm’s level of financial fraud? What causes internal control failures, and how can China’s enterprise internal control system be improved? 
By comparative analysis of corporate governance and internal controls, we find that the primary cause of disabled internal control systems in China is weak corporate governance. Therefore, we should strengthen the internal control systems of China’s firms by improving their corporate governance. Corporate governance mechanisms are used to minimize the agency costs of top managers and to maximize corporate operational efficiency. Internal control systems consist of the measures through which managers can ensure the realization of operational efficiency. Obviously, both corporate governance and internal control systems affect the operational efficiency of firms, so we set an effective-operation probability function P = f(g, c) as follows. P denotes the probability that a corporation can achieve effective operation. g denotes the situation of corporate governance, where g = 1 implies the best corporate governance and g = 0 implies the worst corporate governance. c denotes the situation of the internal control system, where c = 1 implies the best internal control and c = 0 implies the worst internal control. f(g, c) denotes the situation of the firm’s operation, which is affected by corporate governance and the internal control system; f(1, 1) = 1 implies the best operational efficiency, when g = 1 and c = 1, and f(0, 0) = 0 implies the worst operational efficiency, when g = 0 and c = 0. ∂f/∂g > 0 and ∂f/∂c > 0 imply that the operational efficiency of the firm will improve with improvements in corporate governance or the internal control system. We can also set single-factor probability functions P_G = f_G(g) and P_C = f_C(c), which denote the probability that the effective operation of a corporation is affected only by corporate governance or only by the internal control system. When g = 1, governance is effective, and the probability of effective operation is f_G(1); when g = 0, governance is ineffective, and the probability is f_G(0). When c = 1, internal control is effective, and the probability is f_C(1); when c = 0, internal control is invalid, and the probability is f_C(0). 
Thus, we can analyze the firm’s operational efficiency under the four mutually matching states of governance and internal control.
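The probability model described above can be sketched numerically. The multiplicative form used below is an illustrative assumption, not the paper's specification; the paper only requires the probability to increase with better governance and better internal control, reaching 1 in the best state and 0 in the worst.

```python
# Illustrative sketch of the effective-operation probability model.
# The multiplicative form f(g, c) = g * c is an assumption for
# demonstration only; the paper requires just that the probability
# increase in both arguments, with f(1, 1) = 1 and f(0, 0) = 0.

def effective_operation_probability(g: float, c: float) -> float:
    """g: corporate governance quality in [0, 1];
    c: internal control quality in [0, 1]."""
    if not (0.0 <= g <= 1.0 and 0.0 <= c <= 1.0):
        raise ValueError("g and c must lie in [0, 1]")
    return g * c

# The four mutually matching states analyzed in the paper:
states = [
    ("strong governance, strong control", 1.0, 1.0),
    ("strong governance, weak control",   1.0, 0.0),
    ("weak governance, strong control",   0.0, 1.0),
    ("weak governance, weak control",     0.0, 0.0),
]
for label, g, c in states:
    print(f"{label}: P = {effective_operation_probability(g, c):.1f}")
```

Under any such increasing form, the two mixed states fall between the extremes, which is what makes it possible to ask whether weak governance or weak internal control drags operational efficiency down more.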


Test Conflict of Interest in Analysts’ Recommendations from a New Perspective

Luke Lin, National Sun Yat-sen University, Taiwan

Dr. Chau-Jung Kuo, National Sun Yat-sen University, Taiwan

Dr. David So-De Shyu, National Sun Yat-sen University, Taiwan



A substantial literature highlights potential conflicts of interest in the production of investment research arising from the competing roles analysts play in financial markets. The pressures of investment banking business and brokerage commissions may give analysts incentives to present positively biased opinions. This paper investigates stock price reactions to analysts’ information reported in a Taiwanese financial newspaper. There are two departments within Taiwanese brokerage firms: the research department is motivated to maximize commissions by providing timely, high-quality information for clients, while the dealer department trades assets for the firm’s own account. These two objectives may generate an alternative conflict. We directly quantify and examine whether the two departments have a conflict of interest with each other, using both the ratio of forecast error and the change in selling volume. Focusing on these two ratios has the additional advantage that the investigation is not subject to conjectural bias from analysts’ characteristics, and this approach has not been carried out before. The result indicates that a potential conflict occurs between brokerage firms’ research departments and dealer departments.  In the real world, there is no doubt that individual investors are weak in cost-searching and information-analysis skills. With the recent growth of mass media, institutionally and professionally recommended information is distributed to people virtually free. But do these recommendations really give investors useful information? Can investors make money by complying with them? Are there potential conflicts behind analysts’ recommendations? The efficient markets hypothesis (EMH) asserts that all prices fully reflect all relevant information. However, if prices reflect all information that analysts are examining, why then are investors willing to pay for their services? 
However much scholars question the profitability of financial analysts’ stock return forecasts, we still see brokerage firms devoting extensive resources to predicting firms’ expected earnings as a basis for their stock recommendations. There are several possible explanations for the existence of the security analysis industry. One may be that analysts’ recommendations are based on inside information not yet revealed in stock prices. The second may be that investors are irrational: they simply weigh whether the cost of acquiring information individually is greater than the cost of analysts’ recommendations. Finally, individual investors may lack confidence and assume that it is imperative to hold an account with a research-oriented brokerage firm. As a result, everyone has the false impression that the investor who utilizes professional information for security analysis will outperform others who use less information (Michaely and Womack, 1999).  According to a report of the technical committee of the International Organization of Securities Commissions, analysts are generally classified into one of three broad categories depending on the nature of their employment: sell-side, buy-side, and independent. Sell-side analysts, the focus of this paper, are typically employed in the research departments of full-service investment firms. They typically publish research reports on the securities of the companies or industries they cover. These reports are distributed to customers and often include a specific recommendation – such as a recommendation to buy, hold, or sell a certain security – and the analyst’s expectation of the future price performance of the security (a price target). A number of potential conflicts of interest faced by sell-side analysts in the Jurisdictions have been identified. 
These conflicts generally arise from the various commercial activities pursued by full-service investment firms, analyst compensation arrangements, financial interests in covered companies held by analysts and their firms, and the reporting relationships within full-service investment firms.  Investigations into conflict of interest in analysts’ forecasts mostly focus on income from investment banking business or brokerage commissions, or even on how analysts get friendly with firm management to acquire inside information (Ljungqvist et al., 2005). This paper, however, tests conflict of interest in analysts’ recommendations from a new perspective. There are two departments within Taiwanese brokerage firms: the research department is motivated to maximize commissions by providing timely, high-quality information for clients, while the dealer department trades assets for the firm’s own account. These two objectives may generate an alternative conflict. For example, the research department may recommend that clients buy a particular stock while the dealer department sells this stock from its own account, and the stock price in fact declines later.  This study investigates analysts’ information reported in a Taiwanese financial newspaper and finds positive abnormal returns for stock recommendations before and on the event day, indicating that the information content of analysts’ recommendations does exist. When transaction costs are considered, however, the abnormal returns become insignificant. On the other hand, abnormal returns after the event day are significantly negative, and the loss is enlarged with the holding period.  In the Taiwan stock market, it is often seen that all classes of equities rise in turn, and electronic-communication (EC) stocks account for almost 70% of total trading volume. Such transaction characteristics are the main difference between Taiwan and foreign markets. 
However, previous studies typically examine the effect of analysts’ recommendations on all stocks together, neglecting the industry classification of each stock. The first priority of this paper is to address the question from just such an angle. Under the standard industry classification (SIC) code, the findings show that a favorable investment strategy is to short EC stocks, or long conventional industry (CI) stocks, of buy recommendations. The second contribution of this paper is to investigate conflicts of interest using the forecast error ratio of research departments and the proportional change in selling volume of dealer departments. Focusing on these two ratios gives an insight into the business ethics of brokerage firms. We find that the existence of brokerage firms’ deals does have a statistically significant relationship with analysts’ stock recommendations. The definition of information content (Ball and Brown, 1968; Beaver, 1968; Hillmer and Yu, 1979; Grossman and Stiglitz, 1980; French and Roll, 1986) is as follows: if new information can change investors’ expectations of the firm’s future stock returns and of the equilibrium market price, then the information is said to be informative. Empirical studies on the effect of professional investment advice typically find that it contains valuable information and that this information is reflected in market prices gradually through time.
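As an illustrative sketch, the two diagnostics might be computed as below. The exact definitions used by the authors are not reproduced in this excerpt; the formulas, variable names, and figures here are assumptions for demonstration only.

```python
# Hypothetical sketch of the two conflict-of-interest diagnostics:
# the research department's forecast error ratio and the dealer
# department's proportional change in selling volume. The exact
# definitions in the paper may differ.

def forecast_error_ratio(target_price: float, realized_price: float) -> float:
    """Relative error of the research department's price forecast;
    a large positive value suggests an over-optimistic recommendation."""
    return (target_price - realized_price) / realized_price

def selling_volume_change(volume_before: float, volume_after: float) -> float:
    """Proportional change in the dealer department's selling volume
    around the recommendation date; a large positive value means the
    dealers sold more while clients were advised to buy."""
    return (volume_after - volume_before) / volume_before

# Example: a buy recommendation with a price target of 120 while the
# stock later trades at 100, and dealer selling volume rising from
# 1,000,000 to 1,500,000 shares (all numbers illustrative).
fe = forecast_error_ratio(120.0, 100.0)
dv = selling_volume_change(1_000_000, 1_500_000)
```

A joint pattern of large positive forecast error and increased dealer selling around buy recommendations is the signature of the conflict the paper tests for.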


Manufacturing Competitiveness of Croatia: Results from the MANVIS European Delphi Study

Dr. Lovorka Galetic, University of Zagreb, Croatia

Dr. Jasna Prester, University of Zagreb, Croatia

Ivana Nacinovic, University of Zagreb, Croatia



The MANVIS Delphi study was conducted in 22 European countries in order to assess the current state of European manufacturing; it is one of the biggest Delphi surveys ever conducted in Europe. The results were predominantly intended for European policy makers and strategic manufacturing decision-makers. In this work we concentrate on the part of the Delphi survey concerning manufacturing strategies and organization; we chose this topic because we believe that in this area significant improvements can be made without large capital investments. We compare the Croatian results with the overall European results: where they are similar, the recommendations laid out in the MANVIS final report can be applied, and where they differ, we identify the reasons for the differences and possible sources of manufacturing competitive advantage in the Croatian setting.  While the competitiveness of enterprises has been thoroughly researched by many scholars around the world, the competitiveness of nations is a relatively new discipline. Competitiveness of enterprises may be defined simply as the manner in which companies try to create and develop a unique comparative advantage. Competitiveness of nations, on the other hand, can be described as an integrative process of all policies in a country intended to produce a blueprint for increasing prosperity. This typically includes not only traditional economic policies, such as what the central bank does with interest rates, but also government policies and policies affecting business and infrastructure, education, research, and more. Some of the most important elements of competitiveness are the areas in which a nation prepares its future: education, technology, research, and science. Competitiveness today also rests on cheap brainpower and a set of highly motivated people. 
Therefore it is now necessary for Western nations to rethink their unique comparative advantage and their perception of what constitutes success. The definition of success is far more complex today in Europe and the US, where finding the right work-life balance is imperative. But the bottom line is that any policy put into place at the national level needs to be predictable: business cannot adapt when a nation is changing direction all the time, and unpredictability is a killer for competitiveness (Garelli, 2007). This Delphi study was therefore conducted in order to lay the groundwork for long-term policies that would foster European competitiveness.  The Delphi research consists of more than 100 statements concerning technology, work practices, specific manufacturing sectors, and strategic and management issues. In this work we concentrate on twelve statements concerning strategic issues. These issues are divided into inter-organizational issues (between companies, cooperation) and intra-organizational issues that would increase competitiveness at the firm level.  This work is organized as follows: First we lay out the reasons for posing these twelve questions and why they are important. We also describe the “American” and “Japanese” models of manufacturing competitiveness, because the twelve questions take the best from those two models, and we explain why we divided the issues into firm-level and across-firm issues. In the second section we describe the methodology behind the Delphi study. Finally we present the results and draw some conclusions. Some years ago it was enough to concentrate on one, or a combination, of four manufacturing core competences: quality, time, flexibility, and cost. Today those competences are the ability to work at a global level, the usage of advanced technology, and network partnerships (Hayes et al., 2005, p. 2), while the traditional core competences have become a self-evident necessity.  
According to the World Competitiveness Scoreboard for 2006, the USA is still the most competitive nation, followed by China, Europe, and then Japan. Because China competes on low-cost production, we look at manufacturing strategies in the USA and Japan, followed by the characteristics of European manufacturing strategies.  The “American system” is characterized by mass production for a mass market. The key to low cost is standardization (Hayes et al., 2005, p. 37): workers have only limited knowledge and are expected to concentrate on skilfully doing a fully specified job. On the other hand, the USA is the leader in R&D activities, which are then put into novel products (Scoreboard, 2006, p. 5). Another characteristic that grew out of specialization is outsourcing: it is generally believed that a firm specialized in some activity can profit from economies of scale and render better and cheaper services (Logan, 2000, p. 22). The “Japanese system”, on the contrary, puts greater emphasis on reliability, speed, and flexibility rather than volume and cost. The essence of the lean production system is the notion that people ought to be broadly trained rather than specialized and should work in teams to solve operating problems. Communication through the company is informal rather than channeled through a prescribed hierarchy. Production throughput is more important than utilization, and all of this is enabled by long-term, cooperative suppliers (Hayes et al., 2005, p. 38).  According to Hayes et al. (2005, p. 38), it is not wise to copy best practices in the belief that a competitive advantage can be built that way; rather, they advise taking a more contingent approach to manufacturing strategy (2005, p. 39).  As far as European manufacturing is concerned, it is characterized by inflexible labour market regulations, powerful unions, a high unemployment rate, and the highest labour costs, but also the lowest labour productivity (Hayes et al., 2005, p. 7). 
So the question for Europe is how to find a compromise between good working and living conditions on the one hand and competitiveness on the other, especially to combat the threat from low-labour-cost countries. A move in this direction was laid out at the Lisbon convention in October 2005 (European Communities, 2006, p. 2), which emphasizes three factors: globalization, with the objective of using the best comparative advantage worldwide; outsourcing and networking, with the objective of working cheaper; and quality and reengineering through information technology, with the objective of working better. As we can see, these three factors deal either with a firm’s internal competences or with cooperation with other partners in the value chain. Therefore, for some analyses the statements will be subdivided into inter-firm and intra-firm questions. Intra-firm questions, as we will see later, mostly deal with organizational issues, which are necessary for a lean system to work. The MANVIS research uses the Delphi technique. Its advantage lies in collecting opinions from a large number of experts, and it is especially used for forecasting the future, particularly in the domain of new technologies. Another good feature of the method is that experts do not communicate among themselves, reducing the possibility of influence by other participants; conflicting opinions are resolved over several Delphi rounds (MacCarthy and Atthirawong, 2003, p. 796). According to Tavana et al. (1996), three prerequisites for a successful Delphi study must be fulfilled: (1) experts should be anonymous to one another, (2) a statistically significant sample of responses to a well-structured questionnaire should be obtained, and (3) there has to be a way to control the distribution of responses. All these prerequisites are fulfilled in this research. The questionnaire was designed by a group of experts, including experts from overseas countries (see Dreher et al., 2005). In this first process 130 statements were developed. 
The statements were then sent to each country for improvement, changes, or exclusion. The questionnaire included a general part, a strategic part, and sector coverage (machinery, metal products, electronics, rubber and plastics, traditional products, and transport equipment). After feedback from each country, 101 statements were agreed upon, and these constituted the first Delphi round. For each statement, experts had to evaluate the time horizon of realization, the expected effects on the environment, employment, and competitiveness, barriers, the importance to European manufacturing, and the current position of the responding country. Experts were chosen so that 40% were from research institutes, 40% from production companies, and 20% were consultants, association members, or policy makers. The questionnaire was then put online and experts answered electronically; that way the third prerequisite, controlling the distribution of responses, was satisfied.


Managing the Change and Risk of Government-Owned Enterprise: An Empirical Study of Taiwan Water Company

Yao-sheng Hsu, Diwan College of Management, Tainan, Taiwan

Dr. Su-Chao Chang, National Cheng Kung University, Tainan, Taiwan

Dr. Hwey-Lin Sheu, Kun Shan University, Tainan, Taiwan



Taiwan Water Corporation (TWC), through water plant consolidation and year-by-year expansion, has unfortunately amassed close to NT$90 billion in debt. In 2004, TWC adopted responsibility centers and set target values as quantity-control indices to improve operational performance. With a reasonable budget organization and base-price agreements to control rising expenses effectively, TWC may effectively raise its surplus. Given the time pressure of organizational change, however, TWC has little room to review risk, and it seeks profit under the existing system without considering the knowledge economy or future market and technology change. Looking to the future, TWC’s organizational reengineering and its timing have room for improvement. Beyond external economic indices, multi-business operation will produce management and technology blind spots, and the existing outsourcing approach may cost TWC its leading business position. Therefore, close attention must be paid to the timing and specialization of business development, as internal profit will be the key to the enterprise’s continued operation. For a government-owned enterprise, the change process entails possible risks and uncertainties, and this study examines organizational change for reference. Curiously, however, this recurrent theme of change in government-owned enterprises has not induced a high volume of articles that explicitly address the topic in journals. There are prominent exceptions to this observation (e.g., Berman and Anderson 2000; Chackerian and Mavima 2000; Mani 1995; Wise 2002) and journal articles about topics related to organizational change (e.g., Berman and Wang 2000; Brudney and Wright 2002; Hood and Peters 2004). Some theories downplay the significance of human agency as a source of change (e.g., DiMaggio and Powell 1983; Hannan and Freeman 1984; Scott 2003). 
Conversely, other theories view managers’ purposeful action as driving change (e.g., Lawrence and Lorsch 1967; Pfeffer and Salancik 1978). Virtually all organizational changes involve changes in the behavior of organizational members: employees must learn and routinize these behaviors in the short term, and leaders must institutionalize them over the long haul so that new patterns of behavior displace old ones (Edmondson, Bohmer, and Pisano 2001; Greiner 1967; Kotter 1995; Lewin 1947).  In parallel, studies have explored organizational risk-taking behavior. Although risk-taking is a necessary precondition for a change strategy, the two streams have not been linked. Bowman (1980) reported mixed risk-aversive and risk-assertive orientations. Since a change strategy requires risk-taking, we suggest that the risk management literature should be integrated with the organizational change literature. Water engineering encompasses sourcing, intake, conveyance, purification, distribution, and supply to users; therefore, only a suitable water source and economical, effective water treatment can provide sufficient, good-quality water. The development of the water industry relies on proper government policy, effective purification treatment techniques, and enterprising business management, but specific risks such as shrinking water resource reserves, water pollution, and protests by neighboring inhabitants cause management difficulties. The literature shows that reengineering creates effective investment rewards and performance, but TWC has operated as a monopoly for 32 years. How to promote performance and decrease the risks of TWC’s business so as to sustain its operation is worth researching. As the whole human population needs drinking water for sustaining life, the provision of a safe water supply is a high-priority issue for safeguarding the health and well-being of humans (Van Leeuwen, 2000). 
To provide a continuous supply of safe water, pollutant loading and treatment schemes are directed towards effective removal of target pollutants (Hatukai et al. 1997). In Taiwan, tropical storms yield heavy rains and produce highly turbid surface runoff. This step is greatly influenced by rainfall and surface runoff because the erosion products consist of soil particles (mainly clay minerals), organic detritus (e.g., originating from the decay of leaf litter), and living cells (Clasen 1998, Van Leeuwen 2000). Though the removal of turbidity is an important part of the water purification process, the functional design of water treatment plants (WTPs) and the associated facilities was generally adapted to the water source, such as surface water or ground water. The conventional coagulation-flocculation/sedimentation process generally works very well for raw water of low to medium turbidity, but it is not suitable for highly turbid raw water, where there may be a significant risk to the supply of safe water (Davison et al. 2003). A design's intended function is therefore not the same as its operational result, and it is difficult and expensive to compensate for poor purification efficiency in the design and operation of WTPs (Edzwald and Van 1990, Edzwald 1993). In nearly half of the states in the United States, the pollution load on water bodies, including that created by algae, increased by 17 percent from 1982 to 1997 (Fulton 2001). In 1993, 172 chemicals were produced or imported in quantities of more than 1000 t each, yet the environmental risk of these chemicals remains unknown (Murtin et al. 1997). Thus, it is important to thoroughly understand the risks of pollution in aquatic environments, as well as the effect of the characteristics of the raw water on water purification, and risk analysis is one powerful approach for doing so (Lee and Chang 2005, Stam et al. 2000). 
Risk analysis is the cornerstone of recent guidelines from the World Health Organization (WHO), especially concerning effects on bacterial populations (Davison et al. 2003; Fewtrell and Bartram 2001; Ashbolt et al. 2001). Process deviations were also discussed early on in CPQRA, and a risk management program for accident reduction from the US EPA was involved (AIChE 1989, Vandenberg 1995). Over recent decades there has been concern about environmental risks caused by changes in raw-water quality and improper operation of WTPs, and strict quality requirements have been urged to protect public health (Martin et al. 1997). Accordingly, it is important to concentrate on risk management and on the processes used by WTPs to assure consumers that drinking water is safe and can be consumed without any risk. Many guidelines or standards have to be set, giving maximum allowable concentrations of compounds in drinking water below which no significant health risk is encountered. Nonetheless, studies on risk analysis for the management and operation of purification processes in Taiwan's WTPs to supply safe water are still rare and incomplete. TWC is a government-owned enterprise established in 1974 by combining all purification plants. It began by supplying 42 percent of the water and reached 90 percent by 2004, with coverage of 98 percent in cities such as Tainan. Water business operation has become difficult in recent years owing to climate variations, frequent typhoons, and the unique geography of the Taiwan region, which cause water shortages and increasing turbidity of the water supply. The general public longs for a steady and reliable water supply. To satisfy public demand and to prepare for the modernization, professionalization, and internationalization of TWC, the company follows three major principles to accomplish its mission of a steady, reliable, excellent, sufficient, and sustainable water undertaking. 
However, TWC has been unable to create a long-term abundant surplus, and by 2004 its debt had reached NT$856 billion.


A Simple Model of Accounting for and Hedging Employee Stock Options (ESO)

Dr. J. Howard Finch, Florida Gulf Coast University, Fort Myers, FL

Dr. Joseph C. Rue,  Florida Gulf Coast University, Fort Myers, FL

Dr. Ara G. Volkan, Florida Gulf Coast University, Fort Myers, FL



The escalating size of compensation packages to senior managers and investor disillusionment have resulted in growing calls for the expensing of employee stock options (ESO). While initially slow to respond, the FASB has now mandated the expensing of ESO. The two primary methods used to value ESO, the Black-Scholes closed form equation and the lattice model, suffer from several deficiencies. A simple model for valuing ESO that marks the option expense to market at succeeding financial statement dates and allows for the staggered exercise dates of option holders is available. The model is easy to understand, would have a low cost of implementation, offers a superior estimate of the true cash flow effects associated with the opportunity cost to shareholders of ESO exercise, and allows for the use of treasury stock to hedge the ESO expense that results. Rising levels of CEO compensation and investor unrest led to increasing calls for the immediate expense recognition of ESO in corporate financial statements (Apostolou and Crumbley, 2001 and 2005; Bartow and Mohanram, 2004; Botosan and Plumlee, 2001; Delves, 2002; Doyle, 1997; Mellman and Lillien, 1996; and Moyer and Weihrich, 2000).  While the Financial Accounting Standards Board (FASB) and its predecessor, the Accounting Principles Board (APB), continued to issue documents concerning ESO (APB, 1972; FASB, 1978, 1985, 1986, 1993, 1995, 2002a, and 2002b), strong opposition from firms fearful of negative impacts on their financial position resulted in a stand-off over the ensuing decade.  Finally, in December 2004 the FASB issued Standard No. 123(R), which required firms to recognize compensation costs related to share-based transactions in their financial statements and brought global harmony to accounting for ESO, given the mandates by the International Accounting Standards Board (IASB, 2002 and 2004).  All firms were required to be in compliance with these directives by the end of 2005.  
Originally, FASB favored the use of a lattice model for option valuation, but backed off due to significant opposition from various stakeholders (FASB, 2004).  While still not mandating a specific option valuation model, the directive does indicate a closed form equation model or a lattice model approach as appropriate for valuing ESO. In addition, the current standards call for expense recognition only during the vesting (mandatory employment) period, disregarding the impact of options on financial position after vesting but before exercise. Finally, hedging future ESO expenses is not addressed. One purpose of this paper is to examine the existing models for ESO valuation and offer a different model that more readily captures the wealth effects on shareholder value. Since valuing and expensing options have been accepted by stakeholders (Frederickson, Hodge, and Pratt, 2006), it is time to take the final step of marking options to market until they are exercised and capturing the true cost of these transactions in financial statements. Another purpose is to propose a process for hedging the future expenses that result from the implementation of the model.  From a financial investment perspective, the true economic cost to a firm of utilizing ESO as a method of compensation is the opportunity cost associated with the difference between the exercise price of the option and the market price of the stock upon exercise.  This intrinsic value represents foregone capital that could have been raised by the firm if the stock had been issued at current market prices.  
Therefore, the choice of which option valuation model should be used to recognize the ESO expense should be driven by the determination of which model most accurately reflects this loss of shareholder wealth (Aboody, Barth, and Kasznik, April 2004; Balsam, 1994; Baviera and Walther, 2004; Best, Rue, and Volkan, 2002; Briloff, 2003; Dechow and Sloan, 1996; Deshmukh, Howe, and Luft, 2004; Dyson, 2004; Hill and Stevens, 1997; Hull and White, 2004; Jordan, Vann, and Clark, 2005; Lobo and Rue, 2000; Pacter, 2004; Perspectives, 1994; Rue, Volkan, Best, and Lobo, 2003; Siegel, 2006; Tucker and Shimko, 1995; and Wallace, 1984).  Since 1995, many firms have voluntarily acknowledged the effects of ESO on financial results through footnote disclosures, using the Black-Scholes (BS) or Black-Scholes-Merton (BSM, for stocks that pay dividends) model for option valuation (Aboody, Barth, and Kasznik, May, 2004; Beams, Amoruso, and Richardson, 2005; and Robinson and Burton, 2004).  The BS model is a closed form equation that, when supplied with several estimated variables, computes a fair option value that includes both intrinsic and time value (Black and Scholes, 1973).  Eaton and Prucyk (2005) provide an illustration of the BSM option valuation method using Excel spreadsheet software. However, a number of issues make the BS model a poor choice for option valuation to reflect the ESO expense recognition (FASC-AAA, 1994, 2004, and 2005; Hemmer, Matsunaga, and Shevlin, 1994; Kirschenheiter, Mathur, and Thomas, 2004; and Moehrle and Reynolds-Moehrle, 2004).  First, the model was originally developed to value exchange-traded options that have limited lives.  The typical ESO has a longer life (five to ten years) that often begins when a required vesting period ends.  Exchange traded options also have a liquid secondary market that facilitates low cost transactions, resulting in regular price discovery.  In contrast, the ESO are not transferable.  
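As a concrete anchor for the discussion of the BS/BSM closed form, the following is a minimal sketch of the equation in Python; the parameter values in the final line are illustrative only and are not drawn from the paper.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(s, k, t, r, sigma, q=0.0):
    """Black-Scholes-Merton value of a European call.

    s: stock price, k: exercise price, t: years to maturity,
    r: risk-free rate, sigma: volatility, q: continuous dividend
    yield (the Merton extension for dividend-paying stocks).
    """
    d1 = (log(s / k) + (r - q + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * exp(-q * t) * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

# illustrative: at-the-money grant, 1 year to maturity, 5% rate, 20% volatility
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.20), 4))
```

Every input except the exercise price must be estimated, which is precisely why the text argues the model fits short-lived, exchange-traded options better than long-lived, non-transferable ESO.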
In addition, BS was developed to value a European option that cannot be exercised until maturity.  Once the vesting period is complete, the ESO can be exercised at the discretion of the employee, making it in effect an American option.  Finally, a key input in the BS model is volatility.  Most volatility estimates are derived from historical returns, leaving no opportunity for the incorporation of changing market conditions and the resulting effects on shareholder wealth.  A lattice model based on a series of discrete future price paths is an alternative option valuation method.  The simplest lattice is a binomial model that assumes the current stock price can follow one of two possible paths in the coming period.  There is some sentiment that a lattice valuation model is superior to the BS model for ESO expense recognition.  Barenbaum, Schubert, and O’Rourke (2004) highlight two advantages of a lattice model: the ability to incorporate varying exercise patterns on the part of option holders, and the flexibility to capture changes in the volatility of the underlying stock’s rate of return.  Their study notes that the possibility of early exercise reduces the total option value at the grant date (reducing the time value of the option price), and thus the overall impact on reported earnings is less when the ESO are valued using a lattice model.  Baril, Betancourt, and Briggs (2005) and Folami, Arora, and Kasim (2006) provide illustrations of ESO valuation using lattice models.
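To make the binomial idea concrete, here is a minimal sketch of a Cox-Ross-Rubinstein lattice valuing an American-style call by backward induction. The grant parameters are hypothetical, and a real ESO lattice would add vesting, forfeiture, and exercise-behavior assumptions on top of this skeleton.

```python
import math

def binomial_call(s0, k, t, r, sigma, steps, american=True):
    """Value a call on a Cox-Ross-Rubinstein binomial lattice.

    Backward induction through the tree; at each node an American
    option may be exercised early, mirroring the post-vesting
    flexibility of an ESO holder.
    """
    dt = t / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor per step
    d = 1 / u                             # down factor per step
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)

    # terminal payoffs at maturity
    values = [max(s0 * u**j * d**(steps - j) - k, 0.0) for j in range(steps + 1)]

    # step back through the lattice
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            if american:
                # early exercise: compare continuation with intrinsic value
                cont = max(cont, s0 * u**j * d**(i - j) - k)
            values[j] = cont
    return values[0]

# illustrative figures only: a 10-year at-the-money grant
print(round(binomial_call(s0=50, k=50, t=10, r=0.05, sigma=0.30, steps=200), 2))
```

The lattice's advantage cited in the text shows up here directly: the early-exercise comparison at each node is the hook where empirically observed exercise patterns, and step-by-step volatility changes, can be inserted.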


Equity Valuation Models and Forecasting Capability: An Empirical Analysis of Taiwan’s Commercial Banking Industry

Nan-Chun Tseng, Chung Hua University, Taiwan

Yao-Hsien Lee, Chung Hua University, Taiwan



This paper makes use of various valuation models to assess the intrinsic value of companies in Taiwan’s commercial banking industry.  The results indicate that the P/S model is the best available model for valuation in terms of forecasting capability.  In 1991, the Taiwan Government gave permission for the establishment of 16 new banks and reduced the existing limitations on establishing branches of foreign banks. Local banks therefore made enhancing their administrative efficiency a priority in addressing the subsequent difficulties of confronting an open banking market. In this enhancement process they confronted the challenges of foreign banks and accommodated alternative financial structures. Owing to significant growth in national trade, there has been escalating demand for the services of finance and commercial banking companies in Taiwan; currently, the industry is chiefly concerned with how to spread its branches geographically in an efficient way. This has resulted in the perception that commercial banking-related industries offer valuable investment opportunities for investors in Taiwan.  Identifying the valuation models that stock market analysts can use to detect mispriced securities is thus an ongoing concern.  In this paper, we use the equity valuation models presented by Reilly and Norton (2006), Bodie, Kane, and Marcus (2005), and Damodaran (1996) to analyze the investment value of firms in Taiwan’s commercial banking sector.  The purpose of this paper is therefore to determine which valuation model is best in terms of forecasting capability, where we use Theil’s U value to measure forecasting capability.  The valuation model with the best forecasting capability is the one that can be used to increase and protect investors’ investment value. 
This paper is organized as follows: Section 2 contains a brief description of the related literature; Section 3 presents and discusses the valuation models used in the present paper; Section 4 presents the results obtained; and Section 5 concludes the study.  After reviewing the literature relating to Taiwan’s commercial banking industry, we discovered that it is difficult to find studies that focus on the valuation of companies in the commercial banking sector.  In fact, we noticed that previous studies of commercial banking companies had focused mostly on performance assessment and statistical analysis. The financial indicators usually applied are those based on the discounted value model or the relative value model; when determining the value of firms in different industries, the price/book value ratio method is the most popular one. Most of the related literature on satisfaction analysis uses analysis of variance, correlation analysis, and regression analysis. We select several papers to represent the above argument, as follows.  Chateau and Dufresne (2002) investigate commitment credit risk and valuation in connection with the risk-adjusted balance used in computing a bank's capital requirements, as mandated by the Bank for International Settlements (BIS). In a two-factor model of the marked-to-market value of the credit line (CL), x, and its mean-reverting volatility, V, the value of the American commitment put is obtained as the sum of a Fourier-based solution for the European put and a quadratic approximation for the early-exercise premium. Once computed, the put value is combined with the line fees and a conditional exercise-cum-takedown proportion to determine the commitment net value and the bank's exposure to commitment credit risk. 
A comparison between the stochastic and constant volatility option models reveals that correlation, rather than stochastic volatility, forms the greater source of bias: the impact of the correlation-generated skewness on the distribution of the CL marked-to-market value is more significant than that of the σ-generated kurtosis. The random-volatility model is used next to ascertain how commitment credit risk affects banks' capital requirements. According to the BIS accounting-based procedure, the risk-adjusted balance of short-term commitments is nil; this is not the case when the same risk-adjusted balance is computed by way of the option-based procedure. Beyond capital sufficiency, the approach also determines the impact of commitment credit risk on the bank's future profits. L. Alfred, S. Martin, and S. Christian (2002) compare the out-of-sample performance of two common extensions of the Black–Scholes option pricing model, namely GARCH and stochastic volatility (SV). They calibrate the three models to intraday FTSE 100 option prices and apply two sets of performance criteria, namely out-of-sample valuation errors and Value-at-Risk (VaR) oriented measures. When they analyze the fit to the observed prices, GARCH clearly dominates both SV and the benchmark Black–Scholes model. However, the predictions of market risk from hypothetical derivative positions show sizable errors; the fit to the realized profits and losses is poor, and there are no notable differences between the models. Overall, they therefore observe that the more complex option-pricing models can improve on the Black–Scholes methodology only for the purpose of pricing, but not for the VaR forecasts. Victora, Zhang, and Liu (2007), using previously unavailable central bank data, first employ principal component analysis to derive four measures of a bank's ability to perform the core task of financial intermediation. 
Their study then compares the performance of China's state banks, joint-stock banks, and city commercial banks along these measures. In terms of overall performance and in credit risk management, joint-stock banks perform significantly better than both state banks and city commercial banks. In China, unlike in other developing countries, the size of the bank is not correlated with its performance: mid-size, national joint-stock banks perform considerably better than the Big Four banks and smaller city commercial banks (CCBs). They further conduct a regional and jurisdictional analysis of the CCBs, which indicates that a mix of geographical and historical legacies drives the substantial variation in CCB performance. The present study takes 10 companies in the commercial banking sector as its study objects.  The financial reporting data and related information are drawn from the InfoTimes database and the Taiwan Economic Journal (TEJ) database, as well as from public data of the Taiwan Stock Exchange. We use the following four enterprise valuation models to assess the companies’ intrinsic value over five years [in the appendix]. A brief description of the empirical research models follows.
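Since the paper's model ranking rests on Theil's U, a minimal sketch of the statistic (in its U1 inequality-coefficient form, bounded between 0 and 1) may help. The series in the usage lines are made-up numbers for illustration, not the paper's bank data.

```python
from math import sqrt

def theil_u(actual, forecast):
    """Theil's U1 inequality coefficient: 0 is a perfect forecast,
    values near 1 indicate very poor forecasting capability."""
    n = len(actual)
    rmse = sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)
    denom = (sqrt(sum(a * a for a in actual) / n)
             + sqrt(sum(f * f for f in forecast) / n))
    return rmse / denom

# hypothetical intrinsic-value estimates from two valuation models
# compared against observed prices
observed = [22.0, 24.5, 23.8, 26.1, 25.0]
model_a = [21.5, 24.0, 24.2, 25.8, 25.3]   # e.g., a P/S-style estimate
model_b = [18.0, 29.0, 20.0, 30.5, 21.0]
print(round(theil_u(observed, model_a), 4), round(theil_u(observed, model_b), 4))
```

The model whose fitted values yield the smaller U is judged the better forecaster, which is the sense in which the paper selects the P/S model.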


CEO Turnover, Board Chairman Turnover, the Key Determinants: Empirical Study on Taiwan Listed Company

Hou Ou-Yan, National Cheng Kung University and Lecturer at Kun Shan University, Tainan, Taiwan

Dr. Chuang Shuang-shii, National Cheng Kung University, Tainan, Taiwan



This study dynamically investigates, over time, the impact of firm performance and of firms’ and top managers’ characteristics on board chairman and CEO turnover among listed companies in Taiwan, exploring how these three aspects affect the chairman and CEO replacement process. Using Cox proportional hazards regressions, we find that the probabilities of chairman and CEO turnover are not uniform across accounting-based and market-based performance measures. Importantly, this relationship becomes significantly clearer once firms’ and top managers’ characteristics are taken into account. Upon further exploration, the increased sensitivity of turnover to performance appears to be attributable to traditional-industry membership, firm size, debt utilization, CEO duality, and top managers’ age and education as reported in proxy statements.  A CEO (chief executive officer) generally changes as part of a company’s natural evolution, regardless of company performance, but today many CEO changes come in response to bad performance; we call such turnovers “punishment”. An effective board of directors, charged with supervising and monitoring CEOs, should facilitate such turnovers. Recently, punishment turnovers have changed markedly because many companies (WorldCom, Enron, Ferranti International PLC, Cirio, Parmalat, Pollypeck International PLC, Colorol Group, Xerox, Bank of Credit and Commerce International (BCCI), and Maxwell Communication Corporation, among others) have been involved in corporate governance problems, and the accompanying accounting scandals have spread rapidly through the internet and public media.  The purpose of this paper is to cast light on the key determinants, in the form of accounting-based and market-based performance measures, drawn from the proxy statements and annual reports of listed firms in Taiwan over the period 1996 to 2005. Our results contribute to two related strands of literature. 
First, we provide additional evidence on how firm performance and firms’ or top managers’ characteristics affect top managers’ turnover probability, and we suggest that firms’ and top managers’ characteristics help to improve the explanatory capacity of models of board chairman and CEO turnover. This paper also enhances our understanding of the factors that affect shareholders’ and boards’ decisions to replace the chairman and CEO. Second, previous work has considered how firm performance affects the likelihood of CEO turnover; our paper shows that the same factors also play a significant role in the replacement process for both chairman and CEO. The paper is organized as follows. Section 1 describes our research motives and purpose. Section 2 reviews the literature on accounting-based performance, market-based performance, and firms’ and top managers’ characteristics. Section 3 describes our data, sample selection procedures, and resulting sample, along with our assumptions and variable definitions, and provides intuition for our empirical model. Section 4 presents descriptive statistics and empirical results for the sample. Finally, Section 5 discusses our results and implications and presents our conclusions. Previous research has found that firm operating performance is one reason for manager turnover, and has compared firm performance before and after top-manager turnover; yet the empirical results concerning firm performance after top-manager turnover are not consistent. Our paper divides corporate performance into two categories: accounting-based and market-based performance indices. Firm performance is the clearest indicator of CEO ability and effort, and poor corporate performance marks managers’ effort as failing to maximize stockholder wealth. 
It is, therefore, not surprising that a common finding in prior studies is that poor performance significantly increases the probability of CEO turnover (Murphy and Zimmerman, 1993; Denis and Denis, 1995; and Ittner et al., 1997). These studies adopted accounting earnings changes and return on total assets (ROA) as accounting-based performance measures in researching top-manager turnover and firm operational efficiency. But what factors determine board chairman turnover?  A market-based performance index is a market representation of firm value, generally measured by stock price or stock return. Prior research (see Weisbach, 1988 and Murphy and Zimmerman, 1993) provides ample evidence that earnings are a significant predictor of CEO turnover. Hermalin and Weisbach (1998) offer a possible explanation for this fact by pointing out that share prices reflect the market’s expectations regarding the CEO’s continued employment. Goyal and Park (2002) report a significant negative relation between the likelihood of turnover and firm performance as measured by market-adjusted return over the calendar year preceding turnover. Mason and Merton (1985) argued that capacity expansion, new-product introduction, asset renewal, and maintenance expenditure are all growth opportunities.  This paper introduces firm-characteristic variables to examine their particular relationship with top-manager turnover. We choose firm-characteristic variables that include industry (Demsetz and Lehn, 1985), growth opportunity (Barclay and Smith, 1995), capital structure (McConnell and Servaes, 1995; Mutchler et al., 1997), and firm size (Kim and Sorensen, 1986; Turetsky and McEwen, 2001); a chairman or CEO may also attach importance to market indicators of the likelihood of turnover (Shumway, 2001). 
Fama and French (1992) posit that a low stock price relative to book value (a high book-to-market ratio) signals a negative market evaluation of a firm's prospects.  Top-manager characteristics are the distinguishing attributes that top managers themselves possess; in other words, they reflect attributes of control, oversight, and/or support for the manager’s strategies and actions in operating and expanding the business. Jensen and Meckling (1976) argued that a CEO who is also the board chairman contributes to the principle of maximizing a firm’s value. Several other authors, including Jensen (1993) and Cadbury (2002), argue that combining the positions of CEO and board chairman entrenches the CEO and hinders the board’s ability to perform its monitoring functions. This is consistent with Goyal and Park’s (2002) finding that the probability of turnover is significantly lower when the CEO also serves as board chairman, and that turnover is less sensitive to performance in firms that practice CEO duality. Kesner and Dalton (1987), however, held that a CEO who is also board chairman can dominate and control the procedure of board of directors (BOD) meetings and obstruct the BOD’s function of objectively evaluating the CEO’s operational efficiency. In addition, Murphy and Zimmerman (1993) and Goyal and Park (2002) report a significant association between the probability of CEO turnover and CEO age. Hambrick and Mason (1984) found empirically that educational attainment has a positive effect on top managers’ knowledge, information-processing capability, tolerance, and creativity. Many studies (such as Weisbach (1988) and Becker (2006)) have analyzed CEO turnover and the development of the literature on CEO compensation. Barro and Barro (1990) found that the top-manager turnover rate rises when top managers reach retirement age. 
Prior studies have also controlled for CEO retirement as a potential factor influencing the likelihood of turnover; however, the results are not uniform: Denis et al. (1997) find no significant relation between retirement and turnover, while Goyal and Park (2002) report a positive and significant association.
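The Cox proportional hazards machinery the study relies on can be sketched in a few lines: maximize the (Breslow) partial likelihood over the coefficient. Everything below is a toy illustration — the single covariate, the tenure data, and the search bounds are hypothetical, and a real analysis would use a statistical package with multiple covariates and standard errors.

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Breslow partial log-likelihood for a single covariate x."""
    ll = 0.0
    for i, (t, e) in enumerate(zip(times, events)):
        if not e:
            continue  # censored observations enter only via risk sets
        # risk set: everyone still "at risk" (duration >= t)
        risk = [x[j] for j in range(len(times)) if times[j] >= t]
        ll += beta * x[i] - math.log(sum(math.exp(beta * xr) for xr in risk))
    return ll

def fit_cox(times, events, x, lo=-5.0, hi=5.0, tol=1e-6):
    """Golden-section search for the beta maximizing the partial
    likelihood (valid because the partial log-likelihood is concave)."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if cox_partial_loglik(c, times, events, x) < cox_partial_loglik(d, times, events, x):
            a = c
        else:
            b = d
    return (a + b) / 2

# hypothetical tenures (years) of eight CEOs, all ending in turnover;
# x = 1 flags poor firm performance, so a positive beta means a higher
# turnover hazard for poorly performing firms
times = [1, 3, 4, 5, 6, 8, 9, 10]
events = [1, 1, 1, 1, 1, 1, 1, 1]
x = [1, 0, 1, 1, 0, 1, 0, 0]
print(round(fit_cox(times, events, x), 3))
```

The hazard-ratio interpretation is exp(beta): a fitted beta of, say, 1 would mean a poorly performing firm's CEO faces roughly e times the turnover hazard at any tenure length.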


A Wavelet-Based Analysis of MSCI Taiwan Index Futures

Dr. Cherng-Shiang Chang, China University of Technology, Taipei, Taiwan



The dynamics of financial markets (e.g., stocks or futures) are non-stationary, and their frequency characteristics are time dependent; the empirical evidence shows that most do not conform to geometric Brownian motion.  Recently, wavelet-based time-frequency representations have become an extremely powerful tool for analyzing nonstationary time series in many fields, such as engineering, the medical sciences, biology, and geology.  While GARCH-type models have been used to investigate the transmission mechanisms of financial markets in much of the previous literature, wavelet methods have been receiving more attention from economists and financial professionals.  In this paper, we apply wavelet multiresolution analysis to analyze the nonstationarity (time dependence) and self-similarity (scale dependence) of MSCI Taiwan index futures.  The time-scale-dependent spectra, which are localized in time, are observed in the so-called wavelet-based energy (or scalogram).  Our empirical analyses reveal that most of the variation in the original return series is captured at the first three scales (d1 to d3).  The sharp movements in the value of returns, which occur around observations 360~370, are clearly represented by an energy spike in the wavelet coefficients; specifically, a significant portion of the energy is contained at wavelet levels (scales) 1-3.  
Wavelets are functions that satisfy certain properties and are used as building blocks in the representation of other functions.  A wavelet transform is created by adopting a prototype (called the mother wavelet) and then dilating, contracting, and translating it to obtain a set of basis functions.  A wide variety of functions can be selected to construct the transform, and this flexibility in the choice of basis functions is what makes wavelet transforms a powerful tool.  Wavelet transforms have also been widely applied in signal processing, as they overcome some of the drawbacks associated with Fourier analysis of a signal.  A Fourier transform gives complete information in frequency space but no information in the time or spatial domain (inverse frequency space).  While for some applications the Fourier transform may provide all the necessary information, it may sometimes be necessary to retain information in the original time (or spatial) domain.  This is possible with the wavelet transform, since it can use a wide variety of basis functions: short basis functions can be used to analyze signal discontinuities, and wide basis functions can be used for frequency analysis. Early studies that apply wavelet methods in economics and finance are Ramsey et al. (1995) and Ramsey and Zhang (1997), which concentrate on stock markets and foreign exchange rate dynamics.  Arino (1996) uses wavelet-based time-scale decomposition for forecasting applications.    Norsworthy et al. (2000) apply wavelets to analyze the relationship between the return on an asset and the return on the market portfolio or an investment alternative.  
More recent contributions have dealt with the relation between futures and spot prices (Lin and Stevenson, 2001), the time and scale dependency of intraday Asian spot exchange rates (Karuppiah and Los, 2005), and heterogeneous trading in commodity markets (Connor and Rossiter, 2005).  Fernandez (2006) formulates a time-scale decomposition of an international version of the CAPM that accounts for both market and exchange-rate risk, and also derives an analytical formula for the time-scale value at risk and marginal value at risk (VaR) of a portfolio.  A thorough discussion of the application of wavelets in economics and finance can be found in the survey articles by Ramsey (1999, 2002). In this paper, wavelet multiresolution analysis is introduced and applied, for illustration, to the nonstationarity (time dependence) and self-similarity (scale dependence) of MSCI Taiwan index futures.  A brief overview of wavelet analysis is presented in this section; for further technical details, the reader is referred to the book by Percival and Walden (2000).  Consider the following functions.  The scaling function (father wavelet) φ(t) is the solution of the functional dilation equation φ(t) = √2 Σk hk φ(2t − k), where the hk are the scaling filter coefficients.  A variety of families of wavelets have been developed for use as the fundamental wavelet.  Figure 1 illustrates four types of orthogonal wavelets typically used in empirical analysis: Haar, Daubechies (daublets), symmlets, and coiflets.  The Haar wavelet is a square wave with compact support; it is the only compactly supported orthogonal wavelet with exact symmetry but, unlike the other wavelets, it is not continuous.  The daublets are continuous wavelets, also with compact support.  While the daublets are quite asymmetric, the symmlets are constructed to be as symmetric as possible.  The coiflets are also nearly symmetric and have additional vanishing-moment properties.  
We can build an orthonormal basis for the Hilbert space L²(R) of square-integrable functions from the functions φ and ψ by dilating and translating them to obtain two sets of basis functions, φ_{j,k}(t) = 2^{−j/2} φ(2^{−j}t − k) and ψ_{j,k}(t) = 2^{−j/2} ψ(2^{−j}t − k), which we call the scaling function and wavelet bases, respectively.  In Equations (6) and (7), j is the dilation or scaling parameter, k is the translation parameter, and N equals the number of elements in the original signal f(t).  The scaling signal φ_{j,k} denotes the kth scaling signal from the set of scaling signals at level j (or resolution j), for k = 1, 2, …, N/2^j.  Likewise, the wavelet ψ_{j,k} denotes the kth wavelet from the set of wavelets at level j, for k = 1, 2, …, N/2^j.  The families are orthonormal: ∫ φ_{j,k}(t) φ_{j,l}(t) dt = δ_{kl}, ∫ ψ_{j,k}(t) ψ_{j',l}(t) dt = δ_{jj'} δ_{kl}, and ∫ φ_{J,k}(t) ψ_{j,l}(t) dt = 0, where the δ's are Kronecker delta functions. Under these conditions, for any function f(t) ∈ L²(R), if the number N of signal values is divisible J times by 2, then a J-level multiresolution analysis (MRA) performed on f(t) can be expressed as f(t) = Σ_k s_{J,k} φ_{J,k}(t) + Σ_{j=1}^{J} Σ_k d_{j,k} ψ_{j,k}(t), where the smooth (approximation) coefficients s_{J,k} = ∫ f(t) φ_{J,k}(t) dt represent the trend of the original signal f(t), and the wavelet (detail) coefficients d_{j,k} = ∫ f(t) ψ_{j,k}(t) dt capture the deviations from the trend.  Intuitively, the smooth coefficients represent the underlying trend behavior of the data at the coarse scale 2^J, whereas the detail coefficients capture the deviations from it at finer scales.  The energy (or scalogram), defined as the mean square of f(t), is distributed between different levels and between different wavelets within each level, and equals the sum of all squared wavelet coefficients plus the sum of squared scaling coefficients at scale J: ‖f‖² = Σ_k s²_{J,k} + Σ_{j=1}^{J} Σ_k d²_{j,k}.
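The multiresolution decomposition and the energy identity above can be illustrated with a minimal Haar transform, a sketch in Python on a toy signal (the pyramid algorithm described in Percival and Walden (2000) generalizes this to other wavelet filters):

```python
import math

def haar_step(signal):
    """One level of the Haar DWT: split a signal of even length N into
    N/2 smooth (scaling) coefficients and N/2 detail (wavelet) coefficients."""
    half = len(signal) // 2
    smooth = [(signal[2*k] + signal[2*k+1]) / math.sqrt(2) for k in range(half)]
    detail = [(signal[2*k] - signal[2*k+1]) / math.sqrt(2) for k in range(half)]
    return smooth, detail

def haar_mra(signal, levels):
    """J-level multiresolution analysis: returns (s_J, [d_1, ..., d_J])."""
    details = []
    smooth = list(signal)
    for _ in range(levels):
        smooth, d = haar_step(smooth)
        details.append(d)
    return smooth, details

# Toy signal whose length (8) is divisible by 2 three times, so J = 3.
f = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
sJ, dets = haar_mra(f, 3)

# Energy identity: the sum of squared signal values equals the sum of squared
# scaling coefficients at level J plus all squared wavelet coefficients.
energy_signal = sum(x**2 for x in f)
energy_coeffs = sum(x**2 for x in sJ) + sum(x**2 for d in dets for x in d)
assert abs(energy_signal - energy_coeffs) < 1e-9
```

Because each Haar step is an orthonormal transform, the energy is preserved exactly at every level, which is the scalogram identity stated above.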


Total Factor Productivity Growth of Mining and Quarrying Industry in China

 Tianshu Liu, RMIT University, Melbourne, Australia



This paper estimates the total factor productivity (TFP) growth of the Chinese mining and quarrying industry and its sub-industries from 1987 to 2003, and the contribution of TFP growth to industry output growth. The growth accounting method is adopted in the neo-classical production function framework to calculate TFP growth; labor input and labor income are calculated with caution from the available data. The results show that total factor productivity growth contributed a major part of the output growth of the mining and quarrying industry and its sub-industries, while the growth of the other factor inputs, i.e. capital input and labor input, was not of equivalent importance. The mining and quarrying industry played a major role in Chinese economic development in the early years of economic reform. In 2003, the industry value added of the aggregate mining and quarrying industry was 7623.51 billion Chinese RMB, accounting for 3.43 percent of total GDP in that year; this was an increase of 3210.14 percent over 1986, when the industry value added was 230.31 billion Chinese RMB. The early 1990s saw the largest growth of mining and quarrying industry output: a 458.19 percent increase from 1991 to 1995, followed by a 125.4 percent increase from 1986 to 1990. On the other hand, the growth rates of mining and quarrying industry output in 1996-2000 and 2001-2003 were below 50 percent: 46.18 percent for the former period and 31.76 percent for the latter. Furthermore, the seven sub-industries of the mining and quarrying industry, i.e. coal mining and processing; petroleum and natural gas extraction; ferrous metals mining and dressing; non-ferrous metals mining and dressing; non-metal minerals mining and dressing; other minerals mining and dressing; and logging and transport of timber and bamboo, had a similar growth trend to their aggregate industry category.
As capital input in the mining and quarrying industry also increased 1471.35 percent over the period 1986-2003 (labor input growth, on the other hand, was negative during 1986-2003), it is necessary to determine which factor's growth contributed most to the output growth of the mining and quarrying industry and its sub-industries by comparing them with total factor productivity growth. Thus this paper studies the contribution of factor input growth to industry output growth. The paper first estimates the total factor productivity growth of the aggregate mining and quarrying industry and its sub-industries, then estimates the contributions of TFP growth, capital input growth and labor input growth to industry output growth. The general neo-classical production function is used in this paper to estimate total factor productivity growth (TFP growth) in the Chinese mining and quarrying industry. It states that gross output is determined by capital, labor and time changes; the time changes are referred to as total factor productivity, or technical change, and are obtained as a residual (Siggel 2005). The framework is based on the assumptions of diminishing returns to capital, perfect competition, an exogenous economy, and constant returns to scale, and technical progress is Hicks-neutral. The production function reflects the view that technical progress is a major factor in stimulating economic growth, and the only motivator in the long run. Equation (1) shows the production function, Y_{it} = A_{it} F(K_{it}, L_{it}), estimating output as determined by factor inputs at the industry level, where Y is industry output (industry value-added is used here instead of industry gross output); A is an industry-specific index of Hicks-neutral technical progress; K and L are capital and labor inputs; i stands for a specific industry; and t is the time period, indicating that output can change over time. Further differentiating and rearranging the production function with respect to time yields a growth equation composed of the above factors.
where Y_{i,t} is industry i's value-added at year t, and Y_{i,t−1} is industry i's value-added at year t−1. Growth rates of industry labor input and capital input are calculated using similar formulae, so time series of the growth rates of industry value-added, industry labor input and industry capital input can be estimated. By assuming perfect competition, the factor elasticity equals the share of factor income in output (Mahadevan 2004; Siggel 2005). Therefore, the output elasticity of labor can be calculated as the labor income share in industry output, which is obtained by dividing total labor income (represented by wages, W) by industry value-added. Under the assumption of constant returns to scale used in establishing the production function, the shares of labor income and capital income sum to unity (Siggel 2005), so the share of capital income for a specific industry can be obtained by subtracting that industry's share of labor income from one. The whole time-series data for the share of labor income and the share of capital income for each industry are thus available. Once the data for the growth rates of industry value-added, labor input and capital input, as well as the industry income shares of labor and capital, are available, total factor productivity growth can be calculated as a residual: output growth minus the share-weighted growth of capital input and labor input. It is the part of output growth originating from technological progress or technical efficiency, representing the sum of all factors other than capital and labor input that contribute to economic growth. Equation (6) is the final equation used to calculate the total factor productivity growth of the Chinese mining and quarrying industry; by estimating the share of the contribution of each input's growth to output growth, equation (7) can be derived from equation (6).
In this equation, the labor term represents the contribution of labor input growth to industry output growth, the capital term the contribution of capital input growth, and the residual term measures the contribution of total factor productivity growth to industry output growth (Sun and Ren 2005). All data used to calculate total factor productivity are from various issues of the Chinese Statistical Yearbook (1987-2005). Industry value-added time-series data from 1986 to 2004 are available for the aggregate mining and quarrying industry and its sub-industries. The net value of fixed capital, collected from the Chinese Statistical Yearbook, is used as the capital input in the production function for three reasons. Firstly, to fall in line with current studies of industrial economies, capital flows could not be used to estimate capital input; in addition, capital flows are not invested and utilized uniformly in China, as some state-owned enterprises may use this capital (mostly borrowed from banks) to distribute salaries or bonuses instead of investing in production. Secondly, current-year investment could not be used as capital input, since enterprises use not only current-year investment in manufacturing but also the accumulated net value of fixed capital. Thirdly, there is a time lag in the effect of investment: it is impossible for investment in medium and large projects to be completed and put into efficient production in the same year (Zhi 1997). The data on the net value of fixed capital from 1986 to 2004 are available for the aggregate mining and quarrying industry and its sub-industries.
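The accounting steps described above can be sketched in a few lines of Python. The numbers below are hypothetical, not the paper's data; the labor share is taken as wages over value-added and the capital share as its complement, as in the text:

```python
def tfp_decomposition(g_y, g_k, g_l, wage, value_added):
    """Growth-accounting residual under constant returns to scale:
    g_tfp = g_y - alpha_k * g_k - alpha_l * g_l, where alpha_l is the
    labor income share (wages / value-added) and alpha_k = 1 - alpha_l."""
    alpha_l = wage / value_added
    alpha_k = 1.0 - alpha_l
    g_tfp = g_y - alpha_k * g_k - alpha_l * g_l
    # Shares of output growth contributed by each source (in the spirit of
    # equation (7)): they sum to one by construction.
    contrib = {
        "labor": alpha_l * g_l / g_y,
        "capital": alpha_k * g_k / g_y,
        "tfp": g_tfp / g_y,
    }
    return g_tfp, contrib

# Hypothetical industry-year: 8% output growth, 6% capital growth,
# -1% labor growth, and a labor income share of 40/100 = 0.4.
g_tfp, contrib = tfp_decomposition(0.08, 0.06, -0.01, wage=40.0, value_added=100.0)
# g_tfp = 0.08 - 0.6*0.06 - 0.4*(-0.01) = 0.048, i.e. TFP accounts for
# 60% of output growth in this example.
```

Note how a negative labor input growth, as observed for mining and quarrying over 1986-2003, actually raises the residual TFP contribution.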


Inter-Sectoral R&D Spillovers and Subsequent Knowledge Flows

Yoo-Jin Han, Korea Institute of Intellectual Property, Seoul, Korea



In this research, I explore how much inter-sectoral R&D spillovers affect subsequent knowledge flows by adopting a pooled regression model. R&D activities are dichotomized into financial investment, proxied by R&D expenditure, and human resources investment, proxied by R&D personnel. Knowledge is assumed to be expressed in the forms of patents and papers. In the case of Korea, the paper-based knowledge flow was affected more by R&D spillovers than the patent-based knowledge flow. R&D activities within a nation have been recognized as a tool for transferring knowledge from one sector to another (Griliches, 1998). In addition, knowledge flows caused by the spillover characteristics of R&D activities are considered an important factor in enhancing a nation's economic competitiveness (OECD, 1996). However, there is not yet sufficient empirical evidence on how much inter-sectoral R&D spillovers contribute to subsequent knowledge flows. Therefore, in this research, using a pooled regression model, I measure how much inter-sectoral R&D spillovers affect subsequent knowledge flows. As an illustration, I introduce the Korean case from 1995 to 2000, since in this period innovation actors such as firms, universities and research institutes started to perform R&D actively.  A “sector” is similar to an “industry” in that it includes a constellation of analogous firms and has its own dynamics (Tirole, 1998; Scherer, 1990; Sutton, 1998; OECD, 2000); accordingly, institutions such as the OECD have used the two terms interchangeably (OECD, 2000). However, a “sector” in general encompasses a broader boundary, demarcated by the relations between industries based on vertical integration (Breschi & Malerba, 1997; Malerba, 2002). The most representative early attempt at a sectoral taxonomy was made by Pavitt (1984).
He maintained that technical change differs among manufacturing sectors in terms of the sources of technology, the involvement of user needs, and the means of appropriating benefits, and that there exists a strong interdependent relationship among sectors. This point of view not only requires that the SIS (Sectoral Innovation System) approach be based on a clear understanding of the nature of technology (e.g., tacit or codified) and the relation between science and technology (Metcalfe, 1995), but also differentiates a “sector” from an “industry” in that the former refers to the multi-angular aspects of change while the latter prioritizes the attributes of the final products. Breschi & Malerba (1997) stated that an SIS can be defined as a system (group) of firms active in developing and making a sector’s products and in generating and utilizing a sector’s technologies. That is, such a system of firms is related in two different ways: through the processes of interaction and cooperation in artifact-technology development, and through the processes of competition and selection in innovative and market activities. Also, Malerba (2002) pointed out that the basic elements of a sectoral system are: a) products; b) agents: firms and non-firm organizations; c) knowledge and learning processes; d) basic technologies; e) the processes of competition and selection; f) institutions.  R&D activities comprise the creative work undertaken on a systematic basis in order to increase the stock of knowledge and devise new applications (OECD, 2002). However, since R&D activities are neither ephemeral nor simplistic, it is difficult to capture the exact amount of R&D. Schematically speaking, R&D activities can be divided into measurable and immeasurable types.
The former represents existing measures such as R&D capital investment and R&D labor investment, while the latter encompasses such unstandardized measures as intermediary innovations during production, the level of researchers’ commitment, and their training efforts. Likewise, in the case of knowledge, I dichotomize knowledge into tacit (uncodifiable) and explicit (codifiable) types. The historically used representative measures for codifiable knowledge are patents and papers, while those for uncodifiable knowledge vary depending on the researchers’ interests. This conceptual schema is illustrated in Figure 1. In this research, in order to simplify the framework, only the measurable accounts of R&D and the codifiable aspects of knowledge are taken into account, as shown in Figure 2. As mentioned above, measurable R&D activities can by and large be divided into two dimensions: R&D expenditure, which refers to financial investment, and R&D personnel, which represents human resources investment. R&D expenditure, i.e., intramural R&D expenditure, is all of the expenditure on R&D performed within a statistical unit or sector of the economy during a specific period. R&D personnel, on the other hand, include researchers and professionals engaged in the conception or creation of new knowledge, products, processes, methods, and systems, and also in the management of the projects involved. Looking into the intrinsic characteristics of each dimension, the indicators of R&D expenditure capture direct efforts to enlarge the knowledge base and inputs into the search for knowledge, while the indicators relating to research personnel approximate the amount of problem solving involved in knowledge production.  Regarding knowledge, I employ patents and publications, since Schmoch (1997) claimed that the parallel observation of patents and publications is needed in order to mirror the characteristics each measure intrinsically has.
In many studies, patents have been the representative proxy for technology, while papers have been used for science. According to Pavitt (1998), 80% of patents are granted to business enterprises and 20% to others, while the reverse holds for papers according to Hicks (1995). De Solla Price (1965) pointed out that science and technology differ substantially in their central activities, due to the different ultimate objectives that motivate these activities: scientists publish in order to maximize their visibility and recognition in their respective communities, while the technologist’s objective is to construct or design a proprietary artifact or process. Of these two, patent data has been regarded as a particularly good indicator of innovation output, because it not only represents the invention itself but also harbors commercial expectations. Publication data, on the other hand, has been examined from the perspective of science rather than of industrial knowledge output, since scientific research does not always consider commercialization. Nowadays, however, various studies have shown that scientific (basic) research does affect the enhancement of industrial knowledge (Rosenberg, 1990; Pavitt, 1991; Link, 1996; Godin, 1995), which necessitates the incorporation of scientific knowledge issues.  However, despite their usefulness and advantages, neither type of data has been fully utilized at the sectoral level. The reason may be the discrepancy between technological/scientific fields and economic ones. In general, scientific fields are divided into academic disciplines such as Mathematics, Physics, Chemistry, etc., and technological fields are categorized by patent classification, whereas the economic classification is based on the activities of production units and is consequently represented by the Standard Industry Classification.
Efforts have been made to establish a concordance between technological fields and economic fields (Hall et al., 2001). However, no comparable effort has been made for the scientific fields; therefore, in this research the scientific fields are mapped onto the economic industry classification.
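The pooled-regression idea can be sketched as follows. This is a minimal illustration with hypothetical sector-year observations and a single spillover regressor for simplicity; the study itself uses both R&D expenditure and R&D personnel as separate dimensions:

```python
def pooled_ols(x, y):
    """Pooled OLS: stack all sector-year observations into one sample and
    fit y = a + b*x by least squares (closed-form slope and intercept)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical pooled data: inter-sectoral R&D spillovers received (x) and
# subsequent knowledge flows, e.g. patent- or paper-based citations (y),
# stacked over sectors and years 1995-2000.
spillovers = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
flows = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]
a, b = pooled_ols(spillovers, flows)
# The slope b estimates how much an extra unit of spillover raises
# subsequent knowledge flows across all sectors pooled together.
```

Pooling treats every sector-year pair as one observation of the same relationship; richer specifications would add sector dummies or both R&D dimensions as regressors.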


Relationship between Knowledge Sharing, Knowledge Characteristics, Absorptive Capacity and Innovation: An Empirical Study of Wuhan Optoelectronic Cluster

Shu-en Mei, Huazhong University of Science & Technology, Wuhan, P.R. China

Dr. Ming Nie, Huazhong University of Science & Technology, Wuhan, P.R. China



The aim of this paper is to explore the role that knowledge sharing with customers and suppliers in an industry cluster plays in driving innovation among firms. A theoretical model is built on the basis of three constructs: knowledge sharing, involving knowledge interaction with customers and suppliers; the determinants of knowledge sharing, involving absorptive capacity and knowledge characteristics; and innovation. A LISREL model based on a sample from the Wuhan optoelectronic cluster is used to test our hypotheses. The empirical findings reveal that knowledge sharing with customers and suppliers has a positive influence on a firm’s innovation within a cluster. Furthermore, absorptive capacity and knowledge characteristics have a significant impact on knowledge sharing with customers and suppliers. A possible theoretical implication is that absorptive capacity plays a more important role in innovation than knowledge sharing with customers and suppliers at the early evolutionary stage of a cluster.  Industry clusters, groups of geographically proximate firms in the same industry, are a striking feature of the geography of economic activity (Krugman, 1991), examined by industrial geographers at least since Marshall (1920). Over the years, the literature on industrial clusters has emphasised their capacity as loci for knowledge diffusion and generation. The importance of clustering for knowledge diffusion and generation was seminally stressed by Alfred Marshall, who introduced the concept of ‘industrial atmosphere’ (Marshall, 1919) and described the district as a place where “mysteries of the trade become no mysteries; but are as it were in the air, and children learn many of them, unconsciously” (Marshall, 1920).
Following this seminal contribution, later scholars have emphasised the importance of localised knowledge spillovers for innovation, due primarily to the fact that firms in industrial clusters benefit from the availability of a pool of skilled labour and that, mainly thanks to geographical and social proximity, new ideas circulate easily from one firm to another, promoting processes of incremental and collective innovation (Baptista, 2000). The paper contributes to this field of study by considering, on the one hand, the innovation impact of knowledge sharing with customers and suppliers within a cluster and, on the other hand, the impact of absorptive capacity and knowledge characteristics on knowledge sharing. With respect to the first issue, more and more innovation studies have emphasized the extent to which innovation involves integrating external knowledge into the existing organization (Powell, 1998). For example, Kohli and Jaworski (1990) remark that linkages with customers and suppliers are important for acquiring and using market information that promotes innovation. Baptista and Swann (1998) claimed that the dense supply-side and demand-side linkages in a cluster provide a set of knowledge inputs supporting innovation activity. However, research to date has been primarily of a conceptual or qualitative nature. On the second issue, the paper empirically examines the determinants of knowledge sharing with customers and suppliers within a cluster, that is, knowledge characteristics and absorptive capacity. As emphasized by Sorenson et al. (2006), for instance, the value of social proximity to the knowledge source within a cluster depends crucially on the nature of the knowledge at hand. Caniels and Romijn (2001) suggested that new knowledge is needed to make the adaptations that are frequently required in an environment differing in many ways from the setting in which the technology was initially developed.
Although knowledge characteristics and absorptive capacity are frequently theorized to be central to knowledge sharing with customers and suppliers, little of the literature examines the effect of knowledge sharing with customers and suppliers on innovation by integrating knowledge characteristics and absorptive capacity. Consequently, how knowledge characteristics and absorptive capacity translate into knowledge sharing and further influence innovation remains unclear. In view of the increasingly important impact of sharing knowledge with suppliers and customers on innovation, and considering the predominantly prescriptive nature of the research and the lack of empirical work, it seems quite clear that research on this phenomenon should be expanded. The optoelectronic industry exemplifies many of the general features of technology-intensive sectors and is regarded as a key future sector of the emerging knowledge economy. Optoelectronic firms in P.R. China are clustered in a small number of geographic regions, and competitive advantage derives from continuous scientific and technical innovation. Access to the new knowledge and capabilities necessary for innovation in this field may lie outside a firm’s traditional core competence, and knowledge sharing with customers and suppliers in an industrial cluster is widely recognized as enhancing innovation. The Wuhan optoelectronic cluster (Optics Valley of China) is home to the largest concentration of optoelectronic firms in P.R. China. The joint presence of a sizeable number of co-located firms makes the cluster a local knowledge community where knowledge can diffuse widely and informally through thriving interaction within the technological community. In this study, a structural equation model (LISREL) is used to elucidate the way knowledge sharing and its determinants in local clusters affect innovation.
Our purpose is to test eight hypotheses regarding the role of knowledge sharing in the innovation process on the one hand, and the role that absorptive capacity and knowledge characteristics play in the effectiveness of knowledge sharing on the other. The rest of the paper is organized as follows: the second section puts forward the conceptual framework and hypotheses; the third section describes the empirical study and builds a LISREL model to test the hypothesized relationships; the penultimate section reports the findings of the study; and the final section draws some conclusions.  Innovation is an interactive process characterised by knowledge interaction involving various actors. Marketing research indicates that successful firms are market-oriented, practicing outside-in activities spurred by market needs and dynamics. Market-oriented firms intensively develop relationships with customers to acquire, disseminate and ultimately use market information as input to the innovation process (Kohli and Jaworski, 1990). If firms have processes in place to learn from customers who are familiar with product and service conditions that lie in the future (i.e., lead users), they can incorporate customers’ perceptions and preferences into innovation and marketing decisions early on. Knorringa (1999) pointed out that outside traders are the prime source of demand information for producers in a cluster. Face-to-face interaction with customers in clusters gives a firm direct access to learning about how to produce, use, or improve things in the course of solving production problems, meeting customers’ requirements and overcoming various types of bottlenecks. Such knowledge sharing makes it easier for firms within a cluster to communicate and combine ideas that are important to creative processes. It might therefore reasonably be expected that sharing knowledge with customers enhances the firm’s innovativeness.
H1: Knowledge sharing with customers is positively associated with innovation. Research has documented the benefits of knowledge sharing with suppliers (Kogut, 2000). Location in an industrial cluster facilitates the transmission of knowledge between nearby firms. A firm's knowledge sharing with a supplier is expressed in the adjustments the firm makes to its production and distribution system to conform to the needs of the supplier in question. This may generate positive externalities and allow the firm to capture spillovers from its suppliers. For manufacturing firms, long-term cooperative relationships with suppliers can convert knowledge sharing into new types of knowledge and new products, processes or services that provide a unique capability establishing a source of competitive advantage (Gerwin, 1993). Hence, it is hypothesized that:


Production and Allocation Decisions in the Presence of Shortages

Dr. Manohar S. Madan, University of Wisconsin-Whitewater

Dr. Chun-Hung Cheng, Chinese University of Hong Kong

Dr. Jay Sounderpandian, University of Wisconsin-Parkside



Allocation decisions are encountered when a manufacturer of goods has insufficient inventory of finished goods and/or short-term capacity to satisfy the total demand of multiple retailers. Very often, the supplier is evaluated on the basis of delivery performance. The criteria for evaluating delivery performance can be either quantity-based measures, such as the percentage of orders satisfied on time, or time-based measures, such as average lateness. In this research, we develop a methodology for the producer to find optimal last-minute production plans and the corresponding optimal allocation of finished goods to retailers.  In our approach, we consider two different types of quantity-based performance evaluation measures.  Distribution decisions are an integral part of managing supply chains. We consider a producer supplying finished goods to several retailers.  Specifically, we look at a scenario where the producer faces a shortage of inventory and short-term production capacity in satisfying total demand.  Faced with shortages, decisions must be made to allocate products to customers and to allocate short-term production capacity.  An interesting problem in such a scenario is to find the optimal last-minute production plan and optimal allocations.  Clearly, optimality here depends on the dimensions of distribution performance on which the producer is evaluated by the retailers.  In this paper, we develop a methodology to find optimal solutions corresponding to several different quantity-based criteria used to evaluate delivery performance.  According to Cachon and Lariviere (1999), allocation decisions in a shortage environment occur commonly in industries where capacity expansion is expensive and time consuming. Allocation decisions can arise throughout a supply chain.  However, as the retailer’s position in a supply chain is very close to the consumer, customers tend to place the most importance on delivery performance (Handfield and Nichols, 1999).
Harrison and New (2002) report findings from an international survey in which 75% of respondents used customer delivery performance and inventory turns in monitoring supply chains.  Researchers in allocation strategies for distribution typically address the problem of allocating individual items that are in short supply, in a variety of industries.  Hwang and Valeriano (1992) discuss the allocation of a nicotine patch, where allocations were made based on sales histories and location.  Automobile manufacturers have used sales and customer satisfaction indices as bases for allocating cars to dealers (Henderson, 1995).  According to Tompkins (1997), segmenting customers is necessary for true customer satisfaction.  In some cases, powerful customers may demand special priority in allocation (Gruley and Pereira, 1996). The goal of allocation strategies should be to achieve different service levels for different products and different customer segments (Heijden et al., 1997); hence, companies should consider allocation schemes that provide different service levels to different customers. The objective of this research is to provide a methodology for determining the optimal allocation of products and short-term production capacity to fill purchase orders (POs).  To our knowledge, existing research on allocation problems primarily considers fill rate at the individual item level, i.e., the stock keeping unit (SKU).  Our approach considers allocation decisions from the standpoint of satisfying individual items as well as purchase orders, where a purchase order from a retailer may include many items. We consider the quantitative dimension of delivery performance involving the fill rate of items within a PO.  A distinguishing feature of our approach is that we use the concept of a penalty incurred as a measure of (poor) performance, instead of explicitly considering the costs involved in allocation decisions.
The penalty-values approach presented in this research is easy to use and intuitive for managers making allocation decisions.  In the next section, we provide a brief literature review and highlight the importance of this research. After that we present a mathematical formulation of the allocation problem. We then discuss the results for an example problem and the managerial implications of the solution methodology.  Finally, we provide a summary of this research.  Decisions integral to the management of a supply chain include forecasting, purchasing, production, storage and distribution (Sengupta and Turnbull, 1996).  The focus of this research is distribution.  Distribution performance is commonly evaluated by customers across a variety of measures, and quality distribution systems are necessary in order to achieve customer satisfaction and reduced cost (Tompkins, 1997).  Within the distribution area, we specifically address allocation problems, and in particular the allocation problems associated with delivering multiple products within a PO for a customer. Order fulfillment metrics can be very useful in identifying strengths and weaknesses of the link between adjacent nodes in a supply chain (Johnson and Davis, 1998). According to Choi, Dai and Song (2004), customer-service-based measures such as fill rate and stockout rate are viewed as attractive performance measures by managers of supply chains.  A popular way to evaluate customer service is to measure delivery performance by way of the order fill rate.  In a case study of Hewlett Packard conducted by Callioni and Billington (2001), the authors report that fill rate is one of the metrics commonly shared in a supply chain.  The allocation problem is also referred to as the rationing problem by some researchers. Most researchers address strategies for allocating individual items to purchase orders.
Eppen and Schrage (1981) introduce a fair share allocation rule, which is further extended by De Kok (1990). Verrijdt and De Kok (1996) modify the heuristic of De Kok (1990) for situations with significantly differing fill rate targets.  Ha (1997a, 1997b, 1997c) addresses the rationing problem for a make-to-stock production system. Cachon and Lariviere (1999) analyze the turn-and-earn allocation rule used in the automobile industry, which allocates automobiles to dealers based on past sales performance. De Vericourt et al. (2000) address the stock-rationing problem for single-item and multi-item environments.  De Vericourt et al. (2001) investigate different stock allocation strategies for a make-to-stock production environment.
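Since the paper's mathematical formulation appears later, the following is only a hypothetical sketch of the penalty-value idea for a single scarce item: each purchase order carries an assumed per-unit shortfall penalty, and available units are allocated to the highest-penalty orders first. With penalties linear in the shortfall, this greedy rule minimizes the total penalty for one item; the full model described above must also handle multiple items per PO and short-term capacity.

```python
def allocate_by_penalty(supply, demands, penalty):
    """Allocate 'supply' units of one scarce item across purchase orders.

    demands: {po_id: units requested}
    penalty: {po_id: assumed penalty per unshipped unit}  (illustrative values)

    Serving the highest-penalty orders first minimizes total shortfall
    penalty when each order's penalty is linear in its shortfall.
    """
    allocation = {po: 0 for po in demands}
    remaining = supply
    for po in sorted(demands, key=lambda p: penalty[p], reverse=True):
        allocation[po] = min(demands[po], remaining)
        remaining -= allocation[po]
    return allocation
```

For example, with 10 units available, demands of 8 and 6 units, and penalties of 2 and 5 per unshipped unit, the higher-penalty order is filled completely and the other receives the remaining 4 units.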


The High Frequency Responses of Australian Financial Futures to Monetary Policy Announcements

Dr. Xinsheng Lu, Northwest A & F University, China

Dr. Xuexi Huo, Northwest A & F University, China



This paper examines the high frequency responses of Australian financial futures to monetary surprises using intra-day futures data.  Using the event-window method with tick data to control for the endogeneity between market interest rates and the cash target rate, our empirical findings show that, first, monetary policy announcements have a significant impact not only on short-term interest rate futures but also on longer-term Treasury security futures markets. Second, the most significant responses of these markets occur in the event window that contains the policy announcement. Third, a weak relationship between monetary surprises and the movements of stock index futures is also identified.  This paper examines the high frequency responses of Australian Treasury security yields and stock index futures to monetary policy announcements using intra-day futures data from the Sydney Futures Exchange (SFE). A large number of studies have tried to assess the effects of central banks’ monetary policy announcements and actions on the prices of fixed income securities using lower frequency data, such as daily (e.g., Cook and Hahn, 1989; Kuttner, 2001; Demiralp and Jorda, 2004), weekly (Piazzesi, 2005) and even monthly data (Evans and Marshall, 2001). The main drawback of those studies with lower frequency data is a potential endogeneity problem associated with monetary policy announcement effects: not only does a monetary announcement affect financial asset pricing and market volatility, but information on financial asset pricing and market volatility can also enter the central bank’s reaction function. This potential endogenous relationship between monetary policy variables and financial asset returns and volatility makes the classical single-stage regression method used by most previous studies invalid, or at least violates the traditional regression assumption of an orthogonal error term. 
With specially designed event-study windows using tick data, this paper seeks to overcome the drawbacks inherent in the classic regression method and to provide a more accurate assessment of the effects of monetary announcements and policy actions on Australian 90-day bank bill, 3- and 10-year Treasury bond futures contracts, as well as the SFE Stock Price Index (SPI) futures market. With our 30-minute and 60-minute event windows, it is unlikely, first, that the Reserve Bank would adjust its policy within this time frame and, second, that our modelling would accidentally pick up any macroeconomic announcement effect other than the cash rate target announcements. The remainder of this paper proceeds as follows. Section 2 outlines our research motivations and methodological advantages. Section 3 describes our data on the Reserve Bank’s monetary policy announcements and monetary policy surprises, and provides a detailed description of the Australian Treasury security futures market data.  Section 4 analyses the monetary policy announcement effect on Treasury bond futures and SFE stock index futures contracts and presents the empirical modelling results. Section 5 concludes the paper. How does the Reserve Bank’s monetary policy affect money market interest rates and Treasury security yields in Australia? The question is important because money market interest rates, along with Treasury fixed-income security yields and credit supply, constitute the main channels through which monetary policy affects real economic output and prices. Thus, predicting the impact of monetary policy actions on market interest rates is key to gauging the effect of monetary policy on the ultimate policy target.  Studies of the responses of market interest rates and stock index futures to monetary policy often face this endogeneity problem. 
One way to handle this endogeneity issue is to focus on the short-run reaction of market interest rates to monetary policy target rates by designing a short reaction window within which monetary policy actions and policy announcements are not themselves endogenous to financial market interest rate movements. We first identify a list of “reaction windows” for monetary policy events and actions. We then run the following regression:  ΔRt = α + βΔit + εt,  where Δit is the unexpected cash rate target change, for which we choose the yield change of the 30-day bank-accepted bill on the day of a policy announcement as a proxy, and ΔRt is the change in the market interest rate futures or the stock index futures prices on the same day. The parameter β measures the linear relationship between monetary surprises and the yield changes in market interest rate futures, that is, the impact of unexpected cash rate changes on market interest rate movements. Δit serves as a measure of the surprise content of a monetary policy announcement. The event window is designed to be sufficiently short to reflect the policy rates or targets that the Reserve Bank sets for the immediate future, but at the same time sufficiently “long” that markets react only to the extent that changes in the target rate were not anticipated. By using the policy “event windows” technique, we can identify the impact of a monetary policy decision by isolating the surprise component of the change in monetary policy. This study employs the policy surprise proxy method developed by Kuttner (2001), which uses changes in market interest rates as a proxy for the surprise component of the change in monetary policy. Market interest rates incorporate a risk premium, but the change in the market rate is a good proxy for the policy surprise because the risk premium is unlikely to move in the short time periods used in the event study (Piazzesi and Swanson, 2004). 
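The event-study regression described above can be estimated for each window with ordinary least squares. The sketch below is a minimal illustration in plain Python (no econometrics library), with Δi standing in for the 30-day bank bill surprise proxy and ΔR for the futures change in the window; the sample values are invented for illustration only.

```python
def event_study_ols(surprises, responses):
    """OLS estimates of alpha and beta in  dR_t = alpha + beta * di_t + e_t,
    from paired observations of policy surprises (di_t) and event-window
    futures changes (dR_t)."""
    n = len(surprises)
    mean_x = sum(surprises) / n
    mean_y = sum(responses) / n
    # Slope = sample covariance / sample variance of the surprise proxy.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(surprises, responses))
    var = sum((x - mean_x) ** 2 for x in surprises)
    beta = cov / var
    alpha = mean_y - beta * mean_x
    return alpha, beta
```

A β close to one for short-maturity futures would indicate near one-for-one pass-through of unanticipated cash rate changes, while a β near zero (as reported here for stock index futures) indicates a weak response.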
For each of the monetary policy events, we measure the movements in market interest rates and stock index futures prices around the event time using tick data, with a 60-minute event window.  This reduces the information content received by the market within the event window, and hence the number of events that would otherwise have to be discarded because both the policy target rate and market interest rate futures responded to other news. The 60-minute windows also serve to remove the possibility that market interest rate movements influence monetary policy decisions: with such a short event window, information about market interest rate fluctuations is very unlikely to enter the monetary authority’s reaction function, so the endogeneity problem should not arise. In the case of the SFE stock price index futures market, the 60-minute window also ensures that movements of the share price index will not influence the central bank’s policy decisions through the wealth effect of share price movements; in such a short time horizon, the Reserve Bank will not adjust monetary policy in response to stock market performance. To test the speed of market interest rate reactions to monetary policy announcements, a series of consecutive 30-minute event windows around the policy announcement time is also constructed to capture market response patterns in different time intervals. Details of the time frame for these event windows are discussed in the next section. We analyse the high frequency response of short- and long-term interest rate futures markets and the stock index futures market to monetary policy surprises using tick data for 90-day bank bill futures, 3-year and 10-year Australian Treasury bond futures contracts and SFE All Ordinaries SPI Equity Index futures contracts. All tick data were obtained from the Sydney Futures Exchange (SFE). 
The sample period runs from January 1, 1998 to December 31, 2004.


The Entrepreneurial Health Care Manager: Managing Innovation and Change

Dr. Kristina L. Guo, University of Hawaii-West Oahu, Pearl City, HI



This paper describes the process of entrepreneurship and specifically focuses on two perspectives of entrepreneurship: (1) the creation of innovation and (2) the creation of change.  Using these two important perspectives, this paper argues that the entrepreneurial manager in health care organizations should create innovative strategies and manage change in order to enhance organizational survival in the current complex and turbulent health care environment. Health care organizations are challenged by increasingly complex forces in the health care environment.  Challenges include rising health care costs, competition, the impact of managed care, the aging of the population, cultural and ethnic diversity, and increased consumer demands for higher quality of care and better outcomes.  As organizations attempt to survive and grow under these conditions, they seek creative ways to manage change and innovation. Health care managers in these organizations believe that success depends on their ability to grasp opportunities and form strategies when making rapid, flexible and innovative decisions. The process of exploiting opportunities and developing innovations is one aspect of the concept of entrepreneurship. The exploration of entrepreneurship as applicable to the health care field is believed to be crucial to the success of health care organizations (Guo, 2003).  For instance, one perspective on entrepreneurship focuses on the creation of innovation: in the turbulent health care environment, managers must perform entrepreneurial activities that involve generating innovative strategies to preserve organizational stability. At the same time, entrepreneurship can be seen as the creation of change, by which managers must constantly meet the demands of the organization in order to enhance survival. 
Using these two important perspectives of entrepreneurship, the creation of innovation and the creation of change, this paper attempts to explain the process of entrepreneurship in organizations and specifically to describe the entrepreneurial manager in health care organizations, who must undergo change and invest in innovation to manage human resources in the complex health care environment.  There are many different perspectives on entrepreneurship.  The term was originally coined by economists to refer to the assumption of risk in pursuit of financial gain. Since then, many researchers have tried to define entrepreneurship.  Morris (1998) summarized the seven most important perspectives of entrepreneurship, categorizing it as the creation of: (1) wealth, (2) enterprise, (3) innovation, (4) change, (5) employment, (6) value, and (7) growth.  Most commonly, entrepreneurship is identified with the creation of wealth, whereby it is viewed as a risk-taking endeavor associated with facilitating production in exchange for profit.  As the creation of enterprise, entrepreneurship entails the founding of a new business venture.  As the creation of value, entrepreneurship is a process by which value-added commodities are created and exploited for the benefit of customers.  As the creation of growth, entrepreneurship is strongly oriented toward the growth of sales, income, assets, and employment. Furthermore, entrepreneurship can be viewed as the creation of employment, addressing the management, development and training of human resources.  Most importantly, entrepreneurship involves the creation of innovation, in which new and unique combinations of resources are generated.  Finally, as the creation of change, entrepreneurship focuses on adjusting, adapting and modifying one’s own skills and approaches to meet new opportunities and challenges (Morris, 1998).  
More specifically, entrepreneurship is a multidimensional process involving the environment, the organization and individuals (Morris, Kuratko and Schindehutte, 2001). Entrepreneurial organizations are proactive, innovative, and risk taking. In contrast, conservative ones take a more "wait and see" posture, are less innovative, and are risk averse. Entrepreneurship emphasizes the drive to seek out opportunities and to take considerable risks (O’Connor and Fiol, 2002).  Entrepreneurial activities are accomplished by individuals who possess traits, skills, behaviors, and backgrounds that are crucial to entrepreneurial activity.  Entrepreneurs are persons who perceive opportunities and assume the risks of planning and creating various means to pursue them.  They are individuals engaged in the process of creating and building something of value from practically nothing, and they focus on action and producing results rather than observing and talking.  Entrepreneurs sense opportunities, leverage resources, take calculated risks and bring ideas into reality. The four main characteristics of successful entrepreneurs are passion for the business, product/customer focus, tenacity despite failure, and execution intelligence (Barringer and Ireland, 2006).   Successful entrepreneurs are focused and have a purpose in mind: they select the best option among several alternatives and link it to the optimal strategy (McGrath and MacMillan, 2000).  Traits of entrepreneurs include self-confidence, risk taking, flexibility, a strong desire to achieve, and independence. Behaviors of entrepreneurs include total commitment, determination and perseverance, the drive to achieve and grow, and an orientation to goals and opportunities (Chell, Haworth and Brearley, 1991). 
A typology of successful entrepreneurs consists of four types: (1) the personal achiever, who is motivated by self-achievement; (2) the real manager, who demonstrates high supervisory ability and a strong need for occupational advancement; (3) the expert idea generator, who enjoys coming up with original or innovative ideas; and (4) the empathic supersalesperson, who is sociable, friendly and supportive, encourages participation and is action oriented (Miner, 1997).  The entrepreneurial process consists of several steps.  An entrepreneur must identify and develop successful ideas, implement those ideas, and manage and grow an entrepreneurial organization (Morris, 1998; Barringer and Ireland, 2006).  An idea is a thought, impression, or notion.  Creativity is the process of generating a novel or useful idea, but it does not require implementation; in other words, creativity is the raw material that goes into innovation. Innovation, on the other hand, refers to the successful introduction of new outcomes. An opportunity is an idea that is attractive and timely and should create value for its users; however, not all ideas are opportunities.  Analyzing trends and problem solving are the two general approaches entrepreneurs use to identify an opportunity (Barringer and Ireland, 2006).  Once an opportunity is recognized, entrepreneurial managers have a limited window in which to act.  Forces that create opportunities include economic, social, political, technological, and regulatory changes. These environmental trends have evolved dramatically in health care over the last 20 years. Entrepreneurial managers who perceive these trends optimistically are crucial to the discovery and linkage of new opportunities (O’Connor and Fiol, 2002).  For example, identifying opportunities may involve noticing a problem and finding a way to solve it.  
These problems can be pinpointed by observing trends or through simpler means, such as intuition, serendipity, or chance.  In addition, personal characteristics tend to make some individuals better at recognizing opportunities than others.  There are several ways to generate ideas.  Brainstorming is a technique used to quickly generate many ideas and solutions; for instance, brainstorming helps bring about new products, services, or business opportunities.  Another way to generate ideas is through focus groups: a focus group is a gathering of 5 to 10 people who have been selected for their common understanding of the issues to be addressed.  A third technique is to conduct a survey to gather information from a sample of individuals, where the sample is usually just a fraction of the population being studied.


Survivors in the Market Economy: East German Companies after Transition

Dr. Erik A. Borg, Sodertorn University College, Sweden

Dr. Frank-Michael Kirsch, Sodertorn University College, Sweden

Dr. Renate Åkerhielm, Sodertorn University College, Sweden



The East German transition to a market economy became a selection process for former DDR companies. Most companies did not survive the transition; some, however, have survived and are able to compete in the market economy. Studying these companies yields evidence of what customers approve of in the competitive markets of the global economy. Research into the survival of East German companies provides insight into companies facing competition in a market economy and suggests how market forces influence the functioning of enterprises. Six hypotheses have been developed to account for the essential reasons why some East German companies have survived, and why and how they are capable of marketing products and services in the competitive market economy. The transition to a market economy has had many consequences for East German companies. The productivity of the former DDR was substantially lower than that of the West, and at the time of reunification the companies of Eastern Germany were not competitive, owing to high production costs. West German wage levels and working conditions, along with the West German institutional model known as Model Germany, were transferred to Eastern Germany at reunification (Wiesenthal, 2003), and the cost of production in East German companies rose after unification.  The companies of the former DDR faced competitive market forces at an early stage of the transition but did not enjoy the lower production costs of their Eastern European neighbors. At the time of transition it was argued that Western Germany had inherited a competitive and well-developed industrial state. This optimistic notion turned out not to be the case: most East German companies were not able to compete in the unified German market, and even less so in the global market. 
The early optimistic prognoses of the effects of unification were soon replaced by more realistic assessments that unification would have a substantial negative impact on the united German economy for many years (Headey and Headey, 2002).  In spite of the apparently poor economic conditions in Eastern Germany, some companies have survived the transition. This study seeks answers not only inside the surviving companies but also in the markets for the products and services that these companies offer. The study does not merely look at hard evidence, such as investments in capital, but also at softer aspects, such as intellectual capital and nostalgic and cultural attitudes toward the East. The research also examines the market value and appreciation of the designs and branding of things made in Eastern Germany. Previous research illustrates how the transition to a market economy has had a major impact on transition economies, and suggests that the effect has differed between businesses and between companies (Forslid et al., 2002). Some businesses have managed better in the market economy than others. The differences between the surviving enterprises and those that did not make it yield evidence of how consumers in a competitive market select among the characteristics of products and services. Eastern Germany and German unification thus provide examples of market selection, and the survivors offer insight into conditions that can increase our understanding of the dynamics of the market economy.  There is continuing interest in the fate of Eastern Germany after unification, since the well-being of the German economy relies on the productivity of the eastern part of the country.  The survival of East German companies can also be analyzed in light of a broader view of German history and identity (Epstein, 2003). 
In a consumer society, consumption is associated with identity, and the consumption of East German products and services represents a part of German identity. A combination of traditions and industrial renewal has been seen as important in the East German transition to a market economy (Musyck, 2003).  Studies of East German companies that were bought by companies in the West provide evidence of a conflict between the need for restructuring after the transition and attempts to make use of local resources: the challenge has been to adjust the company to fit the parent company in the West while taking into consideration the cultural attributes of the East German company (Meyer and Lieb-Doczy, 2003). Another study, of East German small and medium sized companies, has shown a conflict between the strategic choices made to make a company fit to survive in the market economy and the company's environment. Interaction between the company and its environment has determined the company's behavior in the market economy and its adjustment to new conditions in the business environment (Sorge and Bussig, 2003).  The financial sector has been seen as a factor in determining the survival of East German companies, with access to capital regarded as essential. East Germany has been an area of high investment, but there has been a problem with ‘bad loans’, as companies have sought to ease the pressure of restructuring by borrowing. Loans have therefore not necessarily facilitated restructuring but have enabled incapable and inefficient companies to survive (Carlin and Richthofen, 1995).  Questions have been asked about how efficient the East German economy is. At the time of transition the differences in productivity between East and West were substantial and the East German economy was nearly bankrupt. The productivity of Eastern Germany rose substantially between 1994 and 1998, owing to efficiency gains and labor shedding. 
Yet East German firms remained significantly less productive, which has been attributed to differences in technology, innovation activities, international exposure, workforce quality and ownership structure (Funke and Rahn, 2002). The ultimate judge of East German products and services is the increasingly globalized market economy: only when companies in the former DDR can satisfy demand in competitive markets can their long-term survival be ensured. Many East German companies did not make it after the transition. The surviving companies represent exceptions to the rule that companies under a communist planned economy did not produce much that could be marketed in a Western-style market economy.  For the remaining companies we have developed hypotheses as to why they could survive against the odds. The hypotheses seek the reasons for survival in the cultural and intellectual aspects of marketing and the management of marketable products and services; consumption is, after all, a cultural phenomenon in a consumer society. Surviving enterprises may have been able to communicate something in the marketplace that consumers viewed as valuable.  Hypothesis 1: Access to investments and intellectual capital has made it possible for East German companies to survive the introduction of the market economy. The few companies that did not disappear after the reunification of Germany may have survived as a result of the efficient use of intellectual capital. The knowledge of how to survive in a market economy may have existed within these companies, and intellectual capital can also be transferred to a company after the transition to a market economy. The knowledge of how to produce marketable products and services is evidently present, as these companies survive in competitive markets.


Subculture Formation, Evolution, and Conflict between Regional Teams in Virtual Organizations – Lessons Learned and Recommendations

Hugo M. Latapie, Independent Consultants, CA

Vu N. Tran, Independent Consultants, CA



This paper focuses on the impact of subculture formation, evolution, and conflict within a large virtual organization with multiple remote collaborating teams.  As consultants, we were brought in to turn around a failing inter-site development project, and exploring the root causes of the failure was among the first steps that we took.  Using team emails, exit interview records of those who quit, and one-on-one interviews with the remaining members, we deconstruct the underlying cultural factors that led to the near demise of the project.  The lessons from this case study shed further light on the challenges of developing high-performance virtual teams, and we also provide some recommendations for addressing these challenges.  A key challenge in the development of a high-performance team is the development of its underlying teamwork culture (LaFasto & Larson, 2001). A team's teamwork culture is the set of shared beliefs and assumptions about how team members get work done collaboratively, e.g., how the team identifies, communicates and resolves problems together.  Teams that cannot collaborate, i.e., teams with weak teamwork cultures, at best cannot achieve a level of team productivity that goes beyond the sum of what individual team members generate working alone.  At worst, teams that cannot work together will fail together.  According to LaFasto and Larson, “when teams do get off track, the problems rarely have anything to do with technical expertise or content knowledge.  
Rather, teams experience their difficulties in the fundamental social competencies of working together effectively and productively” (LaFasto & Larson, 2001; Larson & LaFasto, 1989). For a virtual team, a teamwork culture is even more challenging to develop, as team members are much more culturally diverse, live and work in different time zones, report concurrently to different management structures with potentially conflicting organizational objectives, and often communicate only through computer-mediated technologies rather than face-to-face meetings. Kirkman et al. identified five additional difficulties that are unique to virtual teams: (a) building trust within a virtual team, (b) creating team synergy, (c) experiencing isolation and detachment, (d) finding people with the right interpersonal and teamwork skills for virtual teams, and (e) assessing and recognizing remotely located team members’ performance fairly (Kirkman, Rosen, Gibson, Tesluk, & McPherson, 2002).  This paper focuses on subculture development within virtual teams in global organizations.  Specifically, it explores the causes and impacts of subculture formation, evolution, and conflict among regional teams within a multi-national organization when they are pulled together to support project work.  Our objective is to contribute further insight into the challenge of developing multi-national virtual teams.  A virtual team in our study has the following characteristics: It is made up of multiple regional teams, each consisting of one or more members.  Each regional team has some level of autonomy, with its own local leadership for work coordination. The role of the local leader(s) can be very limited, i.e., providing administrative support, or more involved, i.e., providing centralized decision making for work assigned to the regional team.  
Each regional team has its own subculture, shaped by a mixture of team, professional, regional, and national cultures.  Regional teams can collaborate by being given responsibility for specific work units, i.e., the traditional distributed team model, or through tight integration, i.e., remote members working directly with one another on the same shared source code.  New regional teams can be formed and integrated into the virtual organization as needed during a project to protect delivery schedules.  The subject of our study is a multi-national company, known here as GlobalVillage, which adopted the virtual team approach for a major software development project.  The project recently ran into serious difficulty, requiring our organizational development consulting support. We were given access to the team’s emails, project plans, documentation, and team members for the study.   Due to space limitations and the overall purpose of presenting this as an experience paper, we focus on the project background, the research findings, the lessons to be shared, and recommendations for organizational improvement. Company, project, location and person-specific details not critical to our discussion have been modified for confidentiality.  GlobalVillage has been in the software development business for more than 20 years, with headquarters in London, UK.  It recently decided to expand its enterprise management system development and integration operations into North America, with a 2-year development contract.  To ensure the success of the project, GlobalVillage opened an engineering support office in the US to work in conjunction with the main development team in the UK.  A virtual team was formed between the US and UK offices to support this new project, and a strong manager with proven experience in project execution was hired to lead the US team.  
Through a co-interview effort shared between the new local and remote project leadership, GlobalVillage hired the rest of the team members, including three leads, for GlobalVillage US.  The initial team build-up was considered a success, and the US and UK teams worked well together.  The UK team handled overall project management, product development and unit testing; the US team provided product integration and end-to-end testing support.  The UK team provided the technical training the US team needed to perform its job.  As the US team gained more knowledge, it was asked to take on additional tasks: starting with bug review and analysis, the team moved up to performing small code fixes for low-priority bugs, and then to small feature analysis and design in support of the UK team's work. Inter-team conflicts began to emerge six months into the project, when the US team complained that the bug review and analysis process was too time-consuming.  The task took a large portion of the team's time, leaving too little for new tasks.  The team requested a reduced workload on this task, with UK team members taking on more of it, but the request was rejected: UK management clarified that bug review and analysis was the most critical activity the US team could perform for the project. A few months later, another set of change requests came in from the US team.  Engineers were getting tired of doing small bug fixes and wanted to get more involved in the development of major software components.  Again, the requests were rejected.  The UK team was handling the major component development task and needed the US team's support with small bug fixes, following bug review and analysis work. 


 Cultural and Environmental Factors: Their Effect on the Home Buying Behavior of First Generation Asian Indian Immigrants

Dr. Shashi Dewan, Winona State University, Rochester, MN

Shashi K. Dewan, Coldwell Banker at Your Service, Rochester, MN



The real estate buying behavior of people is a function of the culture and environment they belong to. It is important to understand these cultural factors and nuances to serve real estate clients better. This paper applies a consumer behavior model to understand the residential real estate buying behavior of first-generation Asian Indian immigrants in the United States.  According to the 2000 Census, about 1% of the U.S. population, almost 1.7 million people, is of Asian Indian descent (also referred to as Indian Americans), a jump of 105% from 1990. By 2005, the population had increased to 2.3 million, and it is expected to triple by 2050 (AREAA, 2006). Indians are the largest Asian American group in the Midwest, Northeast, and South of the United States. The average age of first- and second-generation Indians is 31 years, compared to 35 years for the general population (AREAA, 2006). Median family income for Asian Indians is $70,708, more than the national median income of $50,000. This group is the most successful Asian group in the United States, reporting top income, high education (64 percent have at least a bachelor’s degree), professional employment (59.9 percent are in management, medical, or other professional occupations), and good English-speaking ability (75 percent of the group speak English) (Saran, 1985; Watanabe and Wride, 2004).  Three fourths of this group are first-generation immigrants, born outside the U.S.; many voluntarily immigrated for better opportunities (Watanabe and Wride, 2004; Fenton, 1988).  Seventy percent of the naturalized group are homeowners. There has been a constant increase in homeownership over the years among naturalized and non-citizen groups, even though Asian Indians trail the other ethnic groups (Associated Press, 2003; Freddie Mac, 2005). 
Real estate agents and other related entities can no longer ignore the buying behavior and needs of these people, whose median home price exceeds both the $199,300 median for all Asians and the national median of $185,200 (Ha, 2005). To understand their residential real estate buying behavior, real estate agents must decipher the unique cultural and environmental factors that affect it. Consumer behavior concepts can help in this endeavor.  Consumers purchase products and services for the benefits derived from their use. The benefits are not necessarily economic in nature, particularly in the case of a home: people are willing to spend more for intangible benefits such as pleasure, convenience, status, social standing, and stress reduction. Most real estate purchases are high-involvement goods, and the decision making is quite complex (Nicosia, 1966; Engel, Kollat and Blackwell, 1968; Howard and Sheth, 1969).  The purchase decision starts with identifying that a need exists. Among Asian Indians, this need gets triggered mainly when they realize that homeownership will be a better investment than renting a larger space, which may be needed due to life-changing situations such as marriage, childbirth, parents moving in, or a change in wealth and status (HarrisInteractive, 2004; AREAA, 2006). Once the need is identified, they have to seek information about which kind of house will satisfy it. Theoretically, consumers will first check internally for any information they have from experience (Bettman and Park, 1980; Punj, 1987; Engel, Blackwell and Miniard, 1995; Anglin, 1997). Since most first-generation Asian Indians are young, educated professionals (average age 31 years) buying their first homes in the United States, little such experience exists. 
They therefore tend to find information from external sources, starting with secondary information such as newspapers, real estate agents, friends, relatives, and the internet (Clark and Smith, 1979; Kaynak and Meidan, 1980; National Association of Realtors, 1990).  Asian Indians like to do their homework before they contact any real estate agent. Their first stop is going to be the internet. They will then learn about neighborhoods through personal contacts, driving through specific neighborhoods, and visiting open houses (Punj, 1987; Beatty and Smith, 1987; Baryla and Zumpano, 1995; HarrisInteractive, 2004). Most of them will choose a real estate agent while visiting open houses or through references by friends. Since they may have to share their personal information with the real estate agent, many Asian Indians will go to a real estate agent from a different race, unless their friends or relatives have recommended someone from their own race as being very reliable and a close friend (Hodge, 1997; Lee, 2000). They are hesitant to share information about their finances in case the information gets back to their community; previous experience in India tends to make first-generation Indian immigrants cautious of their own community. Asian Indian Americans are professionals and very inquisitive, not afraid to ask questions. They will ask many questions about what kinds of houses are better, what criteria to use in buying a house, and the rationale behind the criteria. They will not, however, simply be swayed by that information: they will chew on it until it makes sense to them, or reject it and form their own opinion. They will visit many houses, some several times, and take their time making a decision (Hempel, 1969; Baryla and Zumpano, 1995; Gibler and Nelson, 2003). In the process they may lose a house. However, they are not attached to any house, and will buy one only if it seems to be a good investment (AREAA, 2006). 
The average home search period is about 1½ years from the time they start thinking about buying a house, and about 7 months once they are very serious about buying (Beatty and Smith, 1987). They may see as many as 20-30 homes before making their final decision. Situational determinants influencing information search can include a lack of available houses and time pressures (Baryla and Zumpano, 1995). Time constraints do not usually affect Asian Indians (Lipner, 1994); they like to make this big-ticket purchase after a lot of thought. In situations where they have to move cities or jobs, they will simply live in rental properties until they find a property that is worthwhile. Similarly, if house prices are high because the market is a seller’s market, first-generation Asian Indians will wait for the economic situation to turn, until a house becomes a profitable investment.  Varied home features may influence the process of information search. Indian Americans are very particular about getting detailed information about the house, such as the square footage of each room, appliance quality and function, siding, roof, and all the other details, which can then be used to evaluate the value of the house. They will spend more than their budget as long as the investment makes economic sense (Kiel and Layton, 1981; McCrea, 2003; Fornoff, 2005; AREAA, 2006).


Experience of Partnership, Experience with a Partner, Interpersonal Complicity: What Impact on Success in a Logistic Partnership?

Dr. Franck Brulhart, Aix-Marseille University, France



Cooperation practices have grown increasingly popular in recent years, in the academic literature as well as in organizational practice. In spite of this undisputed popularity and the many traditionally recognized advantages of cooperative practices, the levels of performance obtained are very heterogeneous. Consequently, it appears legitimate and necessary to ask which elements are likely to increase the capacity of a partnership to achieve its goals. On this topic, a more precise problem holds our attention: the part played by experience in the operation of logistic partnerships and, ultimately, in their success. We distinguish three aspects of experience and three potential sources of learning in the management of partnerships: that resulting from the accumulation of cooperation experience with multiple partners, that resulting from the accumulation of experience with a particular partner, and finally that resulting from the interpersonal experience of the actors in the partnership. Our results, based on a questionnaire survey, suggest that partnership success is significantly and positively associated with experience of partnership management, experience of joint work, and interpersonal complicity. On the other hand, our results reveal an unexpected negative relation between the duration and depth of the relationship and the success of the partnership. Cooperation practices have grown increasingly popular in recent years, in the academic literature as well as in organizational practice (Kale et al., 2002). From this point of view, one can more particularly underline the interest given to vertical cooperative relationships and, in particular, to logistic partnership (1) (Anderson and Narus, 1990; Bagchi and Virum, 1998; Bhatnagar and Viswanathan, 2000; Ruyter et al., 2001). 
In spite of this undisputed popularity and the many traditionally recognized advantages of cooperative practices, vertical ones in particular, the levels of performance obtained are very heterogeneous and the success rates attached to these operations remain weak (Spekman et al., 1998; Sagawa and Segal, 2000). Few elements exist, however, concerning the means of developing and seeing partnerships flourish (Gulati, 1998; Spekman et al., 1998), in particular in the field of logistic services. Consequently, it appears legitimate and necessary to ask which elements are likely to increase the capacity of the partnership to achieve the goals it has set. On this topic, a more precise problem holds our attention: the part played by experience in the operation of logistic partnership relations and, ultimately, in their success.  Even though there is a fairly substantial volume of empirical studies aimed at establishing a link between cooperation experience and performance, these studies remain rather inconclusive (Gupta and Misra, 2000); depending on the case, studies show a partial relationship between experience and success (Barkema et al., 1997) or even a lack of relationship (Pangarkar, 2003). Equivalent results are found when the same relationship is studied in the context of acquisitions (Haleblian and Finkelstein, 1999; Hayward, 2002). These non-concordant results call for continued investigation of learning through cooperation experience. Moreover, the various studies quoted above relate almost exclusively to joint venture (JV) cases. In our view, however, there is a major structural difference between the vertical partnership, which almost never gives rise to the creation of an independent common entity, and the JV. 
The vertical partnership generates a relationship between the two partners that is more direct (since it is not intermediated by a JV), more informal (since it is not formalized by the creation of a common independent structure), and more "inter-organisational" (since the borders of the two organizations persist and do not disappear in favour of a common structure). These specificities of the vertical partnership lead us to refuse the direct transfer of results relating to JVs and justify, in our view, a specific study.  These reasons lead us to ask the two following questions within the framework of vertical logistic partnerships:  Can companies learn from their past or present partnerships, and can the experience thus developed enable them to improve the performance of the cooperative relationship? On the one hand, experience can enable companies to learn to be more efficient in apprehending and handling situations (Hayward, 2002). On the other hand, this learning does not necessarily occur in a systematic manner: it can be lost or badly transmitted (Huber, 1991), and the results of experience can be incompletely or badly interpreted (Haleblian and Finkelstein, 1999). This observation quite naturally leads us to question the contingency phenomena which affect this learning: under which particular conditions does the accumulation of experience make it possible to improve the performance of the cooperation?  Is a succession of cooperation experiences undertaken with the same partner, and the thorough knowledge that comes from them, favourable to the improvement of the performance of the partnership?  To answer these questions, an empirical analysis was carried out in two parts. A first, qualitative phase, based on expert interviews, was carried out with ten service providers and agri-food industrial firms involved in partner relationships. 
This first stage made it possible to refine and specify the model and the measurement of the variables. The second, quantitative, stage of the research was implemented with the aim of confirming the established model. This model advances the assumption of a positive influence of the partners’ experience on the success of the partnership. The article presents the results from this second phase of research, which is based on a questionnaire survey.  In this part, we reconsider the concept of partnership success itself before turning our attention to the role of experience in the success of the vertical partnership.  The measurement of the success of partnership operations is subject to debate, and no real consensus exists concerning the measurement of alliance performance (Kale et al., 2002). In this respect, and except for the case of the JV, much criticism has been formulated against financial measurements. Indeed, in the case of a partnership for which no independent entity is created, there is no common basis on which to establish the calculation of indicators. For this reason, the observer is faced with potentially asymmetrical and contradictory results (Gulati, 1998; Kale et al., 2002). Another widespread approach consists in analyzing the survival or continuity of the relationship (Kogut, 1989, 1991; Hennart et al., 1998). However, this approach has also been criticized in that it does not manage to distinguish between alliances which disappeared because they failed and those which disappeared simply because the partners had achieved their goals, the alliance thus having no further reason for being (Gulati, 1998; Kale et al., 2002). A third alternative appeared which seemed able to mitigate the limits evoked above, at least partially: the evaluation of relationship satisfaction (Morgan and Hunt, 1994; Kale et al., 2002). 
It seems to us that the concept of each party’s degree of satisfaction with the relationship (Geyskens et al., 1999) is indeed the most relevant measurement for evaluating the success of a logistic vertical partnership. It makes it possible to evaluate the degree to which objectives are reached as well as the value and productive character of the relationship (Kale et al., 2002). Moreover, it is a multidimensional concept which can be adapted to the various areas of the relationship. Lastly, several studies show a strong correlation between perceptual measurements of performance and more "objective" measurements founded on accounting and financial elements (Geringer and Hebert, 1991) or on stock exchange data (Kale et al., 2002).


The Basel II Capital Accord and Operational Risk Management: Status and the Way Forward

David Häger,  University of Stavanger, Norway

Dr. Lasse B. Andersen, University of Stavanger, Norway

Dr. Terje Aven, University of Stavanger, Norway

Frode Bø, SpareBank 1 SR-Bank



The Basel II New Capital Accord calls on national regulatory authorities to require financial institutions to formalise their efforts to manage operational risk. The Basel II accord establishes three approaches for the management of operational risk: the Basic Indicator Approach (BIA), the Standardised Approach (SA), and the Advanced Measurement Approach (AMA), where the AMA is the most extensive of the three. A substantial effort has been directed towards establishing methodology that satisfies the strict requirements of the AMA, and especially the quantification of regulatory capital. In this paper we review and discuss this methodology and the corresponding risk management regimes. We conclude that the regulations and prevailing current practices put emphasis on risk identification, monitoring, and measurement using traditional statistical analysis of hard data. This approach leaves a limited capacity for proactive risk management, as the data are scarce and not forward looking. We suggest a shift of focus in the management of operational risk. A framework is introduced that puts emphasis on the analysis of the probability of loss events occurring, rather than their consequences. The analysis is based on causal modelling, visualising loss scenarios and their causes, instead of traditional statistical analysis of historical data. A broad risk perspective is adopted in which risk is defined by the combination of possible consequences and associated uncertainties, acknowledging that the management of operational risk has to see beyond calculated probabilities and expected values. The proposed framework satisfies the AMA prerequisites, and we believe that it also represents a competitive edge, well suited for the identification of risk drivers, the evaluation of mitigating actions, and risk-informed decision-making.  Within the framework of the Basel II New Capital Accord (BIS, 2006), operational risk (OR) is treated with the same formality as credit and market risk. 
The essence of this formalisation is that banks have to hold equity capital, referred to as regulatory capital, corresponding to their individual OR exposure. By managing their risk the banks can directly influence the size of this equity capital. Operational risk is defined by the Basel II accord as “the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events” (BIS, 2006). Regulatory capital is introduced as a measure to ensure that banks are not bankrupted as a result of losses under the definition above (as Barings Bank was in 1995), and is hence a protective measure for the customers and owners of the banks as well as for general national (and international) financial stability. Rather than just preventing bankruptcy, banks should have a genuine interest in understanding and controlling the risk of the activities they are involved in; a bonus is of course reduced regulatory capital and thus higher available working capital.  Basel II defines three approaches to establish the regulatory capital for OR: the Basic Indicator Approach (BIA), the Standardized Approach (SA), and the Advanced Measurement Approach (AMA), where the AMA implies the most extensive effort of OR management. It is the AMA that has received the most attention, and it is also the focus of this paper. The focus on regulatory capital in the Basel II accord has resulted in a very consequence-oriented risk management approach, emphasising estimation of loss severity, in addition to estimation of loss frequencies, using standard statistical analysis methods. The result of this approach is a focus on risk measures such as the operational Value at Risk (OpVaR). This measure is estimated mainly from historical data, and uncertainty is expressed using, for example, a confidence interval.  We have reviewed the current regulations and the literature on OR, and in the paper we report on the main findings and discuss their implications. 
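The frequency-severity logic behind the OpVaR measure discussed above can be sketched with a small Monte Carlo simulation: simulate the annual number of loss events, sum a random severity for each event, and read the 99.9% quantile (the AMA confidence level) off the simulated annual aggregate loss distribution. The distributional choices (Poisson frequency, lognormal severity) and all parameter values below are illustrative assumptions for the sketch, not figures from the paper or from any actual bank.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative, assumed parameters (not from the paper):
FREQ_LAMBDA = 25               # expected loss events per year (Poisson)
SEV_MU, SEV_SIGMA = 9.0, 1.8   # lognormal severity parameters
N_SIM = 50_000                 # number of simulated years

# For each simulated year: draw an event count, then sum that many severities.
counts = rng.poisson(FREQ_LAMBDA, size=N_SIM)
annual_losses = np.array([rng.lognormal(SEV_MU, SEV_SIGMA, size=n).sum()
                          for n in counts])

# OpVaR at the 99.9% level: the 99.9th percentile of annual aggregate loss.
op_var = np.quantile(annual_losses, 0.999)
expected_loss = annual_losses.mean()
print(f"Expected annual loss: {expected_loss:,.0f}")
print(f"OpVaR (99.9%):        {op_var:,.0f}")
```

The sketch also makes the paper's data criticism concrete: the quantile is driven entirely by the historical parameters fed in, so adding old or irrelevant data tightens the statistical estimate without making the number any more relevant to the bank's forward-looking exposure.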
Some of the key challenges addressed are: the dependence on historical data limits the possibilities for proactive risk management, as the data are sparse and not forward looking; the regulatory focus on consequences results in work aiming to minimize the consequences of loss events rather than focusing on their probability of occurrence; and focus is placed on statistical parameters and their uncertainties, instead of observable quantities and their uncertainties.  Concerning the third point: is a confidence interval reflecting the uncertainties of interest? The point is that a confidence interval is related to uncertainties in probabilistic parameters, expressing probabilities and expected values, interpreted as average numbers over infinite thought-constructed populations of activities similar to the one being studied.  Let us say a medium-sized Norwegian bank applies this approach to address the uncertainty in a specific risk measure. Due to its size it has very limited internal registered data, and it purchases data spanning five years back from a Norwegian database. Based on these data the risk measure is computed. To address the uncertainties, confidence intervals are established on the parameters of the loss severity distribution. These are initially wide, and to reduce the uncertainty further the bank purchases data from across Europe spanning the last fifty years. The result of the increase in data points is a narrow confidence interval for the parameters, and subsequently the risk estimate is presented to management as being very accurate. When the data volume is increased by using old and irrelevant data, does it really make sense that the accuracy of the estimate is improved?  As an alternative to the prevailing thinking we suggest a framework that puts emphasis on the analysis of the probability of loss events occurring, rather than their consequences. 
The analysis is based on causal modelling, visualising loss scenarios and their causes, instead of traditional statistical analysis of historical data. A broad risk perspective is adopted in which risk is defined by the combination of possible consequences and associated uncertainties, acknowledging that the management of OR has to see beyond calculated probabilities and expected values. The proposed framework for risk management satisfies the AMA prerequisites, and we believe it will also represent a competitive edge, well suited for the identification of risk drivers, the evaluation of mitigating actions, and risk-informed decision-making. The framework is a first step towards establishing an approach that, in an organized and easy-to-follow manner, can be fed organisation-specific input, and in which causal relations are the foundation for understanding the mechanisms leading to loss.  The competitive edge of this approach can be illustrated by a simple example: a bank considers a new business line, e.g. trading of shares, and bases its decision on whether or not to enter this business partly on an evaluation of the risk involved. Under the approach practiced today, the bank's risk management department would compute the OpVaR and present it to management as the risk. At best this will also include a qualitative discussion of the loss scenarios involved. The risk is introduced as the best estimate of the real risk of entering into this new business, and this risk number alone could be of major importance for the decision.  Under our suggested approach, a visualisation of the loss scenarios and their causes would form the basis for a discussion of the risk involved. Probabilities are assigned and uncertainties in phenomena and processes highlighted.  Identification of mitigating actions that could reduce the risk to an acceptable level is central. 
The OpVaR number says nothing about opportunities for risk reduction and provides limited aid in seeing the potential or opportunity in new business lines. The approach we suggest enables a competitive advantage in that business lines which initially seem too risky can be realised as profitable through a set of mitigating actions identified through the risk analysis process. The thoughts and ideas presented in this paper are anchored in a new research project involving leading actors within the Norwegian banking industry, representing 80% of the total banking capital. Thus, where specific bank practices are referred to, we draw upon Norwegian banking expertise and experience.  We start by reviewing previous research and prevailing practice on the subject, and continue with a presentation of our suggested framework. 
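The causal-modelling idea above can be made concrete with a deliberately tiny sketch of the share-trading example: a loss event is modelled as occurring when human error coincides with weak controls, or when a system fails, and the effect of a mitigating action is read off by re-evaluating the model with a reduced cause probability. The causal structure and every probability below are hypothetical illustrations, not the paper's actual model.

```python
from itertools import product

# Hypothetical causal model for a share-trading loss event. A loss occurs if
# an erroneous trade slips through (human error AND weak controls) or a
# system failure occurs. All probabilities are illustrative assumptions.
P = {"human_error": 0.10, "weak_controls": 0.20, "system_failure": 0.02}

def p_loss(p):
    """Probability of a loss event, by enumerating all cause combinations."""
    total = 0.0
    for he, wc, sf in product([True, False], repeat=3):
        if (he and wc) or sf:  # the causal logic of the loss scenario
            total += ((p["human_error"] if he else 1 - p["human_error"])
                      * (p["weak_controls"] if wc else 1 - p["weak_controls"])
                      * (p["system_failure"] if sf else 1 - p["system_failure"]))
    return total

baseline = p_loss(P)

# A mitigating action (e.g. a four-eyes check) halves P(weak controls).
mitigated = p_loss({**P, "weak_controls": 0.10})
print(f"P(loss) baseline:  {baseline:.4f}")   # 0.0396
print(f"P(loss) mitigated: {mitigated:.4f}")  # 0.0298
```

Unlike a single OpVaR number, such a model shows where the probability mass comes from, so the effect of each candidate mitigating action on the loss probability can be evaluated directly.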


Asian and Western Market Research Differences and Similarities

Dr. Shawana P. Johnson, President, Global Marketing Insights, Inc. Independence, OH



In 2005 and 2006, the National Oceanic and Atmospheric Administration (NOAA’s Satellite and Information Service) contracted with Global Marketing Insights, Inc. to provide a comprehensive review of the international remote sensing market for aerial and spaceborne sensors, focused on Canada, the United States, Europe, and Asia.  A total of 1,547 online surveys and 250 personal interviews were completed by the Western respondents in 2005. The Asian study produced 408 surveys and 50 interviews in 2006. The online surveys produced sophisticated statistics, and the personal interview questionnaires provided qualitative input supporting the trends discovered in the statistics.  Although the study time periods were the same and the size of the sample population was similar, the Asian response was much lower in comparison to the West.  Although the client (and the research team) were delighted by the response rate from Asia, it was interesting to discover the differences and similarities in the research techniques utilized between the Western and Asian studies.  The surveys were very technically oriented (as was the sample population), and they were provided in English and Chinese.  Over 20 countries were targeted, and Table 1 below highlights the response rates by country.  Currently the most popular methods of market research in Asia are traditional methods such as door-to-door, central location, and focus groups.  Phone surveys are gaining acceptance as a larger percentage of households acquire phones, a trend driven by cellular phones; phone surveys are currently used in business-to-business studies. More advanced techniques such as panels and omnibus studies are making progress. The widespread use of computers is making e-mail surveys feasible, particularly in business-to-business studies.  
Since the target audience of this study was highly technical and business-oriented, online surveys were utilized in the hope that the response rate would be higher than that of online surveys used for Asian consumer market research.  It was somewhat difficult to adapt a market study from the West to the same industry in Asia, even though the industries are the same.  The reason for this was the number of differences in socio-cultural context that affect how research needs to be conducted. Beyond language and other obvious differences are more subtle cultural traits that needed to be considered and accounted for; these traits can result in unreliable results if not addressed, and they most impacted the personal interviews.  Asians typically do not want to express their opinions directly, especially if they view their response as out of alignment with the interviewer or the industry represented, so many times the interviewee will provide an appropriate answer rather than what they believe to be true.  To account for this, the personal interviews were conducted by researchers from a group of Asian Alliance Research Partners who possessed similar cultural traits.  In business, Asian (especially Chinese) communities have what is referred to as a “network of relationships”. Anyone outside of this closed immediate network has difficulty obtaining access to a business and its employees unless an esteemed reference is provided.  Although this was not viewed as much different from the cultural traits in Western business, access to the Asian business network in this industry proved even more closed.  The Asian Alliance Research Partner Network became even more important in developing links to this community. Table 2 lists the members of the study network and their contact areas in Asia.  
The main value of these Alliance Research Partners was that they had local Asian offices; instead of just referring respondents to the survey, they hosted the survey link on their web sites and promoted the surveys at in-country technical conferences.  It was important to use these local Alliance Research Partners first because we wanted the respondents to be exactly representative of the target customers, and the Partners had access to the correct target sample. The Partners were selected after careful review of their businesses and market reach, in order to ensure that they covered diverse segments of the Asian industry being studied.  Because a Partner was actually part of the group studied, we avoided bias by utilizing researchers who were not part of the Partner organization; as a general research practice, researchers should not know the participants and should be outsiders. Bias was also avoided by utilizing the online research tool, which was independent of the Alliance Research Partner Network: the Partner only had to refer the respondent to the online survey.  The personal interviews also needed to be managed in a way that would prevent research bias.  This was accomplished by sharing booth space with Partners at Asian conferences, which allowed researchers who did not know the participants to have access to the respondents.  The Partner booth (due to its familiarity to the respondent) drew the respondent in, and the researcher then completed the personal interview.  The researchers were trained in the cultural differences they would encounter and focused on obtaining the confidence of the respondents. For example, the researcher stressed that the surveys were anonymous and that the respondent’s name would not be shared with others; only their views and opinions would be utilized.  
The survey also had to be explained, to allow the respondent to understand that there were no wrong answers and that the researcher was truly interested in their opinion.  The researcher had to be sure not to share their own opinion with the respondents, and had to be friendly in order to put the respondent at ease.  It was interesting that the female researchers completed more personal interviews in the same allotted time than their male counterparts.  Over fifty percent of the Asian study respondents were under the age of 30.  They were well educated, with highly technical skills for that industry, and were extremely open and very friendly.  Figure 1 highlights the positions held by the Asian respondents. 


A Configuration Form of Fit in Management Accounting Contingency Theory: An Empirical Investigation

Dr. Cadez Simon, University of Ljubljana, Slovenia



The study aims to further our appreciation of management accounting systems in their organizational context by building on the premises of contingency theory. The fundamental tenet of contingency theory holds that company performance is a product of an appropriate fit between structure (the management accounting system) and context (contingency factors). In this paper, an integrative contingency model of management accounting is advanced and empirically assessed. It is the issue of fit that is central to contingency theory, and in this study a configuration-contingency form of fit is tested via cluster and profile deviation analysis. The configuration form was used because it takes a holistic view of fit and seeks to examine the effects on performance of internally consistent multiple contingencies. The results appear to support the central proposition of contingency theory: a sophisticated strategic management accounting system is not automatically associated with superior performance; rather, superior performance is a product of an appropriate fit between the identified contingent factors and the management accounting system.  The study has two main objectives. The first is to further our appreciation of management accounting systems in their organizational context by advancing a contingency-based management accounting framework. Contingency theory posits that organizational structures and systems are a function of environmental and firm-specific factors (Anderson and Lanen, 1999; Haldma and Laats, 2002; Chenhall, 2003). In this study, four factors have been noted as potentially carrying significant implications for management accounting system design: (1) business strategy, (2) the degree to which the adopted strategy is deliberate or emergent, (3) market orientation, and (4) firm size.  The study’s second objective is to empirically investigate the validity of the proposed management accounting contingency framework by testing a configuration form of contingency fit. 
While Dent (1990) and Fisher (1995) claim that contingency theory has become the dominant paradigm in management accounting research, such a view is debatable. A closer look into many “contingency studies” reveals that the conditional associations of two or more independent variables with performance as a dependent variable, which are central to contingency theory (Drazin and Van de Ven, 1985), are rarely appraised. Most studies would be better described as “congruency” theory applications in which only unconditional associations between context and structure are appraised. Another problem with the contingency literature in management accounting revolves around its fragmentary and often contradictory nature. In the past, many different forms of fit have been used and tested in contingency-based research, yet very few researchers fully acknowledge the difficulties of relating the different forms to each other (Gerdin and Greve, 2004). This study aims to empirically assess a configuration-contingency form of fit, which takes a holistic approach and is tested via cluster and profile deviation analysis.  The remainder of the paper is organized as follows. In the next section, the management accounting system is defined. Following this, the contingency model of management accounting is developed. In subsequent sections, the research method is described, the results are outlined and the conclusion provides an overview of the most salient issues arising from the study.  A review of the literature suggests two perspectives on management accounting can be taken. Firstly, management accounting can be conceived of as comprising a set of accounting techniques. Secondly, management accounting can be viewed as concerned with the involvement of accountants in corporate strategic decision-making processes. These two perspectives are explored below.  
Guilding et al. (2000) provided a distillation of strategic management accounting (SMA) techniques and also criteria for viewing a particular accounting technique as “strategic”. They noted that in much of conventional management accounting, a one-year time frame is assumed and an inward focus predominates. These characteristics highlight the non-strategic nature of conventional management accounting, as strategy implies a long-term, future-oriented time frame and an externally-focussed perspective (Mintzberg, 1987a; Mintzberg et al., 1995; Hunger and Wheelen, 1996; Porter, 1996). Guilding et al. (2000) consequently advocated that these characteristics might be usefully drawn upon when determining what accounting techniques qualify as SMA. In their view, the techniques should demonstrate degrees of the following orientations: environmental (outward-looking) and/or long-term (forward-looking). Employing these criteria, Guilding et al. (2000) drew 12 SMA techniques from the literature. In a subsequent work, Cravens and Guilding (2001) added another three techniques. Drawing extensively on these works, 16 SMA techniques have been identified for analysis in this study. These techniques have been classified into five broad categories. Three of the categories correspond to underlying themes of management accounting acknowledged in many management accounting texts: (1) costing, (2) planning, control and performance measurement, and (3) decision-making. The remaining two categories have been labelled “competitor accounting” and “customer accounting”. The techniques are presented in Table 1. Paralleling the development of strategically oriented management accounting techniques, several recent commentaries suggest that accountants are assuming a greater role in the strategic management process (Fern and Tipgos, 1988; Palmer, 1992; Bhimani and Keshtvarz, 1999). Some see the significance of this to be such that a new concept, “the strategic accountant”, has emerged.  
Oliver (1991) argues that in stark contrast to their more traditional counterparts, strategic accountants are integral to strategic decision-making processes. The more mundane accounting tasks traditionally associated with the profession are being increasingly automated, freeing accountants to become involved in broader spheres of management activity. Strategic accountants can be viewed as proactive in analyzing broader business management issues rather than those narrowly defined by a financial orientation, and also more customer-oriented by providing greater counsel to clients (Coad, 1996; Nyamori et al, 2001). Roslender and Hart (2003) see SMA as intimately associated with marketing management.


The Impact of Media Spokeswomen on Teen Girls’ Body Image:  An Empirical Assessment

Dr. Victoria Seitz, California State University, San Bernardino



The media is a highly influential element in teens’ lifestyle, behavior, self-esteem and purchase decisions.  The portrayal of beauty and perfection by teen idols can put pressure on teenagers to become the ideal image.  As a result, many teens suffer from eating disorders and a lack of self-esteem.  Hence, the purpose of this study was to examine the impact of media spokeswomen on teenage girls’ perceived body image.  Data were collected via self-administered questionnaires from a random sample of 100 girls.  Findings suggest that most teen girls have a celebrity/peer actor idol, and that respondents are self-conscious regarding their body shape and weight.  The study confirmed that media spokeswomen play an essential role in teens’ perceived body image and that marketers should be more cautious about the images presented to young girls.  Adolescence is often a time of accelerated mental and physical growth for female teens.  Moreover, this is the time when girls are increasingly confronted with expectations to conform to female gender role prescriptions (Johnson, 1999).   According to Basow and Rubin (1999), the ideal female image in America is thin and attractive, yet with a full body shape. As teen girls feel the pressure to be accepted in society, and by their peers, physical appearance takes on an important role in that process. Females are exposed to supermodel-like images every day via TV and magazines that affect their body image perceptions, purchasing behavior, and self-confidence.  At such a vulnerable age, teen girls are more heavily influenced by these images than any other age group (Andersen and DiDomenico, 1992).  These are the images portrayed by the media through celebrities, peer actors, models and sports figures. To them, these are the ideal images that should be achieved (Wiseman, Gray, Mosimann and Ahrens, 1992).  
Furthermore, the pressure to conform has driven a growing number of teens to pursue permanent make-up, extensive dieting and cosmetic surgery (Martin and Kennedy, 1993).  According to the National Eating Disorders Association (2004), there are 10 million women, mostly teens and young adults, suffering from anorexia and bulimia.  A widespread criticism of fashion models is that “. . . they have created an epidemic of eating disorders among teens” (Etcoff, 1999, p. 201). Zollo (1998) asserts that since the 1980s, teens have become heavy media users.  The introduction of teen-targeted programs has helped to nationalize their experience and has connected them through common images and expressions.  Subsequently, these images put heavy pressure on teen girls; as one teen confessed, “. . . just living in this society, is an everyday struggle not to become anorexic” (Kirberger, 2003, p. 84). The role of media spokeswomen has been critical in influencing cosmetic surgery rates among teen girls as well as their overall perceived body image (Quart, 2003).  For example, ABC’s “Connect with Kids” program aired an episode called “Mirror Mirror” that covered young celebrities having plastic surgery.  The program also stated that teens were getting plastic surgery as high school graduation gifts (American Society of Plastic Surgeons, 2004).  In fact, the American Society of Plastic Surgeons reports that the total number of cosmetic procedures performed on people 18 years or younger was 74,233 in 2003, with the most common procedures being liposuction, nose reshaping, thigh-lift and breast augmentation (2004).  Corresponding to the growth of the industry, the amount of magazine coverage related to cosmetic surgery has increased significantly.    Teen girl magazines such as Seventeen, Teen, and Mademoiselle carried 22 such articles between 1980 and 1995.  Not surprisingly, 42 articles appeared in Vogue during the same 15-year time frame (Sullivan, 2001). 
Furthermore, according to the US Market for teen and tween grooming products (Package Facts, 2003), there has been growth in cosmetic products targeted at teens, such as Estee Lauder’s MAC cosmetics and Hard Candy.  Finally, the influence of reference groups cannot be ignored when discussing body image and related consumption behavior.  Teens are heavily influenced by others in their peer group regarding clothing, hairstyle, mannerisms, purchasing decisions and, most importantly, body image perceptions (Bachmann, John, and Rao, 1993). Given this growing trend, and the relative paucity of literature, the purpose of the study was to determine the influence of media spokeswomen on teen girls’ self-image.  More specifically, this study sought to: (1) determine who among selected media spokeswomen types influence teen girls; (2) determine how media spokeswomen influence the way teen girls feel about their body image and the perceived ideal body image; (3) determine the overall influence of media spokeswomen on teen girls; (4) determine teens’ perceived body image; and (5) determine the relationship between media spokeswomen and perceived body image. Children learn their gender roles and develop their identity during the first years of childhood by observation and participation (Erikson, 1980).  Brannon (1995) explains that the media play an important role in how women’s roles are portrayed and shaped.  In western societies, “traditional” women’s roles consist of looking beautiful, being sensitive, and domesticity.  This message is further reinforced through the media (Poniewozik, 2004). Women, as seen through the eyes of the media and cultural norms, are expected to take care of themselves and to be physically attractive (Wolf, 2002).  These roles and cultural expectations are presented by the media to women of all ages, from childhood through adolescence and adulthood.  
Further, Schor (2004) suggests that teen media is a mirror for unrealistic body images and gender stereotypes. Lokken, Lokken and Trautmann (2002) suggest that the mass media present images through spokespersons that mirror the roles and expectations set by American culture.  For women, these images portrayed by spokeswomen condone the culture’s norms.  Psychologists have suggested that the media can affect men’s and women’s body esteem by providing a reference point against which unfavorable body shape comparisons are made (Grogan, 1999). Television and other media represent one of the most important influences on adolescents’ health and behavior (Strasburger and Donnerstein, 1999).  More specifically, teenagers in general pay most attention to advertising that airs on cable TV (54%) and appears in magazines (53%) (Zollo, 1998).  Cable network channels that feature music programs, such as MTV, target audiences between the ages of 12 and 34 (Englis, Solomon and Ashmore, 1994).  According to Englis, Solomon, and Ashmore (1994), 85 percent of middle-class teens watch cable networks every day (Klein, 2000). Magazines and newspapers arrive regularly at millions of American homes, and teen girls’ magazines have shown dramatic growth in the last several years, not only in the number of titles but also in circulation.  In 2002, the teen-targeted magazines Seventeen, YM, and Cosmo Girl were among the Audit Bureau of Circulations (ABC) top magazines, with average readerships of 2,445,539; 2,231,752; and 1,062,271 respectively (Pattee, 2004).  Furthermore, magazines targeting teenage girls have been developed, such as Teen People, Teen Vogue, and Elle Girl. In pursuit of the teen market, advertisers now employ a variety of executions in the media (Zollo, 1998). One of these is spokespersons.  Spokespersons can be further categorized as celebrities and/or peer actors, models and athletes.


Economic Factors, Firm Characteristics and Performance: A Panel Data Analysis for United Kingdom Life Offices

Yung-Ming Shiu, National Cheng Kung University, Taiwan



This article examines panel data evidence to investigate the linkage between insurer performance, proxied by investment yield, and several economic and firm-specific factors in the United Kingdom life insurance industry. The empirical results indicate that investment yield is positively related to the interest rate level, but negatively related to interest rate changes, reinsurance dependence, assets held to cover linked liabilities and instability of asset structure. An insurer’s operations fall into two main categories: underwriting and investment. Because they offer many investment-related products such as with-profits policies, life offices’ investment performance plays a key role in meeting their obligations to policyholders. Previous literature includes studies concerning the relationship between performance and both economic factors and firm-specific characteristics. Based on annual data from 1985 to 1995 for 1,593 United States (US) life insurance companies, Browne, Carson and Hoyt (2001) identify important economic and market factors and insurer-specific characteristics related to life insurer performance. In their paper, company performance is positively related to firm size, liquidity and bond portfolio returns, but negatively related to unanticipated inflation. Adams and Buckle (2003) examine the determinants of operational performance in the Bermuda insurance market using panel data for the period from 1993 through 1997. A two-way random-effects model is estimated. They find that operational performance is positively correlated with leverage and underwriting risk, but negatively correlated with asset liquidity. Since the duration of their liabilities is generally long-term, life offices invest most of their funds in long-duration assets to mitigate balance sheet risks. In practice, however, life insurers often intentionally mismatch the durations by holding assets with longer duration than liabilities to obtain higher returns (Colquitt and Hoyt, 1997). 
Because both bond and equity portfolios account for high proportions of the invested assets of a life office, bond investment earnings and equity returns are important for its investment performance. High bond returns, which largely depend on the level of interest rates, and high equity returns enhance insurer performance. It has been suggested that larger firms have better performance than small companies, mainly because of their economies of scale and greater capacity for dealing with adverse market fluctuations. Although life insurers rely less on reinsurance than their non-life counterparts, reinsurance is still indispensable in life insurance operations since it can increase underwriting capacity, stabilise earnings and provide protection against catastrophic losses. Nevertheless, reinsurance is costly. Increasing reinsurance dependence, i.e. lowering the retention level, may reduce potential profitability. As stated previously, Adams and Buckle (2003) provide evidence that insurers with high leverage have better performance than those with low leverage. However, more empirical evidence, such as that found in Carson and Hoyt (1995) and Browne, Carson and Hoyt (2001), supports the view that leverage risk reduces firm performance. Based on the statutory data contained in SynThesys Life (Version 3.32), 31.1 per cent of the assets of the UK life insurance sector as a whole were held to cover linked liabilities during the period 1986-1999. However, there is no expectation about the direction of the relationship between performance and assets held to cover linked liabilities. An insurer’s product mix represents its liability structure, which might have an impact on performance. In Browne, Carson and Hoyt (2001), it is shown that some product categories, such as ordinary life and annuity products, are significantly related to the financial performance of US life insurers.  
The general objective of this article is to ascertain whether and which economic factors and firm characteristics are associated with life insurer investment performance. This study contributes to the existing literature in the following ways. The first contribution is to carry out comprehensive research on investment performance determinants utilising both statutory returns and economic data. The present study is the first to examine performance determinants in the context of United Kingdom (UK) life insurance. By focusing on the UK life insurance industry, this study is able to control for cross-country and cross-industry differences in regulations. Thus, this paper fills a gap in the literature. Second, this article provides important insights into the economic factors and firm characteristics affecting the investment performance of UK life offices. The last contribution lies in the panel data design of the empirical analysis. The use of panel data models helps overcome some data and method-based limitations, such as the inability to control for unobservable differences across individual insurers that might influence insurer performance. Therefore, this research should be of value to insurance regulators, academics and practitioners.  The rest of the paper is organised as follows. In Section II, the model of the response of individual investment performance to economic and firm-specific factors is presented. The data employed in estimating the model are also described. The estimation and test results are shown in Section III. The last section concludes the article and draws implications for insurance supervision and management.  
Three models are estimated in this study, ordinary least squares (OLS) regression and two panel data models, the one-factor fixed-effects (FE) and random-effects (RE) models, to investigate the relation between a life office’s investment performance and several variables. Panel data, also known as longitudinal data, are obtained by following a cross-section of individual units over several time periods. The panel data design involves the pooling of time series and cross-sectional data, and accordingly panel data have special advantages over time series or cross-sectional data alone. The main advantages of this design are that it controls for temporally persistent differences among offices that may bias estimates obtained from cross-sections (Maddala, 1993) and that the information extracted from panel data is likely to be greater than that from pure time series or cross-sectional data (Gius, 1998). Other benefits of using panel data include reducing the collinearity among explanatory variables and controlling for individual heterogeneity. Inevitably, panel data have some limitations, such as data collection and selectivity problems (Hsiao, 1986; Baltagi, 1995). Panel data models fall into two categories, fixed-effects and random-effects models, depending on the assumption concerning the coefficients. The nature of the sample and inference and the assumed relationship between individual effects and explanatory variables determine the choice between the two alternative models. The fixed-effects model is appropriate when the sample is exhaustive and the inference is made only with respect to characteristics of the individual units in the sample, whereas the random-effects model is appropriate when the individual unit effects are assumed to be uncorrelated with the explanatory variables (Judge et al., 1980; Greene, 2000). 
The one-factor fixed-effects and random-effects models are estimated by partitioned ordinary least squares without an overall constant and by feasible two-step generalised least squares, respectively (Greene, 2002), using LIMDEP (Version 8.0). Lagrange Multiplier (LM) and Hausman tests will be conducted to determine which model is the most appropriate. High values of the LM test favour the panel data models over the OLS model, while high values of the Hausman test favour the FE model over the RE model.  We hypothesise that insurer investment performance, proxied by investment yield, is affected by a number of possible explanatory variables listed in Table 1: Yield_it = α_i + Σ_k β_k X_kit + ε_it, where i is the index of office, k is the index of explanatory variables, t is the index of time periods, α_i and β_k are parameters to be estimated, and ε_it is the error component for office i at time t, assumed to have zero mean, E[ε_it] = 0, and constant variance, E[ε_it²] = σ_ε².   (Insert Table 1 here)
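The contrast between pooled OLS and the fixed-effects (within) estimator can be illustrated with simulated data: when office-specific effects α_i are correlated with a regressor, pooled OLS is biased while demeaning each office's series removes α_i. This is a hypothetical sketch (one regressor, invented parameter values), not the study's LIMDEP estimation:

```python
import numpy as np

# Hypothetical panel: N offices observed over T years, true beta = 0.5.
# Office effects alpha_i are built into x, so pooled OLS picks up the
# effect of alpha_i as well and overstates beta.
rng = np.random.default_rng(1)
N, T, beta = 50, 15, 0.5
alpha = rng.normal(size=(N, 1))            # office-specific effects
x = alpha + rng.normal(size=(N, T))        # regressor correlated with alpha_i
y = alpha + beta * x + 0.1 * rng.normal(size=(N, T))

# Pooled OLS slope, ignoring alpha_i (biased here).
b_pooled = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# Within transformation: demean each office's series, wiping out alpha_i,
# then run OLS on the demeaned data (the fixed-effects estimator).
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
b_fe = np.sum(xd * yd) / np.sum(xd ** 2)

print(round(b_pooled, 2), round(b_fe, 2))
```

The Hausman test formalizes this comparison: a large divergence between the FE and RE estimates signals correlation between the individual effects and the regressors, favouring the FE model.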


Multinational Firms’ Foreign Direct Investment

Dr. Hilmi Nathalie, International University of Monaco, Monte Carlo, Monaco

Dr. Ketata Ihsen, Georgia Institute of Technology, NW Atlanta

Dr. Safa Alain, University of Nice Sophia-Antipolis, Nice, France



This study combines several fields of economics: management, strategy and macroeconomics. It tries to understand the surrounding environment, from the home and host countries’ viewpoints, that encourages MNCs’ FDI. Incentives can motivate their location choice, but the latter also depends on their objectives. After a look at the three major entry modes (wholly-owned mode, international joint venture and contractual mode) and the underlying strategies, the paper deals with the different impacts on the source economies and the recipient countries. Over the last twenty years, the development of cross-border flows has contributed to a growing interrelation between countries. Globalization represents this phenomenon, including the increasing financial flows. Foreign direct investment (FDI) is the most interesting way for a country to increase its total investment capacity. Multinational companies (MNCs) are the major FDI providers. They adapt their strategies to their objectives and to the conditions prevailing in an internationalized world economy. This paper aims at studying why FDI is attracted to certain locations (section I), what MNCs’ delocalization strategies are (section II) and how MNCs’ FDI influences the economies of home and host countries (section III). When corporate managers decide to invest in high-return regions, they have several motives: growth, when they look for new markets; a solution to protection in importing countries; a way of saving transportation costs; escape from exchange rate fluctuations; prevention against market competition; and cost reduction thanks to cheap foreign labour. We can first see the reasons why MNCs tend to internationalize and decide to invest abroad (Hilmi and Safa, 2001). They can be separated into four categories: market and trade conditions, costs of production, business conditions, and government policies and the macroeconomic framework. 
Market push factors: when the home country becomes limited in terms of scale and opportunities to expand, firms tend to move their production overseas, especially if there are trade barriers.  Production costs and constraints in factor inputs: labour costs and inflationary pressures result in overseas investment. Local business conditions: competition with local firms or MNCs, or with firms abroad, is a driver towards internationalization, aiming at company restructuring.  Home government policies can push a firm to invest overseas.   Market pull factors: host economies with large markets are attractive, especially when there are regional integration agreements.  Costs of production: low costs of labour or required resources and proximity are determinant factors, as is availability of natural resources, labour and infrastructure.  Business conditions and host government policy framework: liberalization and privatization policies, trade regulations and FDI inducements, investment treaties, transparent governance, investment in infrastructure, property rights and exchange-rate regulations create an enabling environment for FDI. Closer communication and familiarity make neighbouring countries more attractive. To understand the final choice of host country locations, we must now study MNCs’ motives, strategies and context.  In the search for new customers, FDI in a nearby country is common because of familiarity, cultural and institutional similarity, ease of access, cross-border spillovers and similar factors. Apart from proximity, the size of the host market is important too. Affiliates allow firms to get around trade barriers, to avoid high transportation costs and to adapt products or services to the requirements of customers.  The location of efficiency-seeking FDI depends on the nature of the product and the production network in which it is located (UNIDO, 2004; Hines et al., 2000; Schmitz, 2005). 
There are two main networks. Buyer-driven networks (Gereffi and Memedovic, 2003; Kaplinski et al., 2003) are less dependent on industry clusters. They are usually regional, but their location is driven by cost-reducing factors, such as national and international policies. Producer-driven networks (Humphrey and Memedovic, 2003) consider regional proximity primordial in their geographic delocalization decisions. Concerning natural resource seeking, developing-country MNCs’ motives can differ from those of developed-country MNCs. For instance, in oil, gas and extraction, developed-country MNCs conduct resource-seeking FDI to secure supplies for their home market, whereas developing-country MNCs strategically invest overseas to open or secure markets to supply the home economy. That is why most developing-country MNCs are state-owned.  Very few affiliates seek only created assets. Most have mixed reasons: asset-seeking is combined with asset-exploitation motives, like market-seeking and efficiency-seeking. The acquisition of new assets is a way of maintaining a portfolio of brands and complementing manufacturing and engineering knowledge. Some MNCs, especially state-owned ones, have strategic and political objectives pursued on behalf of their home countries, like securing a vital input or underpinning a country’s development and industrial competitiveness (Lall, 2004).  The rise of MNCs, in a context of world liberalization and internationalization, is essentially due to small domestic markets, rising costs, intense global and local competition, overseas opportunities and liberalized investment policies. Developing and transition economy MNCs have the same types of firm-specific advantages as their developed-country counterparts, but in different proportions. The latter possess key assets, such as technologies, brands and intellectual property; the former have production-process capabilities, networks and organizational structure.  
The choice of an entry mode is a crucial decision for multinational companies (MNCs), one that can put their future at risk. This decision deserves serious consideration since it exposes the firm to different kinds of risk.  The simple choice between “buy” and “build”, as it was presented in the sixties, has been enriched with new intermediary entry modes. These new modes, permitted by law, are the product of companies searching for the best way to enter a new activity. Nowadays, three entry modes are found in most of the multinational-firm literature: the wholly-owned mode, the joint venture mode, and the contractual mode (Ketata, 2006).  The level of control and the resource commitment of the MNC depend on the entry mode chosen (Siripaisalippa and Hoshino, 2000).


Short and Medium-Term Determinants of Current Account Balances in Middle East and North Africa Countries

Dr. Aleksander Aristovnik, University of Ljubljana, Slovenia



The main aim of the paper is to examine the empirical link between current account balances and a broad set of (economic) variables proposed by the theoretical and empirical literature. The paper focuses on the Middle East and North Africa (MENA), an economically diverse region which has so far largely been neglected in such empirical analyses. For this purpose, a (dynamic) panel-regression technique is used to characterize the properties of current account variations across selected MENA economies in the 1971-2005 period. The results, which are generally consistent with theoretical and previous empirical analyses, indicate that higher (domestic and foreign) investment, government expenditure and foreign interest rates have a negative effect on the current account balance. On the other hand, a more open economy, higher oil prices and domestic economic growth generate an improvement in the external balance, where the latter result implies that domestic growth is associated with a larger increase in domestic savings than in investment. Finally, the results show a relatively high persistency of current accounts and reject the validity of the stages-of-development hypothesis, as poorer countries in the region reveal a higher current account surplus (or lower deficit).  The current account balance is an important indicator of any economy’s performance and it plays several roles in policymakers’ analyses of economic developments. First, its significance stems from the fact that the current account balance, reflecting the saving-investment ratio, is closely related to the status of the fiscal balance and private savings, which are key factors of economic growth. Second, a country’s balance on the current account is the difference between its exports and imports, reflecting the totality of domestic residents’ transactions with foreigners in markets for goods and services. 
Third, since the current account balance determines the evolution over time of a country’s stock of net claims on (or liabilities to) the rest of the world, it reflects the intertemporal decisions of (domestic and foreign) residents. Consequently, policymakers endeavor to explain current account balance movements, assess their sustainable (and/or excessive) levels and seek to induce changes to the balance through policy measures. Recent financial crises and the growth of current account deficits in many countries have raised questions about their potential sustainability (and excessiveness) and concerns regarding the potential impact of a rapid and disorderly correction of these imbalances. Several theoretical and empirical studies have tried to address these issues, including investigating the determinants of external balances. However, Middle East and North Africa (MENA) (1) countries have not been the main focus of these analyses as the region consists of many oil-exporting countries with positive and thus relatively unproblematic external positions, especially in recent years. Nevertheless, this paper tries to fill this gap by providing some important insights into the determination of current account balances in the MENA region over the last few decades. The MENA region is an economically diverse group of countries that includes both oil-rich countries in the Gulf like Kuwait, Saudi Arabia and Oman, and resource-scarce countries such as Egypt, Jordan and Morocco. The region’s economy over the past decades has basically been influenced by two factors, i.e. the oil price and the mix of economic structure and state policies. In the 1980s, many countries in the region undertook reforms which induced tremendous improvements in economic growth by the late 1990s. However, the region is still facing economic and social problems, with the most serious ones being unemployment, estimated at about 12.2% of the workforce (2005), and poverty (incl. 
inequality) (2). Indeed, much of the region is still characterized by large public sectors, with centralized governments, large and over-staffed civil services, and weak systems of accountability. This all hinders the development of the private sector and the creation of the jobs needed to significantly bring unemployment down (World Bank, 2004). The Iraq war and the ongoing Palestine-Israel conflict have also had a negative impact on the region’s economic performance in recent years. Nevertheless, as oil prices continued their upward climb the MENA region grew by an average of 6.0 per cent in 2005, up from 3.2 per cent in 2001 and compared to average growth of only 3.7 per cent during the late 1990s.  The approach taken in the paper is to view current account positions as a reflection of saving and investment balances and thus to characterize the fundamental determinants of their levels from a short- to medium-term perspective in the MENA region. Even though such an approach is essentially empirical, it relies primarily on various theoretical models for identifying these fundamental determinants and interpreting their impacts on current account levels. Accordingly, the paper chiefly focuses on the (short and medium-term) (3) determinants of current account dynamics in selected MENA countries. In this respect, the empirical analysis expands and builds upon some previous similar attempts regarding different groups of developing and transition countries (see Debelle and Faruqee (1996), Roubini and Wachtel (1999), Calderon et al. (2002), Aristovnik (2002), Chinn and Prasad (2003), Doisy and Hervé (2003), Zanghieri (2004), Herrmann and Jochem (2005) etc.) 
in the following important ways: annual data for up to 17 MENA countries in the 1971-2005 period are included; a wide range of (internal and external) macroeconomic variables suggested by the theoretical and empirical literature is used; time-series cross-sectional (panel) data are analyzed with a variety of modern econometric techniques; and the MENA region is divided into two diverse subgroups, i.e. oil-exporting and non-oil-exporting countries, with the differences between these two groups analyzed. The paper is organized as follows. The next section briefly presents current account balance developments and trends in MENA countries in the 1971-2005 period. Section 3 describes the empirical methodology, assumptions, data and empirical results of the determinants of current account positions for the selected MENA countries. The final section provides some concluding remarks. The 1970s and 1980s proved to be financially and economically volatile in the MENA region, challenging the ability of governments to achieve a stable macroeconomic environment, including a stable external position. This financial volatility was mainly driven by the two oil price booms of the 1970s, which spurred economic activity in both oil-exporting and oil-importing countries of the region, followed by oil price busts in 1981 and in the latter part of the decade. Hence, in the MENA oil-exporting countries the current account surpluses equivalent to an average of 14.6 per cent of GDP in the 1970s evaporated within a few years, falling to an average surplus of 4.4 per cent of GDP in the 1980s (see Table 1, Appendix B). In the same period, public expenditure was not effectively adjusted to the adverse external developments, which resulted in the emergence of severe internal imbalances. In addition, governments were unable to eliminate price distortions, which led to chronic external imbalances. 
At the time, most MENA governments resorted to excessive external borrowing to finance their inefficient public investments and resource imbalances. These developments created an environment of economic instability and high inflationary expectations in many countries of the region. The effect of external trade shocks on the MENA region during the 1970s and 1980s, coupled with the resistance of several countries to adjusting quickly to those shocks, was clearly reflected in their current account balances. Many MENA non-oil-exporting countries (like Mauritania, Morocco and Tunisia) could not contain their current account deficits below 5 per cent of GDP during most of the 1970s and 1980s. On the other hand, most MENA oil-exporting countries managed to accumulate extreme current account surpluses in the same period, especially in the 1970s (see Figure 1) (4). However, the large surpluses were spent rapidly and, when oil prices fell, governments were obliged to undertake difficult and painful fiscal adjustments (Krueger, 2006). (5) Eventually, these diverse trends in current account dynamics in the two subgroups of MENA countries helped to form a balanced external position for the MENA region as a whole. For the capital-attracting MENA countries, the first half of the 1990s witnessed increased volatility in external balances, as seen in the share of the current account deficit in GDP over the 1990s as a whole (averaging 2.6 per cent of GDP). Debt restructuring in some countries reduced interest payments on debt and helped contain the current account deficit. In extreme cases, structural current account surpluses even emerged in Egypt and the Islamic Republic of Iran. 
Meanwhile, Jordan and the Republic of Yemen were adversely affected by the Gulf war (with current account deficits exceeding 10 per cent of GDP in the first half of the 1990s), and Lebanon (which had just emerged from its long civil strife) showed a very high external imbalance due to reconstruction-related imports. Similarly, oil-exporting countries faced the adverse effects of the Gulf war (in particular Saudi Arabia and Bahrain), which led to a relatively low aggregate current account surplus for these countries in the 1990s (averaging 2.5 per cent of GDP) (see Table 1, Appendix B).
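The panel estimation strategy described in the introduction (time-series cross-sectional data on up to 17 MENA countries over 1971-2005) can be sketched with a within (fixed-effects) estimator. The regressors, coefficients and data below are simulated and purely illustrative, not the paper’s actual series:

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_years = 17, 35            # 17 MENA countries, 1971-2005

# Hypothetical regressors (invented for the sketch): a fiscal balance
# and an oil-price terms-of-trade proxy, both in per cent of GDP
fiscal = rng.normal(0.0, 2.0, (n_countries, n_years))
oil = rng.normal(0.0, 1.0, (n_countries, n_years))
alpha = rng.normal(0.0, 3.0, (n_countries, 1))     # country fixed effects
ca = alpha + 0.4 * fiscal + 1.5 * oil \
     + rng.normal(0.0, 1.0, (n_countries, n_years))

# Within (fixed-effects) estimator: demean every series by country,
# which sweeps out the country-specific intercepts alpha_i
def within(x):
    return x - x.mean(axis=1, keepdims=True)

X = np.column_stack([within(fiscal).ravel(), within(oil).ravel()])
y = within(ca).ravel()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)          # slope estimates close to the true (0.4, 1.5)
```

The demeaning step is what distinguishes the panel approach from pooled OLS: each country is compared only with itself over time, so unobserved country-specific characteristics cannot bias the slope estimates.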


Competitive Performance and International Diversification: Hypothesis of Internal and External Competitive Advantages of Firms

Dr. Alfredo M. Bobillo, Valladolid University, Spain

Dr. Felix López-Iturriaga, Valladolid University, Spain

Dr. Fernando Tejerina-Gaite, Valladolid University, Spain

Ilduara Busta-Varela, Copenhagen Business School, Denmark



The internal and external competitive advantages of firms across different phases of internationalization depend on the resources used by industries for their financial development and growth. These advantages, as well as the influence of internal owners, facilitate firms’ access to foreign markets. This study attempts to clarify the relationship between those resources and firms’ advantages, as well as to analyze the relationship between international diversification (or degree of internationalization) and firm performance in Germany, France, the U.K., Spain and Denmark. Our results support a curvilinear relationship between the degree of internationalization (hereinafter DOI) and firm performance that is articulated in three stages in the presence of industry reputation, technological and distribution barriers, and high transaction costs. These findings point to a cyclic process in the firm’s international expansion, where overcoming such barriers and developing governance and coordination mechanisms to minimize transaction costs becomes the firm’s main challenge in competing at the worldwide level. Diversification represents a growth strategy and has a great impact on firm performance (Chandler, 1962; Ansoff, 1965). In later developments, different studies looking for a link between performance and international diversification (or degree of internationalization) show divergent results. In some cases, results evidence a positive linear relationship, whereas in others they show a negative linear, U-shaped or even inverted U-shaped relationship. How can these apparently conflicting results be explained? Some authors, like Contractor et al. (2003), suggest that the absence of a quadratic term in the equation would explain why initially only a linear function was found. Another reason might be the fact that the data used captured only part of the sigmoid (S-shaped) function. 
These authors propose a three-stage model that explains the relationship between performance and international diversification for service firms. Similarly, Capar and Kotabe (2003) build a linear model to justify the positive linear effect that international diversification has on ROS, and then present a curvilinear model with significantly higher explanatory power, since it introduces a squared term of DOI. Likewise, Lu and Beamish (2004) present a theoretical framework based on three stages (an S-shaped curve) for the study of multinationality and performance, applied to internationally-operating Japanese firms. The recent theoretical contributions consider only some of the institutional factors that can interfere with the process of a firm’s internationalization. Our review of the literature suggests a clear relationship between a country’s financial development and its economic growth. This relationship seems to be quite significant for firms and industries dependent on external finance (Levine, 1998; Beck et al., 2001; Wachtel, 2001). There are multiple theories that justify the relationship between different financial systems and the distribution of economic activities. These explanations rely on the accumulation of information by the financial system, the decentralization of the financial system, or the ownership and governance structure. Consistent with this framework, the institutional structure of countries and the configuration of industrial sectors play an important role in their economic growth. Carlin and Mayer (2003) outline the characteristics of the industries in the 14 largest OECD countries. With a similar view, Rajan and Zingales (1995) survey the determinants of capital structure by analyzing the financial decisions of firms in the most developed countries. They also assert that economic and political variables are likely to have fostered more market-based EU financial systems (Rajan and Zingales, 2003). 
The theory of the multinational firm points out that international investment is explained by the exploitation of firm-specific assets, such as organizational capabilities, technological knowledge, reputation or the creation of reputable brands (Dunning, 1993; Caves, 1996). The commercialization of these assets is difficult, so they will be internalized by the firm rather than exploited through the market. As a consequence, the modifications caused in managerial and financial systems by the institutional structures of each country might favor the external and internal capabilities of firms, thus strengthening their competitiveness. Our study is based on two theories: the resource-based view and social capital theory. In accordance with the first, we stress the existence of idiosyncratic resources within the firm which set up its internal capabilities and which can foster internal competitive advantages, such as investment in intangible assets (R&D and advertising). On the other hand, social capital theory stresses the firm’s capacity to use cost advantages (in either capital or labor) through external capabilities, built on relations with customers, suppliers and other partners of the firm, thus driving external competitive advantages. Bringing these two theories together creates a model for the development of firm-specific assets and allows the capital and labor resources needed for those assets to be obtained. Thus, our first goal is to describe, on the one hand, the possible relations that may exist between countries’ relevant financial variables and their firms’ external competitive advantages and, on the other, the influence of skilled labor in the different countries on firms’ internal competitive advantages. Our second goal is to establish a taxonomy of countries based on the pattern of internal and external competitive advantages used by firms. 
In a third stage, we analyze whether the recently documented S-shaped relationship between performance and degree of internationalization for service firms might apply similarly to industrial firms. In addition, our paper incorporates the characteristics of the ownership and governance structure which, until now, had scarcely been considered in this field. Assuming that institutional structure could influence the development of countries’ financial systems and affect industry characteristics (Carlin and Mayer, 2003), we should expect a correspondence between equity and bank dependence and external firm competitive advantages. Similarly, we can expect a correlation between firms’ dependence on skilled labor and their internal competitive advantages. Therefore, we can postulate our first two hypotheses as follows: Hypothesis H1: We would expect a positive relationship between financial resources (banking and capital market) and external firm competitive advantages (capital and labor endowments). Hypothesis H2: The relationship between dependence on skilled labor and internal firm competitive advantages need not be positive. In addition, it would be appropriate to test whether the relationship between international diversification (or degree of internationalization) and firm performance fits the same pattern. The balance between the benefits and costs of this international expansion might explain the hypothesized stages in the relationship between DOI and firm performance. Given that firms rely on external competitive advantages - endowments of capital and labor - or on the development of internal competitive advantages - R&D and innovation potential - we should expect them to behave differently in the DOI-performance relationship. Therefore, we can formulate the following two hypotheses: Hypothesis H3: Internal and external firm competitive advantages moderate the relationship between international diversification and firm performance. 
Hypothesis H4: Manufacturing firms show a performance-DOI relationship with the same shape as service companies.  The remainder of the paper is organized as follows. Section 2 details the data, methodology and variables that have been used. Section 3 reports the main empirical findings. Section 4 summarizes the conclusions.
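The curvilinear specifications discussed above, a linear model extended with squared and then cubed DOI terms, can be illustrated on simulated data. All coefficients and values below are invented for the sketch and are not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
doi = rng.uniform(0.0, 1.0, 500)        # hypothetical DOI values in [0, 1]

# Simulated three-stage pattern: performance first falls, then rises,
# then falls again as internationalization deepens (coefficients invented)
perf = -1.0 * doi + 4.0 * doi**2 - 2.5 * doi**3 \
       + rng.normal(0.0, 0.05, 500)

# Fit performance = b0 + b1*DOI + b2*DOI^2 + b3*DOI^3 by least squares;
# np.polyfit returns the coefficients highest power first
b3, b2, b1, b0 = np.polyfit(doi, perf, deg=3)
print(b1, b2, b3)
```

A negative linear term, positive quadratic term and negative cubic term together trace the S-shaped (three-stage) curve; a study that fitted only the linear, or only the linear and quadratic, terms to such data would recover one of the simpler shapes reported in the earlier literature.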


Impact of Non-Financial Rewards on Employee Motivation:

(A case of Cellular Communication Service providing sector of Telecom Industry registered under PTA in Islamabad)

 Dr. Syed Tahir Hijazi, Adeel Anwar, Muhammad Ali Jinnah University, Islamabad, Pakistan

Syed Ali Abdullah Mehboob, Muhammad Ali Jinnah University, Islamabad, Pakistan



This study explores the relative impact of different non-financial rewards on the motivation level of employees. A questionnaire was developed for data collection. The sample comprised the six major cellular communication service providers registered under the Pakistan Telecommunication Authority (PTA) and operating in Islamabad. Regression analysis was used to investigate the collected data. The results show that the overall model is significant while most individual coefficients are insignificant, with only a few exceptions, which suggests that the factors considered in the study are jointly the major factors motivating employees. It is also shown that not all of the studied non-financial rewards individually have a positive impact on employee motivation: work itself, decision autonomy, participative management and peer relationships are positively correlated with motivation, while the others are not. All the ways and means through which employees are compensated in return for their contribution to achieving organizational goals can be termed rewards. Managing rewards can be considered one of the important features of Human Resource Management. The importance of designing and developing adequate policies and procedures cannot be negated, because they enable organizations to compensate their employees equitably, frequently, consistently and fairly. Proper implementation of these policies and procedures plays an important role in attracting and retaining the right employees. Rewards can be classified into two broad categories: financial rewards (those given in monetary terms and having some monetary value) and non-financial rewards (those that arise from the work itself and the working environment and do not have any monetary value).  
Financial rewards normally include base pay, contingent pay (pay for performance, competence or contribution), variable pay (bonuses), share ownership and other financial benefits and incentives. These rewards are also termed transactional rewards because they are given as a result of a transaction between employer and employees (in return for their services). Non-financial rewards normally include recognition, responsibility, meaningful work, autonomy, opportunities to use and develop skills, career opportunities, quality of work life and work-life balance. These rewards are also termed relational rewards because they are concerned with the learning, development and work experience of workers. Non-financial rewards are usually thought of as boosting the impact of financial rewards, but they have their own importance and significance in keeping employees motivated and improving their productivity. Motivation is a person’s discretionary effort, commitment and engagement in performing the job assigned. High keenness in carrying out the job reflects a high level of motivation, and vice versa. A high motivational level among employees increases their effectiveness and efficiency, and hence the productivity and profitability of their organizations as well. Motivation is also of two types: intrinsic motivation (which stems from within the person, such as self-actualization and work accomplishment) and extrinsic motivation (which arises from external reinforcement, such as money). Pakistan’s telecom industry has been affected by the revolutions in the communication world and is growing extensively. For this research, cellular communication service providing companies registered under the Pakistan Telecommunication Authority (PTA) are considered the target sector. Very little research has been done in this area in Pakistan, so much more remains to be done. 
The entry of multinational corporations (MNCs) into the local market in this sector has created a competitive environment, and local companies are compelled to be more aggressive in order to stay in the competition. It is commonly perceived in our society that financial rewards are the only factor that can raise the motivation level of employees, and that other factors might not be helpful in raising their morale and performance. Human resources in this high-tech sector are expected to be more professional and to have more knowledge and skills, so the importance of designing an appropriate non-financial reward strategy to keep them motivated is becoming increasingly crucial. Organizations face problems when they pay little or no attention to non-financial rewards for their employees, because they ignore the impact of these compensators on employee motivation. Organizations usually reward and compensate their employees to increase their motivational level and to increase their performance and productivity. Organizational rewards and compensation are all those things that employees get in return for the efforts and work they contribute towards achieving organizational goals. More specifically, according to Cascio (2003), “compensation, which includes direct cash payments, indirect payments in the form of employee benefits, and incentives to motivate employees to strive for higher levels of productivity” (1). These rewards can be categorized as monetary incentives and non-monetary incentives. “Monetary or cash incentives are rewards to employees for their admirable job performance, essentially involving money. Monetary incentives include salary increases, profit sharing, stock options and warrants, project bonuses, festival and/or performance-linked scheduled bonuses, and additional paid vacation time. As compared to monetary rewards, non-monetary incentives reward employees for excellent job performance through opportunities. 
(Ballentine et al., 2003) Non-monetary incentives and rewards offer employee autonomy and personal recognition and include a pleasant work environment, flexible work hours, training, new and challenging opportunities, and also mementos, trophies etc. These incentives are sometimes called internal rewards, as they meet the employees’ internal needs such as recognition, self-esteem and fulfillment, thereby influencing employee motivation.” (Non-Monetary Rewards In The Workplace.htm) (2). The association between monetary rewards and motivation seems more obvious, but non-monetary rewards are now considered as important as monetary rewards. Different studies have been conducted on the relationship between non-monetary incentives and employee performance, e.g. “People are motivated to higher levels of job performance by positive recognition from their managers and peers (Keller). Creative use of personalized non-monetary rewards reinforces positive behaviors and improves employee retention and performance. These types of recognition can be inexpensive to give, but priceless to receive.” (Sherry Ryan) (3). As employees become more professional over time, the importance and intensity of non-monetary rewards increases, so modern-day management is compelled to create alternative career paths for its professional employees to keep them motivated and to increase their performance (Robbins) (4).
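The regression pattern reported in the abstract above, a significant overall model whose individual coefficients are mostly insignificant, is exactly what multiple regression produces when the explanatory variables are strongly correlated with one another. A hypothetical illustration on simulated survey data (variable names and values are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 120, 4                    # 120 respondents, 4 reward factors

# Four strongly correlated reward scores (hypothetical survey items,
# e.g. recognition, autonomy, participative management, peer relations)
common = rng.normal(0.0, 1.0, n)
X = np.column_stack([common + rng.normal(0.0, 0.2, n) for _ in range(k)])
motivation = 0.25 * X.sum(axis=1) + rng.normal(0.0, 1.0, n)

# OLS with intercept
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, motivation, rcond=None)
resid = motivation - Xd @ beta

# Individual t-statistics: small when regressors overlap heavily,
# because each factor adds little beyond the others
s2 = resid @ resid / (n - k - 1)
cov = s2 * np.linalg.inv(Xd.T @ Xd)
t_stats = beta[1:] / np.sqrt(np.diag(cov)[1:])

# Overall F-statistic: large, because the factors matter jointly
tss = ((motivation - motivation.mean()) ** 2).sum()
rss = resid @ resid
f_stat = ((tss - rss) / k) / (rss / (n - k - 1))
print(f_stat, t_stats)
```

In such a setting the F-test rejects the null that all slopes are zero even though few, if any, of the individual t-tests do, which mirrors the study’s finding that the rewards jointly motivate employees while not every reward is individually significant.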


Analysis and Forecasting of the Development of Banking: the Estonian Case

Dr. August Aarma, Tallinn University of Technology, Estonia

Dr. Jaan Vainu, Tallinn University of Technology, Estonia



The main purpose of the paper is to test the possibility of treating a bank as an enterprise that produces services and for which the same laws are valid (at least in Estonia) as for other enterprises. As Estonia is a small country, the banks here can be considered small or medium-sized enterprises, despite their high profitability. Banks and other financial institutions compose a unique set of business firms whose assets and liabilities, regulatory restrictions, economic functions and operations make them an important subject of research. The monitoring, analysis and control of banks’ performance deserve special attention with respect to their operation and performance results from the viewpoint of various audiences, such as investors/owners, regulators, customers, and management. This paper presents two econometric models whose forecasting ability is very good. In addition, it considers whether the development of Estonian banking agrees with R. Solow’s theory of balanced growth. The first commercial bank on the territory of the former Soviet Union (Tartu Commercial Bank) was established in Estonia in 1988. This bank went bankrupt and was liquidated in 1992–1993. As there was great demand for banking services from the emerging private sector, the maximum number of commercial banks operating simultaneously in the small Estonian banking market reached 42 in 1992. Some of them were liquidated during the banking crises of 1992–1994 and 1998–1999, and some were merged into larger commercial banks. Up until 1997, the development of the Estonian banking sector was characterized by rapid nominal growth of total assets and loan portfolios. The year 1997 also marks the beginning of a new stage in the development of the Estonian financial sector, especially in the international context, which is confirmed by the investment-grade credit ratings assigned to Estonia. In 1998, a wave of mergers and restructuring took place in the Estonian banking sector. 
After the completion of these mergers, Scandinavian banks started to show greater interest in the Estonian banking market. We may say that the Estonian banking sector became healthier when Swedish banks and other Nordic investors joined the circle of bank owners, improving the future outlook of the banking system (e.g. by providing support and help in the case of crises). Estonia has experienced two serious banking crises during the roughly 12-year period of its banking sector’s development and restructuring: the first in 1992–1994 and the second in 1998–1999. The first banking crisis occurred during the difficult period when drastic economic reconstruction was starting, production was falling dramatically, and the country was entering a period of hyperinflation. A characteristic feature of the first banking crisis in Estonia was that it was caused by internal factors and was overcome with Estonia’s own resources and management skills. The main causes of this banking crisis were severe problems in the entire economy, poor bank management and a lack of professional skills, and weak supervision by both the central bank and owners. Depositors’ losses in the banking crisis were large, the money supply decreased, many loans were written down, and the trustworthiness of the banking system fell significantly. As for the second crisis of 1998–1999, in retrospect it is possible to notice several signs of the crisis: 1. Estonian banks took extraordinarily high financial risks through investment companies and their subsidiaries in pursuit of large profits from speculating in the securities market. The rapid fall in share prices in autumn 1997 significantly reduced banks’ profits, and at the end of 1997 and in 1998 almost all banks operated at a loss. Commercial banks also moved heavily into non-banking business. 
For example, the Land Bank of Estonia, which later crashed, was one of several banks that maintained a very high negative gap (interest-rate-sensitive liabilities significantly exceeded rate-sensitive assets) in pursuit of excessive profits in an environment where interest rates had steadily decreased during the previous years, and that were unable to control their subordinate establishments and related companies, which dealt with leasing and investing, and with anything but banking (i.e., hotels, processing agricultural products, broadcasting etc.). Other banks were also absorbed in risky non-banking business; 2. The decision to expand into the Eastern market (Russia and the other Baltic States), where interest rates and the potential for profitability seemed to be higher, was also too risky and premature, especially in the context of the Russian crisis of 1998; 3. There were various disputes and conflicts of interest between owners and management, which led to wrong decisions (mismanagement). Good examples can be drawn from the Land Bank of Estonia and the Estonian Investment Bank. For example, the shareholders of the Investment Bank intended to sell the bank to the German Schleswig-Holstein Bank in autumn 1997, but the top executives threatened to hand in a collective resignation, and so the bank was sold to them instead; 4. Sometimes there were inadvisable relations between bank management and political powers, and corresponding political pressure. A typical “political” bank was the Land Bank of Estonia, where almost all financial risks were ignored; the Government later lost its deposits in the bank, amounting to more than 800 million Estonian crowns (EEK), i.e. more than 50 million euros. The authors are of the opinion that the currency board arrangement helped Estonia to resolve its banking crises rapidly and, for the most part, effectively, without remarkable rehabilitation costs. 
The main instruments for anticipating banking crises are the tightening of prudential requirements and the strengthening of banking supervision. Recent changes in the operational framework for monetary policy and banks’ prudential ratios in Estonia were aimed at enhancing financial stability and increasing the liquidity buffers of the financial system. In the short term, the priority was to restore foreign investors’ confidence in Estonian economic viability. The structure of the Estonian banking sector has changed fundamentally during the last decade. Today, the banking system is highly concentrated and two Swedish-owned banks dominate the market. The consolidation process continued throughout the second banking crisis of 1998–1999, resulting in fundamental reorganizations. All three worldwide trends in the financial consolidation process can be noticed in the Estonian market: domestic consolidation; foreign entry and cross-border consolidation; and the formation of financial conglomerates and bancassurance. As far as we know, nobody has yet used the existing information about banks to construct production-function-type econometric models treating banking as a separate sector of the economy (Aarma and Vainu, 2003, 2004, 2005, 2006). One can ask: what is the production, or product, of a bank? In our opinion, the product of a bank is the amount of services it provides, the volume of which can be measured by the bank’s total income. We selected the total income of the banks (y) as the output variable (dependent variable) and used profit-earning assets (x1), equity (x2), liabilities (x3) and fixed assets (x4) as factors (independent variables). The time series were treated as consisting of three components: a trend, a harmonic component, and a random component. We chose the power function as the type of the model. To estimate the parameters a and α by the method of least squares, it was necessary first to take logarithms of the primary data. 
Then, according to the rules of analysing time series, we checked for the existence of a trend and a harmonic component in the time series of the logarithms of the selected parameters. We followed R. Solow’s approach and assumed that the chosen factors can be regrouped into two groups: profit-earning current assets and profit-earning fixed assets.
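The estimation step described above, a power (Cobb-Douglas-type) function fitted by taking logarithms and applying least squares, can be sketched as follows. The series are simulated, not the Estonian bank data, and only two of the four factors are used to keep the sketch short:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60                                   # e.g. quarterly observations

# Simulated factor series: profit-earning assets (x1) and equity (x2)
x1 = rng.uniform(100.0, 1000.0, n)
x2 = rng.uniform(10.0, 100.0, n)
# Power-function "production" of bank income: y = a * x1^a1 * x2^a2,
# with multiplicative noise (parameters invented for the sketch)
y = 2.0 * x1**0.6 * x2**0.3 * np.exp(rng.normal(0.0, 0.05, n))

# Taking logs makes the model linear: ln y = ln a + a1 ln x1 + a2 ln x2,
# so ordinary least squares recovers the parameters
X = np.column_stack([np.ones(n), np.log(x1), np.log(x2)])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
a_hat = np.exp(coef[0])
print(a_hat, coef[1], coef[2])           # close to (2.0, 0.6, 0.3)
```

The same log-linearization is why the authors had to take logarithms of the primary data before checking the series for trend and harmonic components.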


Share Repurchases: Evidence from Thailand

Dr. Chaiporn Vithessonthi, University of the Thai Chamber of Commerce, Bangkok, Thailand



This paper presents empirical results regarding stock return reactions to common stock repurchase programme announcements by firms listed on the Stock Exchange of Thailand (SET) between 2001 and 2005. The findings suggest that announcements of common stock repurchase programmes are significantly associated with positive abnormal returns. There is also evidence of possible information leakage prior to the announcements. More specifically, the abnormal return one day prior to the announcements is positive and significant. As mentioned in Ikenberry, Lakonishok and Vermaelen (2000), in the United States between 1996 and 1998 there were nearly 4,000 announcements of share repurchase offers, amounting to $550 billion. The use of common stock repurchases in the United States and many other countries has grown rapidly since the mid-1990s, yet historically, common stock repurchase in Thailand was not allowed until December 2001, when the Stock Exchange Commission adopted a new regulation allowing firms listed on the Stock Exchange of Thailand (SET) to repurchase their shares for the first time. In the finance literature, several studies (e.g., Dann, 1981; Lakonishok and Vermaelen, 1990; Peyer and Vermaelen, 2005; Stephens and Weisbach, 1998; Vermaelen, 1981) find positive relationships between share repurchases and abnormal returns and typically anchor their work to U.S. firms; yet relatively little is known about share repurchases in the international context (Ikenberry et al., 2000). The lack of international evidence is troubling given the recent growth in the use of share repurchases in many emerging market countries. Given the recent growth and development of share repurchase activities in Thailand, the purpose of this study is to examine the stock return implications of share repurchase announcements in Thailand, to document whether the understanding of share repurchases based on U.S. data can be extended to the context of emerging market economies. 
One may question whether the stage of financial market development moderates the impact of share repurchase announcements on stock prices. If the financial markets of emerging market economies are not as developed as those of the U.S., there may be substantial differences in how investors in emerging and developed economies interpret firms’ behaviour. Hence, the results of studies on share repurchases in emerging market countries may not necessarily conform to the results of studies in more advanced economies, such as the U.S. I hope that this work will be an important contribution to the understanding of how share repurchases affect stock returns in Thailand in particular and in emerging market economies in general. Hence, studies on the implications of share repurchases in Thailand, as one of the emerging market economies, are warranted. Past empirical studies based on U.S. data suggest that stock price reactions to the announcements of share repurchase offers are positive (Comment and Jarrell, 1991; Dann, 1981). Results of some studies (e.g., Ikenberry et al., 2000; Jung, Lee and Thornton, 2005; Rau and Vermaelen, 2002; Zhang, 2002) that have empirically examined the implications of share repurchases in the international context also report positive stock price reactions to share repurchases. Based on a sample of 375 open-market share repurchase announcements and 295 stock stabilization fund announcements in South Korea, Jung et al. (2005) find that the average cumulative abnormal return for the 6-day period CAR(0, +5) surrounding the announcement date is positive and significant. This study tests this hypothesized relationship by examining whether positive abnormal returns subsequent to the announcement of share repurchase programmes in Thailand can be documented. As a result, this study provides complementary international evidence regarding the implications of share repurchases in a different regulatory environment. 
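Abnormal-return measures of the kind cited above, such as CAR(0, +5), are typically computed with a market-model event study. A minimal sketch on simulated returns for a single announcement (all magnitudes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
est_win, evt_win = 120, 6       # estimation window; event days 0..+5

# Simulated daily returns: a market index and one stock that loads on it
mkt = rng.normal(0.0005, 0.01, est_win + evt_win)
stock = 0.0002 + 1.1 * mkt + rng.normal(0.0, 0.005, est_win + evt_win)
stock[est_win] += 0.05          # hypothetical announcement-day jump

# Market model (alpha, beta) fitted over the pre-event window only
X = np.column_stack([np.ones(est_win), mkt[:est_win]])
(alpha, beta), *_ = np.linalg.lstsq(X, stock[:est_win], rcond=None)

# Abnormal return = actual return minus the model-predicted return;
# CAR(0, +5) cumulates the six event-window abnormal returns
ar = stock[est_win:] - (alpha + beta * mkt[est_win:])
car = ar.sum()
print(round(car, 4))
```

In a full study the same calculation is repeated for every announcement and the cross-sectional average CAR is tested against zero; fitting the market model strictly before the event window is what lets the method detect pre-announcement leakage, such as the positive abnormal return on the day before the announcement reported in this paper.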
Of particular interest is the apparent existence of a tax incentive for share repurchases in Thailand, where capital gains on stocks are not subject to personal income tax.  The remainder of this paper is organized as follows. Stock repurchases are first discussed with particular emphasis on those prevailing in Thailand, leading to my working hypothesis. In the following section, I present my sample and research methodology. The next section provides the results of the empirical analysis performed on the SET. Finally, the discussion and some concluding remarks are presented. Because self-interested managers may use excess cash for perquisites or negative net present value investments, and thus harm shareholder wealth when firms have excess cash (Fama, 1980; Jensen, 1986), it is important to consider agency costs and how to minimize them. According to the free cash flow theory, one possible method of mitigating agency problems is to return free cash flow to shareholders (Easterbrook, 1984; Jensen, 1986). If firms face limited investment opportunities, excess cash should be returned to shareholders by means of share repurchase or other payout methods, so that it can be re-invested in other assets (Baker, Powell and Veit, 2003; Brav, Graham, Harvey and Michaely, 2005; Stewart, 1976). As suggested by Grullon and Michaely (2004), firms that face a reduction in investment opportunities, and thus repurchase their shares to reduce excess cash, should experience a decline in their profitability. If we think of share repurchase as a signal of a firm’s lower future profits, market reactions to share repurchase should be negative. Given this set of circumstances, does it imply that share repurchases always trigger negative stock returns? Clearly not: one can find substantial evidence of positive abnormal returns subsequent to share repurchase announcements. 
In the literature, one plausible explanation for a positive abnormal return is that the agency costs may already have been incorporated into stock prices prior to the announcement of share repurchase. As a result, share repurchases are viewed as positive news in the sense that value-destroying investments will not be undertaken, thereby increasing the value of the firm. For this reason, stock prices increase when announcements of share repurchase are made. In addition, stock price reactions following the announcements of share repurchases are likely to be stronger among firms with a large amount of excess free cash flow, which are likely to invest in value-destroying projects, than among firms with a small amount of excess free cash flow.  According to the signalling hypothesis, share repurchase offers may reveal new information about the firm’s future value and performance to investors (Dann, 1981). This argument implicitly assumes that information asymmetry between insiders and outsiders of the firm exists and that managers are better informed about the firm’s true value than outside investors. In addition, the signalling hypothesis argues that firms may deliberately attempt to convey new information about future earnings improvements to the market by repurchasing their shares (Hertzel, 1991; Peyer and Vermaelen, 2005). For this reason, the information conveyed by the announcements may induce investors to revise their expectations of the firm’s prospects. If investors upgrade their expectations of the firm’s prospects following an announcement of share repurchase, stock prices should increase. Past empirical research has found that the average abnormal return subsequent to share repurchase announcements by listed firms in Canada was positive and significant (Ikenberry et al., 2000; Li and McNally, 2007).  
Another competing argument, the personal tax saving hypothesis, suggests that a differential tax rate on dividends versus capital gains may lead to personal tax savings from cash distributions by means of share repurchase in lieu of dividend payouts. Under U.S. tax law, there is a differential tax rate on dividends versus capital gains, which favours the use of repurchases: by the end of 2005, the top marginal rate on ordinary income was 35 percent, while the top marginal rate on long-term capital gains was 15 percent. Consistent with this explanation, in their study of corporate common stock repurchases, Grullon and Ikenberry (2000) report a dramatic increase in the use of share repurchases by U.S. firms since the late 1990s.  Following a sharp decline in the SET subsequent to the 9/11 event in the United States in 2001, the Board of Governors of the SET allowed listed firms, for the first time, to repurchase their shares and dispose of such repurchased shares, effective December 3, 2001, as part of measures to stabilise the tumbling stock market. As a result, General Environment Conservation Plc. became the first company in Thailand to announce a share repurchase programme, on December 17, 2001.


DIY or 3PL: Study on the Third Party Logistics of Petroleum Producing Industry of China

Dr. Qi Ying and Hong Yan, The Hong Kong Polytechnic University, Hong Kong



With the professional advantages of third party logistics (3PL), outsourcing logistics activities to 3PL companies has recently become a trend in various industries. The oil industry in China is investigating and testing the feasibility of outsourcing its supply logistics functions in order to reduce high logistics costs and achieve operational efficiency. However, given the specific characteristics of oil production logistics, such as the high value of products and production tools, large scale, and high specialty, it may not be proper to leave all oil production related logistics activities to 3PL service providers. This research evaluates the issue from the economic point of view; other impacting factors, such as social, cultural and political issues, are not considered. We explore the characteristics of the oilfield industry and its logistics activities, and suggest an evaluation framework for analyzing logistics services in oilfields. We especially address three features: value, volume and specialty. Related operations data and parameters were collected from an oil field in China for analysis. The total cost of self-service logistics (referred to as “DIY”, for Do It Yourself) and the total cost of “3PL” are compared across different classes of production materials, as well as products. The analysis clearly indicates that logistics services for different materials or products can be handled in different ways.  Petroleum, as an essential energy resource, plays a strategic role in China’s economic growth and political stability. In 2004, of the world’s total output of 3.86 billion tons of crude oil, 4.51% (175 million tons) came from China. In contrast, China accounted for 8.19% of the world’s total oil consumption in 2004, making it the second largest oil consumer in the world. This paper studies a typical oilfield in China, referred to here as Plent Oilfield because of strategic sensitivity in the industry. 
The following six special characteristics of logistics in China’s petroleum producing industry are observed in this field.  Contrary to popular imagination, Plent Oilfield is not a single region full of underground oil, but actually consists of over 700 sub-oilfields scattered over an area of several hundred square kilometers, with some far away from others, in different cities. These sub-oilfields continuously produce oil, and production materials and equipment must therefore be adequately provided by the minute. As a result, hundreds of depots are needed to supply different types of goods throughout a large area. The lifetime of a well usually ranges from ten to thirty years. In order to keep annual output levels constant, new oilfields must be developed every year to make up for the decline of old wells. Therefore, the oilfield company is obliged to meet both development demands for new wells and maintenance demands for existing wells. Numerous wells lead to many problems: large amounts of materials and equipment needed for continual production, an increase in employees, and a need for more goods to meet the needs of employees (e.g. gloves for workers, stationery for managers, etc.). Both high-tech dedicated instruments and large amounts of different types of building materials are needed, ranging from imported large-scale drilling machines costing over US $10 million to tiny less-than-one-dollar nails.   Oil lies over 3 thousand meters beneath the ground. It can be “found” with the assistance of detection equipment, and sometimes estimation. With advanced technological instruments, it is possible to obtain comprehensive information about the oil reservoir and stratum layers. However, uncertainties in detection and production always exist and arise from time to time. When an emergency does occur, if first aid materials are not available within a short lead time, not only will high-value facilities be lost, but workers’ lives will be endangered. 
From the logistics point of view, express distribution and transportation of emergency materials is one of the necessary conditions for an oilfield, since emergencies are a distinct characteristic of the petroleum industry. This feature can be explained in two aspects: goods and facilities. First of all, a great deal of high-value advanced machinery is transported to examine and record underground information. In some stages of the development procedure, highly corrosive or poisonous chemicals are used for oil production and refining. Obviously, only professional vehicles can transport such mechanical and chemical goods.  Second, the facilities used in oil production are unique as well. Crude oil and petroleum products are dangerous materials because they are inflammable substances. In light of this, petroleum inventory and transport must be handled with appropriate facilities, e.g. petroleum storage depots, gas tanks, and fuelling vehicles. In the first several steps of finding a new oilfield, the movement of materials and goods is quite fluid and irregular. If stratum information and economic analysis show that one place does not hold oil or is not worth developing, all of the machinery and equipment must be transported to another place, perhaps tens of kilometers away, where possible oil and gas may be found. Therefore, the features of the logistics network are changeable until the oilfield reaches the stable development and production stage. By contrast, in the later steps of oilfield development, goods movement is relatively stable, because a well has a lifetime of over ten years and the materials needed to maintain production are quite stable over this long-term period.  
Similar to other oilfields in China, the Plent Oilfield owns comprehensive logistics infrastructure, such as convenient roads between every production unit, airports capable of handling Boeing 737 aircraft, ports to transport goods to the many cities nearby, and several highways connecting to neighboring cities.  The total capacity of exchange machines is 180 thousand lines. The long-distance digital microwave circuit extends 981 km, and the optical fiber cable along transportation lines extends more than 450 km. In 2005, an ERP system was applied to manage daily oil production and administration, so that all financial settlements and material movements can be handled on one platform. Previous research on logistics costs has mainly comprised two major parts: one sheds light on the strategic aspects of logistics costs, while the other focuses on optimizing cost-effective logistics decisions. Among previous research there is a large focus on the relationship between logistics costs and a company’s financial performance. As stressed by Gilmore (2002), logistics costs tie up a large amount of assets and directly affect the cash flow and the bottom line. Gilmore also reported that, for many companies, transportation costs can amount to between 3 and 7 percent of total sales, which adds up to millions of dollars in annual expense even for mid-sized firms. Currently, as mentioned above, logistics costs have become more important in the supply chain management field and subsequently draw more attention from scholars. The methods used by previous researchers to analyze logistics costs can be classified into four major categories: recurrence-based, regression-based, activity-based, and optimization-based (Zeng, 2003). Fera (1998) identified and classified a relevant list of factors related to logistics for evaluating the feasibility of a company’s international sourcing strategy. 
The list includes both recurring and non-recurring costs composing global sourcing logistics management and is presented for further analysis. Additionally, a regression model was introduced by Zoroya (1998) to evaluate the cost-driving factors which affect the shipper’s transportation fees; three time-based factors are mentioned to identify what has an impact on the prices of transportation lanes. Finally, Van Damme (1999) presented a logistics management accounting framework to support logistics management decisions. In this distribution cost model, the benefits of activity-based costing with regard to the allocation of costs and the control of processes are combined with the benefits of cash-flow-based accounting with regard to decision support.
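The DIY-versus-3PL comparison the paper describes can be sketched as a simple per-class total-cost calculation: keep a material class in-house when the fixed-plus-variable DIY cost undercuts the 3PL quote, and outsource it otherwise. The class names and all figures below are invented for illustration and are not the Plent Oilfield data; only the decision rule reflects the paper's approach.

```python
# Hypothetical sketch of a per-class DIY vs 3PL cost comparison.
# DIY cost = fixed facility cost + variable handling cost per unit;
# 3PL cost = quoted per-unit service fee. All numbers are made up.

def cheaper_mode(volume, diy_fixed, diy_var, tpl_fee):
    """Return ('DIY' or '3PL', total cost) for one material class."""
    diy_total = diy_fixed + diy_var * volume
    tpl_total = tpl_fee * volume
    return ('DIY', diy_total) if diy_total <= tpl_total else ('3PL', tpl_total)

material_classes = {
    # class: (annual volume, DIY fixed cost, DIY variable cost/unit, 3PL fee/unit)
    "drilling equipment (high value, special)": (200, 500_000, 800, 4_000),
    "building materials (bulk, standard)":      (50_000, 200_000, 12, 9),
    "office consumables (low value)":           (10_000, 50_000, 3, 2),
}

for name, (vol, fixed, var, fee) in material_classes.items():
    mode, cost = cheaper_mode(vol, fixed, var, fee)
    print(f"{name}: {mode} (total cost {cost:,.0f})")
```

Under these invented figures the specialized high-value class stays DIY while the standard bulk classes go to 3PL, which mirrors the paper's qualitative conclusion that different materials can be handled in different ways.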


Globalization and the Environment: Evidence from China

Dr. Yang Zhang, University of Macau, Macau, China

Xiuli Yang, Shenyang University, China



Globalization has become an irreversible process, and environmental degradation is an issue too costly to ignore. Accordingly, this research attempts to investigate the environmental impact of globalization in China.  We find that, in spite of a negative composition effect and a detrimental impact through the scale effect, the resultant rise in income can enhance a nation’s capability and willingness to protect the environment. Trade and investment liberalization may furnish domestic firms with better access to, and stronger incentives to adopt, new and “greener” technologies to stay competitive in the global market.  We also argue that the interaction between government, firms and consumer society matters and may affect the determination of environmental policy and subsequent environmental performance.  Globalization, commonly understood as the process of increasing interdependence among countries and their citizens and signaled by rising trade liberalization and increasing foreign direct investment, has social, cultural, political and economic dimensions. Debates over globalization have been going on for some time; a long peacetime expansion, low unemployment rates and rapid growth in real income are generally seen as the fruits of globalization, while climate change, rapid depletion of natural resources and increasing inequality in the world distribution of income are among globalization’s negative externalities. In recent discussions, the most apparent divide between the two views pertains to globalization and the environment. Critics assert that globalization is detrimental to the environment, mainly based on the pollution-haven hypothesis (Walter, 1982). This hypothesis suggests that globalization allows firms to take advantage of cross-country differences in national environmental regulations and that falling trade barriers induce pollution-intensive industries to relocate to countries with weaker environmental regulations. 
A similar argument is based on the eco-dumping hypothesis, which suggests that governments in developing countries may purposely maintain lax environmental policies, or “race to the bottom” in environmental standards, so as to give their domestic producers an advantage in competitive international markets and to prevent the reduced capital inflow, lower exports, higher unemployment and erosion of the tax base that would result from increased stringency of environmental regulations (Christmann & Taylor, 2001; Leonard, 1998).  In contrast, some argue that globalization is conducive to the transfer of environmental technologies and management from countries with stricter standards to developing countries (Drezner, 2000). Meanwhile, governmental failure to protect the environment might be ameliorated through self-regulation of environmental performance by firms in developing countries (Christmann & Taylor, 2001). Another significant positive impact lies in the role of globalization in income growth, which eventually drives up the demand for a cleaner environment, as suggested by the Environmental Kuznets Curve hypothesis.   In the light of these diverse and conflicting contentions, this research attempts to provide some evidence from China. Thanks to the policy of reform and openness implemented since 1978, the last two decades have seen rapid economic growth in China and an explosive growth of FDI flowing into the country. In this dramatic process of globalization, severe environmental damage has accompanied the rapid growth and captured serious public concern. Increased urbanization and economic activity have taken place in the context of an environment subject to a high level of pollution. According to CNN, the World Bank recently examined 20 of the most severely polluted cities in the world; sixteen of these cities are located in China. 
In many urban areas, for example, atmospheric concentrations of pollutants such as suspended particulates and sulfur dioxide routinely exceed World Health Organization safety standards by very large margins (Wang & Wheeler, 2005). Given the multi-dimensional impacts of globalization on China and the rising environmental concerns raised by the anti-globalists, it is of great importance to uncover the environmental consequences of globalization and its impact on environmental institutionalization in China.  This paper attempts to investigate the relationship between globalization and the environment in China. It does so by examining the complex and diverse channels through which globalization, and economic openness in particular, affects the environment. By looking at the economic indicators of globalization and environmental performance, we discuss the pollution haven hypothesis and industrial flight, and show whether and how the process of globalization facilitates the diffusion of global environmental norms in China. Both positive and negative impacts of globalization on China’s environmental performance are analyzed. The rest of the paper is outlined as follows. Section II reviews recent literature on globalization and the environment. Section III is dedicated to specifying the environmental consequences of globalization through a variety of channels. Section IV concludes.  The irreversible trend of globalization and the critical importance of the environment have together made the globalization-environment nexus a hot topic among scholars and researchers. As such, considerable effort has been made to address this issue at both the theoretical and the empirical level.  Developing countries in general are considered to have less strict environmental policies and to be less stringent in laws and regulations than developed countries. 
Given these cross-nation differences in environmental regulation, fears have been voiced that globalization, or trade liberalization in particular, will facilitate the transfer of pollution-intensive industries to countries with less stringent regulations, rendering the recipient country a pollution haven through specialization in the production of highly-polluting products (Walter, 1982). Nevertheless, empirical research in this area has shown little support for the pollution haven hypothesis (Eskeland and Harrison, 2003; Wang and Jin, 2002). Some research has focused particularly on the trade-environment debate, where the overall impact of trade on the environment is usually decomposed into a scale effect, a technology effect and a composition effect (Copeland and Taylor, 2001). Using cross-country panel data on sulfur dioxide concentrations, Antweiler et al. (2001) estimated the magnitudes of the three effects and found an overall beneficial impact of freer trade on the environment. As globalization contributes to economic growth, it has an indirect impact on the environment through the linkage of income growth. The Environmental Kuznets Curve hypothesis states that environmental degradation will increase with income at low income levels and decline after income reaches a certain threshold. The search for an inverted-U curve between pollution level and income has been the topic of many research attempts (Grossman and Krueger, 1995; Selden and Song, 1994).  Among the extensive studies examining the globalization-environment relation, considerable effort has been dedicated to the case of China, but the findings are quite mixed. Chai (2002) documents an overall negative impact, where positive composition and technology effects are offset by a negative scale effect. Nevertheless, some have pointed out a favorable impact, where globalization and the environment are found to be complementary. 
Using provincial data on Chinese water pollution, Dean (2002) shows that freer trade may mitigate environmental damage via income growth and that the net effect in China is beneficial. Based on a case study of two cities in China, Shin (2004) finds that economic openness positively affected domestic environmental policy. A recent study by Wheeler analyzes data on air quality in China, Brazil and Mexico; far from experiencing a race to the bottom (lowering environmental standards to attract FDI), all three nations have registered improvements in air quality. Another positive effect is proposed based on self-regulation, which refers to firms’ adoption of environmental performance standards beyond the requirements of government regulations.
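The inverted-U relationship posited by the Environmental Kuznets Curve hypothesis is usually tested by regressing a pollution measure on log income and its square: an inverted U requires a positive linear coefficient and a negative quadratic one, with the turning point at income exp(-b1/(2*b2)). The sketch below fits that quadratic on synthetic data (invented, not Chinese provincial data) and recovers the turning point.

```python
# Hypothetical sketch of the EKC test: fit pollution = b0 + b1*z + b2*z^2,
# where z = log(income), by least squares. The data are synthetic, built
# so that pollution peaks at z = 9 (income of roughly $8,100).
import math

def fit_quadratic(x, y):
    """Least squares fit y = b0 + b1*x + b2*x^2 via the 3x3 normal equations."""
    n = len(x)
    S = [[n, sum(x), sum(v * v for v in x)],
         [sum(x), sum(v * v for v in x), sum(v ** 3 for v in x)],
         [sum(v * v for v in x), sum(v ** 3 for v in x), sum(v ** 4 for v in x)]]
    t = [sum(y),
         sum(xi * yi for xi, yi in zip(x, y)),
         sum(xi * xi * yi for xi, yi in zip(x, y))]
    # Gauss-Jordan elimination (fine for this small, well-conditioned system).
    for i in range(3):
        p = S[i][i]
        for j in range(3):
            if j != i:
                r = S[j][i] / p
                S[j] = [a - r * b for a, b in zip(S[j], S[i])]
                t[j] -= r * t[i]
    return [t[i] / S[i][i] for i in range(3)]

# Synthetic inverted-U data over a range of (made-up) income levels.
incomes = [math.log(500 * 1.5 ** k) for k in range(12)]
pollution = [-(z - 9.0) ** 2 + 81.0 for z in incomes]
b0, b1, b2 = fit_quadratic(incomes, pollution)
turning_income = math.exp(-b1 / (2 * b2))  # income where degradation peaks
```

A real EKC study would of course add controls, panel structure, and significance tests; the sketch only shows the functional form and how the turning point is read off the coefficients.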


The Integration of Business with Entrepreneurship as a Subject to Enhance Entrepreneurial Competency

Dr. Tshepiso I. Ngwenya, Tshwane University of Technology, Pretoria, South Africa



It is regrettable that, after leaving the academic arena, most graduates are unable to create employment on their own or to be absorbed into the labour market. Taxpayers’ money is used to sponsor graduates’ studies, but, unfortunately, most of those graduates cannot plough the proceeds back into the nation. For example, after completing their degrees, graduates tend not to have any employment opportunities or to create any for themselves. This raises the question of whether what is being taught at tertiary institutions is relevant to market needs and to graduates becoming good entrepreneurs or obtaining employment. Does the content of the subject entrepreneurship help and equip students, not only to cope, but also to excel in the face of commercial challenges?  The aim of this study is to integrate higher education (HE) with the business environment to obtain either employment- or self-employment-related results. It includes a comprehensive literature review, questionnaires, and interviews conducted with university students, lecturers, the management of the Tshwane University of Technology and industrialists from various companies in Pretoria. The findings suggest that there is indeed a pressing need to update lecturing practices to help students become successful entrepreneurs or productive employees in the working environment. The integration of business with HE institutions is critical to enhance competency, commitment and interpersonal and other valuable skills that are needed in the labour market or are essential for becoming a successful entrepreneur. We live in a competitive business era that demands that graduates possess the skills that are in demand in the business world.  To create such competent students, there must be excellent support from parents, teachers, community members, business people and other stakeholders. In South Africa, poverty and unemployment are currently daunting issues that prevent people from having better lives. 
However, the focus of this research is on equipping students, especially those in rural areas, to overcome these problems through the application of excellent education to the business sector or through creating their own ventures.   The topics discussed below explain fully what has gone wrong in HE and what could be done to obtain desirable results for our students and to enhance socio-economic and political factors in general. The aim of this empirical research is to ensure that students are able to fit into the job market as prosperous entrepreneurs after the completion of their studies. The objective is to place students in suitable jobs after the completion of their studies, to enhance the type of skill needed and to make sure that the practicality of the job is tailored to the market. To cement this effect, lecturers should see to it that teaching and learning materials are up to date with the latest industrial developments.  Another reason for studying this particular phenomenon is to make sure that, at the end of the day, the nation is enriched with people who are able to drive the socio-economic and political aspects and to sustain those areas.  In order to achieve the desired objectives, the study first explains the main (primary) objective, and this is followed by the secondary ones, as outlined below: To close the gap between the subject entrepreneurship and business for job creation and poverty alleviation. To produce entrepreneurs who will inject entrepreneurial skill, knowledge and proficiency into the community through job creation and self-employment.  To recruit qualified, highly energetic lecturers with a passion for lecturing to disseminate valuable knowledge to students and the entrepreneurial sector at large through lecturing, training and community services that are relevant to enhancing entrepreneurship. 
To do research based on the subject matter, interact on a continuous basis with industrialists and establish a relationship with them, so that students get better practical training before they venture into the market as either entrepreneurs or employees.  The problems are addressed in the format of questions, as cited by Creswell (1994), to explain what is being researched. The main daunting issue is the lack of integration of the subject entrepreneurship with the market. This situation is worsened by producing graduates who are not in demand in the labour market or who cannot create employment for themselves as entrepreneurs. The problem is further worsened by the fact that some lecturers are not up to date with new developments or do not conduct research to keep their teaching and learning current. Let us now look at the questions that made this empirical research necessary. What could be done to close the gap between education, business and the subject entrepreneurship? What type of lecturers could an institution recruit in order to produce students who will become successful entrepreneurs or who fit well into the labour sector? How could integration and cooperation between business and entrepreneurship be achieved?  Once these problems are fully investigated, South Africa and the world at large will be in a better position to improve the economy and work towards eradicating undesirable factors such as poverty and unemployment. The programme designed by Garrett (2002) helps students to acquire the skills and experience that are necessary to enter the business environment. This programme (4*l SM) has three components that help students to gain insight into companies’ operations, to interact with executives and to participate in different functional meetings. These meetings, in turn, help students to transfer what they have learned to other young people and to the local communities. 
The website is also a useful tool for guiding learners to become successful business people. In his report, Darby (2002) attests that the education system has failed to equip children with the enterprise skills that are needed in many walks of life. His survey shows that teachers and parents are pillars that support a child in becoming a business-minded individual and succeeding in life. However, teachers often do not know what business is all about, and, more surprisingly, his study indicates that most people in the United Kingdom are not business minded, unlike their American counterparts, who regard business as the American dream.


The Impacts of Relationship Marketing Tactics on Relationship Quality in Service Industry

Dr. Yi Ming Tseng, Tamkang University, Taipei, Taiwan



This research explores the effects of relationship marketing (RM) tactics on enhancing relationship quality in the service industry. Using data from banking, airlines, and travel agencies, we discuss five types of relationship marketing tactics and how they influence customers’ perceptions of long-term relationships.  We also include customers’ inclination toward the relationship as a mediator in the model to make the framework more complete. The research findings support that tangible rewards, preferential treatment, and memberships are effective in developing customers’ long-term relationships, and that behavioral loyalty is also influenced by relationship quality.  Relationship marketing has conventionally been defined as "developing," "maintaining," and "enhancing" customer relationships (Berry and Parasuraman, 1991).  What methods are effective for developing and keeping these relationships, and how they work, may be complex questions. Relationship marketing tactics are methods that can actually be executed to implement relationship marketing in practice.  We propose and discuss five main kinds of relationship marketing tactics in the service industry and construct a model relating these tactics to other relationship marketing concepts. These efforts should yield insights in the field of relationship marketing. Our research goal is to understand whether the application of relationship marketing tactics helps develop a long-term transaction relationship. The objectives of this research are threefold. First, how significant are the effects of relationship marketing tactics on constructing a long-term relationship? Second, will consumers’ perceptions of a long-term relationship reinforce relationship quality?  Third, will the implementation of relationship marketing tactics further influence the construction of consumers’ behavioral loyalty? 
Relationship marketing has received much attention and is seen as a new area in academia and practice.  As the competitive environment becomes more turbulent, the most important issue sellers face is no longer simply to provide excellent, good-quality products or services, but also to keep loyal customers who will contribute long-term profit to the organization.  Relationship marketing has become a mainstream of thought in programming marketing strategy, in both industrial marketing and consumer product marketing.  Dwyer, Schurr, and Oh (1987) argue that relationship marketing encompasses all the marketing activities designed to establish, develop, and maintain a successful relational transaction. Relationship marketing can be effectively implemented through the application of computer database techniques, which provide customer information and advice to the decision maker for choosing adequate communication tools to access customers (Landry, 1998).  Past research emphasizes the individual relationship and how to sustain it over time. The final purpose of relationship marketing is to gain the maximal value of a customer, who can contribute to the corporation’s long-term profit.  High relationship quality means that the customer is able to rely on the salesperson’s integrity and has confidence in the salesperson’s future performance, because past performance has been consistently satisfactory. Relationship quality is a higher-order construct that consists of several distinct dimensions (Dwyer, Schurr and Oh, 1987).  Previous research provides various conceptualizations of it, but considerable overlap exists.  Building on past conceptualizations, we consider that relationship quality encompasses satisfaction, trust, and commitment. In this section we develop a conceptual model connecting relationship marketing tactics, relationship quality, and behavioral loyalty.  
The customers’ “perceived long-term relationship” links the marketing tactics to their consequences. We also consider “inclination toward relationship” as a potential moderator within the conceptual framework, influencing how perceptions of the relationship marketing effect are formed. The conceptual framework is presented in Figure 1. It is developed in three related parts. First, we establish the links between relationship marketing tactics and the perceived long-term relationship (PLR). This theoretical relation is based on the recognition that relationship building develops when customers recognize the favors offered by service marketers during or after the transaction process. Second, when customers sense the intention of the service marketers and perceive the existence of a long-term relationship, they will favor the relationship and maintain an excellent relationship quality with the service marketers. Finally, good relationship quality results in the behavioral loyalty that brings a firm repeat purchases and long-term profit.   Direct mail includes letters or catalogues mailed directly to customers, which has proven to be a good method of communicating with them (Dwyer, Schurr, & Oh 1987; Anderson & Narus 1990; Morgan & Hunt 1994). Furthermore, retailers can establish a customer relationship by direct mail because it increases the chances of interacting with customers. We expect direct mail to be a strong predictor of building a customer relationship. Retailers offer tangible rewards, visible benefits such as price discounts, gifts, or coupons (Peterson, 1995), to keep customer loyalty. Tangible rewards shape customer behavior by shifting customers’ perception from “loving the service” to “getting a benefit from the service”; this new perception makes the market more active and the service more acceptable in the introduction stage.  
Moreover, the rewards also help counter competitors’ actions in the market.  Therefore, we propose the following hypotheses:

H1: The more direct mail is used, the higher the level of the long-term relationship perceived by the customers.

H2: The more tangible rewards are offered, the higher the level of the long-term relationship perceived by the customers.


Profit and Cost Efficiency of Philippine Commercial Banks Under Periods of Liberalization, Crisis and Consolidation

Santos Jose O. Dacanay III, University of the Philippines, Baguio City, Philippines



This paper examines the profit and cost efficiency of commercial banks in the Philippines from 1992 to 2004, covering periods of financial liberalization, crisis and consolidation.  A two-stage procedure is employed.  The first stage involves the estimation of profit and cost efficiency using the stochastic frontier approach.  Results indicate that profit efficiency slowly decreased from a mean score of 92% in 1992 to 84% in 2004, while cost inefficiency hovered around 11% to 12% from 1992 to 1998 and then jumped to 14% to 15% from 1998 to 2004.  Efficiencies are found to be inversely related to asset size.  Off-balance sheet services are found to be cost-absorbing and substitutes for traditional banking products.  Elasticities of the costs of labor and deposits are found to be negative, providing evidence for the use of more, lower-cost labor and the abundance of deposits as a cheap source of funds.  In the second stage, regressions show that the profit efficiency scores of universal banks are significantly higher than those of plain commercial banks, suggesting scope economies from the expanded and equity investment activities of universal banks.  Foreign banks are more cost inefficient due to higher personnel costs.  A modest improvement in bank efficiency after liberalization in 1994 is registered, but cost inefficiency increased in the aftermath of the Asian financial crisis.  Acquired banks in mergers are not necessarily inefficient, and the weighted efficiency scores of the acquired and surviving banks before the merger had not improved three years after the merger, suggesting that synergy gains need a longer time to be realized.  The paper aims to examine the profit and cost efficiency of Philippine commercial banks from 1992 to 2004, spanning three episodes that represent a substantial metamorphosis of the Philippine banking sector over the past decade.  
These are the financial liberalization in 1994; the Asian currency and financial crisis in 1997; and the wave of mergers and consolidation from 1998 to 2004.  In 1994, the Philippines passed legislation (Republic Act No. 7721) liberalizing the sector, which resulted in the entry of ten foreign banks in 1995.  In 1994, the share of foreign banks in the total assets of the commercial banking sector was 8.6%.  By 2004, ten years after financial liberalization, the foreign banks’ share of the sector’s total assets had nearly doubled to 17.05%.  Liberalization contributed to the thinning of bank interest spreads with positive welfare effects (Unite and Sullivan 2003), yet it also contributed to vulnerabilities of the financial system and even led to crises (Demirgüç-Kunt and Detragiache 1998; Mehrez and Kaufman 2000).  Officially, the onset of the Asian financial crisis was marked by the devaluation of the Thai baht on July 2, 1997, followed by a freer float of the Philippine peso on July 11 and its depreciation by 40% three months later.  The literature characterizes financial crises as currency (balance of payments) crises and banking crises, with a possible two-way direction of causation between the two, as they can result from common factors (Kaminsky and Reinhart 1999).  The Philippines was the least affected by the crisis, with only four distressed financial firms, two of which were banks and the other two nonbank financial institutions that were eventually closed (Bongini, Claessens and Ferri 2000).  A unique post-crisis response of the Bangko Sentral ng Pilipinas (BSP) was the encouragement of mergers and consolidation in the sector (Gochoco-Bautista 1999).  Banks merged as they tried to reap economies-of-scale advantages and positioned themselves against a more intense threat of competition domestically and across borders.  Milo (2000) reported seven mergers from 1998 to 2000, while Manlagñit and Lamberte (2004) counted fourteen mergers from 1998 to 2003.  
The four-firm concentration (C4) ratio in 1992-1994 averaged 47.32% and 52.10% for loans and deposits, respectively.  These dropped to 40.46% and 46.42%, respectively, following liberalization in 1995-1997.  For the period 1998-2004, the average C4 ratios for loans and deposits were 46.2% and 49.6%, respectively.  Hence, the C4 ratio reverted to its pre-liberalization levels following consolidation in the industry after the Asian financial crisis.  The rest of the paper is organized as follows.  The next section discusses the empirical framework and methodology.  The third section describes the data, defines the variables, and presents the sample selection.  The fourth section discusses the results and analysis.  The last section concludes.  According to Greene (1980), the translog function is the most frequently selected model to measure bank efficiency because it is a flexible functional form.  The absence of a priori restrictions on substitution possibilities among the factors of production allows both economies and diseconomies of scale at different output levels.  The translog cost frontier of the banks is given by Equation 1:

ln C_it = β_0 + Σ_j β_j ln y_jit + ½ Σ_j Σ_k β_jk ln y_jit ln y_kit + Σ_l β_l ln w_lit + ½ Σ_l Σ_m β_lm ln w_lit ln w_mit + Σ_j Σ_l β_jl ln y_jit ln w_lit + Σ_p β_p ln z_pit + ln μ_c + ln ε_c     (1)

where ln C_it is the natural logarithm of total cost; ln y_jit is the natural logarithm of the jth output; ln w_lit is the natural logarithm of the lth input price; and ln z_pit is the natural logarithm of the pth netput.  The subscripts i and t denote the bank and the time of the observation, respectively.  The βs are the coefficients to be estimated.  The term ln μ_c is a non-negative random variable associated with the inefficiency of input use, given the levels of outputs and the quasi-fixed inputs.  The ln μ_c term implies that the observed cost for the given level of outputs and quasi-fixed inputs is not as small as would be possible if the bank were fully efficient in its use of inputs.  
The term ln ε_c is a random variable associated with measurement errors in the input variables or the effect of unspecified explanatory variables in the model.  The profit function uses essentially the same specification, with a few changes.  First, the dependent variable ln C of the cost function is replaced with ln (π+θ), where π is the bank’s profit for a particular year.  Since the minimum profit can be negative, θ, equal to the absolute value of the minimum profit in the sample plus 1, is added to every firm’s dependent variable in the alternative profit function so that the natural log is taken of a positive number.  Thus, for the bank with the lowest profit value in the year, the dependent variable will be ln (1) = 0.  Second, all other terms and variables are the same, except that ln μ_c and ln ε_c are relabeled ln μ and ln ε, respectively.  Scale economies can be calculated from Equation 1.  Banking scale economies are measured by the reciprocal of the elasticity of cost with respect to output.  Increasing, constant and decreasing returns to scale (economies of scale) are present if the estimates are greater than, equal to, and less than 1, respectively.  The study employs the parametric translog stochastic frontier approach to estimate the profit and cost functions.  The translog functional form, due originally to Christensen, Jorgenson and Lau (1971), has several virtues: i) it accommodates multiple outputs without necessarily violating curvature conditions; ii) it is flexible, providing a second-order approximation to any well-behaved underlying cost frontier at the mean of the data; and iii) it forms the basis of much of the empirical estimation and decomposition of cost efficiency based on a system of equations.  The translog form of the cost and profit functions adopted in this study is consistent with the concept of economic optimization.  
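As an arithmetic illustration, the profit-variable transformation and the scale-economies calculation described above can be sketched as follows (the profit figures and elasticities below are hypothetical, not the study's estimates):

```python
import numpy as np

# Hypothetical annual profits for four banks; one is the sample minimum.
profits = np.array([120.0, -45.0, 30.0, -10.0])

# theta = |minimum profit in the sample| + 1, so the log is always
# taken of a positive number.
theta = abs(profits.min()) + 1.0          # 46.0
dep_var = np.log(profits + theta)
# The bank with the lowest profit gets ln(1) = 0.

# Scale economies: reciprocal of the elasticity of cost with respect
# to output. Hypothetical estimated elasticities d ln C / d ln y_j:
elasticities = np.array([0.40, 0.35])
scale_economies = 1.0 / elasticities.sum()  # > 1 implies increasing returns
```

Because the elasticities sum to 0.75, the sketch yields scale economies of about 1.33, i.e. increasing returns to scale under these illustrative numbers.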
Furthermore, since the parametric techniques correspond to the cost and profit efficiency and economic optimization concepts, the stochastic frontier approach (SFA) is adopted for this study.  In using the SFA, the inefficiency and random error components of the composite error term are disentangled by making explicit assumptions about their distributions.  The random error, ln ε, is assumed to be two-sided and normally distributed, and the inefficiency term, ln μ, is assumed to be one-sided and half-normally distributed.  The parameters of the two distributions are estimated and can be used to obtain estimates of bank-specific efficiency.  Coelli’s (1996) FRONTIER Version 4.1 is used to estimate the cost and profit efficiency of the banks.  The software estimates the cost and profit models using the maximum likelihood estimation technique.  In the second stage, regressions are employed to test potential correlates of the profit and cost efficiency measures. 
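To make the estimation idea concrete, the normal/half-normal decomposition can be illustrated with a toy maximum-likelihood sketch (a single-regressor simulated cost frontier, not the paper's translog specification or the FRONTIER 4.1 implementation; all parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulate a toy log-cost frontier: y = b0 + b1*x + v + u, with
# two-sided noise v and one-sided half-normal inefficiency u >= 0.
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 2.0, n)             # log output (single-output toy case)
v = rng.normal(0.0, 0.2, n)              # random error (the ln eps term)
u = np.abs(rng.normal(0.0, 0.3, n))      # inefficiency (the ln mu term)
y = 1.0 + 0.8 * x + v + u                # inefficiency raises observed cost

def neg_loglik(params):
    """Negative log-likelihood of the normal/half-normal cost frontier."""
    b0, b1, log_sv, log_su = params
    sv, su = np.exp(log_sv), np.exp(log_su)
    eps = y - b0 - b1 * x                # composed error v + u
    sigma = np.hypot(sv, su)             # sqrt(sv^2 + su^2)
    lam = su / sv
    ll = (np.log(2.0) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(eps * lam / sigma))  # '+' sign for a cost frontier
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.0, 0.5, np.log(0.1), np.log(0.1)],
               method="Nelder-Mead", options={"maxiter": 5000})
b0_hat, b1_hat = res.x[0], res.x[1]      # frontier coefficient estimates
```

Bank-specific inefficiency estimates would then follow from the conditional expectation of u given the composed error (Jondrow et al. 1982), which frontier software reports as efficiency scores.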


The Use of Collectivist and Individualist Culture as an Indicator for Finding Patterns of International Tourists

Dr. Veerapong Malai, Bangkok University, Thailand



This paper examines how culture affects preferences for types of tours. The individualism-collectivism dimension was used to characterize culture.  One of our primary interests is in how to model cultural impacts.  Tourists make decisions to select types of tours based on their culture. Data were collected from 600 tourists across three Asian and three Western nationalities (Japanese, Hong Kong Chinese, Thai, German, British, and American). For practitioners, this study suggests that managers could use the framework as a guide to examine how tourists in different foreign markets would respond to their offerings.  After the eruption of the economic crisis in late 1996, one of the government’s priorities was inevitably the recovery of the economy.  Presently, tourism is one of the fastest growing industries in the world.  Among the numerous instruments available, the tourism industry became one of the proposed means.  Income from this industry became the major source of income for Thailand subsequent to the economic downturn.  Earnings from the tourist industry increased from 2.3 hundred thousand million in 1997 to 3 hundred thousand million in 1998, an increase of 8.9% annually.  Moreover, the tax refund policy for tourists further benefited the nation, as the number of tourists entering the Kingdom of Thailand increased from 7.76 million in 1998 to 8.58 million and 9.5 million in 1999 and 2000, respectively.  Because an increased number of tourists leads to more jobs, the spread of income to communities, and long-term development, the tourist industry was adopted as one of the nation’s strategic means of increasing Thailand’s competitive advantage in national plan number nine, covering 2002-2006.  The plan expected income from tourists to increase 10% annually. 
Despite the tourism industry’s growth rate, emerging global markets, declining profitability, increased international competition, and changing consumer behavior all affect the industry today.  Therefore, to attract a larger influx of tourists and remain competitive, a more specific strategy is required.  Tourists have one or more purposes in mind when deciding on their destination; hence, it is essential to understand the subcategories of tourists and to ensure that suitable products are provided, along with apt marketing strategies to attract them.  This led us to investigate whether culture, at both the national and individual level, is an indicator of tourist behavior and preferences.  The primary intention of the research is to produce findings that benefit both the public and private sectors involved in the tourism industry. It is projected that the results will provide valuable guidance for the planning and development of services according to the exact needs of tourists, as well as serving as a reference for academic purposes. Culture is about permanent beliefs, and individuals develop such beliefs either in their own native culture or in the cultures with which they are associated (Daghfous, Petrof, & Pons, 1999).  These beliefs condition the way people view the world, so culture may influence attitudes and perceptions towards marketing stimuli and, thus, how people respond to the marketing mix.  Usually, this is somewhat more complex than just the differences in cultural manifestations themselves (Lowe & Corkindale, 1998).  However, we argue that much “cross-cultural” research is not really looking at this complexity and needs to become more conceptual in its approach; much of it is primarily descriptive at the moment.  Culture can have an impact on marketing in many ways.  At the relatively simple level is the real-world manifestation of everyday actions.  
All the little details of daily life may differ across cultures because they have started, sometimes, from different conceptual bases and usually have adapted to different environments.  Thai cuisine, for example, is quite spicy, so Thai Airways usually serves spicy food on flights because most of its customers prefer this.  Northwest Airlines is much less likely to serve very spicy food because most of its customer base, Americans, does not like such food.  This is a trivial example, but much “cross-cultural” research is essentially on this level: quite descriptive of what are essentially real-world manifestation details.  One “learns” from such research that Thais and Americans are different, which many people already knew, without much depth of discussion about how culture has really influenced things.  Certainly managers need to know about such little details, so this kind of work is quite important managerially, but it is not a very interesting conceptual issue. Luna and Gupta (2001), for example, barely mention such details in presenting a framework for understanding cultural impacts on consumer behavior.  More interesting is how culture influences people’s conceptualization, the things behind the manifestation of real-world actions.  Culture spans the boundary between the conceptual world and the real world.  What people in different cultures do ultimately relates to some underlying concept.  Thus, one sort of cultural impact is on concepts: people may not see the same amount of a concept, or the concept may have a different structure.  In extreme cases, a concept may not even exist in another culture.  Ueltschy and Krampf (2001), for example, demonstrate that the concepts of service quality and satisfaction with service delivery may differ somewhat across cultures.  
In the simple conceptual model proposed here, we hypothesize that culture has a direct impact on how people in different cultures respond to service quality value or brand name value.  Further, we suggest that looking mainly at the cultural impact on concepts themselves still does not necessarily fully capture how culture influences things. A more thorough approach might be to look not only at whether different cultures view concepts differently, but also at whether relationships among concepts work differently in a different cultural context.  Certainly, there is quite a lot of research demonstrating that people from different cultures frequently perceive relationships among things differently (e.g., for a popularized overview, with extensive references to academic research, Nisbett, 2003).  How relationships work in different cultures may also differ.  For example, in examining how marketing contributes to performance of newly introduced products across Korea, China, and Canada, Mishra et al. (1996) say that they do not believe there is a single formula for success, but that what works best may differ across countries.  This is coming close to our assertion: culture influences how things relate to each other.  One of the most frequently used sets of measures in cross-cultural research is Hofstede’s schema (Easterby-Smith & Malina, 1999), consisting originally of four dimensions: power distance, individualism-collectivism, masculinity vs. femininity, and uncertainty avoidance (Hofstede, 1997).  Individualism-collectivism is frequently applied to the development of cross-cultural models where there are Asia – Western contrasts (Straughan & Albers-Miller, 2001). Asian cultures are collectivist, where children are raised within the context of the extended family, exposed to a variety of viewpoints from their parents, grandparents, uncles, and other adults in the family, and develop strong group orientations. 
On the other hand, in Western individualist cultures, children are usually brought up in the nuclear family, are less exposed to various points of view, are trained to be much more self-reliant, and focus much more on themselves (Triandis, 2001).  Because a number of cognitive-behavioral aspects correlate with individualism-collectivism, there is growing acceptance of individualism-collectivism as one key dimension for understanding cross-cultural differences in attitude and behavior (Azevedo, Drost, & Mullen, 2002).  This dimension, therefore, is used here to develop propositions about cultural impact, although we do not imply that other cultural dimensions or schemata are irrelevant.  Indeed, culture is much more complex than a single dimension, and other dimensions probably do have some impact.  We use individualism-collectivism because it has already been demonstrated to have some impact on marketing response, so it should be useful for investigating our hypotheses. 


Demographic Change, Bank Strategy and Financial Stability

Dr. Stefan W. Schmitz, Oesterreichische Nationalbank, Austria



The purpose of this article is to disseminate the main results and the ensuing financial stability implications of the FINMA programme on “Ageing and its implications for banks and bank strategy”. The first question that arises is whether demographic change is of relevance for banks and financial stability at all. The paper answers this question in the affirmative. It then goes on to analyse the impact of demographic change on the environment in which banks operate (i.e. economic growth, interest rates, and residential real estate markets) and on the level and composition of household demand for bank services and products. It summarises how banks might adapt their strategies in response to demographic change. Finally, it draws out the potential implications for financial stability. The objective of the programme was, first, to discuss the impact of negative population growth, increasing longevity and migration on banks and bank strategies over a horizon of up to 20 years; and, second, to draw conclusions concerning the stability implications for the banking system. It consisted of an issue paper (Wood 2006) and two workshops on “Ageing and its implications for banks and bank strategy” in April and September 2006, respectively. The first workshop was devoted to the impact of demographic change on the banking environment (i.e. on economic growth, real interest rates, and residential real estate markets) but also included a presentation on demographic projections for Austria and the EU and presentations on the impact of demographic change on banks. The second focused on the strategic responses of banks to demographic change. It confronted banking consultants and bank strategists with the findings of the first workshop. 
As part of the programme, the OeNB has also put the issue on the agenda of the ECB’s Banking Supervisory Committee and led the respective study group. (1)  The programme was motivated by the important role financial stability plays for the Oesterreichische Nationalbank and the ESCB, in addition to their main objective of preserving price stability. (2) Banks form the core of the Austrian financial system. In addition to the study of current developments, the anticipation of potential long-term developments in the economy and their effects on the banking system forms an integral part of macroprudential supervision. While the literature on ageing and its consequences for the macro-economy, financial markets, and public finances has grown rapidly in recent years, the impact on banks has received little attention so far. A number of European and international institutions study the impact of ageing from various perspectives: the Economic Policy Committee and the European Commission published a study in which the effects of demographic change on public expenditure in the areas of pensions, health care, long-term care, education and unemployment transfers are projected for all 25 EU member states until 2050. (3) The Group of Ten studied the implications of ageing for financial markets. (4) The ECB’s Monetary Policy Committee and the Governing Council engaged in intensive discussions on the impact of demographic change on the macroeconomy, on the current account, and – of course – on monetary policy in 2006. Given the intensive study of the impact of ageing on so many sub-sectors of the economy, it is striking that its effects on banks and bank strategy have received so little attention. This programme aims to close this gap.  The first question that arises is whether demographic change is of relevance for banks and financial stability at all. 
We identified three main channels of interaction between demographic change and banking that provide the basis for an affirmative answer to this question: first, banks are exposed to the repercussions of demographic change indirectly via its impact on the macro-economy, on financial markets, on real estate markets and on household portfolio composition. Second, the increasing volume of funded pension provision and the blurring boundaries between banks and more traditional providers of age-related products have increasingly exposed banks to risks related to demographic change. This is illustrated by examples from the Austrian market: banks play an important role as shareholders of occupational pension funds and they are providers of capital guarantees for pension products. Third, demographic change can result in changes to the product portfolio of banks.  The conceptual framework for the analysis rests on the theory of financial intermediation, on contractual and market incompleteness, as well as on the risks inherent in the structure of the bank balance sheet. (5) When analysing the impact of the programme’s findings on financial stability, we look at selected items on the bank balance sheet and the profit and loss account.  The key issues of the paper are the following: What is the main content of current demographic projections (section 2)? How may demographic change affect the environment in which banks operate (i.e. economic growth, interest rates, and residential real estate markets) (section 3)? How do the banks that presented at the workshops (plan to) react to demographic change (section 4)? What are the potential financial stability implications thereof (section 5)?  Demographic projections for the EU and for Austria provide the quantification of what we consider as “ageing” throughout the article, namely decreasing fertility, increasing longevity, and the growing importance of migration for demographic developments.  
Although the world population is expected to grow from 6.1 billion in 2000 to 8.9 billion by 2050, an increase of 46%, growth rates are declining in most major economic areas. (6) The median age in the EU is expected to increase from 38 to 48 years between 2000 and 2030, while the median age worldwide will eventually converge to approximately 45 years by 2050. The distribution across age groups will change, with a substantially growing elderly population and a shrinking young population. The EU will experience the lowest fertility rates worldwide and a standstill of natural population growth. In addition, the increase in longevity and the continuing dynamics of international migration will contribute to significant changes in the demographic structure. Overall, the population in the EU-25 is expected to grow until 2025 due to net migration effects but to fall thereafter. The share of the young population aged 0 to 24 will approximate 23% in the EU as well as in Japan, while it is expected to reach 30% in the USA. At the same time, people aged 80+ will approximate 12% of the EU’s overall population by 2050, compared to 15% in Japan and 7% in the USA.  Austria will follow the EU trend of natural population decline but will nonetheless grow to approximately 9 million by 2050 due to net immigration. At the same time, the structure of the population will change broadly in line with the EU average from 2005 to 2050. The share of people between 0 and 24 years of age is expected to decrease from 28 percent to 24 percent, whereas the share of people aged 65+ will increase from 16 percent to 28 percent and that of people aged 80+ from 4 percent to 11 percent. However, the economically relevant total dependency ratio (7) will increase only very modestly, from 101 percent to 108 percent. (8) From a regional perspective, the population will grow in urban areas around the main economic centres, whereas the peripheral regions will lose residents.  
Overall it has to be kept in mind that the future cohort sizes of current and past birth cohorts can be projected with some accuracy, while the future fertility rates, longevity and net-migration are outcomes of very complex societal, social and economic dynamics. So the uncertainty associated with long-term demographic projections is high. (9) Nevertheless, they provide consistent scenarios to evaluate particular opportunities and challenges for societies in the coming decades.  The main results of workshop I on the impact of demographic change on the bank environment have already been documented (10) and shall be summarised here only in brief.


Measuring the Effects of Employee Orientation Training on Employee Perceptions of Quality Management: Implications for Human Resources

Dr. M. Akdere, University of Wisconsin-Milwaukee, Milwaukee, WI

Dr. Steven W. Schmidt, East Carolina University, Greenville, NC



This empirical study examines employee perceptions of quality management at three different time periods. New employees at a large United States manufacturing organization were surveyed regarding their perceptions of their organization’s quality management practices before they attended new employee orientation training, immediately after the training, and one month after the training. A description of the study, as well as findings and conclusions, is presented.  “Quality is the goodness or excellence of any product, process, structure or other thing that an organization consists of or creates.  It is assessed against accepted standards of merit for such things and against the interests/needs of producers, consumers and other stakeholders” (Smith, 1993, p. 241).  The importance of quality in organizations today cannot be overemphasized.  An important aspect of Smith’s (1993) definition above is the idea that the concept of quality means different things in different organizations.  Although definitions may vary from organization to organization, many researchers agree that effective quality initiatives of any sort involve every employee in the organization.  Many also agree that training and communication are important factors in organizational efforts to improve quality (Mandal et al., 1998; Goodden, 2001; Hansson, 2001).  “Orientation is the planned introduction of new employees to their jobs, their coworkers, and culture of the organization” (Cook, 1992, p. 133, quoted in Blackwell, 1997). Most organizations offer an employee orientation training program coordinated by the human resource department (Blackwell, 1997). New employee orientations serve many purposes and have many meanings from both an organizational and an employee perspective. 
Researchers have found that successful new employee orientation programs help new employees become familiar with their organizational environment and help them understand their responsibilities (Robinson, 1998). They have also been found to be positively related to job satisfaction (Gates & Hellweg, 1989) and employee socialization (Klein, 2000), and have been recommended as aids to employee job enrichment and morale building (Kanouse & Warihay, 1980). Research has also shown that employers benefit from new employee orientations in that they receive well-trained, highly motivated new employees as quickly as possible (Robinson, 1998).  How effective is the new employee orientation process in conveying organization-wide issues like quality? Do employees learn from new employee orientations, and is that learning carried back to the workplace? It is difficult to address these questions because of the dearth of research on the topic. Wanous and Reichers (2000) note that “orientation programs have rarely been the subject of scholarly thinking and research” (p. 2). They continue by noting that “the current body of research work (on new employee orientation programs) is too small for meta-analysis” (p. 2), and as a result, they changed the methodology used in their 2000 study to a descriptive summary (Wanous & Reichers, 2000). Other researchers have come to similar conclusions. While most organizations use formal orientation training, “there is surprisingly little in the academic literature examining the impact or most appropriate structure of these programs” (Klein, 2000, p. 3).  The purpose of this research is to examine new employee perceptions of quality management.  The study is unique in its examination of employee perceptions of quality management over a time period that includes new employee orientation training. Employee perceptions were measured both before and after this training, as well as one month after the conclusion of employee orientation training. 
Mandal et al. (1998) note that “companies committed to TQM (total quality management) invest in training” (p. 88).  Bacdayan (2001) concurs: “Training costs may be justified as a long-term investment in TQM skills at the grassroots level” (p. 596).  This paper examines the results and findings of this study; based on them, conclusions are drawn and recommendations are presented.  This study set out to examine new employee perceptions regarding quality management in their organization.  It was conducted over a period that included new employee orientation training and time spent in the workplace, and surveys were used to gauge these perceptions. The study employed a theoretical framework based on adult learning theory and quality management theory.  Adult learning theory is important here because the study set out to gauge learning about organizational vision and leadership. The adult learning orientations that form the theoretical basis for this study include cognitivism, with its emphasis on information processing, storage and retrieval, learners’ needs, learning styles, and the organization of learning activities to meet those varying needs and styles (Robinson, 1994). Social learning theory was also part of the theoretical framework of this study.  As defined by Bandura and Walters (1963), social learning theory focuses on learning from the observation of people in social settings, mentoring, socialization, and guiding.  Elements of constructivism, which include group learning, experience, and reflection (von Glasersfeld, 1995), were included in the theoretical framework as well.  From a quality perspective, Deming’s (1982) theories about quality were used in the theoretical framework of this study.  They include Deming’s idea that quality involves everyone in the organization, his theories on continuous improvement, and his systems approach, which emphasizes the role of leadership in driving quality efforts. 
Deming also emphasized the importance of teaching and facilitating quality efforts in organizations.  Apps’ (1994) leadership theories also stressed the importance of quality in every activity at every level, and the importance of communication and education in all quality initiatives. Senge’s (1990) concepts of the learning organization and quality management were used in the theoretical framework of this study as well.  Figure I illustrates the organizational profile for quality management (Baldrige Criteria, 2006).


Value at Risk in Fixed Income Portfolios:  A Comparison Between Empirical Models of the Term Structure*

Dr. Pilar Abad, Universidad de Barcelona, Diagonal, Barcelona, Spain

Sonia Benito, Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain



This work compares the accuracy of different measures of Value at Risk (VaR) for fixed income portfolios calculated on the basis of different multi-factor empirical models of the term structure of interest rates (TSIR). Three models are included in the comparison: (1) regression models, (2) principal component models, and (3) parametric models. In addition, the cartography (mapping) system used by Riskmetrics is included. Since calculating a VaR measure with any of these models requires a volatility estimate, this work uses three types of measurements: exponential moving averages, equal weight moving averages, and GARCH models. Consequently, the comparison of the accuracy of VaR measures has two dimensions: the multi-factor model and the volatility measurement. With respect to multi-factor models, the evidence presented indicates that the mapping or cartography system is the most accurate model when VaR measures are calculated at the 5% confidence level. At the 1% confidence level, by contrast, the parametric model (the Nelson and Siegel model) yields the most accurate VaR measures. With respect to the volatility measurements, the results indicate that, as a general rule, no measurement works systematically better than the rest.  In this paper we compare empirical multi-factor models of the term structure of interest rates according to their ability to assess the risk of a fixed income portfolio, i.e., in terms of the accuracy of the Value at Risk (VaR) measures built from them. The models included in the comparison are: (1) regression models, (2) principal component models, and (3) parametric models, in particular the Nelson and Siegel model. In addition, the cartography system used by Riskmetrics is included. Table 1 summarizes the main features of these models.  To estimate value at risk we have used parametric methods, also called the variance-covariance approach. As indicated by many authors, e.g., Chong (2004) and Sarma et al. 
(2003), parametric models, in spite of their limitations, are the ones most widely used in financial practice. In keeping with this focus, the VaR measure is obtained from the portfolio variance which, given a multi-factor empirical model, is given by the variance-covariance matrix of the explanatory factors of the TSIR. Estimating this matrix requires a volatility measurement. As in Sarma et al. (2003), Chong (2004), and Alexander and Leigh (1997), different volatility measurements are used in this paper: exponential moving averages, equal weight moving averages, and GARCH models.  This provides a new dimension to the comparison: on one hand, we compare the ability of the factor models to calculate the VaR; on the other, we evaluate which volatility measurement provides the most satisfactory results, regardless of the factor model used.  The rest of the paper is structured as follows. Section 2 presents the data we have used. Calculation of VaR in the context of the multi-factor models is described in Section 3. Section 4 presents the empirical results. The last section of the paper contains the main conclusions.  In this paper, we use daily data on zero-coupon interest rates with terms of 1, 2, 3, …, 14, and 15 years. The method used to estimate the zero-coupon rates is the one proposed by Nelson and Siegel (1987). The interest rates have been estimated on the basis of the average closing prices of the most liquid references traded on the secondary Spanish public debt market, minimizing price errors weighted by duration. The database covers four years of data, from January 2, 2001 to December 30, 2004.  This section evaluates the ability of the multi-factor empirical models for purposes of fixed income risk management by building a parametric VaR measure as an indicator of a portfolio’s risk.  
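Of the three volatility measurements listed above, the two moving-average variants are simple enough to sketch. The following Python fragment is an illustrative sketch only, not the authors' implementation; the decay factor lam = 0.94 is the common RiskMetrics daily default and the seeding convention is likewise an assumption, neither being stated in the text:

```python
def ewma_variance(changes, lam=0.94):
    """Exponentially weighted moving-average variance of daily changes:
    s2_t = lam * s2_{t-1} + (1 - lam) * r_{t-1}**2, assuming zero mean.
    Seeded with the squared first observation (one common convention)."""
    s2 = changes[0] ** 2
    for r in changes[1:]:
        s2 = lam * s2 + (1 - lam) * r ** 2
    return s2


def equal_weight_variance(changes):
    """Equal-weight moving-average variance over the window (zero mean)."""
    return sum(r ** 2 for r in changes) / len(changes)
```

A GARCH measurement, the third variant, requires maximum-likelihood estimation of its parameters and is not sketched here.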
The VaR of a portfolio is a measure that tells us the maximum amount that an investor could lose in a given time horizon and with a certain probability. Formally, the VaR at level α% is the α-percentile of the probability distribution of the changes in the portfolio value, Pr(ΔV_t ≤ VaR) = α%, where α is the confidence level and ΔV_t represents the change in the portfolio value in time horizon t.  In this paper, to estimate value at risk we have used parametric methods. The parametric methods, also called the variance-covariance approach, are based on the assumption that the changes in value of a portfolio follow a known distribution, generally assumed to be Normal. Since the mean of the changes in a portfolio’s value is null, the VaR at a confidence level α in a one-day horizon for portfolio j is given as VaR_j = z_α · σ_j, where z_α is the α-percentile of the standard Normal distribution. Using this procedure, the only parameter to be estimated in order to calculate the VaR is the conditional standard deviation of the value of portfolio j (σ_j).  This assumption is the one most commonly used by financial market operators and is widely used in the literature on VaR, in spite of the existing consensus that the tails of financial return distributions are fatter than those of the Normal distribution. An alternative is to use the Student’s t distribution. Nevertheless, Chong (2004), who uses parametric methods to estimate the VaR and compares the Normal distribution with the Student’s t distribution, shows that the most satisfactory measures are obtained by assuming that the distribution is Normal.  In a portfolio of fixed income assets, the portfolio duration can be used to obtain the variance of the portfolio value from the variance-covariance matrix of the interest rates that determine its valuation [e.g., see Jorion (2000)]: σ_j² = D_j′ Σ D_j, where Σ is the variance-covariance matrix of the interest rates and D_j is the duration vector of portfolio j. 
The elements of this vector reflect the sensitivity of the portfolio value to changes in the interest rates that determine its value.  Given a multi-factor empirical model, i.e., starting with equation (1), the variance-covariance matrix of the interest rate changes (Σ) can be obtained from the following equation:
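The duration mapping of rate covariances into a portfolio variance, followed by the Normal-quantile step of the parametric approach, can be sketched as follows. This is an illustrative sketch under the zero-mean Normal assumption the paper uses; the function names and toy numbers are ours, not the authors':

```python
from statistics import NormalDist


def portfolio_variance(durations, cov):
    """Quadratic form: map the variance-covariance matrix of interest-rate
    changes (cov, given as a list of rows) through the portfolio's
    duration vector to get the variance of the portfolio value."""
    n = len(durations)
    return sum(durations[i] * cov[i][j] * durations[j]
               for i in range(n) for j in range(n))


def parametric_var(durations, cov, confidence=0.99):
    """One-day parametric VaR under normality with zero mean:
    the Normal percentile at the given confidence times the portfolio's
    standard deviation."""
    sigma = portfolio_variance(durations, cov) ** 0.5
    z = NormalDist().inv_cdf(confidence)  # roughly 2.33 at the 99% level
    return z * sigma
```

With two rates, a duration vector [1.0, 2.0], and a toy covariance matrix [[4.0, 1.0], [1.0, 9.0]], the quadratic form gives a portfolio variance of 44.0.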


The Effect of Government Performed R&D on Productivity in Canada: A Macro Level Study (1)

Rashid Nikzad, University of Ottawa, Canada

Golnaz Sedigh, University of Ottawa, Canada

Reza Ghazal, University of Ottawa, Canada

Frederick Kijek, National Research Council Canada (NRC)



The contribution of public R&D to a country's productivity has always been an important question. This paper estimates the effects of government and higher education R&D on the labor productivity of Canada at the macro level. Moreover, the paper investigates how the shares of public R&D have changed over the last few decades in Canada. The main finding is that government R&D and higher education R&D have positive and significant effects. In this paper, we study the impact of the R&D performed by the government, higher education, and business sectors on the labor productivity of Canada for the period 1981-2004. The study is based on the model developed by Coulombe and Acharya (2006). Our data sources are Statistics Canada, the Penn World Table, and SourceOECD. In addition to estimating the effects of R&D performed by the business, government, and higher education sectors, we study how government R&D funds have been distributed among different fields and sciences.  The structure of the paper is as follows. The next section reviews the literature on the effect of R&D on productivity and the role of the government in R&D. Section 2 presents some facts about the shares of business, government, and higher education R&D expenditures in Canada. Section 3 introduces the model and presents the econometric results. Section 4 concludes and highlights directions for further study.  Research and development (R&D) has been recognized as one of the most important factors of economic growth in recent decades. As a result, many governments follow special programs to stimulate R&D in their countries. Many studies have been conducted on R&D policies in industrial countries and their effectiveness. Bernstein (1986) estimates the effects of both direct and indirect tax incentives on R&D expenditures in Canada. 
Bernstein (1988) estimates the effects of intra- and inter-industry R&D investment spillovers on the costs and structure of production in seven Canadian industries from 1978 to 1981. Bernstein (1989) did a similar study on nine major Canadian industries for the period 1963 to 1983. Hall (1996) studies R&D spillovers at the firm level with an emphasis on the impact of government spending and R&D tax policies. Hall and Van Reenen (2000) survey the econometric evidence on the effectiveness of fiscal incentives for R&D; they also describe and criticize the methodologies used to evaluate the effect of the tax system on R&D behavior. Hall (2002) states that the following economic policies may be used in response to market failures in R&D: internalizing the externalities of R&D, taxing or subsidizing R&D activities, and regulating R&D activities. Carlsson and Jacobsson (1997) analyze the conditions that help the successful formation of new technological systems in an economy. Metcalfe (1997) briefly discusses the UK’s R&D policy over the last 20 years. Storey and Tether (1998) review public policy measures implemented in European Union countries to support new technology-based firms during the 1980s and early 1990s. Klette, Moen, and Griliches (1999) study mechanisms for evaluating the economic impacts of R&D policies based on firm-level data. Kenneth, Pittman, and Reed develop a model to calculate the effects of tax incentives in the USA. McDonald and Teather (2000) describe how some groups in the Government of Canada have used a performance framework approach to successfully measure R&D performance in federal organizations. Lipsey and Carlaw (2000) criticize the neoclassical approach to R&D policy and suggest an alternative approach, which they call Structuralist-Evolutionary Theories. Tassey (2004) presents a conceptual framework for the analysis of federal strategies in R&D investments. 
Such strategies must recognize the full range of public and private technology assets that constitute a national innovation system. In the real world, the method of government support for R&D depends on the type of market failure as well as the objectives each specific country pursues. In most cases, market failure is a combination of market imperfections such as inappropriability, imperfect competition, and asymmetric information. Inappropriability, or the diffusion of knowledge beyond the control of the inventor, means that the private rate of return to R&D is lower than the social return to R&D. The existence of risk also prevents firms from investing in R&D activities, especially small firms that do not have access to sufficient funding. To respond to the inappropriability of technology and the free-rider problem in this area, governments use patents and other instruments to protect intellectual property rights, especially for technologies that are specific to the production of a particular good. Intellectual property rights provide monopoly power for technology producers, increase the costs of imitation, reduce the effects of inappropriability, and therefore encourage R&D performers to invest in R&D activities. However, patents are not a suitable solution for technologies of general use. In addition to a patent system, governments of industrialized countries use other instruments to give incentives for R&D investment, to raise the private rate of return on R&D to its social level, and to reduce the impacts of market imperfection on R&D investment without necessarily granting monopoly power to R&D performers. These instruments are as follows (2): government-sponsored R&D; government procurement of new technologies; direct subsidies, loans, and repayable contributions to business, universities, and non-profit organizations; and tax incentives. 
Guellec and Pottelsberghe (2003) similarly state that the three policy instruments the government typically uses to stimulate R&D are: public (government or university) performed research, government funding of business-performed R&D, and fiscal incentives.  Public research is mainly performed in public laboratories or universities and is mostly paid for by the government. Its aim is to generate basic knowledge and to satisfy public needs. Compared to public laboratories, universities are less responsive to R&D policies and usually follow their own agenda. By using the second instrument, the government may help R&D activities that have a potentially high social return. If the government has its own objectives, it may perform the necessary R&D in public laboratories, or use the second instrument and directly fund the private sector to perform it. According to Guellec and Pottelsberghe (2003), since the gap between private and social returns is highest in basic research, we can assume that the government should concentrate more on this area. However, the government may also help R&D activities performed by business. In the income tax legislation of Canada, R&D (scientific research and experimental development) is defined as systematic investigation or research performed in a field of science or technology by means of experiment or analysis. The Department of Finance (1997) divides R&D into three categories: basic research, applied research, and experimental development. Basic research means an activity for the advancement of scientific knowledge without considering its practical application. If the research is done for a specific practical application, it is called applied research. 
According to the Department of Finance (1997), most claims for R&D tax incentives in Canada are for experimental development, that is, activities performed to achieve technological advancement for creating new (or improving existing) materials, devices, products or processes.  The aim of the government could be to reduce the private cost of R&D or to introduce the available technological opportunities to the private sector. In this way, the government reduces both the cost and the risk of R&D. Tassey (1997) mentions several reasons why the market fails to allocate resources for R&D, with particular emphasis on the risk factor. He divides the R&D process into three phases: basic research, generic technology, and applied R&D. He states that the risk of R&D activities decreases as we move from basic research toward more applied research. Tassey (2004) suggests a time span between the different types of research and the commercialization of their results.  R&D can be performed or funded by any of four sectors: business, government, higher education, and non-profit organizations (3). Diagram 1 presents the amount of R&D performed in the business sector (berdp), the government sector (gordp), and the higher education sector (herpd) as shares of the GDP of Canada. The share of R&D performed in the business sector had been increasing up until 2002 and started declining thereafter. The shares of R&D performed in higher education and government were almost constant from 1971 to 1987. Since then, however, the share of government R&D has declined and the share of higher education has increased.


