The Journal of American Academy of Business, Cambridge

Vol. 12 * Num. 1 * September 2007

The Library of Congress, Washington, DC * ISSN: 1540-7780

 Online Computer Library Center   *   OCLC: 805078765 

National Library of Australia * NLA: 42709473

Peer-Reviewed Scholarly Journal



All submissions are subject to a double blind peer review process.


The primary goal of the journal is to give business-related academics and professionals from around the world a single venue in which to publish their work. The Journal of American Academy of Business, Cambridge brings together academics and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. The Journal of American Academy of Business, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations, to ensure that our publications provide authors with venues recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread before submission.

The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. Requests for subscriptions, back issues, and changes of address can be made via the Journal's e-mail address. Manuscripts and other materials of an editorial nature should be directed to the same address. Address advertising inquiries to the Advertising Manager.

Copyright: All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, including photocopying and recording, or by any information storage and retrieval system, without the written permission of JAABC journals. You are hereby notified that any disclosure, copying, distribution, or use of any information (text, pictures, tables, etc.) from this web site or any other linked web pages is strictly prohibited.


Copyright 2000-2019. JAABC. All Rights Reserved.

The Financial Management of Military Pilot Training Costs: An Assessment of U.S. Navy and U.S. Air Force Pilot Training Attrition Rates

Albert Joseph Sprenger Jr., Embry-Riddle Aeronautical University

Dr. James T. Schultz, Embry-Riddle Aeronautical University

Dr. Marian C. Schultz, The University of West Florida



Obtaining aeronautical ratings can be extremely expensive. The U.S. Air Force and U.S. Navy have elected to send their pilot candidates to civilian flight schools prior to starting military training to ensure they possess the basic capabilities to pilot aircraft. The Air Force and Navy programs differ in the number of hours students receive prior to starting military training: the Air Force program provides 50 hours, including a private pilot rating, while the Navy program provides 25 hours. This study examined the differences in attrition rates between Navy student pilots in U.S. Navy pilot training and Navy student pilots in U.S. Air Force pilot training. The research hypothesis was that Navy students who train with the Air Force attrite at a greater rate than students undergoing training with the Navy. The hypothesis was based on the relative inexperience of Navy students compared with Air Force students, who begin training with a private pilot's license. The research utilized the causal-comparative methodology, and the hypothesis was tested with a two-dimensional chi-square nonparametric test of significance at the .05 level. The research hypothesis was not supported: there was no significant difference in attrition rates between Navy students at Air Force pilot training and those attending Navy pilot training. Initial observations indicate that Navy students attending Air Force training start behind Air Force students, because the Air Force requires incoming flight students to have completed a minimum of 50.0 hours of pilot instruction and to have received a private pilot's license prior to starting joint specialized undergraduate pilot training. The Navy has no requirement for a private pilot's license and allows student pilots a maximum of only 25.0 hours of Navy-sponsored piloting time prior to starting flight training, regardless of whether the student will attend training with the Navy or the Air Force.
The only location at which Navy students attend Air Force training is Vance Air Force Base (AFB) in Enid, Oklahoma. Relocating to Vance AFB removes the students from any peer groups formed during their Navy Aviation Preflight Indoctrination (API), which occurs at Naval Air Station Pensacola, Florida. This in itself could be a demotivating factor if the student did not volunteer to go to Vance. Not only is the Navy student removed from the Navy relationships formed in Pensacola, but he or she is thrust into an environment that is completely unfamiliar in regard to military formalities and base and squadron layout. Given the difference in flying experience prior to starting training, coupled with the faster pace of Air Force training, the researchers theorized that Navy student pilots have a difficult time competing against Air Force students who have more flying experience and a knowledge of the protocols associated with Air Force organization and tradition. Examining the differences in attrition rates between Navy students training at the two locations allowed the researchers to determine whether a lack of flying experience, relative to their Air Force counterparts, was a factor in Navy flight students failing to complete Air Force pilot training. It has been observed that Navy student pilots in the Air Force T-37 program attrite at a higher rate than Navy student pilots in the Navy T-34 program. The purpose of this study was to determine whether Navy student pilots who participate in the Air Force T-37 training program attrite at a greater rate than Navy student pilots who participate in Navy T-34 training. Failures from either the Navy syllabus or the Air Force syllabus can result from many causes, such as inadequate flying skills, lack of adaptability, academics, and medical factors. It is assumed that these reasons for failure are shared equally by the two groups of Navy students.
Therefore, the underlying problem was to determine whether a Navy flight student who went to Vance AFB for training was at greater risk of not progressing to Intermediate Flight Training than a Navy student who trained with a Navy training squadron. It was assumed that the data for all students used in this study were accurately entered and displayed in the database for fiscal years 2002 to 2004. It was also assumed that all Air Force students started the flight program with at least 50.0 hours of flight time, and that all Navy students started with at least 25.0 hours. It was further assumed that Navy students participating in Air Force primary training at Vance AFB either volunteered or were chosen at random, with no bias toward disciplinary or academic problems during the Navy's API. The study was limited to attrition results for fiscal years 2002 to 2004, using Navy students from Vance AFB and Navy students at Naval Air Station Corpus Christi, Texas, and Naval Air Station Whiting Field, Florida. The United States Navy has historically been interested in screening future pilots for flight training to reduce attrition, enhance operational readiness, and reduce costs. As early as 1939, over 30 different psychological tests were administered to Navy pilot students entering the flight training syllabus; psychologists called this the Pensacola Project (Arnold, n.d.). Slowly, different tests began to stand out as high-quality predictors of success in training. Prior to World War II, it was hard for the United States and its allies to properly predict the performance and attrition rates of aviators. In the United States during World War I, aviation psychologists were not as convinced as their European counterparts that physiological and sensory testing could adequately predict pilot performance (Griffin & Koonce, 1996).
The Americans leaned more toward psychomotor and intelligence testing than toward the Barany chair, which had seen some success in France.
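The two-dimensional chi-square test the study describes can be sketched in a few lines. The counts below are hypothetical placeholders, not the study's data; 3.841 is the standard critical value for one degree of freedom at the .05 level.

```python
# Hedged sketch: a 2x2 chi-square test of independence of the kind the
# study describes (training location vs. attrition outcome). The counts
# are hypothetical, not the study's actual data.

def chi_square_2x2(table):
    """table = [[a, b], [c, d]] of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical counts: [completed, attrited] per location
observed = [[90, 10],   # Navy students in Navy T-34 training
            [85, 15]]   # Navy students in Air Force T-37 training

chi2 = chi_square_2x2(observed)
CRITICAL_05_DF1 = 3.841  # chi-square critical value, df = 1, alpha = .05
significant = chi2 > CRITICAL_05_DF1
```

With these placeholder counts the statistic falls below the critical value, mirroring the study's finding of no significant difference.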


Liquidity Provision in Informationally Fragmented Upstairs Markets

Dr. Orkunt Dalgic, State University of New York at New Paltz, NY



The ability of upstairs market makers to observe certain important characteristics of their customers is a distinguishing feature that contributes to the quality of execution, i.e. liquidity, provided by these markets. One such characteristic is a customer's likelihood of trading, also termed "unexpressed demand" by Grossman (1992). However, switching costs can lead investors in the upstairs market to limit their trading relationships to a small number of market makers, and upstairs market search costs can reduce the potential number of trade counterparties contacted during searches. Such costs would cause at least some of the information about upstairs market investors' trading likelihoods to remain privately known by individual market makers. This paper develops a general framework in which information about investors' trading likelihoods is split into private and public components. An increase (decrease) in the proportion of private information reduces (improves) upstairs market execution quality relative to Grossman's (1992) model, and relative to the downstairs market. Moreover, when the proportion of private information is larger, an increase in the competitiveness of upstairs market making may lead to a greater reduction in upstairs market liquidity.  Key words: brokers, dealers, brokerage firms, upstairs markets, liquidity, unexpressed demand, trading preferences, business relationships, market fragmentation. The term "upstairs" market refers to a market where buyers and sellers negotiate trades in designated "upstairs" trading rooms of brokerage firms. By contrast, a market such as an exchange floor, or its electronic equivalent, is known as a "downstairs" market. Upstairs markets are generally believed to have certain special properties, which Seppi (1990) and Grossman (1992), among others, have argued improve liquidity provision.
One such property is that upstairs market makers know the identities, and other characteristics, of their customers. (1) Seppi (1990) argues that superior knowledge of customers allows upstairs market brokers to screen informed trades, which enhances upstairs market liquidity. Robust empirical support for Seppi’s (1990) screening hypothesis, by Smith, Turnbull and White (1999) among others, implies that superior knowledge of customers is a distinguishing characteristic of upstairs market brokers. Furthermore, Grossman (1992) argues that familiarity with investors' trading preferences and willingness to trade in certain states of the world allows upstairs market makers to provide enhanced market liquidity. For instance, customers may want an upstairs market broker to buy or sell a certain quantity of stock repeatedly in pre-specified intervals or whenever the price falls within a pre-specified range. If a large enough number of shares of a relatively illiquid stock will be traded, the order may need to be executed in multiple transactions over long periods. In such cases, upstairs market makers can learn information that will affect the future price of the security. Grossman (1992) refers to investors’ likelihood of future trading as the unexpressed demand of investors. Bessembinder and Venkataraman (2004) use data from the Paris Bourse, and Booth, Lin, Martikainen, and Tse (2002) use data from the Helsinki Stock Exchange to find strong evidence consistent with the hypotheses of Grossman (1992) and Seppi (1990).  In Grossman’s theoretical model (1992), the upstairs market is composed of identical market makers that observe the expressed and unexpressed demands of all investors using the upstairs market, i.e. the total upstairs market order flow. (2) However, faced with switching costs, investors are likely to concentrate trades with a small number of upstairs market brokers, especially for the most frequently traded, i.e. liquid, securities. 
Moreover, the search costs of upstairs brokers will limit the number of counterparties they contact during searches. These costs will make it likely that at least some portion of the unexpressed demand of investors will be hidden from the upstairs market. Although trading and professional relationships among upstairs market makers may enable the dissemination of information about investors’ trading likelihoods and preferences, potential losses from front-running practices are likely to reduce information sharing. (3) Furthermore, common practices like preferencing and internalization can also limit information diffusion. (4) Conversely, mergers of market making firms, or the hiring of individual brokers from competing firms, are likely to improve the information environment of the upstairs market. For example, an individual upstairs broker moving to another brokerage firm may transport some of the knowledge of the trading habits of the old firm's customers to the new brokerage firm. Furthermore, investors themselves may actively seek liquidity by shopping around in the upstairs market among different brokerage firms. During the search process investors are likely to reveal their trading habits and preferences, and thus their unexpressed demand, to many brokerage firms in the upstairs market. However, changes to the upstairs market’s clientele and to investors’ trading habits should prevent unexpressed demand from ever becoming fully public. This paper summarizes the set-up of the Grossman (1992) framework. It then extends the framework to capture varying levels of informational fragmentation in the upstairs market related to investors’ unexpressed demand. Expressions are derived for the upstairs market price, equilibrium number of market makers, and execution quality. In particular, the following contributions are made to the upstairs markets literature. 
The relative liquidity advantage of an upstairs market is found to vary inversely with the proportion of the private component of unexpressed demand, as characterized by the concentration of investor-broker trading and business relationships in the upstairs market. Furthermore, when some unexpressed demand is private, greater competition for upstairs market making, as indicated by a larger equilibrium number of market makers, may adversely affect liquidity. This is because, while reducing the price impact of investors’ expressed demands, an increase in the number of upstairs market makers also causes greater fragmentation of the information about unexpressed demand. Another finding is that given all else remains equal, the equilibrium price of a security in the fragmented upstairs market model carries a premium over Grossman’s (1992) model. The premium increases with the proportion of private unexpressed demand, and arises because in a fragmented upstairs market some information about unexpressed demand is unobserved and therefore not priced in equilibrium. This leads to an overvaluation of the security as compared to the informationally integrated upstairs market.  
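The overvaluation result can be illustrated with a purely toy calculation (not the paper's model): if only the publicly observed share of unexpressed sell-side demand is priced, the equilibrium price misses the private share. The fundamental value, demand quantity, and price-impact coefficient below are all assumed numbers.

```python
# Toy illustration (not the paper's model): a price that reflects only
# the public share of unexpressed sell-side demand carries a premium
# over the fully integrated benchmark, and the premium grows with the
# private share. All numbers are assumed for illustration.

def toy_price(fundamental, unexpressed_sell_demand, private_share):
    """Price = fundamental minus the price impact of the *observed*
    (public) part of unexpressed sell demand; 0.1 is an assumed
    price-impact coefficient."""
    impact = 0.1
    observed = (1.0 - private_share) * unexpressed_sell_demand
    return fundamental - impact * observed

integrated = toy_price(100.0, 50.0, private_share=0.0)  # all demand public
fragmented = toy_price(100.0, 50.0, private_share=0.6)  # 60% private
premium = fragmented - integrated                       # overvaluation
```

Raising `private_share` raises the premium, matching the qualitative comparative static stated in the abstract.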


Team Effectiveness and Leader-Follower Agreement: An Empirical Study

Dr. Susan D. Baker, Morgan State University, Baltimore, MD

Dr. Daniel A. Gerlowski, University of Baltimore, Baltimore, MD



The role of teams in organizations has become a dominant theme in theoretical, applied, and empirical research.  This paper is grounded in the literatures of leadership, followership, and team effectiveness.  It builds on the work of Sundstrom, McIntyre, Halfhill, and Richards (2000), which called attention to the role of team composition in determining team effectiveness in the workplace.  Our research attempts to determine whether leader-follower agreement about leader and follower characteristics affects team effectiveness.  The role of teams in organizations has become a dominant theme in theoretical, applied, and empirical research.  Further, team abilities and skills represent an area that has reached into most business program curricula as well as becoming a standard skill set required in many occupations at multiple levels.  A related literature focusing on leadership developed over time, and more recently this literature has been extended to include research on followership. This paper is grounded in the team, leadership, and followership literatures.  It extends the work of Sundstrom, McIntyre, Halfhill, and Richards (2000), which called attention to the relationship between team composition and team effectiveness, into issues addressed in the leadership and followership literatures.  Our empirical analysis treats team effectiveness as a function of team homo- or heterogeneity along leadership and followership dimensions, controlling for socio-demographic differences among team members.  Our work relies on survey data from six sites of healthcare organizations in the mid-Atlantic region, drawn in the fourth quarter of 2005.  Respondents completed a questionnaire containing the Leadership Practices Inventory-Self (LPI) (Kouzes and Posner, 2003a), the Performance and Relationship Questionnaire (PRQ) (Rosenbach, Pittman, and Potter III, 1996), and questions about broad socio-demographic status.
The LPI and the PRQ instruments provided data on each respondent's leadership and followership characteristics.  Survey distribution ensured that each respondent's team was identified.  Supervisors who normally evaluated all teams' performance were asked to provide information on team effectiveness.  To determine whether leader-follower agreement about leader and follower characteristics impacts team effectiveness, we employed a variety of empirical tools.  In each case the null hypothesis states that agreement between team leader and team members on selected survey instruments dealing with leader and follower characteristics does not impact team effectiveness.  The alternative, or research, hypothesis states that homogeneity between leader and follower characteristics does impact team effectiveness.  We define the key terms used throughout this research to ensure a common framework, to clarify the main constructs used, and to place them in the context of their literatures.  Effective team: a small task group that has a shared "common purpose, interdependent roles, and complementary skills" (Yukl, 2002), and that satisfactorily meets the task standards and expectations of its organization and clients for "quantity, quality, and timeliness" (Hackman and Walton, 1986).  Follower: an active, participative role in which a person supports the teachings or views of a leader and consciously and deliberately works toward goals held in common with the leader or organization (Baker, 2006). Followership: a process by which a person fills the role of follower, supporting the views of a leader and consciously and deliberately working toward common goals shared with the leader or organization.  The active participation of both follower(s) and leader is essential to the process (Baker, 2006).
Leader: a role in which a person leads, guides, commands, directs, and supports the activities of another or others, commonly called followers, to achieve goals held in common with the leader or organization. Leadership: a process by which a person fills the role of leader, influencing another or others to achieve goals held in common with the leader or organization.  The other(s) are called followers, and their active participation is essential to the process (Baker, 2006). The three constructs examined in this study are team effectiveness, leadership, and followership.  All three constructs are grounded in the literature of their respective fields. In today's organizational milieu and behavioral literature, "teams" receive much attention, whether the team under discussion is called a work group, a work team, a high-performing team, an effective team, a self-directed work team, or a leaderless team.  The growth of the team literature has occurred in conjunction with changes in the American workplace since the 1980s, when the profits and market shares of hierarchical, vertically integrated American companies were challenged by streamlined global competitors (Orsburn and Moran, 2000).  In the 1990s work teams proliferated throughout industry, leading to the creation of classifications of teams.  Defining an effective team, though, proved to be a harder task.  As Hackman and Walton (1986) observed, "there is no single, unidimensional criterion of team effectiveness" (p. 79); team effectiveness requires more than "counting outputs" but must also consider "social and personal criteria."  The definition is made even harder because effectiveness is dependent upon "system-specified (rather than researcher-specified) standards."
Theorists who explored the construct of team effectiveness included Vaill (1978), who examined high-performing systems, and Katzenbach and Smith (1993), who posited a team performance curve that defined five different types of work associations charted on the two axes of performance impact and team effectiveness.
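A minimal sketch of the kind of agreement test described above, assuming hypothetical data: each team contributes a leader-follower gap score on some instrument dimension and a supervisor effectiveness rating, and the two are correlated. The study's actual instruments (LPI, PRQ) and analyses are not reproduced here.

```python
# Hedged sketch: does leader-follower disagreement relate to rated team
# effectiveness? All data are hypothetical placeholders.
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One row per team: |leader score - mean follower score| on a dimension
# (disagreement), and the supervisor's effectiveness rating (1-5).
disagreement  = [0.2, 1.5, 0.4, 2.1, 0.9, 1.8]
effectiveness = [4.6, 3.1, 4.4, 2.8, 3.9, 3.0]

r = pearson_r(disagreement, effectiveness)
# A negative r would suggest that larger leader-follower gaps go with
# lower rated effectiveness, i.e. agreement matters.
```

In the study itself this question is posed as a null/alternative hypothesis pair and tested with several tools; the correlation here only illustrates the direction-of-association logic.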


Improving IT Service Delivery Quality: A Case Investigation

Dr. Jihong Zeng, New York Institute of Technology, Old Westbury, NY



In the e-commerce environment, business has an increasing dependency on information technology (IT) to deliver services to customers. IT service availability has a dramatic influence on customer satisfaction and on the corporate reputation of the enterprise. Consequently, the demand for 24 x 7 service availability is greater than ever. Information systems, which provide the information infrastructure for business applications, have become a critical and integral component of business service delivery. System downtime means loss of revenue and competitive advantage for the business. The business requires IT service providers and information system managers to ensure that service-affecting incidents do not occur, or that efficient and effective remediation is taken to provide high-availability services. Considerable effort and improvement have gone into ensuring high availability within each individual technology sector. However, not enough focus has been given to improving the overall end-to-end IT service availability from the end user's perspective. Without visibility into the overall availability of the underlying components, including information systems, applications, and operational processes, it is impossible to make informed business decisions about IT resources. This paper introduces the ITIL availability management concept and presents how to apply ITIL best practices to decompose service delivery into components or subsystems. A block diagram modeling technique is deployed to assess the overall service availability. This holistic approach helps pinpoint the bottlenecks to the required service level. It also demonstrates the capability to provide cost-effective solutions that improve service delivery for existing as well as future application and infrastructure design and implementation in a highly competitive e-business environment. Service availability has become one of the most important aspects of service delivery in the highly visible e-business economy.
Consequently, the demand for 24-hours-a-day, 7-days-a-week operation is greater than ever. Over the past decade, information technology (IT) has transitioned into a critical role in the enterprise, one that not only supports business service delivery but also helps the business constantly drive innovation and improvement in order to gain an edge over competitors.  IT service downtime imposes a huge loss of revenue on large enterprises. As an example, Table 1 lists service availability, equivalent downtime, and average annual revenue loss for various industries, based on a research survey by Meta Group (Meta Group, 2000). Service availability also has a dramatic impact on customer satisfaction and corporate reputation. This is particularly true when your customers are just a mouse click away from your competitors' offerings in the highly competitive e-business environment (Fisher, 2000).  High availability is not new in the IT industry, and considerable effort and improvement have gone into ensuring high availability within each technology sector. However, risks to service availability may arise from technology, process, and human error throughout the whole IT infrastructure and within every management process (Pope, 1986). Not enough research has been devoted to understanding and improving the overall end-to-end IT service availability from the end user's perspective. Without visibility into the overall availability of the underlying service delivery components, including information systems, applications, and operational processes, it is impossible to make informed business decisions about IT resource investments that provide cost-effective solutions to customers' service level requirements.
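The block diagram modeling technique mentioned above can be sketched as follows: components in series multiply their availabilities, while redundant (parallel) components fail only if every replica fails. The component availabilities in the example chain are hypothetical.

```python
# Sketch of series/parallel availability block-diagram modeling.
# Component availability values are hypothetical placeholders.

def series(avails):
    """End-to-end availability of components in series: all must work."""
    result = 1.0
    for a in avails:
        result *= a
    return result

def parallel(avails):
    """Availability of redundant components: fails only if all fail."""
    unavail = 1.0
    for a in avails:
        unavail *= (1.0 - a)
    return 1.0 - unavail

# Hypothetical end-to-end chain:
# network -> web tier (2 redundant servers) -> app server -> database
end_to_end = series([
    0.999,                   # network
    parallel([0.99, 0.99]),  # redundant web servers
    0.995,                   # application server
    0.999,                   # database
])

annual_downtime_hours = (1.0 - end_to_end) * 8760
```

Decomposing the chain this way makes the bottleneck obvious: the single 0.995 application server dominates the unavailability, so redundancy spent there buys more than further hardening the already-redundant web tier.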


Relationship Between the Use of Internet Information for Health Purposes and Medical Resource Consumption for an English-Speaking Sample

Dr. Hager Khechine, Laval University, Canada

Dr. Daniel Pascot, Laval University, Canada

Dr. Pierre Prémont, Laval University, Canada



Many researchers in the fields of information systems and medical sciences are showing special interest in Internet use for health-related matters, because the Internet is becoming an important source of information for patients and clinicians. Indeed, statistics reveal that almost 113 million U.S. citizens looked for health information on the Internet in 2006. The purpose of this research is to study the relationship between the use of Internet information by English-speaking patients and their consumption of medical resources. We performed a quantitative study based on a ten-item questionnaire. The sample consists of 120 patients suffering from a long-term disease and accustomed to using the Internet for health-related issues. Construct validity and reliability were ensured: most items have loadings greater than 0.5, and the path coefficient between the variables is significant and high. We conclude that patients' use of health information contributes to increasing their healthcare resource consumption. This result can be explained by the fact that patients may misunderstand, be overwhelmed by, or be confused by the poor quality of the information obtained from the Internet. We expect this study to have a theoretical and practical impact on the fields of management information systems and medical sciences. Indeed, we believe researchers should be concerned about the role Internet information can play in the management of medical systems and about the design of health-related websites.  During the last decade, the number of scientific meetings and studies about the use of online health-related information by patients has dramatically increased. Many topics have been investigated, or sometimes treated theoretically. For instance, some studies have focused on the effects of the Internet on the patient-clinician relationship (Hjortdahl et al., 1999; Anderson et al., 2003).
Researchers have also tried to understand the impact of the Internet on the quality of healthcare services (Eysenbach et al., 1999). A survey by Pew Internet & American Life (2003) concluded that Internet information helps patients improve their health, prepare for meetings with physicians, and decide whether other medical consultations are necessary. The topic of this research falls within the field of "cybermedicine"; in particular, we are interested in studying the online distribution and use of health-related information by patients. Numerous health professionals are opposed to the growth of cybermedicine because of its potentially harmful effects on patients. This research attempts to raise awareness about the use of medical information displayed on the Internet. The paper is organized as follows: we first present the background related to the use of the Internet for health purposes. Next, we explain the objective of the research and the research model. Methods for data collection and analysis are detailed in the following section. We end the paper with the results, some topics of discussion, and the conclusion. A growing number of websites are dedicated to health. These websites offer medical information that helps patients make decisions about their health (Mittman and Cain, 1999). Some surveys claim that the number of websites related to health issues was over 15,000 in 1998 (Miller and Reents, 1998); this number reached 20,000 in 2000 (Bush et al., 2000). To our knowledge, there is no exact estimate of the number of websites specializing in healthcare; because of the dramatic growth of this Internet sector, no further statistics have been compiled on this particular topic since 2000. Mittman and Cain identify two driving forces that contribute to this growth. The first is the "pull" factor, which concerns the growth in consumer demand for more health products and services.
The second force is the "push" factor, related to market pressures aiming to meet patients' needs and to create new ones (Mittman and Cain, 1999).  According to Greene (2000), many patients spend more time surfing a healthcare website than they do with their physician. In 1998, 40% of Internet users looked for information about health (Elsberry, 2000). In 1999, the number of individuals who used the Internet for health purposes reached 40 million (Weber, 1999). In 2002, a study by Pew Internet & American Life (2003) concluded that 93 million U.S. citizens used the Internet for health purposes; this number reached 110 million according to a Harris Interactive poll (2002).
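The abstract reports that questionnaire reliability was ensured without naming the statistic used. One common choice for a multi-item instrument like the ten-item questionnaire described is Cronbach's alpha, sketched here with hypothetical responses.

```python
# Hedged sketch: Cronbach's alpha as a reliability check for a
# multi-item questionnaire. The paper does not specify its reliability
# statistic; the responses below are hypothetical (3 items x 5
# respondents on a 1-5 Likert scale, for brevity).

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item."""
    k = len(items)
    item_var_sum = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per respondent
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 3, 4, 1],
]
alpha = cronbach_alpha(items)
# Values above roughly 0.7 are conventionally taken to indicate
# acceptable internal consistency.
```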


Entropy in a Social Science Context

Dr. Joseph L. Moore, Arkansas Tech University, Russellville, AR



The paper gives an overview of the Second Law of Thermodynamics, entropy, and relative entropy, along with a listing of areas where the latter two concepts are being employed today. The principal discussion is in terms of the social sciences. The author gives brief examples drawn from prior research in economics; however, the emphasis is on the research technique employed, in the hope that others will be motivated to try the technique in their own endeavors. Entropy is a concept drawn from physics. In recent years, the notion has been applied to other areas, including most of the social sciences. Starting about 25 to 30 years ago, some economic research began to employ the concept, and since that time there has been a smattering of articles in economics. Nevertheless, in the opinion of the current author, the concept is not well understood. This paper has two purposes: (1) to educate more people on the concept as applied in the social sciences, and (2) to question an interpretation of the concept as employed in another piece of research, done by a different author. In equilibrium, energy tends to flow spontaneously from being concentrated to becoming spread out. The word "tends" implies that the energy can remain concentrated for long periods of time; the second law says nothing about when or how much. The word "spontaneously" means that only the energy in the closed system is available; outside energy can affect the operation of the second law. In other words, equilibrium corresponds to a disordered distribution of the sets, though this is not always true when the sets are influenced by extreme forces. The word "equilibrium" implies an end state.  Some scientists believe that the second law of thermodynamics does not apply to living organisms, although locally resisting the law is necessary for us to be alive. The second law of thermodynamics is frequently referred to as "time's arrow": it points in the direction we think time goes.
This implies that it runs from what we have seen toward, more importantly for this paper, what we think is going to happen. Entropy in a closed system must remain constant or increase. The notion of entropy is being employed today in psychology, sociology, engineering, mathematics, statistics, economics, and information theory. The use of the word in this paper is drawn from psychology, economics, and information theory. Some would suggest that thermodynamic entropy and information-theoretic entropy are not the same concept; however, they are related in that both measure randomness. Claude Shannon is generally regarded as the father of information theory and its entropy measure. Shannon believed that entropy did not apply to the social sciences. Nevertheless, psychologists have attempted to use the notion of entropy to define "cognitive" concepts. Here the word cognitive is being read as "the thing perceived." If we assign specific postulates or claims to be analyzed, then entropy relative to maximum entropy can be defined as the "degree of belief" in the proposition. This is called relative entropy. Maximum entropy occurs when the probabilities are assigned as uniformly as possible, subject to the constraint P(event 1) + P(event 2) + P(event 3) = 1. A model should be chosen that is consistent with the facts and is as uniform as possible in assigning the probabilities. The second law is often read as leading to a state of maximum entropy. Prior Studies. Over the past 30 years, numerous articles published in the American Economic Review have addressed the state of consensus among economists. The initial work was done in 1976 by Kearl, extended in 1979 by Kearl et al., and subsequently extended in 1992 by Alston, Kearl, and Vaughn. Numerous jokes about the consensus among economists have been heard for years.
At a more serious level, “a second, common perception of economists is that there is a widespread and serious disagreement about important issues and hence that economists can contribute little to an analysis, solution, or understanding of these issues” (Kearl et al. 1979).  The real significance of this was expressed subsequently in the article where the same authors state: “the perceptions of irrelevance and/or disagreement may, unfortunately, be used by policy makers to justify the abandonment of analysis and the adoption of simplistic and perhaps superficial answers to complex problems where potential insights might be obtained with economic analysis” (Kearl et al. 1979). A recent study entitled “Consensus Among Economists Revisited”, by Dan Fuller and Doris Geide-Stevenson updates and adds to earlier research. 
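The relative-entropy idea sketched above (entropy expressed as a fraction of the maximum entropy of a uniform assignment) can be illustrated with a short computation. This is a minimal sketch under hypothetical probability assignments; the function names and numbers are not taken from the paper.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum of p * log2(p) over all outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def relative_entropy(probs):
    """Entropy relative to the maximum (uniform) entropy; a value in [0, 1]."""
    h = shannon_entropy(probs)
    h_max = math.log2(len(probs))  # a uniform distribution maximizes H
    return h / h_max

# Three mutually exclusive propositions whose probabilities sum to 1
uniform = [1/3, 1/3, 1/3]   # maximum entropy: no proposition is favored
skewed  = [0.8, 0.1, 0.1]   # concentrated belief: entropy falls below maximum

print(round(relative_entropy(uniform), 6))
print(round(relative_entropy(skewed), 6))
```

A relative entropy near 1 corresponds to maximal uncertainty (belief spread evenly), while values below 1 indicate belief concentrated in some propositions.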


Emergence of Customer-Centric Branding: From Boardroom Leadership to Self-Broadcasting

Dr. Mohammed M. Nadeem, National University, San Jose, CA



With increased global competition, it has become essential for leaders in every industry sector, from commodities to consumer packaged goods, to understand the new and emerging theory and practice of customer service for successful deployment of brands. The brand has become a strategic business concern for every senior corporate executive and board member. This research explores how a customer-centric approach makes a brand not only stronger but also puts it on a path to profitability. It mainly examines how successful boardroom leadership connects the customer to the brand through its motivated associates and all of its stakeholders. The purpose of this study is to demonstrate how emerging self-broadcasting customers become devotees of a brand by experiencing it on a deeply emotional level over time, cementing their loyalty to the products and services of their choice. The final sections discuss the limitations of this exploratory study and provide conclusions and ideas for future research on branding effectiveness. Companies involved in brand creation or transformation should pay as much attention to their internal reality as they do to their customers. The goal should be maximum relevance and alignment with the employee audience. As consumers spend more time controlling, uploading, downloading, filming, recording, and sharing their own personal experiences with products, services, and brands, marketers are expected to figure out how to remain relevant and credible. Brands are also expected to embrace the consumer's desire to create, control, and share, and to empower consumers with simple creation tools that allow them to self-express over and over (Broddy, 2006). People buy products, but they choose brands. So the ultimate marketing goal for any company is to create a brand identity that separates it from everyone else. The strongest identity is that of a leader.
Building a brand leadership identity has four main components: brand awareness, brand perception, brand icons, and brand loyalty. Not surprisingly, brand leaders have the best brand awareness, the best quality perception, the best-known brand-boosting icons, and the strongest brand loyalty. The secret is to create marketing programs that build up all four dimensions simultaneously. The key to building and maintaining brand leadership is a visionary strategy, brilliant execution, and a totally integrated marketing plan. Powerful brands also understand how to build strong brand loyalty, using interactive media, direct response, promotions, web marketing, and many other devices that provide relationship-building experiences. Retaining even a few more customers can boost profits much more than expected, and it is much easier if a company is seen as a leader (BLM, 2007). Brands differentiate, standardize, and customize offerings; reduce risk and complexity; and communicate the benefits and value a product or service can provide. This is just as true in business-to-business as it is in business-to-consumer (Pfoertsch, 2006). We are seeing a consolidation of brands by companies as they try to leverage their promotional and advertising dollars across fewer brands (Chinta, 2006). A brand must deliver a distinctive benefit. Brands will have to stand out, assert uniqueness, and establish identity as never before (Kotler, 2005). Companies are being forced to react to the growing individualization of demand. At the same time, cost management remains of paramount importance due to competitive pressure in global markets. Thus, making enterprises more customer-centric and efficient is a top management priority in most industries. Mass customization and personalization are key strategies to meet this challenge.
Companies such as Procter & Gamble, Lego, Nike, Adidas, Lands' End, BMW, and Levi Strauss, among others, have started large-scale mass customization programs (Tseng and Pillar, 2003). In addition to the usual marketing channels, the visual branding of the software consumers use to interact with retailers and service providers, as well as with their employers, is an increasingly important tool in the endeavor to promote the relationship between companies and individuals (Simon, 1998). Brand equity is a set of assets (and liabilities) linked to a brand's name and symbol that add to (or subtract from) the value provided by a product or service to a firm and/or that firm's customers (Aaker, 1996). Research conducted by Harvard Business School shows that the longer a customer is with a company, the greater the annual profit generated from that customer (Fig. 1). These increased profits come from a combination of increased purchases, cost savings, referrals, and a price premium (Cutler, 2005):
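The retention-profit pattern cited above (annual profit per retained customer growing over time from increased purchases, cost savings, referrals, and a price premium) can be illustrated with a toy calculation. All figures below are invented for illustration and are not the numbers behind Fig. 1.

```python
# Hypothetical yearly profit components for one retained customer.
# Each list covers years 1-5 of the relationship; all numbers are invented.
base_profit   = [20, 22, 25, 28, 30]  # increased purchases over time
cost_savings  = [0, 2, 4, 5, 6]       # a known customer is cheaper to serve
referrals     = [0, 3, 5, 7, 8]       # profit from customers they refer
price_premium = [0, 1, 2, 3, 4]       # loyal customers accept a premium

# Total annual profit is the sum of the four components in each year
annual_profit = [sum(parts) for parts in
                 zip(base_profit, cost_savings, referrals, price_premium)]
print(annual_profit)  # rises each year: retention compounds profitability
```

The point of the sketch is structural, not numerical: each component grows with tenure, so the combined annual profit curve slopes upward, which is the shape the Harvard Business School research describes.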


Downsizing, Corporate Survivors, and Employability-Related Issues: A European Case Study

Dr. Franco Gandolfi, Regent University, Virginia Beach, VA

and Central Queensland University, Rockhampton, Australia



This research article examines the accounts of survivors of reorganization and downsizing processes at a large car manufacturer in Europe. It looks at how corporate downsizing survivors adjusted to the new reality and dynamics of the corporation and how individuals developed new skills and competencies for their new roles and responsibilities within the reorganized firm. The study also reflects upon issues relating to individuals' motivation and attitudes towards employability and learning. The research highlights the onus upon individuals to take responsibility for their own training and development needs and to initiate learning opportunities. The advancement of self-development skills was shown to be of particular importance in transforming a corporation successfully. The occurrences of major organizational change, including restructuring and downsizing, represent some of the most profound (Gandolfi, 2006) and problematic issues facing modern-day corporations, non-profit organizations, governmental agencies, and global workforces (Carbery & Garavan, 2005). Corporate restructuring, or simply 'restructuring', is a relatively broad concept. Black and Edwards (2000), for instance, define restructuring as a major change in the composition of a firm's assets combined with a major change in its organizational direction and strategy. The change management literature distinguishes between various types of restructuring. Heugens and Schenk (2004) present three forms of corporate restructuring, namely portfolio, financial, and organizational restructuring. This research paper is concerned mainly with organizational restructuring, which is defined as a dimension involving significant changes in the structural properties of an organizational entity (Carbery & Garavan, 2005). A multitude of reasons has been put forward to justify the adoption of restructuring (Carbery & Garavan, 2005).
Bowman and Singh (1993) assert that the desire to increase an organization's levels of efficiency and effectiveness is generally at the core of managerial thinking and action. Prechel (1994) contends that organizational restructuring is not a primary strategy per se, but occurs as a "by-product" (Carbery & Garavan, 2005: 489) of portfolio or financial restructuring. This is mainly because changes in the strategic and financial capital structures of an organization are likely to call for corresponding changes in an organization's authority hierarchies (Prechel, 1994) and decision-making processes (Carbery & Garavan, 2005). Organizational downsizing, or 'downsizing', on the other hand, constitutes a particular category or form of corporate restructuring (Carbery & Garavan, 2005). Downsizing generally involves a reduction in personnel (Cameron, 1994) and frequently results in the redesign of work processes to improve organizational productivity, efficiency, and effectiveness (Kozlowski, Chao, Smith, & Hedlung, 1993). Since the early 1990s, downsizing has generated a great deal of interest among scholars and managers alike (Gandolfi, 2007). As a consequence, a considerable body of literature on the phenomenon of downsizing has emerged (Gandolfi, 2006). Carbery and Garavan (2005) view downsizing as "a deliberate strategy designed to reduce the overall size of the workforce" (p. 489). Downsizing is distinguished from non-intentional forms of organizational size reduction, and a variety of downsizing techniques has appeared, including natural attrition, hiring freezes, early retirements, and, more frequently, layoffs (Gandolfi & Neck, 2005). Downsizing is used reactively in order to avoid bankruptcy and secure survival (Fisher & White, 2000) or proactively in order to increase productivity and enhance competitiveness (Gandolfi, 2007).
Some research points out that downsizing is commonly adopted after large investments in labor saving technologies have been made by the organization (Carbery & Garavan, 2005). De Vries and Balazs (1997) deem downsizing an inevitable outcome and manifestation of globalization where organizations are continually forced to make adjustments to strategies, products, services, and the cost of labor. At its core, downsizing has regenerative purposes (Carbery & Garavan, 2005), yet empirical evidence suggests that the overall consequences of downsizing are persistently negative (Gandolfi, 2006, 2007). A substantial amount of scientific and anecdotal research has been generated on survivor illnesses, or the so-called “survivor syndrome” (Gowing, Kraft, & Quick, 1998; Carbery & Garavan, 2005). Cross-sectional and longitudinal data suggest that downsizing survivors exhibit a plethora of symptoms and illnesses, including decreased levels of commitment, loyalty, motivation, trust, and security (Gandolfi & Neck, 2005). A considerably less researched area concerns the extent to which downsizing survivors adjust to the new realities and dynamics of the organization, develop new skills and competencies, and take on new roles and responsibilities within the organization (Gandolfi, 2006).


The Move Towards Convergence of Accounting Standards Worldwide

Dr. Consolacion L. Fajardo, National University, CA



This paper discusses the theoretical bases for the move towards convergence of international accounting standards. It looks into the efforts of the U.S. Financial Accounting Standards Board, the International Accounting Standards Board, and the European standard setters to achieve the convergence of accounting standards worldwide. The benefits and the problems accompanying implementation are addressed. The expectation is that establishing common standards of accounting internationally will benefit users and preparers by improving the consistency, comparability, reliability, and transparency of financial information reported by companies around the world. As a consequence, it is expected to increase cross-border investments, deepen international capital markets, and reduce costs for multinational companies that must currently report under multiple national accounting standards. Many corporations are multinationals with business operations in different countries around the globe. However, accounting standards differ from country to country due to differences in legal systems, levels of inflation, culture, degrees of sophistication and use of capital markets, and political and economic ties with other countries (Spiceland et al., 2007). These differences cause huge problems for multinational companies. Companies doing business in other countries experience difficulties in complying with multiple sets of accounting standards in order to produce financial statements reconciled to the GAAP of the countries they are dealing with. As a result, different national standards impair the ability of companies to raise capital in international markets. The financial crisis in Asia and the accounting scandals in the U.S.
and other countries during recent years have accentuated the fact that reliable financial reporting is vital to the effective and efficient functioning of capital markets and the productive allocation of scarce economic resources. The failures of Enron, WorldCom, and Parmalat demonstrate the high costs of "window-dressed" financial statements, not only to particular companies but also to the global economy as a whole. Markets penalize uncertainty: continued investor concern about the quality of financial reporting and corporate management will be an impediment to economic growth, job creation, and personal wealth (Tweedle and Seidenstein, 2005). The Sarbanes-Oxley Act of 2002 was the immediate U.S. response to curtail unethical accounting and business practices, imposing monetary penalties and/or jail terms on violators. But that is not enough; it is expected that rigorous, improved, and uniform accounting and reporting standards would lessen the risk of corporate scandals, reduce losses and costs to investors and creditors, and restore public confidence worldwide. Accounting standards differ from country to country, which causes problems for multinational companies. A company doing business in more than one country must prepare financial statements based on those countries' accounting standards to make the financial information consistent in terms of standards and thus comparable for economic decision-making. It is costly and time-consuming to prepare financial statements reconciled to the GAAP of the various countries that companies deal with around the globe. Consequently, different national standards may become an impediment for companies desiring to obtain capital or make investments in international markets.
Currently, subsidiaries of multinational companies must comply with different national standards, and the parent company must consolidate the different national financial reports into single statements in accordance with its home country's accounting rules. This process, called reconciliation, is very costly, time-consuming, and a waste of scarce resources. This paper includes a review of the literature in an attempt to answer three questions: (1) What are the theoretical bases for the move to converge accounting standards globally? (2) What is the process of convergence followed by the FASB, the IASB, and the European standard setters? (3) What are the benefits and problems in implementing the convergence of accounting standards worldwide? The FASB (1976) defines a conceptual framework as a constitution, a coherent system of interrelated objectives and fundamentals that can lead to consistent standards and that prescribes the nature, function, and limits of financial accounting and reporting. The fundamentals are underlying concepts of accounting that guide the selection of events to be accounted for, the measurement of those events, and the means of summarizing and communicating them to interested parties.


Human Resource Management and Strategy in the Lebanese Banking Sector: Is There a Fit?

Dr. Fida Afiouni, American University of Beirut, Beirut



This article investigates the nature of the Human Resource Management (HRM) practices applied in the Lebanese banking sector, examines the strategic nature of the HRM function, and sheds light on current problems that hold the human resource department back from properly implementing its practices. The case study method was applied to 10 banks in Lebanon of different sizes and nationalities, with the resource-based view (RBV) as the main theoretical framework. The dominant findings indicate that, of the 10 banks studied, seven have HRM practices in place that are not aligned with the bank's strategy. In those banks, the absence of top management's support, the lack of cooperation of line management, and the low credibility of the HR function hinder the proper implementation of HR practices and keep the HR department from playing a strategic role. The role of the HR department in many organizations is at a crossroads. On one hand, the HRM function is in crisis, increasingly under fire to justify itself (Schuler, 1990). On the other hand, organizations have an unprecedented opportunity to refocus their HRM systems as strategic assets. Many scholars (Huselid, 1995; Huselid & Becker, 1996; Huselid, Jackson, & Schuler, 1997) agree that a strategic approach to human resource management requires the development of consistent human resource management practices that ensure that the firm's human resources help achieve the firm's objectives. This strategic approach requires top managers' awareness that a firm's performance can be affected by human resource management practices, and some empirical studies support this statement (Arthur, 1994; Huselid, 1995; Huselid & Becker, 1996). While these studies have been useful for demonstrating the impact of strategic human resource management on a firm's performance, they have revealed very little regarding the proper implementation of those practices.
The aim of this article is to identify the HRM practices applied in the Lebanese banking sector, examine the strategic nature of the HR department, and investigate the factors that impede the proper implementation of those practices. The literature on HRM and organizational strategy is critically examined, with the resource-based view as the main theoretical framework. The research methods are then described. Finally, major conclusions and avenues for further research are proposed. Traditionally, the human resource management function played a role in strategy implementation, but rarely in strategy formulation. Often viewed as an expense generator or an administrative function, the HRM function is currently imposing itself as a value-added partner. Over the years, many scholars and practitioners (Hall, 1993; Huselid et al., 1997; Ulrich, 1997; Barney & Wright, 1998; Wofford, 2002; Hatch & Dyer, 2004; Ordonez de Pablos, 2004) have placed the emphasis on making HR managers strategic business partners and making people a value-added source within organizations. The role of the HR function, however, has evolved at paces that differ from one organization to another and from one country to another. This creates heterogeneity in human resource management systems and practices across organizations and across countries. While in some organizations the human resource management function is well developed and plays a strategic role, in others the personnel department is still prevalent, with its focus on administrative and legislative issues. Thus, we observe a large diversity in the conception of the human resource management function in its practices, roles, and objectives. Within the strategic human resource management literature, some scholars adopt a universal approach and recommend the implementation of "best practices" for strategic human resource management (Arthur, 1994; Pfeffer, 1994; Huselid, 1995; Becker & Huselid, 2000).
This paradigm uncovers a generic set of high-performance work practices. According to Becker and Huselid (2000), seven programs can improve a firm's performance: employability, selective recruitment, teamwork and decentralization, high remuneration, intensive training, eliminating inequalities and boosting team spirit, and extensive information sharing. Other scholars adopt a contingency perspective and seek a "best fit" between the company's strategy and HR practices (Wright, 1998; Gratton & Truss, 2003). These authors state that there are no good or bad HR practices, only practices that "fit". Becker and Gerhart (1996) made a valuable contribution to this field by claiming that the two approaches are complementary and by elaborating the "bundles and firm-specific configuration" approach, which seeks both a horizontal fit (among the HR practices) and a vertical fit (between the HR function and the firm's strategy). Another study, conducted by Huselid et al. (1997), distinguishes between human resource management's technical and strategic efficiency.


Social Structure Characteristics and Psychological Empowerment: Exploring the Effect of Openness Personality

Dr. Sarminah Samad, Universiti Teknologi MARA, Shah Alam, Malaysia



The purpose of this paper is to determine the influence of social structure characteristics on employees' psychological empowerment, and whether openness personality moderates that relationship, among Customer Marketing Executives of a telecommunication firm in Malaysia. Hierarchical regression analyses of 482 responses revealed that all aspects of social structure (self-esteem, power distribution, information sharing, knowledge, rewards, transformational leadership, and organizational culture) are important in determining employees' psychological empowerment. Further, the openness personality variable was found to moderate the relationship between social structure characteristics and employees' psychological empowerment. Theoretical and managerial implications of the findings and future research are discussed. Increased global competition, the advent of technological innovation, globalization, and changes in both workforce and customer demographics have pressured organizations to be more efficient and productive. Consequently, to maintain a competitive edge in the service industries, considerable emphasis has been placed on providing quality services for customers. The Malaysian telecommunication industry is no exception. Customer Marketing Executives in this sector are front-line employees who provide face-to-face service for the organization. Front-line employees, according to Daniel et al. (1996), have direct, influential customer contact that can shape customers' perceptions of service quality. Conduit and Mavondo (2001) suggested that the motivation of front-line employees in a service company is crucial to the service delivery process. The literature has documented that the positive attitudes and behaviors of employees are related to customers' experience of service (Chebat & Kollias, 2000).
Therefore, improving the motivation of employees has become an important concern among managers in service organizations. Additionally, the dynamic business environment has been forcing most organizations to change their traditional approach to management, because traditional management techniques have become obsolete. Further, rapid technological change has created a new competitive landscape in which demanding customers with individual needs emerge from the changing environment. Therefore, adopting new management approaches to boost organizational performance and service quality, while maintaining high levels of motivation, is a priority for managers. One of the newer techniques used by organizations, and one that has attracted great interest from scholars and practitioners, is psychological empowerment. Empowerment in the workplace is important because it is related to personal outcome variables such as perceived burnout, autonomy, feelings of job satisfaction, and commitment to the organization (Hatcher & Laschinger, 1996). Further, according to Conger and Kanungo (1988), the practice of empowering subordinates is a principal component of managerial and organizational effectiveness. Characterizing the concept as a dynamic and complex phenomenon, Staw and Epstein (2000) stressed that this technique has a significant effect on firm performance and reputation. The empowerment topic has received a great deal of interest in the past decade, and numerous studies have been directed at determining its causal antecedents (for example, Spreitzer, 1996, and Thomas & Velthouse, 1990). The topic has also received substantial attention in past research due to its significant impact on work attitudes such as effectiveness, strain, and satisfaction (Hatcher & Laschinger, 1996). Employee empowerment has been defined in several different ways, reflecting diverse definitions in the scholarly literature (Heller, 2003).
Scholars have distinguished two main views of empowerment: the structural and the psychological perspectives. Structural empowerment focuses on empowering management practices, such as the delegation of decision making from upper to lower levels of the organization (Heller et al., 1998) and increasing access to information and resources among individuals at lower levels (Rothstein, 1995). Accordingly, the main idea of structural empowerment is that it entails the delegation of decision-making prerogatives to employees, along with the discretion to act on one's own (Mills & Ungson, 2003). From the psychological approach, Conger and Kanungo (1988) define empowerment as the motivational concept of self-efficacy. Thomas and Velthouse (1990) describe empowerment as intrinsic task motivation that cannot be captured by a single concept. They define psychological empowerment as a set of four cognitions reflecting an individual's orientation to his or her role in terms of meaning, competence (similar to Conger and Kanungo's self-efficacy), self-determination or choice, and impact or influence.
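The moderation analysis described in the abstract above (hierarchical regression in which an interaction term is entered after the main effects, with an increase in explained variance indicating moderation) can be sketched as follows. The data are simulated and the variable names are hypothetical; this is not the study's actual analysis or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 482  # sample size matching the study; the data themselves are simulated

# Hypothetical standardized predictors: a social-structure score and openness
structure = rng.normal(size=n)
openness = rng.normal(size=n)
# Simulated outcome with a built-in interaction (moderation) effect
empowerment = (0.5 * structure + 0.3 * openness
               + 0.4 * structure * openness + rng.normal(size=n))

def ols_r2(X, y):
    """Fit OLS with an intercept and return R-squared."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: main effects only; Step 2: add the interaction (moderator) term
r2_main = ols_r2(np.column_stack([structure, openness]), empowerment)
r2_mod = ols_r2(np.column_stack([structure, openness,
                                 structure * openness]), empowerment)

# A meaningful R-squared increase at step 2 indicates moderation
print(round(r2_mod - r2_main, 3))
```

In practice the step-2 increment is tested with an F-test and the predictors are mean-centered before forming the interaction term; the sketch only shows the hierarchical-entry logic.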


The Impact of the Asian Tsunami Attacks on Tourism-Related Industry Stock Returns

Dr. Chih-Jen Huang, Providence University

Dr. Shu-Hsun Ho, Providence University

Chieh-Yuan Wu, Providence University



The Indian Ocean experienced a devastating tsunami on the morning of 26 December 2004. This study utilizes a market-adjusted returns model of event study to analyze abnormal stock returns in Thailand's tourism industry. The study differs from previous studies of market reactions to unanticipated events in that it offers cross-country analysis. We investigate reactions of tourism-related industry stocks in the following markets after the Asian tsunami event: Taiwan, Hong Kong, New Zealand, and Australia, from June 2004 to March 2005. In addition, this research compares differences in the abnormal returns of the tourism and leisure, transportation and logistics, insurance, construction materials, and construction development industries in Thailand from June 2004 to March 2005. We examine the stock market reaction for the 135 days prior to, and the four days and 15 days following, the Asian tsunami event. Results of the analysis show partially significant negative abnormal stock returns for the tourism and leisure industry in Thailand. On the other hand, there are also partially significant positive stock returns in the construction development and construction materials industries after the tsunami occurred. No significant influence of the tsunami is found on Taiwan, Hong Kong, New Zealand, or Australia. The tsunami had a significant impact across Asia; the underlying earthquake was the fourth-largest shock recorded since the beginning of the 20th century. The massive earthquake, which registered 9.0 on the Richter scale, caused serious destruction across fourteen countries. The most serious damage occurred in Indonesia, Sri Lanka, India, and Thailand. In this study, we focus on an injured country, Thailand, in order to analyze whether the tsunami event benefited other countries.
Recent reports indicate that high numbers of tourists have visited countries such as Taiwan, Hong Kong, New Zealand, and Australia since the Asian tsunami. The study mainly explores stock prices in these countries. Because of difficulties in collecting stock prices from South Asian countries, and because tourism took the most serious hit to the Thai economy, we investigate Thailand, where the market reaction was massive, together with the countries that may have benefited: Taiwan, Hong Kong, New Zealand, and Australia. Previous event studies have mostly examined man-made crises; exploring a natural crisis is unusual. This research aims to assess the psychological impact of the Asian tsunami damage in terms of the stock market reaction. We examine whether the Asian tsunami influenced enterprise stock prices, emphasizing the tsunami's influence across different industries in Thailand and in other countries when a disaster occurs. According to a recent report by Fidelity, an investment institution, the disasters in South Asia wounded many industries, including tourism and leisure, transportation and logistics, and insurance, while the construction materials and construction development industries may have benefited from the disaster. Therefore, we suppose that these industries' stock market prices may have been affected by the disaster. Further, we analyze how the Asian tsunami incident affected companies and industries at a common date. We study the effects of the Asian tsunami of December 26, 2004 on stock prices using the econometric method of event study analysis. According to extant research, event studies generally examine either types of events or a single event.
This study’s research object is the Asian tsunami, a single event, so it cannot adopt traditional event study methodology. Therefore, we use a non-traditional event study method based on the portfolio estimation models suggested by Grace, Rose and Karafiath (1995) and Shen and Lee (2000). The standardized tests of Patell (1976) and Boehmer, Musumeci, and Poulsen (1991) have been shown to outperform traditional, non-standardized tests in event studies. However, standardized tests are valid only if observed returns are cross-sectionally uncorrelated. In this paper, we propose simple corrections to these test statistics to account for such correlation.  Accordingly, we examine cumulative abnormal returns for individual companies in the tourism industry in Thailand, Taiwan, Hong Kong, New Zealand, and Australia. In addition, we utilize cross-sectional analysis to examine the effects of important financial factors on abnormal returns.  In this study, our primary challenge is selecting public companies whose names indicate involvement in tourism, because Hong Kong has no global SIC code for tourism. The sample is retrieved from a historical stock price database for the 2004-2005 period, with the criterion that the firm have sufficient data for each variable in this study’s model.  The identification of time parameters is then divided into two parts: defining the event day, and defining the estimation period and event period. The details are as follows: I. Define the event day: after deciding on the event to examine, we must confirm when it took place.
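The market-adjusted returns model described above can be sketched in a few lines of code. This is an illustrative implementation under simplifying assumptions (daily simple returns, a single firm, no standardization corrections), not the study's actual estimation program; the window lengths mirror the 135-day estimation period and 15-day post-event window mentioned above.

```python
import numpy as np

def market_adjusted_event_study(stock_ret, market_ret, event_idx,
                                est_len=135, post_len=15):
    """Market-adjusted returns model: AR_t = R_t - R_m,t.

    stock_ret, market_ret -- 1-D arrays of daily returns
    event_idx             -- index of the event day in the arrays
    est_len               -- estimation-period length before the event
    post_len              -- event-window length after the event day
    """
    ar = np.asarray(stock_ret) - np.asarray(market_ret)   # abnormal returns
    est = ar[event_idx - est_len:event_idx]               # estimation period
    evt = ar[event_idx:event_idx + post_len + 1]          # event period

    car = np.cumsum(evt)                                  # cumulative abnormal returns
    sigma = est.std(ddof=1)                               # estimation-period volatility
    # Naive t-statistics for the CAR over event windows of growing length
    t_stats = car / (sigma * np.sqrt(np.arange(1, evt.size + 1)))
    return car, t_stats
```

A significantly negative t-statistic over the post-event window would correspond to the negative abnormal returns reported for Thailand's tourism and leisure industry.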


The Relationship between Leadership Behavior and Organizational Performance in Non-Profit Organizations, Using Social Welfare Charity Foundations as an Example

Dr. Ruey-Gwo Chung, National Changhua University of Education, Taiwan

Chieh-Ling Lo, National Changhua University of Education, Taiwan



Although the main mission of an NPO is “not for profit,” it must still pay attention to effective management practices.  The current study took social welfare charity foundations as subjects and used a questionnaire to explore the effects of top managers’ leadership behavior on organizational performance.  Among the 77 valid samples, leadership behaviors in 10 social welfare charity foundations were “high transactional-low transformational,” 23 were “low transactional-low transformational,” 35 were “high transactional-high transformational,” and 9 were “low transactional-high transformational.”  In addition, different leadership behaviors produced clear differences in internal communications and management and in financial structure.  From the perspective of full-time employees, top managers’ leadership behavior tends towards “low transactional-low transformational,” while volunteers regard it as “high transactional-high transformational.” The recent trend in Taiwan towards a well-developed society and a high standard of living has fostered the gradual emergence of the Non-Profit Organization (NPO).  Taiwan’s democratization and the proclamation of related laws and regulations have also enhanced the advance of NPOs.  Even so, compared with for-profit businesses, which must emphasize innovation, efficiency, and institutionalization to survive, many NPOs lack organized management, which could lead to trouble.  To solve these problems, efficient human resource management is a priority, since the ability of an NPO to provide services is related to the quality of its personnel.  NPOs employ both full-time employees and volunteers, who require different management approaches.  Based on the literature review, it was found that previous studies mainly focused on the volunteer side, with few discussing the actual management of an NPO.  
In addition, appropriate leadership behavior is important for maintaining members’ devotion to the organization.  Therefore, this study uses social welfare charity foundations (SWCF) as an example and focuses on the following purposes: (1) understanding which leadership behavior is adopted by top managers in NPOs; (2) exploring whether or not the leadership behavior of top managers that is suitable for full-time employees is also suitable for NPO volunteers; (3) investigating the effects of different top managers’ leadership behaviors on organizational performance in NPOs. The Non-Profit Organization: The Non-Profit Organization (NPO) is labeled “the third sector,” distinct from the business and government sectors.  Based on the main economic activities of NPOs, Salamon and Anheier (1997) classified NPOs into 12 groups: culture and entertainment, education and research, health care, social service, environment, residence and development, law and politics, charity, international activities, religion, business, and others (Yeh, 2000).  Social welfare charity foundations (SWCF) have the following characteristics: (1) the SWCF is not based on profit, so its organizational performance neglects profit; (2) its funding sources are public donations, government subsidies, and revenues from the services it performs.  As a result, all of those entities – the SWCF itself, donors, government agencies, and the public – are concerned about the performance of SWCFs; (3) the SWCF is less formalized and centralized, since the contributions of both its professional employees and volunteers play an important role in it; (4) the main task of a SWCF is to provide services for the disadvantaged, where the measurement of service quality depends on the personal perceptions of those being served.  Thus, the use of subjective indices to evaluate organizational performance is problematic (Lin, 2000). 
Leadership Behavior: One definition of leadership is the process whereby one person tries to influence others to attain expected objectives in a group of two or more persons.  Generally speaking, leadership behavior can be categorized as either transformational or transactional.  Bass (1985) defined transformational leadership as behavior that inspires members to perform above expectations, i.e., enhancing members’ confidence and upgrading the value of work outcomes to inspire members’ extra effort.  Transformational leadership comprises the following dimensions: (1) Idealized influence: this dimension focuses on the leader’s personal characteristics; the leader provides mission and vision and enhances subordinates’ self-respect to win their respect and trust; (2) Individualized consideration: this dimension focuses on the leader’s concern for every employee’s development and individual differences.  This type of leader not only satisfies employees’ current needs, but also assists them in fulfilling their potential; (3) Intellectual stimulation: this dimension focuses on the leader encouraging subordinates to use their experience and knowledge to solve problems.


A Critical Process for Methods Selection in Organizational Problem Solving

Chia-Hui Ho, Far East University, Taiwan



This paper aims to explore a critical process for evaluating management methods. It also aims to discuss, from a critical systems perspective, how world views (which necessarily have ideological aspects to them) influence method-users to choose particular methods for organizations. Thus, a new process called Participative Method Evaluation (PME) is established. PME is founded on the idea that a person's understanding of a method is influenced by his/her social ideology. The basic concern of method evaluation needs to be how method-users and organizational/environmental stakeholders can examine their ideological differences through processes of critique in order to make more informed choices.  PME embraces three stages: Surfacing, Triangulation and Recommendation. Surfacing aims to expose and explore the various assumptions about, and views on, the candidate method and the organizational situation. Triangulation compares and contrasts the various perspectives, and if possible an accommodation of views is sought. Recommendation provides practical suggestions to stakeholders as to the likely effects of using the method being evaluated, and where appropriate highlights possible modifications and/or alternatives.  Human beings follow patterns of behavior based on their knowledge. It is claimed that knowledge is necessarily derived from individual experience combined with social and cultural influences (e.g. Gregory, 1992), and this knowledge can be seen as a basis for the individual's value judgment. From Burrell and Morgan's (1979) point of view, individuals always hold a particular world view (a so-called 'paradigm'), according to which they perceive reality. This world view is derived from their learning experience and personal beliefs. Although an individual's world view might shift, he/she cannot hold two different world views at the same time. 
Thus, at a particular point in time, an individual can only interpret anything according to his/her current state of awareness. The question therefore arises: how can we escape from our own value assumptions (ideological traps) and socio-cultural judgments? Moreover, what can we do to deal with different social judgments and individuals' personal assumptions, in order to handle social conflict? Commonly, the people affected by the decision to use particular methods are not involved in the intervention process. Those who are affected are often unable to tell the method-users which method they think will be suitable. This means that we should not predetermine what method will be applied without first understanding the current situation, especially who is included in and excluded from the method choice procedure. Many critical systems thinkers (e.g. Ulrich, 1983; Midgley, 1992, 1997a) have already acknowledged this problem, as have the authors of Total Systems Intervention (Flood and Jackson, 1991; Flood, 1995). This paper is concerned with the underlying assumptions made by method-users, candidate methods (as expressed in the writing of their authors), and stakeholders in and beyond the organization. It argues that methods should not be classified into fixed categories. Instead, a method should be interpreted according to the current organizational context and the method-users' assumptions. The process of interpretation should be critical, in that assumptions should be subject to review and, as far as possible, be made transparent to, and open to change by, those who will be affected by the intervention.  The significant question that needs to be addressed is: who should be considered as stakeholders in a method evaluation process? Answering this question will indicate whose views (and associated ideologies) might need to be considered when it comes to applying the method evaluation process. 
The stakeholder concept "enables an organization to identify all those other organizations and individuals who can be or are influenced by the strategies and policies of the focus organization" (Fill, 1995, p.23). This paper first discusses the nature of participation before identifying three groups (and sub-groups) of stakeholders who are involved in, or affected by, intervention, and so need to contribute their views about the candidate method. It then argues that the three (or more) perspectives on the candidate method offered by these stakeholders give a more complete picture of its suitability than a method-user could generate without stakeholder participation. Having reviewed some key assumptions concerning the need for ideology-critique, and the importance of considering the perspectives of the method-user, the candidate method, and both organizational and environmental stakeholders, it is now possible to draw these assumptions together to create a new method for method evaluation.


Are Real Estate and Stock Markets Related? The Evidence from Taiwan

Dr. Ning-Jun Zhang, Southwestern University of Finance and Economics (SWUFE), P.R. China

Dr. Lii Peirchyi, Tamkang University, Taiwan, Republic of China

Yi-Sung Huang, Southwestern University of Finance and Economics & Ling Tung University, Taiwan



This paper studies the long-run relationship between the real estate and stock markets in Taiwan over the 1986Q3 to 2001Q4 period, using both the standard cointegration tests of Johansen and Juselius (1990) and Engle and Granger (1987) and the fractional cointegration test of Geweke and Porter-Hudak (1983).  The results from both kinds of cointegration tests indicate that these two markets are not cointegrated with each other.  In terms of risk diversification, the two assets may therefore be included in the same portfolio. Knowing and testing the long-run relationship between real estate and stock markets is very important for portfolio investors who want to diversify across these two asset markets.  If asset markets are found to have a long-run relationship, this would suggest that there may be little long-run gain, in terms of risk reduction, from holding such assets jointly in a portfolio.  Previous empirical studies have employed cointegration techniques to investigate whether such long-run benefits from international equity diversification exist (see Kwan et al., 1995; Masih and Masih, 1997).  According to these studies, asset prices from two different efficient markets cannot be cointegrated.  Specifically, if a pair of asset prices is cointegrated, then one asset price can be forecast (is Granger-caused) by the other, and such cointegration results suggest that there are no gains from portfolio diversification in terms of risk reduction.  This study attempts to contribute to this line of research by exploring whether there exist any long-run benefits from asset diversification for investors in Taiwan’s real estate and stock markets.  In this study, we test for cointegration using both the standard cointegration tests of Johansen and Juselius (1990) and Engle and Granger (1987) and a fractional cointegration test.  
The results from all three tests suggest that these two asset markets are not pairwise cointegrated.  The finding of no cointegration can be interpreted as evidence that there are no long-run linkages between these two asset markets, and thus that potential gains exist for investors from diversifying across them over this sample period.  These results are valuable to investors and financial institutions holding long-run investment portfolios in these two asset markets.  The remainder of this study is organized as follows.  Section II reviews the previous literature. Section III presents the data used.  Section IV presents the methodologies and discusses the findings.  Finally, Section V concludes. The relationship between stock prices and real estate prices has been the subject of substantial debate in both the academic and practitioner literature.  The current literature on the relationship between the equity and real estate markets tends to show conflicting results.  Much of the empirical evidence supports the notion that the two markets are segmented.  For example, Goodman (1981), Miles et al. (1990), Liu et al. (1990) and Geltner (1990) have documented the existence of segmentation between various real estate markets and stock markets.  However, Liu and Mei (1992) and Ambrose et al. (1992), along with Gyourko and Keim (1992), have produced contrary results indicating that the real estate and stock markets are integrated.  It thus remains unclear whether the real estate and stock markets are segmented or integrated.  The primary objective is to ascertain whether any significant relationship exists between these markets and what implications this may have for active market traders.  A simple motivation for our study is that it can yield insights that may help investors and speculators forecast future performance from one market to the other. 
The data sets used here consist of quarterly time series on the stock price index (lstkp) and the real estate price index (lresp) covering the 1986Q3 to 2001Q4 period.  To avoid omitted-variable bias, we also incorporate the real interest rate (liret) into our study.  The stock price index and real interest rate were obtained from the AREMOS database of the Ministry of Education of Taiwan.  The real estate price index was collected and constructed by Hsin-Yi Real Estate Inc. Descriptive statistics for both real estate and stock market returns are reported in Table 1. 
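The Engle-Granger procedure used in the paper can be illustrated with a minimal two-step sketch. This is a simplified, numpy-only version under stated assumptions (no augmentation lags, no deterministic trend terms, simulated series in place of the AREMOS/Hsin-Yi data); a real application would use a full implementation with proper critical values.

```python
import numpy as np

def engle_granger_stat(y, x):
    """Minimal Engle-Granger two-step sketch (no augmentation lags).

    Step 1: static OLS of y on x gives candidate cointegrating
            residuals u_t.
    Step 2: Dickey-Fuller regression du_t = rho * u_{t-1} + e_t;
            a large negative t-statistic on rho rejects the null of
            no cointegration (compare with Engle-Granger critical
            values, roughly -3.4 at the 5% level for two variables).
    """
    beta = np.polyfit(x, y, 1)            # step 1: static OLS fit
    u = y - np.polyval(beta, x)           # cointegrating residuals
    du, lag = np.diff(u), u[:-1]          # step 2: DF regression inputs
    rho = (lag @ du) / (lag @ lag)        # OLS slope, no intercept
    resid = du - rho * lag
    se = np.sqrt((resid @ resid) / (du.size - 1) / (lag @ lag))
    return rho / se                       # DF t-statistic

# Simulated quarterly series standing in for lstkp and lresp:
rng = np.random.default_rng(42)
x = np.cumsum(rng.normal(size=62))        # random walk, ~1986Q3-2001Q4 length
y_indep = np.cumsum(rng.normal(size=62))  # independent random walk
y_coint = 2.0 * x + rng.normal(scale=0.3, size=62)  # cointegrated with x

stat_indep = engle_granger_stat(y_indep, x)  # usually fails to reject: no cointegration
stat_coint = engle_granger_stat(y_coint, x)  # strongly negative: cointegration
```

The paper's finding of no cointegration corresponds to the first case: the test statistic is not negative enough to reject the null, so the two markets share no long-run equilibrium relationship.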


The Effect of Convertible Debt Issuance on Product Market Competition

Jie Yang, Huazhong University of Science and Technology, Wuhan, PRC

Dr. Xinping Xia, Huazhong University of Science and Technology, Wuhan, PRC



This paper investigates the effect of convertible debt issuance on the outcome of a Cournot game. Under the assumption of no default risk, and compared with standard equity or debt financing, the conversion feature of convertible debt serves as a device committing the issuer to a conservative stance in the normal case of returns, thus encouraging an aggressive stance by its rival firm. This strategic disadvantage of convertible debt can explain the long-run underperformance observed after issuance. This paper investigates the effect of convertible debt issuance on strategic output market behavior. The relationship between a firm’s financing policy and its product market strategy has been recognized since the innovative study of Brander and Lewis (1986). They point out that Cournot firms subject to market uncertainty will use the limited liability effect of debt to commit to increased output in an attempt to gain a strategic advantage. The basic point is that shareholders will ignore reductions in returns in bankrupt states, since bondholders become the residual claimants in those states. Since then, this area has attracted considerable research. Maksimovic (1988) extends Brander and Lewis’ model of the strategic effects of the limited liability of debt by considering multiple periods of interaction. Showalter (1995) analyses the optimal strategic debt choice under Bertrand (price) competition. Glazer (1994) distinguishes between short- and long-term debt when analyzing the relationship between capital structure and product markets. These works helped establish the principle that a firm’s financial decisions and its product market strategy interact. Previous studies, however, have restricted attention to a subset of the feasible instruments, such as the simple mix of debt and equity. A natural question is how more sophisticated financial instruments, such as various kinds of convertibles, affect the product market outcome. 
In this work, we analyze how the conversion feature of convertible debt changes the market strategies of rival firms in a Cournot game. We point out that under the presumption of no default risk, in the normal case of returns (that is, better states of the world lead to higher marginal profits), any aggressive stance by the issuing firm to increase output will induce convertible debt holders to convert to common stock so as to share earnings with shareholders in good states, while keeping it as straight debt to receive the fixed repayment in bad states. Thus, managers maximizing current shareholders’ wealth will not take an aggressive stance, and the conversion feature of convertible debt serves as a device committing the firm to a conservative output stance in product market competition. The foresighted rival firm anticipates this, which encourages its adoption of an aggressive strategy of increased output. As a combination of straight debt and contingent equity, convertible debt is usually interpreted as a hybrid security that reduces the information and agency costs of external finance. Green (1984) shows that a mix of convertible securities and debt is superior to straight debt because the conversion option reduces the inclination of the entrepreneur to engage in risky projects. Myers (1998) suggests that convertible debt can mitigate both the over-investment problem and the under-investment problem at the same time, based upon the conflict between the shareholders (owners) and management. Isagawa (2000) proves that convertible debt is superior to common debt and equity in controlling managerial opportunism under certain conditions. These researchers predict that appropriately designed convertible debt will help restore investment incentives so that managers make efficient capital expenditure decisions. In these studies, the role of the product market in which the firm operates is to provide an exogenous random return that is unrelated to financial policy. 
However, recent empirical evidence finds poor long-run operating performance and stock price underperformance following convertible debt issuance (Lee and Loughran, 1998; Lewis et al., 2001), which does not support these theoretical predictions. Our work suggests that while the conversion feature of convertible debt can help reduce the agency costs of outside finance, its negative effect on product market behavior may have been neglected by previous studies, leading to the inconsistency between models and evidence.  The contribution of this paper to the literature is that it identifies the effect of the conversion feature of convertible debt on the market outcome. When a firm issues convertible debt to avoid agency costs, the negative effect on its market strategy may be foreseen by its rival, and thus constitutes a strategic disadvantage for the issuing firm.
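The strategic logic can be illustrated with a textbook linear Cournot duopoly (inverse demand P = a - q1 - q2, constant marginal cost c). This is a generic sketch with illustrative parameters, not the authors' model, but it shows the mechanism at work: when one firm is committed to a conservative output below the Nash level, its rival's best response rises above the Nash quantity.

```python
def best_response(a, c, q_rival):
    """Cournot best response: argmax over q of (a - q - q_rival - c) * q."""
    return max((a - c - q_rival) / 2.0, 0.0)

def cournot_equilibrium(a, c, iters=100):
    """Iterate best-response dynamics to the symmetric Nash equilibrium,
    where each firm produces (a - c) / 3."""
    q1 = q2 = 0.0
    for _ in range(iters):
        q1, q2 = best_response(a, c, q2), best_response(a, c, q1)
    return q1, q2

a, c = 10.0, 1.0                            # illustrative demand and cost
q1, q2 = cournot_equilibrium(a, c)          # both converge to (a - c) / 3 = 3.0

# If firm 1 is committed (e.g. by its financing choice) to a conservative
# output of 2.0, below the Nash level, firm 2's best response exceeds it:
q2_aggressive = best_response(a, c, 2.0)    # (9 - 2) / 2 = 3.5 > 3.0
```

In the paper's setting, the conversion feature plays the role of that commitment device: anticipating the issuer's conservative stance, the rival expands output, which is the strategic disadvantage of convertible debt.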


Creating and Sustaining Competitive Advantages of Hospitality Industry

Hui-O Yang, Swinburne University of Technology, Melbourne, Australia

Dr. Hsin-Wei Fu, Leader University, Tainan, Taiwan



This study provides a meaningful framework for the assessment of competitive advantage.  The main purpose of this study is to explore how to create and sustain competitive advantages in the hospitality industry by scanning the business environment, both external and internal.  External environment factors comprise country, industry, stakeholders, competitors, strategic networks, differentiation, and branding.  Internal environment factors include the resource-based view, human resources, and information technology. Hospitality is the welcoming of strangers as guests into one's home to dine or lodge.  It provides customers with both tangible and intangible goods, that is, products and services.  Adding value for customers, employees, and owners has become a central theme in strategic management for hospitality companies.  To create value for these stakeholders, a firm should achieve a competitive advantage over its competitors by adapting itself to an uncertain industry environment, understanding the changing needs of customers, and responding to new market entries (Byeong and Haemoon, 2004).  Achieving competitive advantage has been recognized as the single most important goal of a firm.  Without achieving competitive advantage, a firm will have few economic reasons for existing and will finally wither away (Porter, 1980).  It is generally accepted in the strategic management literature that executives who are able to scan their business environment, both external and internal, more effectively will achieve greater success (Olsen, Murthy and Teare, 1994).  This success will come if they are able to match the threats and opportunities in that environment with appropriate strategies.  Hospitality executives must analyze both external factors and internal resources to develop a strategic plan and obtain competitive advantages (Harrison, 2003a and 2003b).  
Figure 1 presents a framework of environmental scanning, which is used to explore the external and internal environments in order to investigate how to create and sustain competitive advantages in the hospitality industry. Country analysis is a kind of general macro-environment analysis.  Firms must analyze large amounts of demographic, economic, cultural, social, political, religious, and legal data to determine the markets that are most receptive to their product and service offerings.  Country analysis can be used to identify an appropriate location and to tailor the offerings as much as possible to the tastes of people in that location (Crook, Ketchen and Snow, 2003). Porter (1980) provides a framework that models an industry as being influenced by five industry forces, known as Porter’s five-forces approach (PFA).  The PFA adopts an outside-in approach to understanding competitive advantage, in that it views competitive advantage as stemming from these five industry forces.  This approach is based on the assumption that firms within an industry possess identical or similar resources.  As a result, a firm’s success depends greatly on how it reacts to market signals and how accurately it predicts the evolution of the industry structure (Byeong and Haemoon, 2004). The threat of new entry refers to the prospect that new players will enter an industry.  New entrants generally erode industry profits if entry barriers are low.  However, the likelihood of new entry is low if entry barriers are high; these include anything that discourages new competitors from entering the industry, such as product differentiation, the threat of severe retaliation against newcomers, exclusive contracts, high capital requirements, saturated distribution channels, large economies of scale, and restrictive government regulations (Crook, Ketchen and Snow, 2003; Harrison, 2003a and 2003b).  
When entry barriers are high, existing firms enjoy a measure of protection that can inhibit rivalry and enhance profits.  In the hospitality industry, entry barriers are not particularly high (Harrison, 2003a and 2003b).  Firms must also consider the viability of substitutes.  The threat of substitutes is one of the major factors intensifying competition in the lodging industry (Byeong and Haemoon, 2004).  For example, teleconferences using video equipment or the telephone can affect lodging operators by reducing business travelers’ room nights.  When close substitutes are available, firms must devise ways to make their products or services more attractive than the substitutes (Crook, Ketchen and Snow, 2003). Competitors have economic power based on their ability to compete.  Competitors with disproportionately strong resource bases can be aggressive and create strong rivalry (Smith, Ferrier and Ndofor, 2001).  It is important to define the nature of rivalry in each market, as well as in the industry as a whole (Harrison, 2003a and 2003b).  When the intensity of competitive rivalry is high, profits suffer.  Rivalry is enhanced when industry growth is low, because growth-minded companies must steal customers from other firms to meet growth objectives.  Also, if customers can easily switch among providers, or if there is a lack of differentiation among providers, firms must compete on price to attract customers (Crook, Ketchen and Snow, 2003).  However, competitive pricing is the least desirable type of competitive strategy for the hospitality industry, because it benefits only the lowest-cost producer and can easily be copied, resulting only in short-term gains (Wong and Kwan, 2001).


Backward Integration and Risk Sharing in a Bilateral Monopoly: Further Investigation

Dr. Yao-Hsien Lee, Chung-Hua University, Taiwan

Yi-Lun Ho and Sheu-Chin Kung, Chung-Hua University, Taiwan

Tsung-Chieh Yang, Chung-Hua University, Taiwan



This paper investigates the implications of the first-order conditions à la Lee et al. (2006) to show that the principal’s ordered quantity and profit-sharing ratio (i.e., backward integration) can affect the agent’s cost-reducing effort.  We also state the intuitions behind the propositions in the paper. A considerable agency-theoretic literature has developed recently that addresses the procurement of goods and services as often being characterized by bargaining and contracting between the government (principal) and a single supplier or several suppliers (agents). Papers focusing on this theme (see Baron and Besanko (1987, 1988), Laffont and Tirole (1986) and McAfee and McMillan (1986)) study the purchase of a particular good within a framework in which uncertainty, asymmetric information, and moral hazard are simultaneously present.  In the context of bilateral monopoly contracting practices with uncertainty and asymmetric information, Riordan (1984) establishes necessary and sufficient conditions for the existence of contracts that are efficient and incentive compatible. More recently, Riordan (1990) shows that some backward integration by the risk-neutral principal (downstream firm) is optimal if it increases the risk-neutral agent's (upstream firm's) production, and that backward integration increases with the sunkenness of the agent's investment.  Although risk sharing, moral hazard, and asymmetric information have been studied extensively in the above models, there has been almost no investigation of the extent or precise nature of their effects on a bilateral monopoly that maintains a long-standing relationship, for instance, between business partners.  Lee et al. (2006) extend Riordan's (1984) bilateral contracts model to include moral hazard and backward integration in a framework of long-term business partnership with stable and mutual relationships among trading partners.  
Their model advances the study of uncertainty, asymmetric information, moral hazard, and risk sharing in a procurement contracting framework by introducing backward integration into the model of vertical shareholding interlocks examined in the above models.  Unfortunately, they did not go further to explore the implications of the first-order conditions, which can be used to examine the responsiveness of the agent’s cost-reducing effort to changes in the principal’s ordered quantity. The main purpose of this paper is to use the model of Lee et al. (2006) to discuss the implications of the first-order conditions obtained in their model.  In the process, we demonstrate the impact of changes in the principal’s ordered quantity on the agent’s cost-reducing effort.  The remainder of the paper is organized as follows. Section 2 reviews the basic results of the model of Lee et al. (2006).  Section 3 analyzes the implications of the first-order conditions.  Section 4 concludes the paper. In what follows, we call  an effort subsidy if it is positive and  an effort tax if it is negative.  It is easy to see that Assumption 1 is satisfied as long as we choose the proper specification of parameters. Stated another way, the problem of determining what quantities should be produced can be solved. Although this is an essential problem for the principal, most previous studies have ignored this aspect.  This also allows us to analyze the effect of fluctuations in the quantity ordered by the principal on the principal's backward integration and effort subsidy and on the agent's cost-reducing effort.  Assumption 2 simply puts a positive upper bound on the agent's marginal information cost (or hazard rate).  This also indicates that the marginal information cost for the agent of overstating its true cost cannot be too large.  Now, solving the system of equations by a simple algebraic calculation yields.  
It is easy to see that  (the agent’s cost-reducing effort),  (the principal’s profit-sharing ratio), and  (the principal’s effort subsidy) are all positive.  Since  for expositional ease, we shall refer to it as an effort subsidy. Furthermore,  implies that it is always best for the agent to exert cost-reducing effort. This is consistent with the implications of individual rationality and incentive compatibility.  Equation (13) suggests that regardless of whether the agent has truthfully reported its production cost, the principal should choose profit-sharing and effort subsidy strategies to enforce its contracting mechanism, although the agent will be better off if it uses a truthful reporting strategy because of the property of double separation.


An Analysis of Contingent Contracting in Acquisitions

Dr. David R. Beard, University of Arkansas at Little Rock, Little Rock, AR



The literature has identified various motives for the use of earnout contracting in the acquisition of target firms.  In particular, Kohers and Ang (2000) and Datar, Frankel, and Wolfson (2001) contend that earnouts are relegated to mergers where problems of informational asymmetry and agency are so detrimental that this costly type of contracting must be employed to protect the interests of bidder shareholders and target firms.  This research examines a sample of acquisitions in which earnouts are used and contrasts it with a sample of “traditional” acquisitions to explore specific hypotheses concerning agency, informational asymmetry, and the use of an earnout as a means of financing. Numerous contracting technologies have evolved to reduce some of the problems inherent in merger transactions.  For example, each party has an incentive to propose a contract that overvalues itself and undervalues its opponent, thereby gaining a larger share of any benefits to the merger.  Another possible problem is that informational asymmetries between the two parties may be such that a quality target may not be identified or, if identified, may not be able to credibly reveal its value to the bidding firm.  Among the contracting solutions to these conflicts are the joint venture, the partial acquisition, and the earnout.  The third technique, the earnout, mitigates informational asymmetries by shifting some of the risk of misvaluation to the target firm.  Briefly, in an earnout, the bidder agrees to pay the target an initial amount for the acquisition plus predetermined future payments contingent on the target’s achievement of performance milestones within a specified time period.  In earnouts, the acquired assets can be those of either an entire firm or a subsidiary of a firm.  If a bidder misvalues a target, the contingent payment portion of the deal will be reduced, possibly to zero.  The earnout contract also provides the target with the ability to signal its quality.
Only high quality targets will agree to have a larger portion of the deal paid as a contingent claim based upon future milestones of the combined firm.  The earnout is a relative newcomer among contracting technologies in mergers and acquisitions.  The literature contends that the use of this technique in acquisitions mitigates bidder misvaluation resulting from informational asymmetries between the parties and alleviates the adverse selection problems associated with the significant informational asymmetries and agency problems in these transactions.  Yet another reason for the use of this acquisition vehicle is that it facilitates retaining valuable human capital from the acquired firm.  The contingent nature of this contracting method can be arranged such that owner/operator knowledge is retained, non-compete constraints are placed on these individuals, and the retained human capital has the incentive to put forth optimal effort in order to maximize the contingent payments associated with the earnout.  On the other hand, earnouts impose the costs of inefficient risk sharing, increased contractual complexity, increased administrative costs, and litigation risk, potentially offsetting any informational benefits.  Nonetheless, the use of contingent payments in mergers and acquisitions is growing.  The increased use of earnouts despite their costs and complexity implies that the benefits associated with this acquisition vehicle outweigh its costs.  That is, the gains an earnout creates or the problems it solves must be of some significance in order to outweigh the pitfalls that the use of this contracting technology entails.  The relevance of this study stems from this idea. Bidders propose earnout contracts for a variety of reasons, ranging from reduction of the problems associated with asymmetry of information to reduction of the problems associated with agency.
It is well known that successful bidders in competitive auctions, including mergers, are likely to overbid, whether due to overoptimism and hubris (Roll, 1986) or as a form of winner’s curse resulting from incomplete or uncertain information (Eckbo, et al., 1990).  The latter is especially likely when the target is a private firm, a firm with few assets in place, or when the value of the target is dependent upon the knowledge of the managers or clientele relationships that can easily be “pocketed” and taken to another firm.  In the absence of competition (explicit or implicit) for the target firm, however, the bidder is likely to protect itself against overbidding resulting from incomplete information about the target and offer a lower price. In cases where the target firm is informationally opaque, managers of the target firm are unable to credibly convey their favorable private information to the bidder.  The earnout mitigates this problem through the contingent payments associated with the contract.  The bidder is able to adhere to its valuation of the target by structuring the upfront payment and the contingent payments in such a way that its valuation is verified if the target performs as the bidder predicts.  The bidder and the target agree on contingent payments tied to various milestones concerning future performance and structured to reflect the payoffs each believes appropriate to compensate the target.  If the future milestones are met or exceeded, the target owners will receive higher payouts, which will compensate them in a way more in line with their own valuation.
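The payment structure described above is easy to make concrete. The sketch below, with entirely hypothetical dollar amounts and milestones, shows how the bidder's total consideration splits into an upfront payment plus contingent tranches that are only paid when milestones are met:

```python
def earnout_payout(upfront, contingent, milestone_hits):
    """Total consideration paid to the target: the upfront amount plus
    each contingent tranche whose milestone was actually achieved.
    `contingent` and `milestone_hits` are parallel lists, one entry per
    predetermined milestone (all figures here are hypothetical)."""
    return upfront + sum(pay for pay, hit in zip(contingent, milestone_hits) if hit)

# Bidder offers $50M upfront plus up to $30M across three milestones.
full = earnout_payout(50.0, [10.0, 10.0, 10.0], [True, True, True])    # 80.0
miss = earnout_payout(50.0, [10.0, 10.0, 10.0], [True, False, False])  # 60.0
```

If the bidder misvalued the target, the missed milestones shrink the contingent portion, which is exactly the risk-shifting effect the abstract describes.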


Capital Structure in Taiwan’s High Tech Dot Companies

Dr. Hsiao-Tien Pao, National Chiao Tung University, Taiwan, ROC



This study investigates the important determinants of the capital structures of high tech dot companies in Taiwan using a large panel covering the years 2000-2005. Three time-series cross-sectional regression models (a variance-component model, a first-order autoregressive model, and a variance-component moving average model) and one multiple regression model with 10 independent variables (seven firm-level factors and three macro-economic factors) are employed. The variance-component model has the smallest root mean square error. This indicates that the time-series and cross-sectional variations in firm leverage are very important factors in model fitting. The major distinguishing determinant for high tech dot companies is business risk, which has a positive and significant impact on capital structure. Because high tech is a more speculative industry, greater speculation is associated with both high risk and high investment opportunity. Firms with higher investment opportunity have a higher demand for capital to sustain their investment. Therefore, business risk is positively related to the debt ratio. Managers can apply these results in the dynamic adjustment of capital structure to achieve optimality and maximize firm value. Regarding the qualitative aspects of capital formation within the high tech dot companies of the 90s, we find that beginning about 1995 a mob mentality set in within the investment community. Essentially, no rational reason could be quantified for the ability of the dot coms to attract large amounts of investment capital. That is, on the surface, there seemed to be irrational behavior within the investment community. If we mine the information more deeply, however, it appears quite rational for the venture capitalists to have funded the dot coms to the extent that they did.  In examining the phenomenon of the high tech dot coms, several factors come into play. Firstly, the general economy was doing well and the allure of high tech business was irresistible to stock purchasers.
The thought that much of the world’s business would become internet/computer oriented took root and became the glamorous hot issue of the day. Venture capitalists read the fervor and proceeded to fund startup companies in record numbers. As a result, the capital structure, or the determinants of the capital structure, of the high tech industry seems to be significantly different from that of other industries. Ever since Myers’ article (1984) on the determinants of corporate borrowing, the literature on the determinants of capital structure has grown steadily. Part of this literature materialized into a series of theoretical and empirical studies whose objective has been to determine the explanatory factors of capital structure. Titman and Wessels’ article (1988) on the determinants of capital structure choice examined such attributes of firms as asset structure, non-debt tax shields, growth, uniqueness, industry classification, size, earnings volatility, and profitability, but found that only uniqueness was highly significant. Harris and Raviv (1991), in their similar article on the subject, pointed out that the consensus among financial economists is that leverage increases with fixed costs, non-debt tax shields, investment opportunities, and firm size, and that leverage decreases with volatility, advertising expenditure, the probability of bankruptcy, profitability, and uniqueness of the product. Moh’d, Perry, and Rimbey (1998) employed an extensive time-series and cross-sectional analysis to examine the influence of agency costs and ownership concentration on the capital structure of the firm. Results indicated that the distribution of equity ownership is important in explaining overall capital structure and that managers do reduce the level of debt as their own wealth becomes increasingly tied to the firm. Moreover, Mayer (1990) indicated that financial decisions in developing countries are somewhat different.
Rajan and Zingales (1995) took asset structure, investment opportunities, firm size, and profitability as the determinants of capital structure across the G-7 countries. They found that leverage increases with asset structure and size, but decreases with growth opportunities and profitability; firm leverage is also fairly similar across the G-7 countries. Booth, Aivazian, Demirguc-Kunt, and Maksimovic (2001) took tax rate, business risk, asset tangibility, firm size, profitability, and market-to-book ratio as determinants of capital structure across ten developing countries. They found that long-term debt ratios decrease with higher tax rates, size, and profitability, but increase with tangibility of assets; the influence of the market-to-book ratio and the business-risk variables tends to be subsumed within the country dummies. In time-series tests, Shyam-Sunder and Myers (1999) showed that many of the current empirical tests lack sufficient statistical power to distinguish between the models. As a result, recent empirical research has focused on explaining capital structure choice by using time-series cross-sectional tests and panel data. Recently, some studies have explored capital structure policies using different models in different countries (Francisco 2005; Dirk, Abe & Kees 2006; Fattouh, Scaramozzino & Harris 2005; Chen 2004; Pao & Chih 2006). Furthermore, Kisgen (2006) examined credit ratings and capital structure, and Jan (2005) developed a model to analyze the interaction of capital structure and ownership structure. Though this literature is rich, few articles explore capital structure across different industries.
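The panel approach discussed above separates within-firm (time-series) variation from between-firm (cross-sectional) variation in leverage. As a minimal stand-in sketch, not the authors' variance-component specification, the following demeans leverage and a hypothetical "business risk" regressor by firm before running OLS (the within, or fixed-effects, estimator), using only synthetic data:

```python
import numpy as np

def within_estimator(y, X, firm_ids):
    """Fixed-effects (within) estimator: demean y and X by firm, then run
    OLS on the demeaned data.  This strips out time-invariant firm effects,
    leaving only the within-firm variation the text emphasizes."""
    y = np.asarray(y, dtype=float).copy()
    X = np.asarray(X, dtype=float).copy()
    firm_ids = np.asarray(firm_ids)
    for f in np.unique(firm_ids):
        m = firm_ids == f
        y[m] -= y[m].mean()
        X[m] -= X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic panel: 20 firms over 6 years; leverage depends on a hypothetical
# business-risk regressor (true coefficient 0.5) plus an unobserved firm effect.
rng = np.random.default_rng(0)
firms = np.repeat(np.arange(20), 6)
risk = rng.normal(size=firms.size)
firm_effect = rng.normal(size=20)[firms]
leverage = 0.5 * risk + firm_effect + 0.1 * rng.normal(size=firms.size)
beta = within_estimator(leverage, risk.reshape(-1, 1), firms)
# beta[0] recovers a value close to the true coefficient 0.5
```

A random-effects (variance-component) model like the one the study fits would instead model the firm effect as a random draw, but the within estimator illustrates the same point: ignoring the panel structure would bias the leverage regression.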


“Using Consumer Panel Participants to Generate Creative New Product Ideas”

Jenny Clark, TNS

Clive Nancarrow, University of the West of England, UK

Dr. Lex Higgins, Murray State University, Murray, Kentucky



Organizations continue to attempt to identify new products and product features that will provide competitive advantage.  Globalization and technology development continue, and many companies struggle to keep up with this rapid rate of change.  Creativity is often cited as a prerequisite to organizational success, and the management of most organizations realizes the present environment is one of ‘innovate or die.’  One’s cognitive style, or ‘creativity’ style, has been used many times as a tool to help categorize and better understand creative thinking.  We report the use of creativity style in a large consumer panel in Europe to generate new creative ideas.  Participating panel members tended to adopt one of four cognitive styles when assigned creative problem solving tasks.  We describe these cognitive styles and the categorization of the styles we identified in panel participants. Virtually everyone agrees that organizations must continually improve products and services to compete in today’s dynamic business environment.  Brook and Mills (2003) have pointed out that successful innovation within the organization requires an “aggressive and relentless thrust” toward new ideas for products and services.  However, the manner in which organizations can systematically envision and develop ideas for new products is often not clear.  It is well established that ideas for new products or services that appear within organizations are often ignored or unintentionally suppressed by organizational culture.  In fact, many writers on creativity offer lists of ways that managers intentionally or unintentionally stifle creativity.  One such list can be found in Couger (1995) and is surely familiar to anyone who has tried to introduce a new idea within an organization.  Thus, for every new idea offered up, there is a reasonable-sounding argument against its implementation.
Why do organizations seem to discourage new ideas while often requesting more employee creativity?  First, organizational processes often discourage creative thought and encourage “getting along to get ahead.”  Many organizations systematically discourage the creation and adoption of new ideas without necessarily meaning to.  Osborn has pointed out that “a fair idea is better than a good idea kept on the polishing wheel” (Osborn, 1963).  Thus, organizations often lack the competence to implement ideas as originally proposed; through countless ‘check points’ and ‘management review gates’ they manage to alter new ideas into something that is no better than the solution that was being employed previously. Why do organizations so often seem to resist any way of doing things that appears to be different from the present way?  Most of us naturally resist change associated with adopting different ways of doing things.  Thus, the creative person with a new idea about how to do things often becomes frustrated by the unwillingness of the organization to adopt his or her new idea and ultimately gives up.  Although the person with a new idea may be completely committed to it, others are almost always reluctant to adopt different ways of doing things.  Why do people so often resist change or new ideas?  Miller (1987) has offered a list of reasons people resist change.  First, some may believe that the change is not for the highest good of everyone involved.  Second, it is natural for an individual to fear change based on a possible negative impact on them personally, including threats to their job and status.  Third, a lack of understanding of the need for the change will cause people to become suspicious of it.  Fourth, people might resist change in the organization if they perceive they have suffered somehow from changes in the past.
Fifth, and this is particularly relevant for today’s workplace, people may be concerned that they will not have adequate time to prepare for the coming change.


The Two-stage Optimal Matching Loan Quality Model

Chuan-Chuan Ko, National Chiao Tung University, Taiwan

Dr. Tyrone T. Lin, National Dong Hwa University, Taiwan

Chien-Ku Liu, Jin-wen University of Science & Technology, Taiwan

Hui-Ling Chang, Ming Chuan University, Taiwan



This study attempts to optimize the loan quality requirements of the depositor, the financial institution, and the investment agent in a two-stage loan market. Assuming that the financial institution may completely or partially fail to discharge its liability when a loan claim occurs following each stage, mathematical analysis is employed to identify the threshold of required loan quality and to optimize the allocation of loan amounts in this two-stage loan market. This study defines the financial institution as an enterprise that relies heavily on manipulating financial leverage via minimum capital investment, and whose operating profit derives mainly from the interest spread on loans made against deposit volume; meanwhile, the depositor makes deposits to obtain a steady stream of interest income. However, because of different lending criteria, the financial institution and the depositor have conflicting interests. The financial institution wishes to increase loan credit, but the loan volume is actually the balance held by the depositor. Therefore, the depositor asks the financial institution to raise loan credit quality to better guarantee his/her deposit. Furthermore, the securitization of financial assets has also provided the investor with an alternative financial commodity. The manner in which the financial institution re-packages and offers this financial asset securitization and the manner in which the investor purchases this commodity will also generate different perspectives on the loan quality of asset securitization, subsequently represented by the investment agent, among the financial institution, the depositor, and the investor. Lockwood et al.
(1996) found that when enterprises begin asset securitization, the wealth of automobile manufacturers increases after securitization whereas the wealth of banks decreases, and that the financial institution should improve its capital structure before securitization to promote its financial health. The financial institution attempts to offer secured loans to protect creditors. Dietsch and Petey (2002) designed an optimized capital placement and lending portfolio by calculating the value of small loans in the investment portfolio risk, using an internal credit risk loan model for medium and small enterprises in France. Stiroh and Metli (2003) identified a recent deterioration of loan quality in the US financial industry, with loan volume mostly restricted among large-scale banks and industries whereas credit defects are concentrated among small-scale borrower industries.  Lin and Lo (2006) provided three credit-risk perspectives (deposit account, financial institution, and rating organization) for evaluating different roles in single-term loans; their required and matching loan quality models show that developing a method of improving the risk management mechanism is the key for the financial institution in controlling loan quality under the supervision of the rating organization and depositors. Lehar (2005) modeled a measurement method for banking system risk and estimated the dynamics and correlations among bank asset portfolios. Bank asset portfolios, including loans, tradable securities, and numerous other items, are refinanced by debt and equity; banks that increase equity capital substantially reduce systemic risk. Stein (2005) designed a quantitative method, a simple cut-off approach, to make lending decisions more flexible and profitable. The framework can be used to optimize the cut-off point for lending decisions based on the cost function of the lender.
Instefjord (2005) investigated the phenomenon that financial innovation may increase bank risk in the credit derivative market, despite the importance of credit derivatives for hedging and securitizing credit risk; commercial success determines the overall success of new credit derivative instruments.  This study extends the model of Lin and Lo (2006), describes the credit risk for the single-term evaluation model, and discusses the required loan qualities with multiple objectives for the deposit account, financial institution, and rating investment agent in the two-stage loan market. Suppose that the financial institution may fully clear, partially clear, or fail to clear its debt at the end of each stage, and that the most suitable loan models are sought for the participants in the two-stage setting only.  The numerical analysis also focuses on designing a two-stage loan ratio and discussing the loan placement most suitable for the two-stage loan market. One single financial institution exists in the loan market, one investment agent (the purchaser of the financial asset securitization commodity) operates in this market, and a single depositor provides deposits to this financial institution. Loan decisions on portfolios held by the financial institution comprise two stages (assuming a fixed period in each stage); the financial institution's equity is not permitted to provide financing during the second stage of the loan market, but the loan operation may be completely executed in the first stage after the financial institution provides the deposit reserve.  The interest rate for the depositor during the two stages remains unchanged, and the depositor receives fixed deposit interest. The investment agent who purchases the financial asset securitization commodity (issued by the financial institution to guarantee loan credit) may obtain part of the warrant provided by the financial institution.
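The notion of a required loan quality threshold can be illustrated numerically. The sketch below is a deliberately simplified break-even condition, not the paper's model: it assumes a single repayment probability p per stage, zero recovery on default, and hypothetical deposit and loan rates, so that the financial institution covers the promised deposit interest only when p(1 + r_L) ≥ 1 + r_D:

```python
def required_loan_quality(deposit_rate, loan_rate):
    """Minimum per-stage repayment probability p such that the expected
    loan return covers the promised deposit interest:
        p * (1 + loan_rate) >= 1 + deposit_rate.
    (Simplified: zero recovery on default; rates are hypothetical.)"""
    return (1 + deposit_rate) / (1 + loan_rate)

def two_stage_joint_quality(deposit_rate, loan_rate):
    """With two independent stages at the same rates, the joint repayment
    probability must satisfy p1*p2*(1+loan_rate)**2 >= (1+deposit_rate)**2,
    so the joint threshold is the square of the single-stage threshold."""
    p = required_loan_quality(deposit_rate, loan_rate)
    return p * p

# Hypothetical rates: 2% deposit interest, 6% loan interest per stage.
p_stage = required_loan_quality(0.02, 0.06)    # ~0.962 per stage
p_joint = two_stage_joint_quality(0.02, 0.06)  # ~0.926 over both stages
```

The joint two-stage threshold is lower than the per-stage one, which is why allocating loan amounts across the two stages, as the study does, matters for the quality requirement each participant faces.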


Study on the Motives of Tax Avoidance and the Coping Strategies in the Transfer Pricing of Transnational Corporations

Chen-Kuo Lee, Ling Tung University, Taiwan

Wen-Wen Chuang, Ling Tung University, Taiwan



As globalized production and management groups, transnational corporations have tended to adopt related-party transactions (such as transfer pricing) to reduce their overall tax burden, evade risks, and bypass controls, with tax avoidance as the main objective. Corporations can reduce the tax burden through related-party transactions such as transfer pricing. Therefore, this paper conducts its analysis by establishing a transfer-pricing model and verifying, with cases, the tax-avoidance motives in the transfer pricing of transnational corporations. Finally, we present coping strategies for tax avoidance in the transfer pricing of transnational corporations. After World War II, along with the unprecedented rapid development of business activities, the international trade of transnational corporations became increasingly prominent in the global trading market (Clausing, 2001, 2003). Moreover, much international trade occurred among the inner member companies of transnational corporations (i.e., the related parties). UNCTAD data in 2001 showed that, by then, the international trade of transnational corporations accounted for more than 70% of world trade; about one-third of world trade took place among transnational corporations, and about 80% of technology transfer fees were paid within the same companies. Given its large scale and its unique forms and features, the internal trade of transnational corporations was influencing the host country, the home country, and the world economy (UNCTAD, 2001), and the international community was paying close attention. Some countries strengthened the management of transnational corporations and their internal trade, based on relevant regulations and policies within the framework of WTO multilateral trade systems and regional economic organizations (Barry, 2004, 2005). The price adopted in the internal trade of transnational corporations is usually called the transfer price.
Because the transfer price plays a core role in the internal trade of the transnational corporation, it is the mechanism through which the functions of that trade are achieved. Transfer pricing helps directly realize the adjustment of benefits among the inner member companies of transnational corporations and the related countries. It also ensures the maximization of the overall benefits of transnational corporations across the globe. Furthermore, it hastens the formation of integrated management and economic globalization in transnational corporations, and ensures the possession of monopoly assets and the acquisition of monopolistic profits. Finally, transfer pricing has a direct effect on the economies and benefits of related countries. The transfer price is thus an effective mechanism for realizing the functions of internal trade. In their attitudes toward transnational corporations, developing countries are in a dilemma. On the one hand, their economies cannot develop without foreign capital, so great efforts should be made to attract foreign investment; on the other hand, the transfer pricing in the investment of transnational corporations deteriorates the foreign-capital environment in host countries. The reduction of state revenues would cause the loss of state-owned assets and damage the interests of domestic enterprises. The weighing of these pros and cons decides how the governments of developing countries treat transfer pricing in the investment of transnational corporations.
Elsewhere, there are many research results on transfer pricing, such as the existence of transfer pricing (Copithorne, 1971; Horst, 1971; Kant, 1988), the relation between equity structure and transfer pricing (Svejar and Smith, 1984; Al-Saadon and Das, 1996; Konrad and Lommerud, 1999; Tommy and Guttorm, 1999), the methods of transfer pricing and their influential factors (Tang, 1979, 1980; Wu and Sharp, 1979; Bond, 1980; Yunker, 1982; Borkowski, 1992), and the choices of tax authorities in the adjustment rules of transfer pricing (Copithorne, 1971; Booth and Jensen, 1977; Horst, 1971; Itagaki, 1979; Guttorm and Alfons, 1999). However, nearly all these researchers approached the problem from the perspective of developed countries. Their assumed conditions differ from the actual situation of transnational corporations' investment in developing countries, so their results cannot be directly copied to solve the transfer pricing problems of transnational corporations' investment in developing countries. A developing country should not only create a good environment for investment, providing proper preferential tax policies for transnational corporations to attract foreign capital, but also carry out suitable adjustment rules for transfer pricing to ensure that the deserved interests of developing countries are not damaged (Tommy and Guttorm, 1999). With the integration of the global economy, transnational corporations have played an important role as production organizers and have grown rapidly along with the process of globalization. Transnational corporations organize resources worldwide to realize production and exchange. In this process, their transactions are not completed according to fair market trade; most of them are related-party transactions, that is to say, transactions completed among related parties.
Related-party transactions help transnational corporations reach a series of objectives, such as reducing the tax burden, transferring benefits, evading risks, and bypassing controls. It could be said that related-party transactions have become a necessary business strategy for transnational corporations.
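The tax-avoidance motive behind transfer pricing can be shown with a two-line calculation. The sketch below uses entirely hypothetical quantities, costs, and tax rates (not the paper's model): a subsidiary in a low-tax country sells to its parent in a high-tax country, and raising the internal transfer price books more of the group's profit in the low-tax jurisdiction:

```python
def group_after_tax_profit(transfer_price, qty, sub_cost, parent_resale,
                           sub_tax, parent_tax):
    """After-tax profit of a two-member group in which the subsidiary
    sells qty units to the parent at transfer_price; the parent resells
    at parent_resale.  All figures are hypothetical illustrations."""
    sub_profit = (transfer_price - sub_cost) * qty        # taxed at sub_tax
    parent_profit = (parent_resale - transfer_price) * qty  # taxed at parent_tax
    return sub_profit * (1 - sub_tax) + parent_profit * (1 - parent_tax)

# Subsidiary in a low-tax country (15%), parent in a high-tax country (35%):
low_tp  = group_after_tax_profit(10, 100, 8, 20, 0.15, 0.35)  # 820.0
high_tp = group_after_tax_profit(18, 100, 8, 20, 0.15, 0.35)  # 980.0
```

Pre-tax group profit is identical in both cases; only the split between jurisdictions changes, which is precisely why tax authorities scrutinize related-party pricing and impose adjustment rules.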


Target Costing: A Snapshot with the Granger Causality Test

Dr. Fernando Zanella, United Arab Emirates University, Al Ain, UAE



The target costing strategy takes the market price of a product and works all the way back to the initial costs of its production to achieve the desired profit margin. It stands in sharp contrast to the traditional cost-plus margin. In this article we use the Granger causality test to identify the price-cost directional vector. Most of the Brazilian firms we analyzed did not show an identifiable pattern between price and cost. Target costing is evident in 15% of all firms studied, and in 37.5% of the electric and utilities sector. Two main reasons are supported here. First, the sector works with a single homogeneous product. Second, it is a regulated sector; once the price is set by the regulatory agency, firms can work backwards to reduce costs and achieve certain profit margins. The test used here proves complementary to more common surveys and case studies. Target costing can be briefly defined as a strategy that takes the market price of an established product—or the estimated price of a would-be product—and uses it as a parameter that defines the feasible cost for a desired profit margin. It is meant to be used during the design and planning phases, i.e., prior to the manufacturing phase. Target costing has several interdependent dimensions that can be explored separately or simultaneously. The two main ones are: a) Target costing adoption. Target costing adoption, or the lack of it, is indicative of the firm’s competitive strategy within the industry. During a target costing process, the vector runs from price to cost. This is the opposite of another very common pricing strategy, cost-plus, in which the vector runs from cost to price. During the cost-plus process, a firm adds the desired profit margin on top of the manufacturing cost.
If the market does not accept the final price, the firm might shrink its profit margin, try to redo the manufacturing to cut costs, or—depending on the feasibility of re-manufacturing and on fixed and sunk costs—simply stop producing the product or shut down operations. b) Institutional environment. The number of firms adopting target costing (or not) is an indicator of the institutional environment of the country. If a particular country shows evidence of a substantial number of firms following one particular strategy, it is indicative of the institutions surrounding the firm. For instance, if we observe a country with a significant portion of its industry operating and profiting within a cost-plus approach, this suggests that institutions are open to rent-seeking, i.e., rents obtained from engaging in extra-market activities or, at least, from benefiting from someone who is involved in extra-market activities. A country with a significant portion of the industry operating by the principles of target costing may suggest a more competitive environment, possibly involving cartel controls (formal or informal), an open economy, and so on.  The main objective of this article is to assess the second dimension. The country chosen for the study is Brazil, a country that has evolved from a quite closed economy during the early nineties to a relatively open economy today. More precisely, this article tests the following hypotheses: 1. The selling price determines the production costs. That is, the relationship is single-directional from price to cost. This is the target costing hypothesis (H1). 2. The cost determines the selling price. That is, the relationship is single-directional from cost to price. This is the cost-plus hypothesis (H2). 3. Previous selling prices determine costs, and costs of production determine selling prices, i.e., there is a feedback mechanism. This is the hypothesis of bilateral causality or interdependence between price and cost (H3). 4.
There is no significant statistical relationship between price and cost, inclusive of lagged values. This hypothesis (H4) suggests either independence or an undetermined relationship between the variables. It does not suggest that there is no relationship between price and cost, only that it was not possible to distinguish a statistically significant pattern. The next section briefly mentions some of the previous studies on target costing and describes the method—the Granger causality test—and the data used in this research. The following section presents all results with comments. The conclusion stresses the positive aspects of this research tool, as well as its limitations. Target costing, despite its underlying market-oriented foundation, has not been extensively studied by academicians. As mentioned in the introduction, its adoption—or omission—provides significant evidence of the competitive environment in a country. Studies have been conducted mainly with the following foci: a) theoretical studies dedicated to setting out the process of implementing target costing and its advantages compared with alternative systems—Cooper and Chew (1996); b) case studies that assess the target costing system—Hibbets, Albright and Funk (2003); and c) surveys assessing the adoption of target costing—Dekker and Smidt (2003).
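The directional test the article relies on can be sketched in a few lines. The following is a minimal bivariate Granger causality F-test on simulated data (the series, lag length, and seed are illustrative assumptions; a packaged implementation with proper p-values and diagnostics would be used in practice). It asks whether lagged values of one series improve the prediction of another beyond that series' own lags, which is exactly how H1-H4 are distinguished:

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic for the null that lagged x does not help predict y
    beyond y's own lags (restricted vs. unrestricted OLS)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    n = len(y) - lags
    Y = y[lags:]
    own_lags = [y[lags - k:len(y) - k] for k in range(1, lags + 1)]
    x_lags = [x[lags - k:len(x) - k] for k in range(1, lags + 1)]
    Zr = np.column_stack([np.ones(n)] + own_lags)            # restricted
    Zu = np.column_stack([np.ones(n)] + own_lags + x_lags)   # unrestricted
    rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Zr), rss(Zu)
    return ((rss_r - rss_u) / lags) / (rss_u / (n - Zu.shape[1]))

# Simulated "price" series x that Granger-causes a "cost" series y:
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.6 * x[t - 1] + 0.1 * rng.normal()

f_xy = granger_f(y, x)  # large: lagged price predicts cost (H1 direction)
f_yx = granger_f(x, y)  # small: lagged cost does not predict price
```

A single-directional result from price to cost (f_xy significant, f_yx not) corresponds to the target costing hypothesis H1; the reverse pattern corresponds to cost-plus (H2), both significant to feedback (H3), and neither to H4.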


Teaching Tip: Structuring a Rubric for Online Course Discussions to Assess Both Traditional and Non-Traditional Students

Dr. Ruth Lapsley, Central Washington University, Ellensburg, WA

Dr. Rex Moody, Central Washington University, Ellensburg, WA



Online courses have become increasingly popular.  Students, particularly non-traditional students, appreciate online courses because of their flexibility, including learning outside normal classroom schedule constraints.  This paper discusses how a rubric for an online course was developed to capitalize on the motivated learning style of these non-traditional students.  When the same rubric was used for traditional students taking the class, the assessment did not adequately discriminate among different levels of effort, so modifications were made to the rubric.  It was subsequently used with both traditional and non-traditional students and provided an adequate assessment for both types of learners. Non-traditional students are usually older, have job and family responsibilities, and prefer flexible curricula that allow them to use computers and technology to enhance their learning skills (Nellen, 2003; Wooten, 1998; Wynd & Bozman, 1996).  In addition, they tend to be more motivated and produce higher-quality work than traditional students (Nellen, 2003; Wooten, 1998).  Online course developers frequently concentrate on the technological issues surrounding the delivery method instead of the learning objectives and assessment tools (Su, 2005).  In developing an online course, the learning objectives should be the primary guiding factor.  Students in online courses must be able to easily understand the learning objectives, as these are more critical than the medium by which they are delivered (Su, 2005).  
In the traditional classroom, course expectations are spelled out in the syllabus and the instructor typically explains them to students, often adding more detail throughout the term of the course.  With an online course, by contrast, students do not have ready access to the instructor and must rely more on what is available online to guide their learning.  Lemak et al. (2005) suggest that this type of learning can be considered "limited interactive learning" (p. 152) because it provides some two-way conversation between student and instructor but not the typical classroom interaction.  Students taking online classes are isolated and lose the advantage of interacting with other students.  Because of this isolation, it is important that the course developer emphasizes dialogue and feedback (Littlejohn, 2002) and develops methods to involve students in their own learning. One way for online students to interact on a limited basis with other online students is through discussion boards, either asynchronous or synchronous.  Synchronous discussions are real-time discussions similar to a chat line, and must be monitored to keep students from veering too far from the specified discussion subject.  While these synchronous discussions have the advantage of offering immediate feedback from the instructor and peers, they can be problematic: the major drawback is that not all students are available to participate at a specified time, since many online students work or have other schedule conflicts. To overcome this, the instructor can schedule multiple times each week to synchronously "chat" with students, a method somewhat more appealing to students but not necessarily considerate of instructors' time constraints.  Asynchronous discussion, on the other hand, allows students to interact, but not in real time.  It is somewhat similar to sending an email message, and allows students to choose the time that works best for them to interact.  
Hiltz (1986) found that allowing this personal learning time actually increased the effectiveness of learning by empowering students; this more active role creates ownership of the learning process (Duffy et al., 2004).  When using asynchronous discussions, a grading rubric becomes an important assessment tool for communicating clearly to the student whether learning objectives have been met.  Arbaugh and Hornik (2006) found that communicating high expectations to students resulted in higher perceived student learning and satisfaction with the course. When instructors communicate with students in a traditional classroom or through an online synchronous discussion, the instructor has the opportunity to directly involve students with questions and can help students formulate ideas and demonstrate that effective learning has taken place.  Furthermore, students in a traditional classroom are exposed to lectures, repeated terms, and discussions centered on the important topics, and through this emphasis students come to realize what they are expected to glean as outcomes from the course materials.  With asynchronous discussions, immediate feedback and information exchange are not available, so students need guidance as to whether their online responses are effective.  For effective learning to occur, instructors must spell out in precise detail their criteria for student discussion responses (Gopinath, 2004).  This means that, prior to offering an online course, the instructor must develop tools that clearly indicate to students what is expected of them.  An assessment rubric, a structured guide to what is important in a student’s response and how responses will be graded, is one useful tool in this regard. 


The Exchange Rate Exposure of Chinese and Taiwanese Multinational Corporations

Luke Lin, National Sun Yat-sen University, Taiwan

Dr. David So-De Shyu, National Sun Yat-sen University, Taiwan

Dr. Chau-Jung Kuo, National Sun Yat-sen University, Taiwan



This paper studies the sensitivity of the cash flows generated by Chinese and Taiwanese firms to movements in a trade-weighted exchange rate index, as well as to the currencies of their major trading partners. To overcome the deficiencies of previous research using variations of the market-based model, this paper adopts the polynomial distributed lag (PDL) model to investigate the relative importance of transaction exposure versus economic exposure by decomposing exchange risk into short-term and long-term components. In contrast to the existing market-based model, we find that the PDL model is better at detecting exposures. Furthermore, our empirical results indicate a considerable difference in exposure between Chinese and Taiwanese corporations under the two types of exchange rate regimes.   With China’s entry into the WTO, more and more Chinese firms participate in international business, and understanding the exchange rate exposure of their business becomes more important. Meanwhile, most Taiwanese firms investing in China seek to use cheaper input factors and sell manufactured goods back to Taiwan’s trading partners. Such investment decisions have established a close link between the two markets. While the China and Taiwan markets are inseparable, the exchange rate systems in the two markets are very different. For example, China has officially maintained a pegging regime since 1994, while Taiwan now follows a floating exchange rate policy (Schena, 2005). Therefore, the features of exchange rate exposure in the two markets are important for corporate managers to know before making risk management decisions, as different types of exchange rate systems breed different sets of risk, especially for emerging markets. Booth (1996) notes that three types of exposure are identified in the literature: translation, transaction, and economic exposure. 
However, the impact of fluctuating exchange rates on cash flows excludes translation exposure because this exposure does not affect cash flows (Martin and Mauer, 2005). Transaction exposure, which typically has a shorter-term time dimension, arises because the value of the foreign currency may change between the time a transaction is contracted and the time it is actually settled, and can in most cases be effectively hedged with derivative instruments. Economic exposure, which typically has a longer-term time dimension, arises mainly from changes in sales prices, sales volumes, the cost of inputs, and the competitiveness of the firm, and it is uncertain whether hedging it is useful. We argue that, when studying exchange risk, the results differ depending on whether exposure is analyzed from a stock-return or a cash-flow perspective. Using the real performance of operating income, this paper attempts to investigate the impact of fluctuating currencies on the values of Chinese and Taiwanese companies by decomposing a firm’s overall exchange rate risk into transaction and economic components. The major contribution of this study is that it overcomes a deficiency of prior studies, which have had limited success in detecting significant currency exposure. We further bring forth ways of measuring the potential economic exposure that firms are confronted with. Compared with the capital market approach, we find some evidence of the relative strength of cash flows in detecting exposure in the two emerging markets. Meanwhile, the results indicate a considerable difference in exposure between Chinese and Taiwanese corporations under the two types of exchange rate regimes. 
The existing capital market approach estimates exposure as the sensitivity of stock returns to movements in a trade-weighted exchange rate index while controlling for market movements: Rt = β0 + βm Rmt + βx Xt + εt (1), where Rt is the stock return for time t; Rmt is the market portfolio return for time t; Xt is the percent change in the exchange rate factor for time t; and βx is the foreign exchange exposure, or residual exposure. Using equation (1), Jorion (1990) proposes a two-step estimation procedure in which he first estimates exposure from time-series regressions of firm-level stock returns against market returns and a trade-weighted exchange rate. He then uses the coefficient of the exchange variable as the dependent variable in a cross-sectional regression to be explained by a firm’s characteristics. His results show that out of 287 U.S. multinational corporations only 5.23% (15 firms) exhibit significant exposure. Most of the succeeding studies follow this two-step estimation procedure to examine exposures of firms in different countries (see Table 1). For example, He and Ng (1998) examine the exchange rate sensitivity of 171 Japanese multinationals and find that 26.32% (45 firms) have significant response coefficients. Schena (2005) studies 70 Chinese firms with A and B shares; disappointingly, he finds that only 12.86% (9 firms) of the sample have statistically significant exchange rate coefficients at the 10% level, and none at the 5% level. Muller and Verschoor (2006) reveal that 13.95% (114 firms) of 817 European multinationals have significant exposure. An alternative methodology, the cash flow approach, estimates the effects of exchange rate movements on a firm’s operating income. Because it uses the real performance of operating income, the economic exposure that firms are confronted with can be revealed more accurately.


Investment Under Uncertainty with Stochastic Interest Rates

Dr. Cherng-Shiang Chang, FRM, China University of Technology, Taipei, Taiwan



In recent years, real options analysis, developed to cope with an uncertain future, has had a substantial influence on corporate practice because it offers new insights into corporate finance and strategy (Smit and Trigeorgis, 2004).  In applying options pricing theory, unlike work in the financial derivatives area, most studies of corporate finance and investment problems assume that the underlying dynamics follow a geometric Brownian motion and that the discount rates of the expected cash flows are constant, in order to obtain a closed-form solution.  In the real-world context, however, the dynamics of the underlying usually track a product-specific lifecycle, in contrast to the path increasing with time characterized by geometric Brownian motion.  Further, it is obvious that the discount rates, or risk-free interest rates, are not constant either.  In this article, we relax these restrictions: first, by employing the Ornstein-Uhlenbeck process for the underlying to match the real product-specific lifecycle; second, by setting up the classical Vasicek (1977) model to describe the interest rate dynamics.  The derived partial differential equations (PDEs) are so complicated that a finite difference method is selected and implemented to solve the problem numerically. Project valuation using real options has been a subject of much research during the last 15 years (Ingersoll and Ross, 1992; Dixit and Pindyck, 1994).  Grenadier and Weiss (1997) develop a model of the optimal investment strategy for a firm confronted with a sequence of technological innovations.  Pindyck (1993) is probably the first to treat technical uncertainty exogenously, with the project advancing randomly through its stages.  
Panayi and Trigeorgis (1998) evaluate an IT infrastructure project in two stages: an initial stage in which the organization develops the information systems needed for its future operation, and a second stage in which it proceeds to expand its network.  Brach and Paxson (2001) model investment in the drug development process using a Poisson real option analysis.  Schwartz and Zozaya (2003) employ a two-factor diffusion model to analyze investment in the IT industry in both acquisition and development projects.  Schwartz (2004) argues that patents and R&D projects can also be regarded as a complex option on the variables underlying the value of the project. Our basic model is similar to the model in Dixit and Pindyck (1994).  Consider a firm that proceeds with an irreversible investment by paying a sunk cost I (> 0).  
After the investment in the product is made, the firm receives the revenues and incurs the costs during the product life cycle.  In the present paper, we employ the Ornstein-Uhlenbeck process for the revenues and costs to match the real product-specific lifecycle.  Let EP*[·] denote the expectation under the risk-neutral measure, conditional on the information available at time t.
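The two dynamics the model combines can be simulated side by side with a simple Euler discretization. All parameter values below are illustrative assumptions, not taken from the article: an Ornstein-Uhlenbeck process dS = κs(S̄ − S)dt + σs dW1 for the project's revenue flow, and the Vasicek short rate dr = κr(r̄ − r)dt + σr dW2.

```python
import numpy as np

rng = np.random.default_rng(42)
T, steps = 5.0, 1000
dt = T / steps

# Ornstein-Uhlenbeck revenue process (illustrative parameters).
kappa_s, S_bar, sigma_s, S0 = 2.0, 10.0, 1.0, 2.0
# Vasicek short-rate process (illustrative parameters).
kappa_r, r_bar, sigma_r, r0 = 0.5, 0.05, 0.01, 0.03

S, r = np.empty(steps + 1), np.empty(steps + 1)
S[0], r[0] = S0, r0
for i in range(steps):
    # Euler step: drift pulls each path toward its long-run level.
    S[i + 1] = S[i] + kappa_s * (S_bar - S[i]) * dt + sigma_s * np.sqrt(dt) * rng.normal()
    r[i + 1] = r[i] + kappa_r * (r_bar - r[i]) * dt + sigma_r * np.sqrt(dt) * rng.normal()

print(S[-1], r[-1])
```

Unlike geometric Brownian motion, both paths mean-revert toward S̄ and r̄, which is what lets the revenue process mimic the rise-and-plateau shape of a product lifecycle; in the article itself these dynamics enter the valuation PDE rather than a Monte Carlo simulation.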


A Research Study of Frederick Herzberg’s Motivator-Hygiene Theory on Continuing Education Participants in Taiwan

Dr. Ching-wen Cheng, National Pingtung University of Education, Taiwan



This study seeks to determine the factors motivating on-campus continuing education participants in Taiwan using Frederick Herzberg’s motivator-hygiene theory. Herzberg’s motivator-hygiene theory, also referred to as the two-factor theory, is commonly used in the academic area of organization management (Jones, George, & Hill, 2000). Due to the costs involved and the study’s limitations, the research sample included students enrolled in the “2006 Human Capital Investment Plan” continuing education program at National Pingtung University of Education, a government plan that seeks to improve Taiwanese laborers’ career competency through cooperation with higher education institutions (Bureau of Employment and Vocational Training, 2006). National Pingtung University of Education is a public education institution located in southern Taiwan offering bachelor’s, master’s, and doctoral programs. The purpose of this study is to construct a management perspective on adult learning motivation and to provide program administrators with the students’ motivators. The research determined that the major motivators of adult students’ participation are personal-advantage creation, personal-need recognition, learning enjoyment, program schedule, the institution’s reputation, personal growth, and demand in the new economy. Furthermore, this study also found hygiene factors (organizational policy, new friends, relationships with subordinates, peer pressure, and workplace management authority) to be significant. Based on the data analysis, no significant difference exists between male and female adult students’ motivation for learning. Finally, this study found no significant difference in motivation among adult students of different age groups. In studying adult learners’ motivation, many scholars have tried to develop a theory to explain why adult learners participate in continuing education programs on campus. 
Houle (1961) stated that adults return to school to learn based on three types of learning motivators: job-related reasons, activity-related reasons, and learning-related reasons. Houle’s typology became the first academic document to focus on adult learners’ motivations. Based on the field theory of psychological status, Miller (1967) developed another point of view regarding adult learners’ motivators, believing that an adult participates in education as the result of a social force influencing the individual’s mind. Meanwhile, Boshier (1971) used his congruence model to explain why adults return to school for continuing education programs, asserting that adult learners’ motivational strength relates to a congruence between the educational environment and the individual’s internal psychological status.  As the academic research on motivation expanded, so did the theories about adult learners. Tough (1980) constructed the anticipated benefits theory to explain adult learners’ participation in continuing education programs. Tough assumed that adult learners understand the reason for participating in such programs and expect specific benefits from the learning process. Meanwhile, Cross (1981) developed her chain-of-response model to describe how adults implement participation in education. According to Cross, an individual’s participation in continuing education programs is the result of a chain of responses to several events. Concerned about the lifespan of adult learners, Cookson (1986) developed the Interdisciplinary, Sequential specificity, Time allocation, and Lifespan (ISSTAL) model to describe the participation of adult learners. The major concept of the ISSTAL model is that an adult participates in an educational program as part of his or her social activities. Unlike earlier theories, the ISSTAL model not only explains the motivation of adult learners, but also tries to predict the future participation of adult learners. 
More recently, Henry and Basile (1994) built their decision model to explain why adult learners choose to participate in continuing education programs. According to this model, an adult decides to participate based on the influence of both the learning motivation and the participant block.  Although these theories are useful for understanding adult learners’ motivation to participate in continuing education programs, they focus on the viewpoints of adult learners, not program managers. For those who administer continuing education programs on campus, a need exists to focus on how to attract more adult students to their programs. In response to this demand, the current researcher tried to rethink the issue of adult learning motivation from another viewpoint. In the field of organizational management, the motivator-hygiene theory was developed to describe the relationship between employees and their organizations (Herzberg, 1968; Herzberg, Mausner, & Snyderman, 1959). Given this study’s hypothesis of a significant similarity between the two situations, the motivator-hygiene theory might be able to describe the relationship between adult learners and their institutions. Therefore, the purpose of this study is to construct a new perspective on adult learning motivation, attempting to discover adult learners’ motivators while helping program managers attract more adult students to continuing education programs.  When discussing human needs, an unquestionably significant milestone is the famous hierarchy of needs theory put forth by Maslow (1954), the father of humanistic psychology.


Tax Burden Convergence in EU Countries: A Time Series Analysis

Dr. Tiia Püss, Tallinn University of Technology, Tallinn, Estonia

Mare Viies, Tallinn University of Technology, Tallinn, Estonia



Taxes are an important fiscal policy instrument and the main source of revenue for any country, used to regulate and influence economic and social development. The EU has harmonized standards and regulations in numerous areas; however, there has been a lower degree of harmonization in taxation. Significant measures toward harmonization have been placed strategically on the EU agenda. The aim of this paper is to analyze and compare the trends of the tax burdens in the European Union countries and to test for convergence in taxation using the time-series approach. We use harmonized data on tax revenue and tax burden in the European Union countries collected by the OECD and Eurostat for the period 1970-2004. The issues of economic convergence have been a focus of interest of many theoretical and empirical studies over the last two decades. Many concepts of convergence and different econometric methods for empirical analysis have been proposed. There are two main trends in the methodological approach: the cross-sectional data approach is based on the neoclassical growth model and studies convergence between countries or regions through relationships between the initial economic level and average growth (β-convergence); the time-series approach treats convergence as a stochastic process and uses mainly unit root or cointegration tests in empirical analysis.  In an open economy, the tax policy of one country may affect economic activity and public revenue in another country. Although lower taxes can yield significant efficiency gains, there is a risk that the financing of public goods and social protection will be shifted to the least mobile tax base, labor, or that the production of public goods and the welfare systems will be endangered, especially in those countries where income redistribution, social protection and public goods provision are given a high weight in social preferences. 
The tax harmonization process in the past decade was designed to meet objectives for improving the economic environment and facilitating development, which are still relevant.  Our previous analysis indicated that the tax burden increased in most of the countries over the period 1980-2003 (Püss et al., 2006). A particularly fast growth of tax burdens occurred in the 1980s; growth slowed down in the 1990s, and in 1999-2003 the burden even diminished. The reasons for such tax burden developments have differed across countries. EU-10 countries are characterized by much lower tax burdens than EU-15 countries. Our research also supported, in general, the notion of σ-convergence and β-convergence in tax revenue as a share of GDP in EU-15 and EU-10 countries. In this paper our analysis is based on the concept of stochastic convergence. We investigate the convergence of the tax burden in a time-series framework and use several unit root tests. Using harmonized data on tax revenues and tax burdens collected by the OECD and Eurostat, mainly for the period 1970-2004, we provide an analysis of the main trends of the tax burdens in the European Union countries. As data covering that period are available only for the EU-15, we focus on these countries. According to the economic theory of convergence, the economic development level of less developed countries should approach the level of more advanced countries that have the same economic resources or fundamentals. Socio-economic convergence is mainly discussed in the context of two main economic growth theories: neo-classical and endogenous. Two main concepts of convergence are used in the classical literature of growth theory: σ-convergence and β-convergence (Quah, 1996; Sala-i-Martin, 1996).  
One of the simplest methods for estimating socio-economic convergence is the calculation of σ-convergence, which is based on the standard deviation. With this method it is possible to examine how the dispersion between national income levels (or other indicators) has changed, or how the differences of indicators inside groups of countries are changing compared to the average (Baumol, 1986; Dowrick and Nguyen, 1989; Barro and Sala-i-Martin, 1991, 1992a, 1992b). A declining coefficient of variation (standard deviation divided by the arithmetic mean) of an indicator signals a reduction of the differences, i.e., the presence of σ-convergence.  The test for the presence of β-convergence (Baumol, 1986; DeLong, 1988; Barro and Sala-i-Martin, 1991, 1992a, 1992b; Sala-i-Martin, 1994; Boyle and McCarthy, 1997) posits that β-convergence exists if a poor economy tends to grow at a faster rate than a rich one, so that the poor country tends to catch up in terms of per capita income or product. The literature distinguishes between absolute (unconditional) and conditional β-convergence. Absolute β-convergence pertains to the coefficient β of the bivariate equation, based on the assumption that all countries in the sample converge to the same steady state. Conditional β-convergence pertains to the coefficient β of the socio-economic level variable in an equation that includes additional explanatory variables reflecting differences across countries, which direct each economy to converge to its own steady state. In both cases, the convergence hypothesis is that the growth rate of a socio-economic indicator will be negatively related to the level of this indicator. A simple but unbiased measure of convergence that is consistent with Sala-i-Martin’s (1994) interpretation of β-convergence is γ-convergence, which is concerned with tracking the mobility of individual countries within the distribution of income levels over time.
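The two classical measures can be computed in a few lines. The sketch below uses synthetic tax-burden figures, not the OECD/Eurostat series: σ-convergence is checked via the coefficient of variation at the start and end of the period, and absolute β-convergence via the bivariate regression of growth on the initial level.

```python
import numpy as np

rng = np.random.default_rng(7)
n_countries = 15
# Synthetic tax burdens (% of GDP) in the initial year.
tax_1970 = rng.uniform(20, 45, n_countries)
# Converging dynamics by construction: low-tax countries' burdens grow faster.
growth = 0.8 - 0.02 * tax_1970 + rng.normal(0, 0.05, n_countries)
tax_2004 = tax_1970 * np.exp(growth)

def cv(x):
    """Coefficient of variation: standard deviation / arithmetic mean."""
    return x.std(ddof=1) / x.mean()

# Sigma-convergence: dispersion across countries shrinks over time.
sigma_convergence = cv(tax_2004) < cv(tax_1970)

# Absolute beta-convergence: growth regressed on the initial level;
# a negative slope means low-tax countries catch up.
slope, intercept = np.polyfit(tax_1970, growth, 1)
beta_convergence = slope < 0

print(sigma_convergence, beta_convergence)
```

Note that β-convergence is necessary but not sufficient for σ-convergence, which is why the two measures are reported side by side in this literature.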


Synergy in Business: Some New Suggestions

Dr. Vojko Potocan, University of Maribor, Maribor, Slovenia

Bostjan Kuralt, University of Maribor, Maribor, Slovenia



In the global competitive environment, enterprises can only survive in the long term by permanently improving their business. They have limited resources and face very harsh conditions; therefore, they can significantly improve their business results if they organize their work better, e.g. by implementing potential synergies. The concept of synergetic working is based, also or even primarily, on the application of the process approach and of the (dialectically) systemic understanding of the enterprise as a business system (BS). Synergetic working enables BSs to attain the best overall results by harmonizing the activities of all the BS's parts and considering the needs (and requirements) of the BS's environments. This contribution deals with two theses: 1) synergy is encountered and studied in various sciences, and it is equally important in the business sciences; 2) the process approach and (dialectically) systemic consideration enable us to define synergy in a BS requisitely holistically (in terms of both content and mathematics) and to use it for further consideration of BSs in the business sciences. The new global market pressure makes it increasingly difficult for enterprises to compete rationally (efficiently and effectively), respectably (reasonably, from the aspect of their business behavior image), ethically (a morally appropriate and responsible attitude in harmony with their social and natural environment) and innovatively (novel services/products and production systems, and gaining additional benefits from these).  Important characteristics of many successful enterprises include using the process approach and systems understanding, which enables the effective creation and operation of their business.  On this basis they define their operation as a relatively open and dynamic business system, which mainly depends upon the co-operation and synergy of actions of all areas and levels of their functioning.  
A brief comment about the process approach: for millennia, work processes used to be simpler and less changeable than today; they were physically demanding, required little creativity, and provided little reward. Thus, bosses needed to force subordinates to be productive and to coordinate and direct their activities. These actions require a supportive hierarchy to direct process aspects. In the next step, bosses were many and on many levels. This caused many of them to forget that (1) outcomes result from processes, and (2) the organizational hierarchy is supposed to facilitate processes and to be adapted and subordinated to them.  A brief comment about the concept of systems understanding: in Webster's dictionary (Gove, 1987) the notion "system" has fifteen groups of meanings. Hence, it is unclear. The term "system" has many shades of meaning according to context in systems theory, too (Bertalanffy, 1968; Wiener and Masani, 1976; Mulej, 1979, 2000, 2004; Checkland, 1981; Potocan, 1997, 2002, 2005; Flood, 1999). Our conclusion may encompass the following statements: in order to be understood by the reader, authors must never use the notion "system" without at least an adverb denoting the tacitly or explicitly selected viewpoint, hence the resulting selection of attributes of the object under consideration. "System" is supposed to mean holism of consideration, of which requisite holism is the best attainable level, while fictitious holism is more common because every human is unavoidably a narrow specialist.  An enterprise can be treated as a business, social, natural, or any other system, depending on the authors' selection of viewpoint(s). Also, any organization or individual can be treated as a business system when a special interest is transformed into a business operation. 
Organizations as business systems (BSs) are more and more under the impact of synergistic functioning, which enables the harmonized activity of all parts of the organization, e.g. concerning their environments (business, social, natural, etc.). The introduction of synergetic functioning, instead of functioning as a set of independent rather than interdependent divisions, requires enterprises to reconsider and restructure their intra-business associations, the style and content of their business objectives and their realization, etc. This step may provide a way toward the requisite holism and may result in the improvement of business processes, their management, and outcomes. But this innovation requires more openness and cooperation from BS members.  The enterprise as a business system (BS) is, therefore, faced with many challenges of how to set up its intra-business associations in its new business environment(s), full of interdependent and unavoidably narrow specialists who have had very little education in interdisciplinary and inter-functional creative cooperation. Thus, synergy is a strange notion to most of them.  From the whole area of research on synergy and its role in business, we focus our attention on the following questions: the characteristics of present studies of synergy, the role of synergy in business, and a more holistic definition of synergy in business.  The word synergy originates from ancient Greek, as a compound noun made up of "syn", which means "with, together with, at the same time", and "ergon", which means "work, to work, to function" (Gove, 1987; Bowman and Faulkner, 1997; Black, 1997). The dictionary defines synergy as "collaboration, mutual support and mutual supplementing of two or more forces or organs" (Gove, 1987; Black, 1998). 
Research has developed a series of different concepts of synergy, investigated by numerous different sciences (for example economic, technical, social, sociological, and legal) (Wiener, 1956; Ansoff, 1965; Lange, 1965; Ashby, 1968; Wiener and Masani, 1976; Haken, 1977; Porter, 1985; Kajzer, 1996, 2000; Potocan, 1997, 2002). From a business viewpoint, the concept of synergy was introduced by Ansoff (1965) in connection with strategy in his book “Corporate Strategy”. Ansoff states that “synergy refers to the idea that firms must seek a ‘product-market posture with a combined performance that is greater than the sum of its parts’”, more commonly known as “2+2=5”.


Cross Analysis on the Contents of Children’s Television Commercials in the United States and Taiwan

Yi Hsu, National Formosa University, Taiwan

Liwei Hsu, National Kaohsiung Hospitality College, Taiwan



The influence of television programs on children, TV commercials included, cannot be overlooked.  This study is designed to examine the factors that are decisive for the marketing effects of children’s TV commercials.  Its main themes are how the contents of Taiwanese children’s commercials differ from those of U.S. commercials and what causes these differences.  Six research hypotheses are tested, and the results of the content analysis show that four of the six are accepted. The hypotheses on uncertainty avoidance and masculinity/femininity are not supported by the statistical examination.  Television is a powerful socializing force for children, reaching them daily with extraordinarily high levels of exposure.  Children between six and fourteen years old watch approximately 25 hours of television weekly and view approximately 20,000 commercials annually (Moore et al., 2000).  In November 1999, the U.S. Kaiser Family Foundation published reports on children's and teenagers' exposure to television.  This investigation indicated that American children watched TV for an average of 2 hours and 46 minutes daily.  Since commercials represent around 20 percent of the content of children’s television, television advertising is a pervasive presence in the lives of most American children.  A similar situation exists in Taiwan.  An investigation by a Taiwanese advertising magazine of children in the 4 to 14 year old age group found that television watching was a major recreational activity among this age group in Taiwan.  Furthermore, an investigation by the Foundation of Broadcasting and Television reported that two thirds of Taiwanese children watch TV every day.  Three children’s television channels, Yo Yo TV, the Disney Channel, and the Cartoon Network, broadcast children’s programs and cartoons 24 hours per day. 
Unlike adults, children are relatively uninformed about product quality and prices, and have a comparatively less-developed awareness of the influence of advertising.  Consequently, children are extremely receptive to the messages from television, including messages from TV programming and advertising (Oates et al., 2002).  Television advertising thus significantly impacts children’s values.  Advertisers know that TV advertising must reflect the values of the audience effectively.  Thus, for children, just as for adults, TV advertising must reflect and influence the cultural values of the audience.  These cultural values are dynamic, and can be associated with family structure, economic development, lifestyle, social mobility, and education (Hofstede, 1991). Content analyses of TV advertisements have generally been well researched (Shao et al., 1999, Callcott et al., 1994, Huang, 1995, Cho et al., 1999, Lin, 2001).  However, only a handful of related studies focus on the contents of commercials for children.  Previous research has focused almost entirely on gender stereotyping (e.g. Smith, 1994, Furnham et al., 1997, Browne, 1998, Lin, 2001).  Gender values are not a holistic representation of culture, and can therefore only partially explain cultural values.  This study is designed to describe how the contents of Taiwanese children’s commercials differ from those of U.S. commercials and to explain the reasons that cause these differences.  According to the pertinent literature on communication arts, communication content is the consequence of antecedent conditions or contextual factors, including the culture that has shaped its construction (Riffe, Lacy, and Fico, 1998).  A basic hypothesis of this study is that children’s commercials are a reflection of cultural conditions.  In other words, this type of commercial is developed and broadcast in a society with its culture embedded.  Therefore, differences between Taiwanese and U.S. 
children’s commercials can be predicted and explained by the dissimilarities in the cultures of these two countries.  Under this assumption, this research discusses how Taiwan differs from the United States in each antecedent condition.  Then, research hypotheses are proposed regarding how the contents of children’s commercials might be expected to differ between the two countries as a result of each antecedent condition and the social environment.  To test the hypotheses, the major research method adopted in this study is to tape-record children’s commercials in the two countries for in-depth analysis.   The following sections identify differences in cultural values related to children’s advertising in Taiwan and the U.S., describe the content analysis procedures, discuss the findings, consider their marketing implications, and make suggestions for future research. Cultural variations have increasingly attracted attention from marketing scholars, who have recently begun to employ Hofstede’s culture model as a framework for studying cross-cultural differences (Albers-Miller and Gelb, 1996; Ji and McNeal, 2001).  Hofstede’s (1980) four cultural dimensions, power distance, collectivism/individualism, uncertainty avoidance, and masculinity/femininity, were designed as work values, and have been applied to various marketing topics (Albers-Miller et al., 1996).  Hofstede’s dimensions offer a good basis for understanding the influence of national culture on organizations’ self-representation but miss the actual practice of social activities (Harvey, 1997).  Therefore, this study proposes a fifth dimension, the social factor.  The social factor also influences advertising development and forms part of cultural values. Power Distance. 


The Technology Disruption Conundrum

Von Johnson, Woodbury University, Burbank, CA

Pierre Ollivier, Ecole Polytechnique, Paris, France



During the 20th Century, the filmed entertainment business evolved from a regional studio factory system into the global media and entertainment industry we know today.  Once controlled by a handful of powerful and creative entrepreneurs (the studio ‘moguls’), the seven major studios are today owned by multi-national corporations, transformed into finance, marketing and distribution entities, content owners and licensors, network operators, recording companies and producers, Internet portals, and game producers and publishers.  Some companies are vertically integrated across many or all of these businesses (e.g., Walt Disney, Time Warner, Viacom, NBC-Universal and News Corp.).  Although they may differ in scale and corporate culture, the common thread among them is an economic model that relies on the ability to control levels of presentation quality, and when, where and how consumers access entertainment content.  Traditional peripheral constituents supporting this supply model are the post-production and distribution service providers, and the systems/equipment suppliers that comprise the industry ‘ecosystem’ for content production, preparation and distribution.  This paper presents an argument that evolving media consumption habits are fueling technologies and innovations that threaten the ecosystem by disrupting the studios’ control and empowering consumers.  Moreover, the authors contend that technology disruptions are becoming increasingly more insidious and occurring at shorter intervals.  Information lags create missteps and confusion among the constituents, which negatively impact the entertainment industry’s ability to manage change.   To eliminate the effects of the “Technology Disruption Conundrum”, the authors call for greater transparency and shared strategic planning between studios and their suppliers along the post-production and distribution value chain.  
Historically, the entertainment industry enjoyed relatively long periods of business stability between short periods of upheaval.  In the early to mid 20th century, the economics of the filmed entertainment business were largely under the control of a few “moguls” who owned or controlled practically every component along the value chain, from script to theater.  The studio moguls built their own production factories staffed with creative and craft resources at controlled wages and terms.  Even the most popular (and profitable) actors were held to exclusive, long-term contracts that dictated terms and wages favorable to the moguls.  To complete the monopoly in the United States, the major studios also owned extensive national theater chains that bore their names (Warner, Paramount, Fox, etc.).  Over time, this “studio factory system” collapsed under the weight of government regulation, organized labor and disruptive technologies.  Two outstanding examples of government intervention are the 1948 ‘consent decree’, which divorced the major studios from their domestic theater chains (Aberdeen, 2000), and the embracing of television production by the major studios in the 1970s, when the original three U.S. networks (NBC, CBS and ABC) were banned by the Federal Communications Commission’s Financial Interest and Syndication Rules (Fin-Syn) from financial interest in their productions beyond first-run network broadcasts.  The rise of professional creative unions such as the Screen Actors Guild and the Directors Guild of America fundamentally changed the economics of movie and television production and distribution.  Actors, directors and other creative artisans organized to negotiate performance and participation terms that significantly differed from the previous long-term individual contracts.  
In the early days of the filmed entertainment industry, advancements to the professional tools of the trade (e.g., color, audio, optics) were commonly developed through industry-centric companies such as Technicolor, Deluxe, Todd AO and Panavision.  These improvements were not disruptive because they enhanced the moviegoing experience to the benefit of the studios. Technologies create disruption when they empower consumers with products that a) reproduce creative experiences of a quality similar to those offered by studios (e.g., video cameras, editing software) or b) challenge the studios’ control over when, where and how consumers access content (television, VCRs, the Internet).  It is interesting to note that innovations in the filmed entertainment industry have almost always migrated from expensive professional products into low-cost consumer products; for example, photography, the phonograph, film projection, radio, computers, video cassette recording, etc.  


Improved Algorithms for Lexicographic Bottleneck Assignment Problems

Dr. Zehai Zhou, University of Houston-Downtown, Houston, TX



The lexicographic bottleneck problem is a variant of the bottleneck problem, which is in turn a variant of the traditional cost (or profit) minimizing (or maximizing) assignment problem. In this paper, the author presents two polynomial algorithms for the lexicographic bottleneck problem. The suggested algorithms solve the lexicographic assignment problem with 2n nodes by scaling and/or sequentially solving a set of classical assignment problems, and both algorithms run in O(n^5). These algorithms improve on the previous ones, devised by R.E. Burkard and F. Rendl [3], by a factor of log n. In the special case where all the cost coefficients are distinct, an algorithm with run time O(n^3.5 log n) is presented. A numerical example is also included in the paper for illustration purposes. The cost (or profit) minimizing (or maximizing) assignment problem has been extensively treated in the literature and many polynomial algorithms have been devised to solve it [1, 8]. Several variations of the classical profit (cost) maximizing (minimizing) assignment problem have also been analyzed by researchers, and efficient algorithms are readily available. For instance, Garfinkel [4] and Ravindran and Ramaswany [10] studied bottleneck assignment problems, Martello et al. [7] discussed balanced assignment problems, Berman et al. [2] studied several different constrained bottleneck assignment problems, Seshan [11] discussed generalized bottleneck assignment problems, Geetha and Nair [5] presented a polynomial algorithm to solve the assignment problem with an additional ‘supervisory’ cost, and Hall and Vohra [6] discussed an on-line assignment problem with random effectiveness and cost information. In this paper, the author discusses yet another variant of the bottleneck problem, the lexicographic bottleneck problem. In regular bottleneck problems, the maximum cost (or minimum profit) is to be minimized (or maximized). 
This is, however, sometimes too crude, or the solution is not as desirable as some other alternatives in real-world applications. For example, when one tries to assign n jobs to n workers, one may not only want the maximum completion time to be minimized but may also hope to have the second longest completion time, the third longest completion time, etc., minimized. There are many other real-life instances where one wishes to solve a lexicographic version of the assignment problem instead of a bottleneck assignment problem. This problem was first studied by Burkard and Rendl [3]. They presented two different algorithms for solving this problem. The first was based on scaling of the cost coefficients and the second was an iterative approach. The fundamental idea behind both algorithms is to reduce the lexicographic bottleneck problem to a traditional sum optimization problem by redefining the cost coefficients, and then to solve the constructed sum optimization problem. The computational complexity of both algorithms is O(n^5 log n) in the worst case.  In this paper, the author presents two polynomial algorithms for the lexicographic bottleneck problem. Our algorithms solve the lexicographic assignment problems with 2n nodes by scaling and/or sequentially solving a set of classical assignment problems, and both run in O(n^5). In the special case where all the cost coefficients are distinct, we present an algorithm with run time O(n^3.5 log n). Consider the traditional cost minimizing assignment problem (i.e. the linear sum assignment problem, or LSAP), where cij is the cost of assigning worker i to job j. Let X be the set of all feasible solutions satisfying constraints (2) through (4), and let α1, α2, ..., αn (where α1 ≥ α2 ≥ ... ≥ αn) be the n cost coefficient values corresponding to xij = 1. Obviously, the solution of the LBAP is determined by the order of the cost coefficients rather than the actual values of these coefficients. 
Let m be the number of different values in the cost matrix C and let t1 < t2 < ... < tk < tk+1 < ... < tm denote these values. Instead of dealing with the original cost coefficients, new cost coefficients are defined as dij = k − 1 whenever cij = tk. It is obvious that the new cost coefficients satisfy dij ∈ {0, 1, ..., m−1}. It is also clear that this transformation keeps the relative order of solutions of the (LBAP) with cost matrix C. It now needs to be shown how to solve the LBAP by an LSAP whose cost matrix E is chosen appropriately.  For ease of exposition, let us assume that all the cij are distinct. When all the cij are distinct, there are n^2 different values in matrix C, which means that D consists of 0, 1, ..., n^2−1. Set p1 = 1 and define pk = 1 + (pk−n + ... + pk−1), i.e., each pk is 1 more than the sum of the n preceding values (with pi = 0 for i ≤ 0); the scaled cost matrix E is then obtained by taking eij to be the weight p with index dij + 1.  Proposition 1.  Let cij ∈ N be the cost coefficients of an LBAP. If the cost matrix E is obtained by scaling the original costs cij as described above, then a solution F is optimal for the LBAP if and only if F is optimal for the LSAP with cost matrix E.  Proof.  In any feasible solution, there are n variables (among the n^2 variables) taking the value 1 and all other variables equal 0. The cost coefficients in E are obtained in such a way that element pk is 1 more than the sum of the values of the n elements immediately smaller than pk (i.e. elements pk−n through pk−1, where k − n > 0). Now assume that F is an optimal solution for the LSAP. First of all, the largest cost value α1 is minimized, since its scaled value is 1 more than the sum of any n elements whose orders are lower than the one corresponding to α1 in cost matrix D. Suppose there existed a solution with a lower α1 value; then F would not be an optimal solution for the LSAP, which contradicts the assumption. This argument also holds for all αi, 2 ≤ i ≤ n. So we conclude that F is an optimal solution for the LBAP if it is an optimal solution for the LSAP. On the other hand, if F is not an optimal solution for the LSAP, then there must exist an optimal solution with a lower total cost sum. 
This indicates that there exists a solution with lexicographically smaller bottlenecks (α1, α2, …, αn), because the cost coefficients in E are obtained in such a way that element pk is 1 more than the sum of the values of the n elements immediately smaller than pk (i.e. elements pk−n through pk−1, where k − n > 0). This implies that F is not an optimal solution for the LBAP, which completes the proof.
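The scaling idea behind Proposition 1 can be checked on a small instance with a self-contained sketch. This is not the paper's code: the function names, the example matrix, and the brute-force solvers are ours, and the weight recurrence follows the description of pk above.

```python
# Illustrative sketch: verify on a tiny instance that minimizing the
# scaled sum (LSAP on E) reproduces the lexicographic bottleneck
# optimum (LBAP on C) found by brute force.
from itertools import permutations

def scale_costs(C):
    """Rank the distinct costs (d_ij in {0..m-1}) and map rank k to weight
    p_{k+1}, where p_1 = 1 and each p is 1 more than the sum of the n
    preceding weights, so a larger rank outweighs any n smaller ones."""
    n = len(C)
    values = sorted({c for row in C for c in row})
    rank = {t: k for k, t in enumerate(values)}        # d_ij = rank of c_ij
    p = [1]
    for k in range(1, len(values)):
        p.append(1 + sum(p[max(0, k - n):k]))
    return [[p[rank[c]] for c in row] for row in C]

def brute_lbap(C):
    """LBAP optimum by brute force: the assignment whose decreasingly
    sorted cost vector (alpha_1 >= ... >= alpha_n) is lex smallest."""
    n = len(C)
    return min(permutations(range(n)),
               key=lambda perm: sorted((C[i][perm[i]] for i in range(n)),
                                       reverse=True))

def brute_lsap(E):
    """Classical sum-minimizing assignment by brute force."""
    n = len(E)
    return min(permutations(range(n)),
               key=lambda perm: sum(E[i][perm[i]] for i in range(n)))

C = [[4, 2, 7],
     [3, 5, 2],
     [6, 4, 3]]
E = scale_costs(C)
# Per Proposition 1, minimizing the scaled sum recovers the LBAP optimum.
assert brute_lbap(C) == brute_lsap(E) == (1, 0, 2)
```

With n = 3, brute force over all 3! assignments suffices for the check; the paper's algorithms of course replace it with polynomial LSAP solvers.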


Do Employees Trust 360-Degree Performance Evaluations? (A research on the Turkish Banking Sector)

Dr. Harun Demirkaya, Kocaeli University, Kocaeli, Turkey



Recently, performance evaluation has been emphasized and systematized in businesses because the strategic importance of human resources in creating a high-performance organization is well understood. However, this has led to arguments, since both the evaluated and the evaluator are human. The 360-degree evaluation, which was designed to settle those arguments and to provide an objective evaluation, is now widespread in Turkey.  Trust is one of the determinants of the effectiveness of performance evaluation. It is even more crucial in the 360-degree performance evaluation because so many evaluators are involved. This study tests the degree to which the employees of a bank trust the 360-degree performance evaluation.  A business corporation is a socio-technical system in which many people of different abilities, dreams, and creative skills come together. Three types of behavior are required to make the system work well. First, people must be convinced to join and remain in the organization. Second, employees must perform their job responsibilities reliably. Third, they must voluntarily dedicate their creative and innovative skills beyond any sense of duty (Werner, 2000:4). This third expectation is indispensable for organizations aspiring to high performance.  On the other hand, in these organizations, management becomes much more complex and multi-dimensional (Peterson, 2003:243). Performance evaluation provides input for almost all functions of HRM, and the system's outputs are taken into consideration as objective data in decisions and applications.   Developed in the American army (Cadwell, 1995:23) and eventually spreading into businesses, a performance evaluation system evaluates employee performance as a sub-system of performance management (Milkovich and Boudreau, 1991:91). Measuring achievement and creating a high-performance organization are crucial for businesses, which spend huge amounts of effort and money on them (Stiffler, 2006:17). 
Any improvement in individual performance may lead to much greater developments within an organization (Milkovich and Boudreau, 1991:92). Therefore, the first step in creating a high-performance organization is recruiting high-performance employees. This understanding adds strategic dimensions to performance evaluation and HRM. Despite the importance attributed to performance evaluation, this issue has unfortunately not been treated as a system in many corporations, and therefore a performance evaluation system suited to the corporate strategy and culture has often not been structured. Employees’ trust in the organization’s performance evaluation system plays a significant role in this respect. Trust directly influences the success of a performance evaluation system. The purpose of this study, conducted at a bank with a large presence in Turkey, is to test the level of trust in the performance evaluation system in general and in the 360-degree performance evaluation in particular.  Performance means “achievement” or “effectiveness.” Performance evaluation measures achievement or effectiveness as a function of human resources, and is becoming indispensable to the public and private sectors (Clement and Stevens, 1984:43). In this context, performance evaluation includes activities that determine an employee’s efficiency. The employee’s work, activities, weaknesses, strengths, competences and deficiencies are thoroughly examined (Fındıkçı, 2003:297).  Performance evaluation is “the process of evaluating the job achievement of employees by measuring it and comparing it with predefined standards” (Palmer, 1993:9). This process must be managed well in order to be an effective tool of competition (Robertson, 2004:24).   However, this effectiveness is only possible if performance evaluation is integrated into the system. Otherwise, it will inevitably face certain problems. 
Performance evaluation is such a rapidly developing process that Kirkpatrick has increased the four levels that he initially proposed to seven (2006:5-8). Despite this rapid development, each organization has distinct conditions. Organizations determine the efficiency of their applications according to their own conditions, purposes and expectations. Therefore, a well-designed performance evaluation system should be a process of managing, evaluating, paying, rewarding and developing performance in a way that contributes to the shared effort to reach organizational targets (Barutçugil, 2002:125).


Factors Influencing Sales Partner Control and Monitoring in Indirect Marketing

Dr. Roland Mattmüller, International University Schloss Reichartshausen, Germany

Dr. Ralph Tunder, International University Schloss Reichartshausen, Germany

Dr. Tobias Irion, International University Schloss Reichartshausen, Germany



When selling their products, manufacturers of consumer goods are heavily reliant on the support of their sales partners, as these act as gatekeepers to the end customers and thereby determine the extent and quality of the goods available to the customer. In order to ensure the desired market presence of their products, manufacturers are increasingly creating institutionalised structures for the continuous monitoring and control of their sales partners. Using a survey of 130 managers in various consumer goods sectors in Germany, the following investigation will clarify and empirically substantiate the fundamental parameters that shape the control of a consumer goods manufacturer’s sales partners, and the factors that influence the intensity of sales partner control.  In contrast to the intraorganisationally orientated organisational and sales management literature, there are only a few empirical studies relating to interorganisational sales partner monitoring.   So even today, one must still endorse Frazier, for example, who in a meta-analysis of the contributions to knowledge in the field of distribution research comes to the conclusion:  “Despite the importance of monitoring, to the best of my knowledge, Bello and Gilliland (1997) are the first to explicitly examine it in a major channel study. Clearly much more has to be done. What needs to be monitored across different channel relationships and contexts is an important question. Behaviors as well as performance outcomes will need attention in many cases.” (Frazier, 1999). The indicated need for research is manifested particularly in a lack of theoretically founded and empirically demonstrated knowledge with regard to the formation and measurement of the monitoring construct, as well as with regard to the central factors influencing sales partner control and monitoring in the consumer goods industry. This study is a response to this research deficit. 
Its aim is to make theoretically founded and empirically demonstrated statements on the monitoring construct and the causal relationships between the central influencing factors (antecedent variables) and the intensity of sales partner monitoring (dependent variables). Initially, the construct of sales partner monitoring is conceptualised and operationalised; building on this, the study model is created, in which the causal relationships between selected influencing parameters and the intensity of sales partner monitoring are derived taking into account theoretical reference points and made specific in hypothesis form by way of a survey of 130 consumer goods manufacturers.  In order to derive precise hypotheses relating to the direction of action of the influencing factors, a clear understanding of the dependent variables in the study model is necessary. Below, sales partner monitoring is defined as the process of gathering and processing information on the basis of formal principles by a manufacturer, after concluding a contract with a sales partner, in order to verify the extent to which the sales partner is meeting the obligations imposed on him. Apart from the required conduct (abilities and activities), the contractual obligations also include the resulting performance outcomes; these are compared, as normative target values, with the actual values achieved by the sales partner in order to be able to identify and analyse any discrepancies. In the literature, on a theoretical level, sales partner monitoring is in principle broken down into the two dimensions of outcome-based control and behaviour-based control (Mattmüller, 2004; Celly/Frazier, 1996; Anderson/Oliver, 1987; Oliver/Anderson, 1994; Ouchi/Maguire, 1975). 
Whereas outcome-based control relates to the result of a completed implementation process and thus the level of achievement of quantitative targets, behaviour-based control covers the implementation process itself. In order to be able to precisely formulate the causal relationships between the identified influencing factors and sales partner monitoring, the causal relationship between outcome-based control and behaviour-based control is also important.  In this connection, the findings of the literature analysis show an extremely non-uniform picture: with the substitutionality thesis (existence of one construct with two negatively correlated dimensions), the complementarity thesis (existence of two different, positively correlated constructs) and the independence thesis (existence of two different, uncorrelated constructs), three competing, logically mutually exclusive modelling approaches are present.  The substitutionality thesis was developed within the framework of the intra-organisational sales management literature and relates to the formulation of an employee management system. In addition to employee monitoring, such a system includes reward and incentive formulation and, in the extreme cases, has either a purely outcome-based or a purely conduct-based focus (Anderson/Oliver, 1987; Oliver/Anderson, 1994; Krafft, 1999). In an intraorganisational context this assumption of a one-dimensional continuum is fully justified, as all conceivable possibilities of employee management can be shown on one continuum. However, the argument against substitutionality in an interorganisational context is the fact that neither of the two forms of monitoring can provide such comprehensive information about the sales partner that the other could be dispensed with. Consequently, the indicated formulation possibilities for sales partner monitoring cannot be shown on a one-dimensional spectrum.   
Although the complementarity thesis presumes the existence of two different constructs, it sees these as positively correlated at the same time (Celly/Frazier, 1996). Modelling of this type implies that an intensification of outcome-based control is always concomitant with an increase in behaviour-based control, and vice versa. This assumption is, however, not plausibly justified in causal terms and is therefore also rejected.  Consequently, the arguments in this study are based on the independence thesis.  The behaviour-based and outcome-based control of a sales partner are modelled as two independent multi-factorial constructs which are assumed to have no systematic, reciprocal, significant causal dependencies.  The outcome-based control factors cover cost and yield monitoring, while behaviour-based control is broken down into the factors of capabilities and activities monitoring. Variations of the constructs ‘behaviour-based control’ and ‘outcome-based control’ are interpreted below as causal effects of variations of the relevant factors or their indicators.  


Perceptions Affecting Employee Reactions to Change: Evidence from Privatization in Thailand

Dr. Chaiporn Vithessonthi, University of the Thai Chamber of Commerce, Bangkok, Thailand



The focus of the present study is to test whether perceived participation in the decision-making process and perceived change in power influence employee reactions to change (i.e., resistance to change and support for change), in a sample of 197 employees at a large state-owned organization in Thailand in the context of planned privatization. Using multinomial ordered probit regression, the results provide some support for the proposed hypotheses. More specifically, the level of perceived participation in the decision-making process is negatively associated with the level of resistance to change, whereas the level of perceived increase in power resulting from the change is negatively associated with the level of resistance to change and positively associated with the level of support for change. During the past decade most state-owned enterprises in both developing and developed countries have come under increasing pressure to significantly improve their performance. In addressing this challenge, many state-owned enterprises have sought to go through a process of privatization, necessitating the implementation of organizational change. It has been argued that employee reactions to change, e.g., resistance to change and support for change, have critical implications for the outcomes of organizational change (Kotter, 1995). In a broader context, the attitudes and behaviors of employees have an impact on strategy implementation and firm performance (Becker, Huselid and Ulrich, 2001). A central research question in this context is: How can organizations minimize employee resistance to change and promote employee support for change? The aggregate result of a series of actions taken by the firm during the change process tends to cause employee resistance to change (e.g., Judson, 1991; Kotter, 1995). The emphasis in the change management literature so far has been on what organizations should undertake to promote support for change and reduce resistance to change. 
This literature highlights that employees orient their reactions to change towards the actions of organizations. In spite of a large body of normative techniques for managing change, there is a lack of empirical studies of their application to suggest whether the techniques proposed in those models will in fact influence employee reactions to change. Dent and Goldberg (1999) challenged the conventional wisdom that people resist change and argued that people do not resist change per se, but rather resist losses of status, pay or comfort. They posited that these are not the same as resisting change. This view has been supported by several studies suggesting that certain factors influence resistance to change, including, for example, fear of real or imagined consequences (Morris and Raben, 1995), fear of unknown consequences (Mabin, Forgeson, and Green, 2001), a threat to the status quo (Hannan and Freeman, 1988; Spector, 1989), and different understandings or assessments of the situation (Morris and Raben, 1995). Hence, it is plausible that employees react to the consequences of organizational change. During the last decade, significant attention has been devoted to understanding the perceptions that are expected to influence the formation of employee reactions to the decisions of organizations (e.g., Eisenberger, Fasolo, and Davis-LaMastro, 1990; Eisenberger, Huntington, Hutchison, and Sowa, 1986). In this view, the tendency of employees to form reactions to change is greatly influenced by certain perceptions. For instance, it has been argued that perceived organizational support is related to a wide array of work-related attitudes and outcomes (Eisenberger et al., 1986). The question that now arises is: Do perceived participation in the decision-making process and perceived change in power resulting from organizational change affect an employee’s reactions to change?  
The notion that certain perceptions influence the decisions of employees has been widely accepted in the research. Accordingly, this paper advances and tests the argument that perceptions influence employees' reactions to change. A better understanding of the effects of perceptions may support more comprehensive, effective, and pragmatic change management models designed to promote employees' support for change and/or to reduce their resistance to change. Taking a step in that direction, this paper attempts to fill a gap in current empirical research by examining the links between employees' perceptions of the change process and its consequences, on the one hand, and their reactions to the planned privatization pursued by a large state-owned organization in Thailand, on the other. Resistance to change has been an important construct in a number of fields, including, for example, psychology, organizational development, and organizational change. In the change management literature, researchers generally agree that resistance to change is a key variable affecting change decisions and outcomes, and that it is a negative and undesired response for organizations because it can lead to the failure of organizational change (Reger, Mullane, Gustafson, and DeMarie, 1994). Hence, it is not surprising that much research has been devoted to examining the ways in which resistance to change can be minimized. What is resistance to change? Despite a large body of research on resistance to change, it is difficult to find a definition of resistance. According to Lewin (1951), one of the first authors to use the notion of resistance to change, the status quo represents the equilibrium between the forces supporting change and the barriers to change. Some imbalance between these forces is therefore required to generate the “unfreezing” that initiates change. To make the change permanent, “refreezing” at the new level is required.
In this sense, resistance, which is a system phenomenon, is part of the change process. Many studies have posited that resistance to change is negative and should be removed or minimized. For instance, Coch and French (1948: 521) defined resistance to change as “a combination of an individual reaction to frustration with strong group-induced forces.” Agócs (1997: 918) defined institutionalized resistance as “the pattern of organizational behavior that decision makers in organization employ to actively deny, reject, refuse to implement, repress or even dismantle change proposals and initiative.” Interestingly, Lewin (1951) suggested that it is easier to lower resistance to change than to increase support for change. Consequently, one may argue that a factor that lowers resistance to change will not necessarily increase support for change; in this sense, resistance to change and support for change are not the same construct. Several researchers in management science have indicated that the fairness of organizational policies and procedures exerts an impact on people in organizations (e.g., Adams, 1965; Gopinath and Becker, 2000;


A Study of the Relationship between Perceived Organizational Conflict and Organizational Commitment among Faculty and Staff Members in Technical and Vocational Colleges

Hui-Chuan Tang, Far East University, Taiwan



Under the pressure of educational reform, colleges must continually adjust their internal organization to cope with social change. In the process of transformation, many organizational conflicts arise in both instructional and administrative units. The effects of such conflicts may be positive or negative; positive conflict can help strengthen members' commitment to the organization. The primary purpose of this study is therefore to understand technical and vocational college faculty and staff members' perceptions of organizational conflict and organizational commitment, the relationship between them, and group differences. The subjects were faculty and staff members from technical and vocational colleges in central and southern Taiwan. A total of 720 questionnaires were sent out and 462 were returned, of which 51 were invalid, yielding a return rate of 64% and 411 valid responses. The data were analyzed using one-way ANOVA and canonical correlation. The findings of the study are: 1. Among the types of organizational conflict, role-duty conflict is perceived most strongly; among the commitment dimensions, effort commitment scores highest, followed by value commitment and retention commitment. 2. Across background variables, younger members, administrative staff, and teachers of lower rank perceive more conflict, while teachers of higher rank, those with director duties, and those over 40 show higher loyalty to the organization. 3. The more conflict members perceive within the organization, the lower their retention commitment; and the stronger the perceived organizational-operation conflict, the lower the effort and retention commitment.
The policy of opening higher education to broader access has brought a series of reforms to technical and vocational colleges, which must restructure their internal organization to cope with changes in the external environment. In the transformational process, more conflicts arise in the operation of instructional and administrative units; without understanding these conflicts and searching for solutions, there may be negative effects on long-term school development and management. School organizational reform is grounded in concepts of organizational management and behavioral science. It is a long-term, continuous process that involves many different members of the school organization, such as the president, administrative staff, teachers, and students. An organization is an organism produced by its members and structure and the interaction between them in pursuit of common goals (Liao, 1995). Educational reform may elicit internal conflict within the school organization because of differing individual values, overloaded duties, and divergent roles. These conflicts are not necessarily negative, however: they can also energize the school, upgrade instructional quality, open more space for discussing questions, and increase the problem-solving abilities of members in the organization. Through the discussion of problems, new ideas can be found that become the driving force of reform and increase faculty and staff members' loyalty, so that they are willing to contribute more to their schools; in this way achievement motivation is exercised and loyalty is increased. Hong (1995) pointed out that school organizational conflict has both positive and negative influences on schools.
The positive influences are that conflicts can: (1) encourage members' creativity and bring organizational reform; (2) elicit more suggestions through benign interaction; (3) inspire members to put more effort toward organizational goals; and (4) build and improve the organizational structure and atmosphere. The negative influences are that: (1) more of the organization's time and resources are consumed; (2) members of the organization develop negative feelings; (3) the confidence and trust between members are destroyed; (4) members become unwilling to participate in work and may even go on strike; (5) staff turnover grows too high and the organization becomes loosely knit; and (6) prolonged low instructional quality damages the school's reputation. Lin (2002) divided organizational conflict into four levels: goal-realization conflict, role-duty conflict, organizational-operation conflict, and habit-change conflict. These four levels cover personality conflict, value conflict, inter-group conflict, and professional-requirement conflict, and they involve members' psychological, goal, cognitive, affective, and behavioral conflicts. Lin further pointed out that organizational conflict has a direct or indirect negative influence on organizational commitment. School presidents play the key role in driving educational reform; effective leadership should reduce teachers' pressure and organizational conflict and thereby further strengthen organizational commitment. Lin and Chen (2005) found that administrative staff in technical and vocational colleges perceive the frequency of organizational conflict to be above average. Among the types of conflict, cognitive conflict is the most frequent.
Value conflict, material conflict, benefit conflict, and feeling conflict follow cognitive conflict in descending order of frequency; cognitive conflict, moreover, occurs more often in instructional units than in administrative ones. Lan (2004) found that volunteers' perceived conflict correlates negatively with organizational commitment: the stronger the perceived conflict, the lower the commitment.
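The two analyses the study reports can be sketched on synthetic data as follows. The group labels, scale means, and effect sizes are invented for illustration, and a simple Pearson correlation stands in for the multivariate canonical ("typical") correlation analysis the study actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# One-way ANOVA: perceived conflict for three age bands; younger staff
# are given a higher mean, mirroring the study's second finding.
young  = rng.normal(3.8, 0.6, 140)
middle = rng.normal(3.4, 0.6, 140)
older  = rng.normal(3.1, 0.6, 131)  # 140 + 140 + 131 = 411 valid responses
f_stat, p_value = stats.f_oneway(young, middle, older)
print(f"F = {f_stat:.2f}, p = {p_value:.2g}")

# Negative conflict-commitment association, as in the third finding.
conflict = np.concatenate([young, middle, older])
commitment = 5.0 - 0.5 * conflict + rng.normal(0, 0.4, conflict.size)
r, r_p = stats.pearsonr(conflict, commitment)
print(f"r = {r:.2f}")
```

With group means this far apart, the ANOVA rejects equality of means, and the built-in negative slope yields a clearly negative correlation, matching the direction of the study's findings.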


Mass Customization Manufacturing (MCM): The Drivers and Concepts

Dr. Muammer Zerenler, Selçuk University, Konya, Turkey

Derya Ozilhan, Selçuk University, Konya, Turkey



Today’s business environment is characterized by extremely tight competition. Companies are forced to constantly reduce costs and pursue efficiency, while at the same time struggling to achieve the effectiveness needed to retain customer loyalty. Combining these two aspects is difficult at best and requires a reasonable trade-off between the variety, functionality, and price of products and services. Mass customization refers to the ability to provide individually designed products and services to every customer through high process flexibility and integration, and it has been identified as a competitive strategy by an increasing number of companies. This paper surveys the literature on mass customization manufacturing. Enablers of mass customization and their impact on the development of production systems are discussed at length, and approaches to implementing mass customization are compiled and classified. The market and technology forces that shape today’s competitive environment are changing dramatically. Mass production of identical products, the business model for industry in the past, is no longer viable in many sectors. Market niches continue to narrow, and customer preferences shift overnight. Customers demand products with lower prices, higher quality, and faster delivery, but they also want products customized to their unique needs. To cope with these demands, companies are racing to embrace mass customization: “the development, production, marketing, and delivery of customized products and services on a mass basis,” according to a definition popularized by Joseph Pine, a leading spokesman for the concept. Mass customization means that customers can select, order, and receive a specially configured product, often choosing from among hundreds of product options, to meet their specific needs.
In today’s economy, continuous competition and the dynamic global market have pushed manufacturers to move from mass manufacturing techniques toward flexible, rapid-response methods that enable them to deliver products quickly while keeping costs down. This can mean embarking on an approach called Mass Customization Manufacturing (MCM). The goal of MCM is to build customized products, even if the lot size is one, and to achieve a balance between customization and cost (Pine, 1993). Today’s customers will not accept Henry Ford’s dictum “You can have any colour car you want as long as it’s black” (Pine, 1993). The phrase “every customer is unique” has challenged manufacturing companies: fulfilling every customer’s individual needs has been, if not impossible, affordable only for very affluent customers. Mass customization (MC) strategies attempt to fulfil individual needs cost-efficiently. The manufacturing trend toward a smaller number but wider variety of products forces enterprises to adopt a differentiation strategy that offers customers more product choices. Such a variation strategy often makes the interwoven constraint relationships among products even more complicated, which is one of the characteristics of a customization manufacturing environment (Jiao et al., 2003; Salvador & Forza, 2004). Fohn et al. (1995) used computers as a case study and demonstrated that approximately 30–85% of product information was wrong, and that such mistakes cause errors in engineering design and a substantial burden to an enterprise. Mass customization, once considered a paradox to be resolved in the future, has become an everyday reality for many manufacturers. The term “mass customization” was coined by Davis in Future Perfect (Davis, 1987) and then popularized with the publication of Pine’s Mass Customization: The New Frontier of Business Competition (Pine, 1993).
In 1989, Kotler posited that market segmentation had progressed to the era of mass customization, in which computer technologies and automation capabilities allow companies to produce cost-effective, individualized versions of products. However, mass customization aims to address customer requirements not only effectively but also efficiently: the costs of mass-customized products should be low enough that the prices charged do not differ considerably from the prices of comparable standard products manufactured on mass production principles. Thus, mass customization is a strategy that contradicts the stuck-in-the-middle hypothesis postulated by Porter (1998). This hypothesis, which states that product differentiation and cost leadership are two incompatible strategies, has long been debated within the scientific community. More contemporary research suggests that the advances in manufacturing, information technology, and management methods since the publication of Future Perfect in 1987 have made mass customization a standard business practice (Kotha, 1995; Pine, 1993).


Copyright: All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, including photocopying and recording, or by any information storage and retrieval system, without the written permission of JAABC journals. You are hereby notified that any disclosure, copying, distribution, or use of any information (text, pictures, tables, etc.) from this web site or any other linked web pages is strictly prohibited.



Copyright 2000-2018. All Rights Reserved