The Journal of American Academy of Business, Cambridge

Vol. 12 * No. 1 * September 2007

The Library of Congress, Washington, DC   *   ISSN: 1540-7780



All submissions are subject to a double blind peer review process.

 


 

The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various business fields around the globe to publish their papers in one source. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own particular disciplines. The journal provides opportunities for publishing researchers' papers as well as for viewing others' work. All submissions are subject to a double-blind peer review process.

The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. E-mail: jaabc1@aol.com; Journal: JAABC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.

Copyright 2000-2017. All Rights Reserved

The Financial Management of Military Pilot Training Costs: An Assessment of U.S. Navy and U.S. Air Force Pilot Training Attrition Rates

Albert Joseph Sprenger Jr., Embry-Riddle Aeronautical University

Dr. James T. Schultz, Embry-Riddle Aeronautical University

Dr. Marian C. Schultz, The University of West Florida

 

ABSTRACT

Obtaining aeronautical ratings can be extremely expensive. The U.S. Air Force and U.S. Navy have elected to send their pilot candidates to civilian flight schools prior to starting military training to ensure they possess the basic capabilities to pilot aircraft. The Air Force and Navy programs differ in the number of hours students receive prior to starting military training; the Air Force program is 50 hours, including earning a Private Pilot rating, while the Navy program is 25 hours. This study examined the differences in attrition rates of Navy student pilots at U.S. Navy pilot training and Navy student pilots at U.S. Air Force pilot training. The research hypothesis was that Navy students who train with the Air Force will attrite at a greater rate than students undergoing training with the Navy. The hypothesis was based on the lesser experience of the Navy students as compared to Air Force students, who begin training with a private pilot’s license. The research utilized the causal-comparative methodology. The hypothesis was tested utilizing a two-dimensional Chi-Square nonparametric test of significance at the .05 level. The research hypothesis was not supported; there was no significant difference in attrition rates between Navy students at Air Force pilot training and those attending Navy pilot training. Initial observations indicate that Navy students attending Air Force training are behind Air Force students because of the Air Force’s requirement that incoming flight students have completed a minimum of 50.0 hours of pilot instruction and have received a private pilot’s license prior to starting joint specialized undergraduate pilot training. The Navy has no requirement for a private pilot’s license and only allows student pilots a maximum of 25.0 hours of Navy-sponsored piloting time prior to starting flight training, regardless of whether the student will attend training with the Navy or the Air Force. The only location at which Navy students attend Air Force training is Vance Air Force Base (AFB) in Enid, Oklahoma. Relocating to Vance AFB removes the students from any peer groups that were formed during their Navy Aviation Preflight Indoctrination (API), which occurs at Naval Air Station Pensacola, Florida. This in itself could be a demotivating factor if the student did not volunteer to go to Vance. Not only is the Navy student removed from the Navy relationships formed in Pensacola, but he or she is also thrust into an environment that is completely unfamiliar in regard to military formalities and base and squadron layout. Given the difference in flying experience prior to starting training, coupled with the faster pace of Air Force training, the researchers theorized that Navy student pilots have a difficult time competing against Air Force students with more flying experience and a knowledge of the protocols associated with Air Force organization and tradition. Examining the differences in attrition rates between Navy students at the two training locations allowed the researchers to determine whether a lack of flying experience, as compared to their Air Force counterparts, was a factor in Navy flight students failing to complete Air Force pilot training. It has been observed that Navy student pilots in the Air Force T-37 program attrite at a higher rate than Navy student pilots in the Navy T-34 program.
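The two-dimensional Chi-Square test of significance described above can be sketched as follows. This is a minimal illustration of the mechanics of the test at the .05 level only; the attrition counts below are hypothetical placeholders and are not the study's data.

```python
# Illustrative sketch of a two-dimensional (2x2) chi-square test of attrition rates.
# The counts are hypothetical, not taken from the study.
from scipy.stats import chi2_contingency

# Rows: training location; columns: completed vs. attrited (hypothetical counts).
observed = [
    [180, 22],  # Navy students in Navy T-34 training
    [85, 13],   # Navy students in Air Force T-37 training at Vance AFB
]

chi2, p_value, dof, expected = chi2_contingency(observed)
alpha = 0.05
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
if p_value < alpha:
    print("Reject the null hypothesis: attrition rates differ by training location.")
else:
    print("Fail to reject the null hypothesis: no significant difference in attrition rates.")
```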
The purpose of this study was to determine whether Navy student pilots who participate in the Air Force T-37 training program attrite at a greater rate than Navy student pilots who participate in Navy T-34 training. Failures from either the Navy syllabus or the Air Force syllabus can result from many causes, such as inadequate flying skills, lack of adaptability, academics, and medical factors. It is assumed that these reasons for failure are shared equally by these two groups of Navy students. Therefore, the underlying problem was to determine whether a Navy flight student who went to Vance AFB for training was at a greater risk of not progressing to Intermediate Flight Training than a Navy student who trained with a Navy training squadron. It was assumed that the data for all of the students used in this study were accurately entered and displayed in the database for fiscal years 2002 to 2004. It was also assumed that all Air Force students started the flight program with at least 50.0 hours of flight time, and that all Navy students started with at least 25.0 hours of flight time. It was further assumed that Navy students participating in Air Force primary training at Vance AFB either volunteered, or were chosen at random with no bias toward disciplinary or academic problems during the Navy’s API. This study was limited to the attrition rate results of fiscal years 2002 to 2004, using Navy students from Vance AFB and Navy students at Naval Air Station Corpus Christi, Texas, and Naval Air Station Whiting Field, Florida. The United States Navy has historically been interested in screening future pilots for flight training to reduce attrition, enhance operational readiness, and reduce costs. As early as 1939, over 30 different psychological tests were administered to Navy pilot students entering the flight training syllabus. Psychologists called this the Pensacola Project (Arnold, n.d.). Slowly, different tests began to stand out as high-quality predictors of success in training. Prior to World War II, it was hard for the United States and its allies to properly predict performance and attrition rates of aviators. In the United States, during World War I, aviation psychologists were not as convinced as their European counterparts that physiological and sensory testing could adequately predict pilot performance (Griffin & Koonce, 1996). The Americans leaned more toward psychomotor and intelligence testing instead of the Barany chair, which had some success in France. The Barany chair was a chair-shaped device used to predict the performance of pilot trainees by inducing disorientation and motion sickness. The first occurrence of extensive American pilot testing was in 1919, when psychologists in Texas were trying to validate their tests for pilot selection. These tests included measuring the prospective pilot’s emotional stability by looking at how much the candidate’s hand shook after firing a pistol and how sharp their mental alertness was on the Thorndike Intelligence Test (Griffin & Koonce, 1996). Other tests included measuring how much a candidate swayed when blindfolded and what their perception of tilt angle was while blindfolded. By 1939, American aviation psychologists had multiple psychomotor tests in place, along with pencil-and-paper biographical inventories and cognitive tests (Griffin & Koonce, 1996). Experts also had a device that simulated the stick and rudder movements of an airplane to better screen candidates.
With war on the horizon, the United States Committee on Selection and Training of Aircraft Pilots (also known as the Civilian Pilot Training Program [CPTP]) suggested that the pencil-and-paper testing of cognitive abilities, biographical inventories, and psychomotor ability was inadequate due to varying degrees of instructor ratings, low failure rates, and inadequate methods for recording the information (Griffin & Koonce, 1996). There was very little standardization at the time, but few aviation psychologists were worried about such things when the country was involved in a full-scale world war. By 1942, two tests were in use to predict student pilot success through the pilot syllabus: the Aviation Classification Test and the Bennett Mechanical Comprehension Test. Arnold noted that these tests were used to identify those students who were most likely to succeed in the demanding mental and physical environment of flight training.

 

Liquidity Provision in Informationally Fragmented Upstairs Markets

Dr. Orkunt Dalgic, State University of New York at New Paltz, NY

 

ABSTRACT

The ability of upstairs market makers to observe certain important characteristics of their customers is a distinguishing feature that contributes to the quality of execution, i.e. liquidity, provided by these markets. One such characteristic is a customer’s likelihood of trading, also termed “unexpressed demand” by Grossman (1992). However, the existence of switching costs can lead investors using the upstairs market to limit their trading relationships to a small number of market makers, and upstairs market search costs can reduce the potential number of trade counterparties contacted during searches. Such costs would cause at least some of the information about upstairs market investors’ trading likelihoods to remain privately known by individual market makers. This paper develops a general framework where information about investors’ trading likelihoods is split into private and public components. An increase (decrease) in the proportion of private information reduces (improves) upstairs market execution quality relative to Grossman's (1992) model, and relative to the downstairs market. Moreover, when the proportion of private information is larger, an increase in the competitiveness of upstairs market making may lead to a greater reduction in upstairs market liquidity.
Key words: brokers, dealers, brokerage firms, upstairs markets, liquidity, unexpressed demand, trading preferences, business relationships, market fragmentation.
The term “upstairs” market refers to a market where buyers and sellers negotiate trades in designated “upstairs” trading rooms of brokerage firms. On the other hand, a market such as an exchange floor, or its electronic equivalent, is known as a “downstairs” market. Upstairs markets are generally believed to have certain special properties, which Seppi (1990) and Grossman (1992), among others, have argued improve liquidity provision. One such property is that upstairs market makers know the identities, and other characteristics, of their customers. (1) Seppi (1990) argues that superior knowledge of customers allows upstairs market brokers to screen informed trades, which enhances upstairs market liquidity. Robust empirical support for Seppi’s (1990) screening hypothesis, by Smith, Turnbull and White (1999) among others, implies that superior knowledge of customers is a distinguishing characteristic of upstairs market brokers. Furthermore, Grossman (1992) argues that familiarity with investors' trading preferences and willingness to trade in certain states of the world allows upstairs market makers to provide enhanced market liquidity. For instance, customers may want an upstairs market broker to buy or sell a certain quantity of stock repeatedly at pre-specified intervals or whenever the price falls within a pre-specified range. If a large enough number of shares of a relatively illiquid stock is to be traded, the order may need to be executed in multiple transactions over long periods. In such cases, upstairs market makers can learn information that will affect the future price of the security. Grossman (1992) refers to investors’ likelihood of future trading as the unexpressed demand of investors. Bessembinder and Venkataraman (2004) use data from the Paris Bourse, and Booth, Lin, Martikainen, and Tse (2002) use data from the Helsinki Stock Exchange, to find strong evidence consistent with the hypotheses of Grossman (1992) and Seppi (1990).
In Grossman’s theoretical model (1992), the upstairs market is composed of identical market makers that observe the expressed and unexpressed demands of all investors using the upstairs market, i.e. the total upstairs market order flow. (2) However, faced with switching costs, investors are likely to concentrate trades with a small number of upstairs market brokers, especially for the most frequently traded, i.e. liquid, securities. Moreover, the search costs of upstairs brokers will limit the number of counterparties they contact during searches. These costs will make it likely that at least some portion of the unexpressed demand of investors will be hidden from the upstairs market. Although trading and professional relationships among upstairs market makers may enable the dissemination of information about investors’ trading likelihoods and preferences, potential losses from front-running practices are likely to reduce information sharing. (3) Furthermore, common practices like preferencing and internalization can also limit information diffusion. (4) Conversely, mergers of market making firms, or the hiring of individual brokers from competing firms, are likely to improve the information environment of the upstairs market. For example, an individual upstairs broker moving to another brokerage firm may transport some of the knowledge of the trading habits of the old firm's customers to the new brokerage firm. Furthermore, investors themselves may actively seek liquidity by shopping around in the upstairs market among different brokerage firms. During the search process investors are likely to reveal their trading habits and preferences, and thus their unexpressed demand, to many brokerage firms in the upstairs market. However, changes to the upstairs market’s clientele and to investors’ trading habits should prevent unexpressed demand from ever becoming fully public. This paper summarizes the set-up of the Grossman (1992) framework. It then extends the framework to capture varying levels of informational fragmentation in the upstairs market related to investors’ unexpressed demand. Expressions are derived for the upstairs market price, equilibrium number of market makers, and execution quality. In particular, the following contributions are made to the upstairs markets literature. The relative liquidity advantage of an upstairs market is found to vary inversely with the proportion of the private component of unexpressed demand, as characterized by the concentration of investor-broker trading and business relationships in the upstairs market. Furthermore, when some unexpressed demand is private, greater competition for upstairs market making, as indicated by a larger equilibrium number of market makers, may adversely affect liquidity. This is because, while reducing the price impact of investors’ expressed demands, an increase in the number of upstairs market makers also causes greater fragmentation of the information about unexpressed demand. Another finding is that given all else remains equal, the equilibrium price of a security in the fragmented upstairs market model carries a premium over Grossman’s (1992) model. The premium increases with the proportion of private unexpressed demand, and arises because in a fragmented upstairs market some information about unexpressed demand is unobserved and therefore not priced in equilibrium. This leads to an overvaluation of the security as compared to the informationally integrated upstairs market.   The paper is organized as follows. 
Section 2 presents the set-up of Grossman's (1992) framework. Section 3 introduces and derives a general version of the upstairs market that features the model of Grossman (1992) as a special case, and incorporates varying levels of informational fragmentation in the upstairs market, i.e. upstairs market makers observe both private and public information about unexpressed customer demand. A closed-form solution is found for the upstairs equilibrium price of a security, and for the measure of market execution quality. The expression for the upstairs market execution quality is then compared to its counterpart in Grossman (1992). Finally, Section 4 presents the concluding remarks. This section presents the basic set-up of the Grossman (1992) framework. Without loss of generality, the model assumes two time periods. At time 1, the following events take place: 1. Investors choose the market venue for their orders, i.e. upstairs or downstairs market. Investors then experience an exogenous liquidity shock, and either place an order in their chosen market, termed ‘expressed demand’, or inform all upstairs market makers of their willingness to trade at time 2, termed ‘unexpressed demand’. 2. Upstairs market makers observe all expressed and unexpressed demands of investors in the upstairs market, but not the expressed demands of investors in the downstairs market. The upstairs market is cleared when upstairs market makers choose demand schedules that maximize the utility of their future (time 2) wealth. While there are as many upstairs market prices as there are upstairs market makers, each price deviates from the average by some amount, so that an average upstairs clearing price is established at time 1. 3. Downstairs market makers observe the expressed demands of investors in the downstairs market but neither the expressed nor unexpressed demands of investors in the upstairs market. A single downstairs market clearing price is established when downstairs market makers choose their equilibrium demands to maximize the utility of their future (time 2) wealth. At time 2, the upstairs and downstairs market prices of the asset converge to a single price reflecting all public information about the asset, minus the cost of liquidity provision in both the upstairs and downstairs markets. The asset being traded is assumed to be a forward or a futures contract in zero net supply. This assumption simplifies the exposition, but has no qualitative effect on the analysis. (5) The time 2 price of the asset observed at time 1 is therefore

 

Team Effectiveness and Leader-Follower Agreement: An Empirical Study

Dr. Susan D. Baker, Morgan State University, Baltimore, MD

Dr. Daniel A. Gerlowski, University of Baltimore, Baltimore, MD

 

ABSTRACT

The role of teams in organizations has become a dominant theme in theoretical, applied, and empirical research. This paper is grounded in the literatures of leadership, followership, and team effectiveness. It builds on the work of Sundstrom, McIntyre, Halfhill, and Richards (2000), which called attention to the role of team composition in determining team effectiveness in the workplace. Our research attempts to determine whether leader-follower agreement about leader and follower characteristics affects team effectiveness. The role of teams in organizations has become a dominant theme in theoretical, applied, and empirical research. Further, team abilities and skills represent an area that has reached into most business program curricula as well as becoming a standard skill set required in many occupations at multiple levels. A related literature focusing on leadership developed over time, and more recently, this literature has been extended to include research on followership. This paper is grounded in the team, leadership, and followership literatures. It extends the work of Sundstrom, McIntyre, Halfhill, and Richards (2000), which called attention to the relationship between team composition and team effectiveness, into issues addressed in the leadership and followership literatures. Our empirical analysis concerns team effectiveness as a function of team homo- or heterogeneity along leadership and followership dimensions, controlling for socio-demographic differences among team members. Our work relies on survey data from six sites of healthcare organizations in the mid-Atlantic region, drawn in the fourth quarter of 2005. Respondents completed a questionnaire containing the Leadership Practices Inventory-Self (LPI) (Kouzes and Posner, 2003a), the Performance and Relationship Questionnaire (PRQ) (Rosenbach, Pittman, and Potter III, 1996), and questions about broad socio-demographic status. The LPI and the PRQ instruments provided data on the respondent’s leadership and followership characteristics. Survey distribution ensured that each respondent’s team was identified. Supervisors who normally evaluated all teams’ performance were asked to provide information on team effectiveness. To determine whether leader-follower agreement about leader and follower characteristics impacts team effectiveness, we employed a variety of empirical tools. In each case the null hypothesis states that agreement between team leader and team members on selected survey instruments dealing with leader and follower characteristics does not impact team effectiveness. The alternative, or research, hypothesis states that homogeneity between leader and follower characteristics does impact team effectiveness. We define key terms used throughout this research to ensure a common framework, clarify the main constructs used, and place them in the context of their literatures. Effective Team: a small task group that has a shared “common purpose, interdependent roles, and complementary skills” (Yukl, 2002). It also satisfactorily meets the task standards and expectations of its organization and clients for “quantity, quality, and timeliness” (Hackman and Walton, 1986). Follower: an active, participative role in which a person supports the teachings or views of a leader and consciously and deliberately works towards goals held in common with the leader or organization (Baker, 2006).
Followership: a process by which a person fills the role of follower, supporting the views of a leader and consciously and deliberately working toward common goals shared with the leader or organization. The active participation of both follower(s) and leader is essential to the process (Baker, 2006). Leader: a role in which a person leads, guides, commands, directs, and supports the activities of another or others, who are commonly called followers, to achieve goals held in common with the leader or organization. Leadership: a process by which a person fills the role of leader, influencing another or others to achieve goals held in common with the leader or organization. The other(s) are called followers, and their active participation is essential to the process (Baker, 2006). The three constructs examined in this study are team effectiveness, leadership, and followership. All three constructs are grounded in the literature of their respective fields. In today’s organizational milieu and behavioral literature, “teams” receive much attention, whether the team under discussion is called a work group, a work team, a high-performing team, an effective team, a self-directed work team, or a leaderless team. The growth of the team literature has occurred in conjunction with changes that have occurred in the American workplace since the 1980s, when the profits and market shares of hierarchical, vertically integrated American companies were challenged by streamlined global competitors (Orsburn and Moran, 2000). In the 1990s, work teams proliferated throughout industry, leading to the creation of classifications of teams. Defining an effective team, though, proved to be a harder task. As Hackman and Walton (1986) observed, "there is no single, unidimensional criterion of team effectiveness" (p. 79); team effectiveness requires more than "counting outputs" but must also consider "social and personal criteria." The definition is made even harder because effectiveness is dependent upon "system-specified (rather than researcher-specified) standards." Theorists who explored the construct of team effectiveness included Vaill (1978), who examined high-performing systems, and Katzenbach and Smith (1993), who posited a team performance curve that defined five different types of work associations charted on the two axes of performance impact and team effectiveness. Because effective teams can deliver many benefits to their organizations in the form of higher quality, better service, faster rollout of new services and products, and lower costs (Sundstrom & Associates, 1999), researchers continue to search for the answer to "what makes a team effective?" After reviewing field research conducted in the twentieth century about teams, Sundstrom et al. (2000) called for further research into group composition, one of five groups of factors that they identified as being related to team effectiveness. They described group composition factors as including the “mix of their [members] traits -- ability, personality, demographic characteristics -- and collective expertise, ability, diversity, heterogeneity, and stability or fluctuation or membership” (p. 56). Ammeter and Dukerich (2002) issued a similar call for further research about the “interaction between leader characteristics and team-building/team member characteristics” (pp. 3-4). Burns’ (1978) theory of transformational leadership provides the framework for the definition of transformational leadership used in this study.
In extending Burns' work, Bass (1990) identified behavioral characteristics of transformational leadership and theorized that leaders who demonstrated these behaviors "transformed" followers, who could then achieve higher levels of performance than expected. In the 1980s, Bennis interviewed 90 CEOs who had been nominated as exceptional leaders (Sashkin, 1995). Bennis and Nanus analyzed the interview data and identified five common behavior patterns among the CEOs; Bennis called the patterns "competencies" (p. 6). Kouzes and Posner (1987) began their leadership research in the 1980s with a “personal best survey” in which they posed 38 open-ended questions to middle- and senior-level managers in both public and private sectors. The researchers used both quantitative and qualitative techniques, including "personal best case studies” and in-depth interviews, to triangulate data collected from over 1,300 subjects. Kouzes and Posner identified five common transformational leadership behaviors that occurred when leaders were experiencing their personal bests. Each of the five behaviors had two strategies to achieve the behavior. The five behaviors are called: challenging the process, inspiring a shared vision, enabling others to act, modeling the way, and encouraging the heart. In his review of transformational leadership, Sashkin (1995) observed that “Bass’ ideas were crucial for moving beyond Burns’ groundbreaking concepts, but Kouzes and Posner take a step beyond Bass toward a much clearer behavioral explanation of transformational leadership” (p. 7).

 

Improving IT Service Delivery Quality: A Case Investigation

Dr. Jihong Zeng, New York Institute of Technology, Old Westbury, NY

 

Abstract

In the e-commerce environment, business has an increasing dependency on information technology (IT) to deliver services to customers. IT service availability has a dramatic influence on customer satisfaction and the corporate reputation of the enterprise. Consequently, the demand for 24 x 7 service availability is greater than ever. Information systems, which provide the information infrastructure for business applications, have become a critical and integral component of business service delivery. System downtime means loss of revenue and competitive advantage for the business. The business requires IT service providers and information system managers to ensure that service-affecting incidents do not occur, or that efficient and effective remediation is taken to provide high-availability services. A great deal of effort and improvement has been made to ensure high availability in each individual technology industry. However, not enough focus has been given to how to improve the overall end-to-end IT service availability from the end user’s perspective. Without visibility into the overall availability of underlying components, including information systems, applications, and operational processes, it is impossible to make informed business decisions about IT resources. This paper introduces the ITIL availability management concept and presents how to apply ITIL best practices to decompose service delivery into components or subsystems. A block diagram modeling technique is deployed to assess the overall service availability. This holistic approach helps pinpoint the bottlenecks to the required service level. It also demonstrates the capability to help provide cost-effective solutions to improve service delivery for existing as well as future application and infrastructure design and implementation in a highly competitive e-business environment. Service availability has become one of the most important aspects of service delivery in the highly visible e-business economy. Consequently, the demand for 24-hour-a-day, 7-day-a-week operation is greater than ever. Over the past decade, information technology (IT) has transitioned into a critical role in the enterprise, which not only supports business service delivery but also helps the business constantly drive innovation and improvement in order to gain an edge over competitors. IT service downtime imposes a huge loss of revenue on large enterprises. As an example, Table 1 lists service availability, equivalent downtime, and average annual revenue loss for various industries based on the research survey by Meta Group (Meta Group, 2000). Service availability also has a dramatic impact on customer satisfaction and corporate reputation. This is particularly true when customers are just a mouse click away from a competitor’s offerings in the highly competitive e-business environment (Fisher, 2000). High availability is not new in the IT industry. A great deal of effort and improvement has been made to ensure high availability in each technology industry. However, risks to service availability may be caused by technology, process, and human error throughout the whole IT infrastructure and within every management process (Pope, 1986). There is not enough research and focus given to understanding and improving the overall end-to-end IT service availability from the end user’s perspective.
Without visibility into the overall availability of the underlying service delivery components, including information systems, applications, and operational processes, it is impossible to make informed business decisions about IT resource investment to provide cost-effective solutions that address the service level requirements of customers. Over the years, organizations have become increasingly dependent on IT to fulfill their corporate objectives. This increasing dependence has resulted in a growing need for IT services of a quality corresponding to the objectives of the business, and which meet the requirements and expectations of the customer. The Information Technology Infrastructure Library (ITIL) was developed as a framework of best practice approaches intended to facilitate the delivery of high quality IT services (van Bon et al., 2002). ITIL outlines an extensive set of management procedures that are intended to support business in achieving both quality and value for money in IT operations. The ITIL best practice on availability management is responsible for ensuring that service-affecting incidents do not occur, or that timely and effective action is taken when they occur. Availability is related to reliability and maintainability. The basic availability concepts are outlined in the following sections. Mean Time Between Failures (MTBF) is the average time between failures of a component. For instance, MTBF for a hardware component can be obtained from the hardware vendor based on its configuration. Mean Time To Repair (MTTR) is the time taken to repair a failed module. For instance, repair generally means replacing the hardware component in an operational environment. Thus, hardware MTTR could be viewed as the mean time to replace a failed hardware module. Availability is the percentage of time when the system is operational. Availability of a component can be obtained by the formula: Availability = MTBF / (MTBF + MTTR). Based on this formula, high availability would be expected if the mean time between failures (MTBF) is very large compared to the mean time to repair (MTTR). Likewise, if the mean time to repair is decreased, availability will be high. As system reliability decreases (MTBF decreases), better maintainability (such as shorter MTTR) is needed to sustain the same level of availability. Figure 1 illustrates the relationships between MTBF and MTTR. It is worth noticing that, in order to support its service delivery, the business demands high availability with both high MTBF and low MTTR. Therefore, the shaded areas in the diagram represent regimes that are either technically challenging or unacceptable to the business.
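The availability formula above, together with the block diagram composition of end-to-end service availability, can be illustrated with a short sketch. The component names and MTBF/MTTR figures below are hypothetical and are not taken from the paper; series and parallel composition follow standard reliability block diagram rules.

```python
# Minimal sketch of availability estimation and block diagram composition (illustrative only).

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(*avails: float) -> float:
    """Series block: the service is up only if every component is up."""
    result = 1.0
    for a in avails:
        result *= a
    return result

def parallel(*avails: float) -> float:
    """Parallel block: the service is up if at least one redundant component is up."""
    joint_downtime = 1.0
    for a in avails:
        joint_downtime *= (1.0 - a)
    return 1.0 - joint_downtime

if __name__ == "__main__":
    # Hypothetical components of an end-to-end e-business service.
    web = availability(mtbf_hours=4000, mttr_hours=2)
    app = availability(mtbf_hours=3000, mttr_hours=4)
    db = parallel(availability(2000, 6), availability(2000, 6))  # redundant database pair
    end_to_end = series(web, app, db)
    print(f"End-to-end availability: {end_to_end:.5f}")
```

A sketch of this kind makes the bottleneck visible: the series component with the lowest availability dominates the end-to-end figure, which is where a redundancy or MTTR improvement is most cost-effective.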

 

Relationship Between the Use of Internet Information for Health Purposes and Medical Resource Consumption for an English-Speaking Sample

Dr. Hager Khechine, Laval University, Canada

Dr. Daniel Pascot, Laval University, Canada

Dr. Pierre Prémont, Laval University, Canada

 

Abstract

Many researchers in the fields of information systems and medical sciences are showing special interest in Internet use for health-related matters because the Internet is becoming an important source of information for patients and clinicians. Indeed, statistics reveal that almost 113 million U.S. citizens looked for health information on the Internet in 2006. The purpose of this research is to study the relationship between the use of Internet information by English-speaking patients and their consumption of medical resources. We perform a quantitative study based on a ten-item questionnaire. The sample consists of 120 patients suffering from a long-term disease and accustomed to the use of the Internet for health-related issues. Construct validity and reliability were ensured. Most items have loadings greater than 0.5. The path coefficient between the variables is significant and high. We conclude that the use of health information by patients contributes to an increase in their healthcare resource consumption. This result can be explained by the fact that patients may misunderstand, be overwhelmed, or be confused by the poor quality of the information obtained from the Internet. We expect this study to have a theoretical and practical impact on the fields of management information systems and medical sciences. Indeed, we believe researchers should be concerned about the role that Internet information can play in the management of medical systems and about the design of health-related websites. During the last decade, the number of scientific meetings and studies about the use of online health-related information by patients has dramatically increased. Many topics have been investigated or sometimes treated theoretically. For instance, some studies have focused on the effects of the Internet on the "patient-clinician" relationship (Hjortdahl et al., 1999; Anderson et al., 2003). Researchers have also tried to understand the impact of the Internet on the quality of healthcare services (Eysenbach et al., 1999). A survey by Pew Internet & American Life (2003) concluded that Internet information helps patients improve their health state, prepare for meetings with physicians, and decide whether other medical consultations are necessary. The topic of this research falls within the field of "cybermedicine". In particular, we are interested in studying the distribution and use of health-related information online by patients. Numerous health professionals are opposed to the growth of cybermedicine due to its harmful effects on patients. This research attempts to raise concern about the use of medical information displayed on the Internet. This paper is organized as follows: we first present the background related to the use of the Internet for health purposes. Next, we explain the objective of the research and the research model. Methods for data collection and analysis are detailed in the following section. We end the paper with the results, some topics of discussion, and the conclusion. A growing number of websites are dedicated to health. Those websites offer medical information that helps patients make decisions about their health (Mittman and Cain, 1999). Some surveys claim that the number of websites related to health issues was over 15,000 in 1998 (Miller and Reents, 1998). This number reached 20,000 in 2000 (Bush et al., 2000). To our knowledge, there is no exact estimate of the number of websites specializing in healthcare.
Due to the dramatic growth of this Internet sector, no further statistics have been compiled on this particular topic since 2000. Mittman and Cain identify two driving forces that contribute to this growth. The first one is the "pull" factor, which deals with the growth in consumer demand for more health products and services. The second force is the "push" factor, related to the market pressures aiming to meet patients’ needs and to create new ones (Mittman and Cain, 1999). According to Greene (2000), many patients spend more time surfing a healthcare website than they do with their physician. In 1998, 40% of Internet users looked for information about health (Elsberry, 2000). In 1999, the number of individuals who used the Internet for health purposes reached 40 million (Weber, 1999). In 2002, a study by Pew Internet & American Life (2003) concluded that 93 million U.S. citizens used the Internet for health purposes. This number reached 110 million according to the Harris Interactive poll (2002). A recent survey by Harris Interactive (2005) found that the percentage of U.S. adults looking for health-related information on the Internet has dropped (72% in 2005 compared to 80% in 2002). However, since the number of individuals going online has increased, the number of U.S. adults using the Internet for health purposes has grown too. These findings echo a 2006 analysis by Pew Internet & American Life (Fox, 2006), which found that 113 million U.S. adults used the Internet to find health information in 2006. In this same study, the author argues that the percentage of Internet health-related users has been stable over the past four years. As suggested in the literature, patients are prompted to use the Internet for several reasons. First, the ease of use and access to a vast wealth of global information make the Internet a highly requested information and communication medium (Edwards, 1999). In addition, some services on the Internet provide access to full-text articles and peer-reviewed journals, thus eliminating the need to visit the library. This trend is strengthened by the shortage of information easily retrieved from traditional channels (Miller and Reents, 1998). Second, most doctors are overworked and do not have enough time to devote to their patients (Miller and Reents, 1998). Third, the Internet offers a wide range of health providers. Indeed, the abolition of geographical borders allows patients to reach specialists wherever they are, at any time. Patients use the Internet to obtain health-related information, products, and services. Many studies conclude that information about illnesses, especially chronic or long-term illnesses such as cancer, heart disease, diabetes, and epilepsy, is in demand the most. Patients also look for information about diet, nutrition, or pharmaceuticals and seek support from online discussion groups (Fox, 2006). This last activity is appreciated by most clinicians, as they believe that it helps patients recover. Purchasing drugs on the Internet is also a widespread practice. Patients mainly show an interest in purchasing vitamins, supplements, and prescription drugs (Miller and Reents, 1998). Many researchers and clinicians think that patients with access to medical information on the Internet are better prepared to seek healthcare services. Indeed, that information updates patients on recent research and medical developments.
Educated patients are better equipped to take part in their own healthcare, since the availability of medical information on the Internet can facilitate the establishment of a constructive dialogue between clinicians and patients. According to the Pew Internet & American Life poll, 54% of U.S. adults using the Internet for health purposes said that the information obtained led them to ask a doctor new questions or to seek a second opinion.

 

Entropy in a Social Science Context

Dr. Joseph L. Moore, Arkansas Tech University, Russellville, AR

 

ABSTRACT

The paper will give an overview of the Second Law of Thermodynamics, entropy, and relative entropy. There will be a listing of areas where the latter two concepts are being employed today. The principal discussion will be in terms of the social sciences. The author will give brief examples drawn from prior research in economics. However, the emphasis will be on the research technique employed. The hope is that others will be motivated to try the technique in their endeavors. Entropy is a concept drawn from physics. In recent years, the notion has been applied to other areas, including most of the social sciences. Starting about 25 to 30 years ago, some economic research was done employing this concept. Since that time, there has been a smattering of articles in economics. Nevertheless, in the opinion of the current author, the concept is not well understood. This paper has two purposes: (1) to educate more people on the concept as applied in the social sciences, and (2) to question an interpretation of the concept as employed in another piece of research, done by a different author. In equilibrium, energy tends to flow spontaneously from being concentrated to becoming spread out. The word tends implies that the energy can remain concentrated for long periods of time. The second law says nothing about when or how much. The word spontaneously means that only the energy in the closed system is available; outside energy can impact the operation of the second law. In other words, equilibrium corresponds to a disordered distribution of the sets. This is not always true when the sets are influenced by extreme forces. The word equilibrium implies an end state. Some scientists believe that the second law of thermodynamics does not apply to living organisms, although hindering the law is necessary for us to be alive. The second law of thermodynamics is frequently referred to as “time’s arrow”. It points to how we think time goes. This implies that it is what we have seen and, more importantly for us in this paper, what we think is going to happen. Entropy in a closed system must remain constant or increase. The notion of entropy is being employed today in the fields of psychology, sociology, engineering, mathematics, statistics, economics, and information theory. The use of the word in this paper will be drawn from psychology, economics, and information theory. Some would suggest that thermodynamic entropy and information theory entropy are not the same concepts. However, they are related in that both measure randomness. Claude Shannon is generally regarded as the father of entropy theory and information theory. Shannon believed that entropy did not apply to the social sciences. Nevertheless, psychologists have attempted to use the notion of entropy to define “cognitive” concepts. Here the word cognitive is being read as “the thing perceived.” If we assign specific postulates or claims to be analyzed, then entropy relative to maximum entropy can be defined as the “degree of belief” in the proposition. This is called relative entropy. Maximum entropy is reached when the probabilities are spread as uniformly as possible over the events, subject to P(event 1) + P(event 2) + P(event 3) = 1. A model should be chosen that is consistent with the facts and is as uniform as possible in terms of assigning the probabilities. The second law is often read as leading to a state of maximum entropy. Prior Studies. Over the past 30 years, there have been numerous articles published in the American Economic Review that address the state of consensus among economists.
The initial work was done in 1976 by Kearl, extended in 1979 by Kearl et al., and subsequently extended in 1992 by Alston, Kearl, and Vaughn. Numerous jokes about the consensus among economists have been heard for years. At a more serious level, “a second, common perception of economists is that there is a widespread and serious disagreement about important issues and hence that economists can contribute little to an analysis, solution, or understanding of these issues” (Kearl et al. 1979). The real significance of this was expressed subsequently in the article, where the same authors state: “the perceptions of irrelevance and/or disagreement may, unfortunately, be used by policy makers to justify the abandonment of analysis and the adoption of simplistic and perhaps superficial answers to complex problems where potential insights might be obtained with economic analysis” (Kearl et al. 1979). A recent study entitled “Consensus Among Economists Revisited”, by Dan Fuller and Doris Geide-Stevenson, updates and adds to the earlier research. In this study, 24 of the original propositions were retained and others were refreshed. These authors found consensus among American Economic Association members to be greatest, in declining order, for international, microeconomic, and macroeconomic propositions. The author has conducted two studies with undergraduate students in which he sought to: 1. Determine the degree to which certain beliefs are widely shared by some economics students at one university; 2. Explore using the propositions and responses as a tool to improve instruction in the principles of economics course; and 3. Delve into using propositions and responses as a tool to improve the assessment of outcomes. Recently, there has been an impetus to change from input to output measurement in higher education. It is believed that the frequency of output measurement will continue to increase. The author of this article found the referenced material while searching for instruments to measure output. Attention to learning outcomes is not new to those concerned with AACSB accreditation. As far back as 1973, AACSB established the Accreditation Research Committee (ARC) to respond to these concerns. One of the objectives of this project was to: “Identify and classify the knowledge, skills, abilities, attitudes, personal characteristics, and values that should be possessed by graduates of business schools.” (Outcome Management, p. 1)
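The relative-entropy measure defined earlier (entropy divided by maximum entropy, interpreted as a degree of belief in a proposition) can be illustrated with a short sketch. The response categories and probabilities below are invented for the example and are not the author's survey data; the computation simply follows the definition given above.

```python
# Minimal sketch of Shannon entropy and entropy relative to maximum entropy,
# following the definition used above (not a KL divergence).
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)) over outcomes with p > 0, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def relative_entropy(probs):
    """Entropy divided by the maximum entropy, log2(n), of a uniform
    distribution over the same number of outcomes. Values near 1 mean
    responses are spread uniformly (little consensus); values near 0
    mean responses are concentrated on one answer (strong consensus)."""
    return shannon_entropy(probs) / math.log2(len(probs))

# Hypothetical distribution of responses to one proposition:
# agree, agree with provisos, disagree (probabilities sum to 1).
responses = [0.60, 0.25, 0.15]
print(round(shannon_entropy(responses), 3))   # entropy in bits
print(round(relative_entropy(responses), 3))  # degree-of-belief measure
```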

 

Emergence of Customer-Centric Branding: From Boardroom Leadership to Self-Broadcasting

Dr. Mohammed M. Nadeem, National University, San Jose, CA

 

ABSTRACT

With increased global competition, it has become essential for leaders in every industry sector, from commodities to consumer packaged goods, to understand the new and emerging theory and practice of customer service for the successful deployment of brands. The brand has become a strategic business concern for every senior corporate executive and board member. This research explores how a customer-centric approach makes a brand not only stronger but also puts it on a path to profitability. This research mainly examines how successful boardroom leadership connects the customer to the brand through its motivated associates and all of its stakeholders. The purpose of this study is to demonstrate how emerging self-broadcasting customers become devotees of a brand by experiencing it on a deeply emotional level over time, cementing their loyalty to the products and services of their choice. The final sections discuss the limitations of the exploratory study by providing conclusions and ideas for future research on branding effectiveness. Companies involved in brand creation or transformation should pay as much attention to their internal reality as they do to their customers. The goal should be maximum relevance and alignment with the employee audience. As consumers spend more time controlling, uploading, downloading, filming, recording, and sharing their own personal experiences with products, services, and brands, marketers are expected to figure out how to be relevant and credible. Brands are also expected to embrace the consumer’s desire to create, control, and share, and to empower consumers with simple creation tools that allow them to self-express over and over (Broddy, 2006). People buy products, but they choose brands. So the ultimate marketing goal for any company is to create a brand identity that separates it from everyone else. The strongest identity is that of a leader. To build a brand leadership identity there are four main components: brand awareness, brand perception, brand icons, and brand loyalty. Not surprisingly, brand leaders have the best brand awareness, the best quality perception, the best-known brand-boosting icons, and the strongest brand loyalty. The secret is to create marketing programs that build up all four dimensions simultaneously. The key to building and maintaining brand leadership is a visionary strategy, brilliant execution, and a totally integrated marketing plan. Powerful brands also understand how to build strong brand loyalty, using interactive media, direct response, promotions, web marketing, and many other devices that provide relationship-building experiences. Retaining a few more customers can produce much higher profits than expected, and it is much easier if a company is seen as a leader (BLM, 2007). Brands differentiate standardization and customization, reduce risk and complexity, and communicate the benefits and value a product or service can provide. This is just as true in business-to-business as it is in business-to-consumer (Pfoertsch, 2006). We are seeing a consolidation of brands by companies as they try to leverage their promotional and advertising dollars across fewer brands (Chinta, 2006). A brand must deliver a distinctive benefit. Brands will have to stand out, assert uniqueness, and establish identity as never before (Kotler, 2005). Companies are being forced to react to the growing individualization of demand. At the same time, cost management remains of paramount importance due to the competitive pressure in global markets.
Thus, making enterprises more customer-centric and efficient is a top management priority in most industries. Mass customization and personalization are key strategies to meet this challenge. Companies such as Procter & Gamble, Lego, Nike, Adidas, Lands End, BMW, and Levi Strauss, among others, have started large-scale mass customization programs (Tseng and Pillar, 2003). In addition to the usual marketing channels, the visual branding of the software consumers use to interact with retailers and service providers, as well as with their employers, is an increasingly important tool in the endeavor to promote the relationship between companies and individuals (Simon, 1998). Brand equity is a set of assets (and liabilities) linked to a brand’s name and symbol that add to (or subtract from) the value provided by a product or service to a firm and/or that firm’s customers (Aaker, 1996). Research conducted by Harvard Business School shows that the longer a customer is with a company, the greater the annual profit generated from that customer (Fig. 1). These increased profits come from a combination of increased purchases, cost savings, referrals, and a price premium (Cutler, 2005). As consumers upload clips on YouTube, they are becoming brand producers, creators, recommenders, collaborators, marketers, and more, depending on where they happen to participate in the market ecosystem at any given point in time. This is only the beginning of what the value potential of these actions and thought products might be. When consumers can become passive influencers simply by placing a digital widget on their personal website or their mobile profile, the exponential implications are enormous and the possibilities are endless for branding. Over four hundred years ago, Sir Francis Bacon stated, “Knowledge is power”. In today's rapidly shifting economy and branding landscape, where entire industries can evolve overnight, it is appropriate to paraphrase Bacon and say, “Knowledge of customer-centric branding is power”. To put it plainly, customer-centric branding drives shareholder value by emphasizing and elevating the necessary strategic boardroom discipline (Roll, 2007). Customer-centric branding leads companies to grow and transform by getting more out of their brands, marketing investments, and people. Research for this paper was conducted via published articles, surveys, case studies, and the Internet. The analysis provided in the paper used the collected information to show how strong brands are built by delivering consistent, memorable experiences to customers. The purpose of the paper is to highlight the proliferation of multi-sided communication possibilities of branding that create three-dimensional relationships. Instead of the one-way flow of mass marketing, it has become a many-to-many network paradigm of participation, leaving no channel untapped for reaching consumers. Mobile marketing, ring tones, electronic signs, Wi-Fi, and countless other innovations allow boardroom leadership and employees as brand ambassadors to better research, better reach, and better delight their markets to provide customers with an exceptional and satisfying experience. Successful corporations know that in today's competitive marketplace it is not enough to supply customers with a good product or service. Companies need to know their target audiences - what they need and value, what they want, and how they think - so that a company wins not only their business, but their hearts and minds as well.

 

Downsizing, Corporate Survivors, and Employability-Related Issues: A European Case Study

Dr. Franco Gandolfi, Regent University, Virginia Beach, VA

and Central Queensland University, Rockhampton, Australia

 

ABSTRACT

This research article examines the accounts of survivors of reorganization and downsizing processes of a large car manufacturer in Europe. It looks at how corporate downsizing survivors adjusted to meet the new reality and dynamics of the corporation and how individuals developed new skills and competencies for their new roles and responsibilities within the reorganized firm. The study also reflects upon issues relating to the motivation and attitudes towards employability and learning aspects of individuals. The research highlights the onus upon individuals to take responsibility for their own training and development needs and to initiate learning opportunities. The advancement of self-development skills was shown to be of particular importance in transforming a corporation successfully. The occurrences of major organizational change, including restructuring and downsizing, represent some of the most profound (Gandolfi, 2006) and problematic issues facing modern-day corporations, non-profit organizations, governmental agencies, and global workforces (Carbery & Garavan, 2005). Corporate restructuring, or simply ‘restructuring’, is a relatively broad concept. Black and Edwards (2000), for instance, define restructuring as a major change in the composition of a firm’s assets combined with a major change in its organizational direction and strategy. The change management literature distinguishes between various types of restructuring. Heugens and Schenk (2004) present three forms of corporate restructuring, namely portfolio, financial, and organizational restructuring. This research paper is concerned mainly with organizational restructuring which is defined as a dimension with significant changes in the structural properties of an organizational entity (Carbery & Garavan, 2005). Multitudes of reasons have been put forward to justify the adoption of restructuring (Carbery & Garavan, 2005). Bowman and Singh (1993) assert that the desire to increase an organization’s levels of efficiency and effectiveness is generally at the core of managerial thinking and action. Prechel (1994) contends that organizational restructuring is not a primary strategy per se, but occurs as a “by-product” (Carbery & Garavan, 2005: 489) of portfolio or financial restructuring. This is mainly due to the fact that changes in the strategic and financial capital structures of an organization are likely to call for corresponding changes in an organization’s authority hierarchies (Prechel, 1994) and decision-making processes (Carbery & Garavan, 2005). Organizational downsizing or ‘downsizing’, on the other hand, constitutes a particular category or form of corporate restructuring (Carbery & Garavan, 2005). Downsizing generally involves the reduction in personnel (Cameron, 1994) and frequently results in the redesign of work processes to improve organizational productivity, efficiency, and effectiveness (Kozlowski, Chao, Smith, & Hedlung, 1993). Since the early 1990s, downsizing has generated a great deal of interest among scholars and managers alike (Gandolfi, 2007). As a consequence, a considerable body of literature on the phenomenon of downsizing has emerged (Gandolfi, 2006). Carbery and Garavan (2005) view downsizing as “a deliberate strategy designed to reduce the overall size of the workforce” (p 489). 
Downsizing is distinguished from non-intentional forms of organizational size reductions, and a variety of downsizing techniques has appeared, including natural attrition, hiring freezes, early retirements, and, more frequently, layoffs (Gandolfi & Neck, 2005). Downsizing is used reactively in order to avoid bankruptcy and secure survival (Fisher & White, 2000) or proactively in order to increase productivity and enhance competitiveness (Gandolfi, 2007). Some research points out that downsizing is commonly adopted after large investments in labor-saving technologies have been made by the organization (Carbery & Garavan, 2005). De Vries and Balazs (1997) deem downsizing an inevitable outcome and manifestation of globalization where organizations are continually forced to make adjustments to strategies, products, services, and the cost of labor. At its core, downsizing has regenerative purposes (Carbery & Garavan, 2005), yet empirical evidence suggests that the overall consequences of downsizing are persistently negative (Gandolfi, 2006, 2007). A substantial amount of scientific and anecdotal research has been generated on survivor illnesses, or the so-called “survivor syndrome” (Gowing, Kraft, & Quick, 1998; Carbery & Garavan, 2005). Cross-sectional and longitudinal data suggest that downsizing survivors exhibit a plethora of symptoms and illnesses, including decreased levels of commitment, loyalty, motivation, trust, and security (Gandolfi & Neck, 2005). A considerably less researched area concerns the extent to which downsizing survivors adjust to the new realities and dynamics of the organization, develop new skills and competencies, and take on new roles and responsibilities within the organization (Gandolfi, 2006). Carbery and Garavan (2005) point out that there is an underlying expectation that downsizing survivors are “the cream of the crop” (p 489) and thus considered critical to the organization’s overall success (Gandolfi, 2005). Armstrong-Stassen (1998) contends that the overall outcome and success of a downsizing endeavor is largely contingent upon the reactions of the downsizing survivors. Scientific research has demonstrated that the “breaking of the implicit psychological contract” (Carbery & Garavan, 2005: 489) considerably challenges those who remain with the organizational system following a downsizing activity (Rousseau & Wade-Benzoni, 1995; Gandolfi, 2006). The majority of research on change management views major change as taking place incrementally (Carbery & Garavan, 2005) and based upon consensus, collaboration, and participation (Quinn, 1980). In one sense, this view implies that the change process is ‘owned’ by the individual employees (Carbery & Garavan, 2005). The incremental view of change has received much criticism due to a lack of contextual elements and the difficulty of explaining the pervasiveness of “coercive reorganizations” (Carbery & Garavan, 2005: 490) in the 1990s and in the early days of the new millennium. This has resulted in the rise of a so-called “transformatory perspective on organizational change” (Carbery & Garavan, 2005: 490). Carbery and Garavan (2005) assert that change strategies are traditionally classified into four types along two dimensions, that is, incrementalism versus transformation and collaboration versus coercion (Dunphy & Stace, 1990). The first dimension refers to whether the change is implemented in a small, linear, and continuous manner or in a large, erratic, and discontinuous fashion. 
The second dimension determines whether employees are empowered to participate in the planning and implementation stages of a major change. Hinings and Greenwood (1989) propose a typology where transformational change results in the emergence of an alternative interpretive knowledge framework where prevailing ideas lose legitimacy and a new structure emerges. This may entail a reformed mission statement, newly defined core values, and an altered distribution of power (Kleiner & Corrigan, 1989). Gersick (1991) combined the incremental and transformational perspectives into the punctuated equilibrium model of organizational transformation. This approach, which is growing in prominence and pervasiveness (Carbery & Garavan, 2005), recognizes that organizations evolve through long periods of stability (incremental view) that are punctuated by short bursts of revolutionary periods (transformational view), which subsequently establish the basis for new periods of equilibrium (Romanelli & Tushman, 1994; Carbery & Garavan, 2005). Carbery and Garavan (2005) assert that change and change processes pose unique challenges to individuals. Huy (1999), for instance, points out that the key challenges for individuals in times of major change are receptivity, motivation, and learning. Receptivity is concerned with the individual’s willingness to accept and embrace change. Motivation, on the other hand, refers to the capacity to implement the change which in turn depends upon the existence of various components, including resources, systems, support structures, and skills (Carbery & Garavan, 2005). It has also been shown that individuals learn from experiences in an organization which will further impact their willingness to embrace change. Thus, learning involves both emotional and skill components (Carbery & Garavan, 2005). Dodgson (1993) claims that individual learning is at the heart of organizational learning. In this sense, individuals are the primary learning entities in organizations. Carbery and Garavan (2005) add that it is individuals who create organizational forms that enable learning in ways which facilitate organizational transformation.

 

The Move Towards Convergence of Accounting Standards World Wide

Dr. Consolacion L. Fajardo, National University, CA

 

ABSTRACT

This paper will discuss the theoretical bases for the move towards convergence of international accounting standards. It will look into the efforts of the U.S. Financial Accounting Standards Board, the International Accounting Standards Board, and the European standards setters to achieve the convergence of accounting standards world wide.  The benefits and the problems accompanying implementation will be addressed. The expectation is that establishing common standards of accounting internationally will benefit users and preparers by improving the consistency, comparability, reliability, and transparency of financial information reported by companies around the world.  As a consequence, it is expected to increase cross-border investments, deepen international capital markets, and reduce costs for multinational companies that must currently report under multiple national accounting standards. Many corporations are multinationals with business operations in different countries around the globe.  However, the problem is that accounting standards differ from country to country due to differences in the legal system, levels of inflation, culture, degrees of sophistication and use of capital markets, and political and economic ties with other countries (Spiceland et al., 2007).  These differences cause huge problems for multinational companies.  Companies doing business in other countries experience difficulties in complying with multiple sets of accounting standards and in converting financial statements so that they are reconciled to the GAAP of the countries they are dealing with.  As a result, different national standards impair the ability of companies to raise capital in the international markets.  The financial crisis in Asia and the accounting scandals in the U.S. and other countries during recent years have accentuated the fact that reliable financial reporting is vital to the effective and efficient functioning of capital markets and the productive allocation of scarce economic resources.  The failures of Enron, WorldCom, and Parmalat demonstrate the high costs of “window dressed” financial statements, not only to particular companies but also to the global economy as a whole. Markets penalize uncertainty--continued investor concern about the quality of financial reporting and corporate management will be an impediment to economic growth, job creation, and personal wealth (Tweedie and Seidenstein, 2005).   The Sarbanes-Oxley Act of 2002 was the immediate U.S. response to curtail unethical accounting and business practices, imposing monetary penalties and/or jail terms on violators.  But that is not enough--it is expected that rigorous, improved, and uniform accounting and reporting standards would lessen the risk of corporate scandals, reduce losses and costs to investors/creditors, and restore public confidence world wide. Accounting standards differ from country to country, which causes problems for multinational companies.  A company doing business in more than one country has to prepare financial statements based on those countries’ accounting standards to make the financial information consistent in terms of standards and thus comparable for economic decision-making.  It is costly and time consuming to prepare financial statements that are reconciled to the GAAP of the various countries that companies are dealing with around the globe.  
Consequently, different national standards may become an impediment for companies desiring to obtain capital or make investments in international markets.  Currently, subsidiaries of multinational companies must comply with different national standards; the parent company must consolidate the different national financial reports into single statements in accordance with its own home country’s accounting rules.  This process, called reconciliation, is very costly, time-consuming, and a waste of scarce resources. This paper will include a review of literature in an attempt to find answers to three questions: (1) What are the theoretical bases for the move to converge accounting standards globally? (2) What is the process of convergence by the FASB, IASB, and European standard setters? (3) What are the benefits and problems in implementing the convergence of accounting standards world wide?  FASB (1976) defines a conceptual framework as a constitution, a coherent system of interrelated objectives and fundamentals that can lead to consistent standards and that prescribes the nature, function, and limits of financial accounting and reporting.  The fundamentals are underlying concepts of accounting that guide the selection of events to be accounted for, the measurement of those events, and the means of summarizing and communicating them to interested parties.  The Financial Accounting Standards Board, established in 1973, has issued seven Statements of Financial Accounting Concepts (SFACs) in an effort to provide a set of cohesive objectives and fundamental concepts on which financial accounting and reporting can be based.  SFAC 1 deals with the objectives of financial reporting; SFAC 2 contains the qualitative characteristics of financial reporting, divided into primary (relevance and reliability) and secondary (comparability and consistency) characteristics, and also includes exceptions to the general principles called constraints (cost effectiveness, materiality, and conservatism).  SFAC 6 discusses the elements of financial statements, and SFAC 5 and 7 include recognition and measurement concepts: assumptions (economic entity, going concern, periodicity, monetary unit) and general principles (historical cost, realization, matching, and full disclosure).  SFAC 4 deals with objectives of financial reporting for nonprofit organizations, and SFAC 3 was superseded by SFAC 6.  To date, SFAC 1, 2, 5, 6, and 7 are the conceptual frameworks that provide structure and direction to financial accounting and reporting, but they are not considered Generally Accepted Accounting Principles (GAAP) (Spiceland et al., 2007).  Relevance means that the information provided will make a difference in the decision-making process by having predictive value and/or feedback value, and is provided in a timely manner.  For instance, assume that net income in 2004 is used to predict net income in 2005. When the actual net income reported in 2005 is close to the predicted amount, the net income information predicted for 2005 using the 2004 amount has feedback value in 2005, as well as predictive value for 2006 and future years.  If the net income confirms investor expectations about future cash-generating ability, the net income has feedback value for investors, which can then be used in predicting future cash-generating ability as expectations are revised. Information is timely when it is available to users early enough to serve as one of the bases of the decision-making process. 
Reliability means that the information is verifiable (there is a consensus among different users based on objective and documented evidence), has representational faithfulness (there is agreement between a measure or description and the phenomenon it purports to represent, which in simple terms means recording and reporting the truth about the company’s operations as they are, without “window dressing”), and is neutral (the information provided is the same regardless of who the user of the information is; there is no bias in reporting with respect to the parties potentially affected). Two secondary qualitative characteristics that are important to decision usefulness are consistency and comparability. Consistency is the use of the same accounting practices over time, which permits valid comparisons among different accounting periods. Comparability is the ability to help users see similarities and differences between events and conditions, and to compare information across companies in order to make resource allocation decisions.

 

Human Resource Management and Strategy in the Lebanese Banking Sector: Is There a Fit?

Dr. Fida Afiouni, American University of Beirut, Beirut

 

ABSTRACT

This article investigates the nature of Human Resource Management (HRM) practices applied in the Lebanese banking sector, examines the strategic nature of the HRM function, and sheds light on current problems that hold the human resource department back from properly implementing its practices. The case study method was applied to 10 banks in Lebanon of different sizes and nationalities, with the resource-based view (RBV) as the main theoretical framework. The dominant findings indicate that, of the 10 banks studied, seven have HRM practices in place that are not aligned with the bank’s strategy. In those banks, the absence of top management support, the lack of cooperation from line management, and the low credibility of the HR function hinder the proper implementation of HR practices and keep the HR department from playing a strategic role. The role of the HR department in many organizations is at a crossroads. On one hand, the HRM function is in crisis, increasingly under fire to justify itself (Schuler, 1990). On the other hand, organizations have an unprecedented opportunity to refocus their HRM systems as strategic assets.  Many scholars (Huselid, 1995; Huselid & Becker, 1996; Huselid, Jackson, & Schuler, 1997) agree that a strategic approach to human resource management requires the development of consistent human resource management practices that ensure that the firm’s human resources will help achieve the firm’s objectives. This strategic approach to human resource management requires top managers’ awareness that a firm’s performance can be affected by human resource management practices. Some empirical studies support this statement (Arthur, 1994; Huselid, 1995; Huselid & Becker, 1996).  While these studies have been useful for demonstrating the impact of strategic human resource management on a firm’s performance, they have revealed very little regarding the proper implementation of those practices. The aim of this article is to identify HRM practices applied in the Lebanese banking sector, examine the strategic nature of the HR department, and investigate the factors that impede the proper implementation of those practices. The literature on HRM and organizational strategy is critically examined with the resource-based view as the main theoretical framework. The research methods are then described. Finally, major conclusions and avenues for further research are proposed. Traditionally, the human resource management function played a role in strategy implementation, but rarely in strategy formulation. Often viewed as an expense generator or an administrative function, the HRM function is now imposing itself as a value-added partner. Over the years, many scholars and practitioners (Hall, 1993; Huselid et al., 1997; Ulrich, 1997; Barney & Wright, 1998; Wofford, 2002; Hatch & Dyer, 2004; Ordonez de Pablos, 2004) placed the emphasis on making HR managers strategic business partners and making people a value-added source within organizations. The role of the HR function, however, has evolved over the years at paces that differ from one organization to another and from one country to another. This creates heterogeneity in human resource management systems and practices across organizations and across countries. While in some organizations the human resource management function is well developed and plays a strategic role, in others the personnel department is still prevalent, with its focus on administrative and legislative issues. 
Thus, we observe a large diversity in the conception of the human resource management function, in its practices, roles, and objectives. Within the strategic human resource management literature, some scholars adopt a universal approach and recommend the implementation of “best practices” for strategic human resource management (Arthur, 1994; Pfeffer, 1994; Huselid, 1995; Becker & Huselid, 2000). This paradigm uncovers a generic set of high-performance work practices. According to Becker and Huselid (2000), seven programs can improve a firm’s performance: employability, selective recruitment, teamwork and decentralization, high remuneration, intensive training, eliminating inequalities and boosting team spirit, and extensive information sharing. Other scholars adopt a contingency perspective and seek a “best fit” between the company’s strategy and HR practices (Wright, 1998; Gratton & Truss, 2003). These authors state that there are no good or bad HR practices, only practices that “fit”.  Becker and Gerhart (1996) made a valuable contribution to this field by claiming that the two approaches are complementary, elaborating the “bundles and firm-specific configuration” approach. This approach seeks a horizontal fit (among the HR practices) and a vertical fit (between the HR function and the firm’s strategy). Another study, conducted by Huselid et al. (1997), distinguishes between human resource management’s technical and strategic efficiency. Technical efficiency results from well-elaborated human resource practices (recruitment, selection, performance appraisal, training and development, compensation, and benefits) and occurs when human resource professionals have the required expertise and specialization. Strategic efficiency occurs when human resource practices ensure that the firm’s human resources help achieve organizational strategies and objectives.  In order to provide a framework for this study, we will use the resource-based view of the firm, an approach that has assumed much greater significance in analyses of HRM in recent years (Lado & Wilson, 1994; Wright, McMahan, & McWilliams, 1994; Boxall, 1996, 2003). Our research draws on case study evidence from 10 banks in Beirut, Lebanon. Before analyzing the data, we outline the main issues from the literature on the resource-based view in HRM, and provide background information on the Lebanese banking sector as well as on the sample of banks studied. Initiated by the work of Penrose (1959), the resource-based view was articulated into a coherent statement of theory by Wernerfelt (1984) and largely popularized by the seminal article of Barney (1991). It states that organizational resources and capabilities that are rare, valuable, non-substitutable, and imperfectly imitable form the basis for a firm's sustained competitive advantage. This resource-based view of organizational strategy and competitive advantage has engendered a great deal of theoretical and empirical effort (Hansen & Wernerfelt, 1989; Teece, Pisano, & Shuen, 1990; Barney, 1991, 2001; Conner, 1991; Rumelt, 1991; Mahoney & Pandian, 1992; Amit & Schoemaker, 1993; Wright et al., 1994; Boxall, 1996, 2003). The resource-based view of the firm provides a conceptual basis for asserting that key human resources are sources of competitive advantage. 
Taxonomies of resources always include human capital (Barney, 1991) or employee know-how (Hall, 1993), and resource-based theorists stress the value of the complex inter-relationships between the firm's human resources and its other resources: physical, financial, legal, informational, and so on (Penrose, 1959; Grant, 1991; Hunt, 1995).  Although there have been a number of publications examining the RBV, there have been few field studies using it as a framework. Boxall (2003) believes this might be due to methodological problems and the difficulty of analyzing concepts that are hard to observe. Early attempts to use the RBV in the area of HRM used questionnaires (Koch & McGrath, 1996). The authors asked executives in charge of business units a series of questions about their HR practices. In particular, Boxall and Steeneveld (1999) were the first to use an RBV-informed, longitudinal case-based study when examining a strategic group of firms in the engineering consultancy sector in New Zealand. In summary, we believe that the RBV can offer a useful framework for analyzing and theorizing how human resources are managed within individual firms.

 

Social Structure Characteristics and Psychological Empowerment: Exploring the Effect of Openness Personality

Dr. Sarminah Samad, Universiti Teknologi MARA Shah Alam Malaysia

 

ABSTRACT

The purpose of this paper is to determine the influence of social structure characteristics on employees’ psychological empowerment and whether openness personality plays a role in moderating this relationship among Customer Marketing Executives of a telecommunication firm in Malaysia. Hierarchical regression analyses of 482 responses revealed that all aspects of social structure (self-esteem, power distribution, information sharing, knowledge, rewards, transformational leadership, and organizational culture) are important in determining employees’ psychological empowerment. Further, the openness personality variable was found to moderate the relationship between social structure characteristics and employees’ psychological empowerment. Theoretical and managerial implications of the findings and future research are discussed. Increased global competition, the advent of technological innovation, globalization, and changes in both workforce and customer demographics have pressured organizations to be more efficient and productive. Consequently, to maintain a competitive edge in the service industries, considerable emphasis has been placed on providing quality services for customers. The Malaysian telecommunication industry is no exception. Customer Marketing Executives in this sector are considered front-line employees who provide face-to-face service for the organization. Front-line employees, according to Daniel et al. (1996), have direct, influential customer contact that can shape customers’ perceptions of service quality. Conduit and Mavondo (2001) suggested that the motivation of front-line employees in a service company is crucial to the service delivery process. The literature documents that positive employee attitudes and behaviors are related to customers’ experience of service (Chebat & Kollias, 2000). Therefore, improving the motivation of employees has become an important area of concern among managers in service organizations. Additionally, the dynamic business environment has been forcing most organizations to change their traditional approach to management, because the traditional management techniques used in business organizations have become obsolete. Further, rapid technological change has created a competitive landscape in the new millennium in which demanding customers with individual needs emerge from the changing environment. Therefore, adopting new management approaches to boost organizational performance and service quality, as well as maintaining high levels of motivation, is a priority for managers. One of the newer techniques used by organizations, and one that has attracted great interest from scholars and practitioners, is psychological empowerment. Empowerment in the workplace is important as it is related to personal outcome variables such as perceived burnout, autonomy, feelings of job satisfaction, and commitment to the organization (Hatcher & Laschinger, 1996). Further, according to Conger and Kanungo (1988), the practice of empowering subordinates is a principal component of managerial and organizational effectiveness. Characterizing this concept as a dynamic and complex phenomenon, Staw and Epstein (2000) stressed that this technique has a significant effect on firm performance and reputation. 
The empowerment topic has received a great deal of interest in the past decade, and numerous studies have been directed at determining its causal antecedents (for example, Spreitzer, 1996, and Thomas & Velthouse, 1990). This topic has also received substantial attention in past research due to its significant impact on work attitudes such as effectiveness, strain, and satisfaction (Hatcher & Laschinger, 1996). Employee empowerment has been defined in several different ways due to diverse definitions in the scholarly literature (Heller, 2003). Scholars have distinguished two main views on empowerment, namely the structural and psychological perspectives. Structural empowerment focuses on empowering management practices such as delegation of decision making from upper to lower levels of the organization (Heller et al., 1998) and increasing access to information and resources among individuals at the lower levels (Rothstein, 1995). Accordingly, the main idea of structural empowerment is that it entails the delegation of decision-making prerogatives to employees, along with the discretion to act on one’s own (Mills & Ungson, 2003). From the psychological approach, Conger and Kanungo (1988) define empowerment as the motivational concept of self-efficacy. Thomas and Velthouse (1990) describe empowerment as intrinsic task motivation that cannot be captured by a single concept. They define psychological empowerment as a set of four cognitions reflecting an individual’s orientation to his or her role in terms of meaning, competence (similar to Conger and Kanungo’s self-efficacy), self-determination or choice, and impact or influence. Meaning refers to employees experiencing their job as having value or importance (May et al., 2004). In other words, employees feel that their work is important and care deeply about what they do when the value of a work goal, mission, or purpose, and of the activities they are engaged in, is congruent with their own value systems, ideals, and standards (Quinn & Spreitzer, 1997). In short, if employees’ hearts are not in their work, they will not feel empowered. Competence is an employee’s belief in his or her capability to perform based on skill (Thomas & Velthouse, 1990) and is analogous to agency beliefs, personal mastery, or effort-performance expectancy (Bandura, 1989). It refers to the knowledge that the individual has the skill required to successfully perform the task in a specific area or for specific purposes (Thomas & Tymon, 1994). According to Conger and Kanungo (1988), without a sense of confidence in their abilities, employees will likely feel inadequate and less empowered.  Self-determination refers to employees’ perception of autonomy in the initiation and continuation of work behaviors and processes (Deci et al., 1989). According to Deci et al. (1989), choice is consistent with the concept of self-determination, which means experiencing a sense of choice in initiating and regulating one’s own actions. If employees believe they are simply following orders from people higher in the hierarchy, they will not feel empowered (Wagner, 1995). Self-determination involves causal responsibility for a person’s actions. The impact or influence dimension reflects the degree to which an employee can influence strategic, administrative, or operating outcomes at work (Ashforth, 1989). 
Impact refers to the extent to which behavior is perceived as making a difference in terms of accomplishing the purpose of tasks or producing intended effects in one’s task environment (Thomas & Velthouse, 1990). In other words, employees are more likely to feel empowered when they sense progression toward a goal or believe that their actions are influencing the system. Social structure characteristics refer to environmental events that impact the task assessments individuals make, influencing the level of perceived empowerment and thus influencing behavior (Thomas & Velthouse, 1990). The environmental events provide information to individuals about the effects of their behavior and about conditions relevant to future behavior. The social structure can be classified into formal and informal characteristics of the work environment, and objective characteristics of the environment are posited to influence perceptions of empowerment (Sigler, 1997). Environmental characteristics such as power sharing, power distribution, information sharing, knowledge, rewards, self-esteem, transformational leadership, and organizational culture can be a powerful influence on cognitions of empowerment (Spreitzer, 1995).
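To make the moderated hierarchical regression design described above concrete, the following is a minimal sketch in Python with statsmodels. It is hedged: the file name, column names, and composite scores are illustrative assumptions rather than the study's actual data or variables, and the sketch implements the usual two-step version with a mean-centered interaction term.

```python
# Hypothetical sketch of a two-step (hierarchical) moderated regression;
# the data file and column names are illustrative, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("empowerment_survey.csv")  # assumed: 482 rows with composite scores

# Mean-center the predictor and the moderator before forming the interaction
df["social_structure_c"] = df["social_structure"] - df["social_structure"].mean()
df["openness_c"] = df["openness"] - df["openness"].mean()

# Step 1: main effects of social structure characteristics and openness
step1 = smf.ols("empowerment ~ social_structure_c + openness_c", data=df).fit()

# Step 2: add the interaction; a significant interaction coefficient and an
# increase in R-squared over Step 1 indicate moderation by openness
step2 = smf.ols("empowerment ~ social_structure_c * openness_c", data=df).fit()

print(round(step1.rsquared, 3), round(step2.rsquared, 3))
print(step2.summary().tables[1])  # inspect the social_structure_c:openness_c row
```

In practice each social structure characteristic (self-esteem, power distribution, information sharing, and so on) could be entered as a separate centered predictor with its own interaction term, following the same pattern.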

 

The Impact of the Asian Tsunami Attacks on Tourism-Related Industry Stock Returns

Dr. Chih-Jen Huang, Providence University

Dr. Shu-Hsun Ho, Providence University

Chieh-Yuan Wu, Providence University

 

ABSTRACT

The Indian Ocean experienced a devastating tsunami in Asia on the morning of 26 December 2004. This study utilizes a market-adjusted returns model of event study to analyze abnormal stock returns in Thailand’s tourism industry. The study differs from previous studies of market reactions to unanticipated events in that it offers a cross-country analysis. We investigate the reactions of tourism-related industry stocks in the following markets after the Asian tsunami event: Taiwan, Hong Kong, New Zealand, and Australia, from June 2004 to March 2005. In addition, this research compares differences in abnormal returns across the tourism and leisure, transportation and logistics, insurance, construction materials, and construction development industries in Thailand from June 2004 to March 2005. We examine the stock market reaction for 135 days prior to, and for four and 15 days following, the Asian tsunami event.  The results show partially significant negative abnormal stock returns for the tourism and leisure industry in Thailand. On the other hand, there are also partially significant positive abnormal stock returns in the construction development and construction materials industries after the tsunami occurred. No significant influence of the tsunami is found for Taiwan, Hong Kong, New Zealand, or Australia. The Indian Ocean experienced a devastating tsunami on the morning of 26 December 2004 that had a significant impact across Asia; the underlying earthquake was the fourth largest recorded since the beginning of the 20th century. The massive earthquake, which registered 9.0 on the Richter scale, caused serious destruction across fourteen countries. The most serious damage occurred in Indonesia, Sri Lanka, India, and Thailand. In this study, we focus on Thailand, one of the injured countries, in order to analyze whether the tsunami event benefited other countries. Recent reports indicate that, since the Asian tsunami, high numbers of tourists have visited countries such as Taiwan, Hong Kong, New Zealand, and Australia. The study mainly explores stock prices in these countries. Because of difficulties with collecting stock prices from South Asian countries, and because tourism was the part of the Thai economy hit hardest, we choose to investigate Thailand, where the market reaction was largest, together with the countries that may have benefited: Taiwan, Hong Kong, New Zealand, and Australia. Previous event studies have mostly examined man-made crises, so exploring a natural crisis is unusual. This research assesses the psychological impact of the Asian tsunami damage in terms of the stock market reaction. The purpose of this research is to examine whether the Asian tsunami influenced enterprise stock prices and to emphasize the tsunami’s influence across different industries in Thailand and in other countries when a disaster occurs. According to a recent report by Fidelity, an investment institution, the disasters in South Asia wounded many industries, including tourism and leisure, transportation and logistics, and insurance, while the construction materials and construction development industries may have benefited from the disaster. Therefore, we suppose that these industries’ stock market prices may be affected by the disaster. Further, we offer analysis of the Asian tsunami incident in terms of how it affected companies or industries at a common date. 
In this investigation, we study the effects of the Asian tsunami of December 26, 2004 on stock prices, using the econometric method of event study analysis. According to extant research, event studies are generally divided into two kinds: those examining a class of events and those examining a single event. This study’s research object is the Asian tsunami as a single event, so it cannot adopt the traditional event study methodology. Therefore, we use a non-traditional event study method that relates to the portfolio estimation model suggested by Grace, Rose and Karafiath (1995) and Shen and Lee (2000). Standardized methods by Patell (1976) and Boehmer, Musumeci, and Poulsen (1991) are shown to outperform traditional, non-standardized tests in event studies. However, standardized tests are valid only if observed returns are cross-sectionally uncorrelated. In this paper, we propose simple corrections to these test statistics to account for such correlation.  Accordingly, we examine cumulative abnormal returns on individual companies in the tourism industry in Thailand, Taiwan, Hong Kong, New Zealand, and Australia. In addition, we utilize cross-sectional analysis to examine the effects of important financial factors on abnormal returns.  In this study, our primary challenge is to select public companies whose names indicate involvement in tourism, because there is no SIC code for tourism in Hong Kong. The sample is retrieved from an historical stock price database for the 2004-2005 period, with the criterion that the firm has sufficient data for each variable in this study’s model.  The identification of time parameters defines the event day, the estimation period, and the event period. The detailed content is as follows: I. Define Event Day: After deciding on the event to examine, we must confirm when it took place. The tsunami in South Asia occurred on Sunday, December 26, 2004, when the stock markets were closed. The event day is defined as the date on or about which the effect of the event is presumed to occur and around which a diffused effect is presumed to be distributed. The event day for this study is December 28, 2004 (t=0).  II. Estimation Period and Event Period: Once the event day is assigned, two distinct time periods relative to the event day (Day 0) are defined as follows. Security returns are examined from 130 days before to 15 days after the incident. Accordingly, the estimation period is defined as Day -130 through Day -4, and the event period is defined as Day -4 through Day +15, a total of 20 days. To be included in the sample, a security must have available data on the stock exchange of the individual country during each of these periods. III. The Expectancy Model of Stock Returns: Karafiath (1998) indicates that, in the event parameter approach, abnormal returns are evaluated by appending a set of (0,1) dummy variables to the right-hand side of the single-index market model. Designating the event day as day zero, the model is estimated over a 150-day interval from Day -150 to Day 0. For each day in the event period (Day 0 to Day +15), there is a (0,1) dummy variable that is equal to one on that day and zero otherwise; thus prediction errors are estimated directly as regression parameters. The research adds dummy variables during the event period to evaluate the event parameters.  
The market model specification follows Grace, Rose and Karafiath (1995). After adjusting the returns of individual firms for market-wide influences, we calculate the abnormal returns of all sample firms in order to isolate the effect of the event from movements in stock prices that are unrelated to it for each specific company. We then test whether the abnormal returns and cumulative abnormal returns differ from zero. The abnormal return on day t is given by the coefficient on the corresponding event-period dummy variable, and the cumulative abnormal return over any event window is obtained by summing the coefficients for the days in that window.
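As a concrete illustration of the dummy-variable market model described above, the following Python snippet estimates a single-index model with one (0,1) dummy per event-period day and sums the dummy coefficients to obtain the cumulative abnormal return. It is a sketch under stated assumptions: the file, column names, and windows are hypothetical, and it uses a plain OLS regression rather than the standardized test statistics discussed in the paper.

```python
# Illustrative event-study sketch: market model plus event-day dummies.
# File name, column names, and the exact windows are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

returns = pd.read_csv("firm_and_market_returns.csv", index_col="date",
                      parse_dates=True)
firm, market = returns["firm_ret"], returns["market_ret"]

event_day = pd.Timestamp("2004-12-28")  # Day 0 as defined in the text;
# assumes this date is a trading day present in the return series
rel = np.arange(len(returns)) - returns.index.get_loc(event_day)

# One (0,1) dummy per day in the event window (here Day -4 .. Day +15)
event_window = range(-4, 16)
dummies = pd.DataFrame({f"D{k}": (rel == k).astype(float) for k in event_window},
                       index=returns.index)

X = sm.add_constant(pd.concat([market.rename("mkt"), dummies], axis=1))
fit = sm.OLS(firm, X).fit()

# Each dummy coefficient is that day's abnormal return; the CAR over any
# window is the sum of the corresponding coefficients.
ar = fit.params.filter(like="D")
car = ar.sum()
print(ar, car)
```

In practice this regression would be estimated separately for each sample firm, with the resulting abnormal returns aggregated across firms before testing them against zero.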

 

The Relationship between Leadership Behavior and Organizational Performance in Non-Profit Organizations, Using Social Welfare Charity Foundations as an Example

Dr. Ruey-Gwo Chung, National Changhua University of Education, Taiwan

Chieh-Ling Lo, National Changhua University of Education, Taiwan

 

ABSTRACT

Although the main mission of an NPO is “not for profit,” it must still pay attention to effective management practices.  The current study took social welfare charity foundations as subjects and used a questionnaire to explore the effects of top managers’ leadership behavior on organizational performance.  We found that, across the 77 valid samples, leadership behaviors in 10 social welfare charity foundations were “high transactional-low transformational,” 23 were “low transactional-low transformational,” 35 were “high transactional-high transformational,” and 9 were “low transactional-high transformational.”  In addition, different leadership behaviors showed clear differences in internal communication and management and in finance structure.  From the perspective of full-time employees, top managers’ leadership behavior tends towards “low transactional-low transformational,” while the volunteers regard it as “high transactional-high transformational.” The recent trend in Taiwan towards a well-developed society and a high standard of living has fostered the gradual emergence of the Non-Profit Organization (NPO).  Taiwan’s democratization and the proclamation of related laws and regulations have also enhanced the advance of NPOs.  Even so, compared with for-profit businesses, which emphasize the necessity of innovation, efficiency, and institutionalization for their survival, the lack of organized management could lead to trouble in many NPOs.  To solve these problems, efficient human resource management is a priority, since the ability of the NPO to provide services is related to the quality of its personnel.  NPOs employ both full-time employees and volunteers, who require different management approaches.  Based on the literature review, it was found that previous studies mainly focused on the volunteer side, with few discussing the actual management of an NPO.  In addition, appropriate leadership behavior is important for maintaining members’ devotion to the organization.  Therefore, this study uses social welfare charity foundations (SWCF) as an example and focuses on the following purposes: (1) understanding which leadership behavior is adopted by the top managers in NPOs; (2) exploring whether or not the leadership behavior of top managers that is suitable for full-time employees is also suitable for NPO volunteers; (3) investigating the effects of different top managers’ leadership behaviors on organizational performance in NPOs. The Non-Profit Organization: The Non-Profit Organization (NPO) is labeled “the third sector,” which is distinct from the business and government sectors.  Based on the main economic activities of NPOs, Salamon and Anheier (1997) classified NPOs into 12 different groups: culture and entertainment, education and research, health care, social service, environment, residence and development, law and politics, charity, international activities, religion, business, and others (Yeh, 2000).  Social welfare charity foundations (SWCF) have the following characteristics: (1) the SWCF is not based on profit, so its organizational performance neglects profit; (2) the funding sources are public donations, government subsidies, and revenues from the services it performs.  
As a result, all of those entities – the SWCF itself, donors, government agencies, and the public – are concerned about the performance of SWCFs; (3) the SWCF is less formalized and centralized, since the contributions of both its professional employees and volunteers play an important role in it; (4) the main task of a SWCF is to provide services for the disadvantaged, where the measurement of service quality depends on the personal perceptions of those being served.  Thus, the use of subjective indices to evaluate organizational performance is problematic (Lin, 2000). Leadership Behavior: One definition of leadership is the process whereby one person tries to influence others to attain the expected objectives in a group of more than two persons.  Generally speaking, leadership behavior can be categorized as either transformational or transactional.  Bass (1985) defined transformational leadership as the behavior of inspiring members to create performance above expectations, i.e., enhancing members’ confidence and upgrading the value of working results to inspire members’ extra efforts.  Transformational leadership comprises the following dimensions: (1) Idealized influence: this dimension focuses on the leader’s personal characteristics, through which he/she provides a mission and vision and enhances subordinates’ self-respect in order to win their respect and trust; (2) Individualized consideration: this dimension focuses on the leader’s concern for every employee’s development and differences.  This type of leader not only satisfies employees’ current needs, but also assists them in fulfilling their potential; (3) Intellectual stimulation: this dimension focuses on the leader encouraging subordinates to use their experience and knowledge to solve problems.  Employees are also encouraged to perceive reality from different viewpoints in order to modify their beliefs and values; (4) Inspirational motivation: this dimension focuses on the leader influencing others through the processes of motivating subordinates to pursue success, sharing mutual objectives with them, and gaining consensus on the important issues in the organization.  Moreover, Bass (1985) defined transactional leadership as a continual negotiation process between the leader and the subordinates, which includes two other dimensions: (1) Contingent reward focuses on setting goals and providing rewards at the right time and in a contingent way.  According to Bass and Avolio (1997), when subordinates give an excellent performance, the leader should publicly affirm or praise them; (2) Management-by-exception includes two aspects: a. Active management-by-exception, which Bass and Avolio (1990) regarded as setting standards to monitor subordinates’ performance, so that if something happens, the leader takes some action.  b. Passive management-by-exception is when the leader delegates to his/her employees and only takes action when something of concern happens. Two differences between NPOs and other organizations are the NPO’s sensitivity and ability to deal with events in today’s society (Drucker, 1990).  Successful NPO leaders should be devoted to reforming society’s inefficiencies and to winning their members’ identification with and dedication to the organization.  According to the NPO research conducted by Chiang (1995), 21.7% of the leadership behaviors reflect transactional leadership, 45.2% reflect service leadership, and 21.7% reflect transformational leadership.  
Additionally, charismatic leadership was emphasized by only 10.9% of the leaders in NPOs.  Moreover, Langley and Kahnweiler (2003) took 102 African-American pastors as subjects to study the relationship between their leadership behavior and the African-American churches’ involvement in social-political issues in the community.  They found that those churches that had adopted transformational leadership behavior participated in social-political activities more frequently. Performance, which can be defined as the results of the operations performed by the members of an organization, is important for NPOs because their resources must be used efficiently.  To measure organizational performance in NPOs, Lin (2000) took 250 SWCFs as subjects and used factor analysis to extract six factors for performance measurement: business operation and management, organization structure and regulations, outcome of service, various funding sources, service quality, and finance structure.
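To illustrate the classification and comparison described above, here is a hypothetical Python sketch that splits transactional and transformational scores at their medians to form the four leadership-behavior groups and then tests whether one performance dimension differs across groups. The file name, column names, and the use of a one-way ANOVA are assumptions for illustration; the paper does not specify its exact test.

```python
# Hypothetical sketch: quadrant classification of leadership behavior and a
# group comparison on one performance dimension. Column names are assumed.
import pandas as pd
from scipy.stats import f_oneway

df = pd.read_csv("swcf_survey.csv")  # assumed: one row per valid questionnaire

# Median splits on the two leadership scores define the four groups
df["ta"] = (df["transactional"] >= df["transactional"].median()).map(
    {True: "high transactional", False: "low transactional"})
df["tf"] = (df["transformational"] >= df["transformational"].median()).map(
    {True: "high transformational", False: "low transformational"})
df["group"] = df["ta"] + "-" + df["tf"]

print(df["group"].value_counts())  # counts per quadrant, as in the abstract

# Does finance structure (one of Lin's six performance factors) differ by group?
samples = [g["finance_structure"].values for _, g in df.groupby("group")]
f_stat, p_value = f_oneway(*samples)
print(round(f_stat, 3), round(p_value, 3))
```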

 

A Critical Process for Methods Selection in Organizational Problem Solving

Chia-Hui Ho, Far East University, Taiwan

 

ABSTRACT

This paper aims to explore a critical process for evaluating management methods. It also discusses, from a critical systems perspective, how world views (which necessarily have ideological aspects to them) influence method-users to choose particular methods for organizations. Thus, a new process called Participative Method Evaluation (PME) is established. PME is founded on the idea that a person's understanding of a method is influenced by his/her social ideology. The basic concern of method evaluation needs to be how method-users and organizational/environmental stakeholders can examine their ideological differences through processes of critique in order to make more informed choices.  PME embraces three stages: Surfacing, Triangulation, and Recommendation. Surfacing aims to expose and explore the various assumptions about, and views on, the candidate method and the organizational situation. Triangulation compares and contrasts the various perspectives, and if possible an accommodation of views is sought. Recommendation provides practical suggestions to stakeholders as to the likely effects of using the method being evaluated, and where appropriate highlights possible modifications and/or alternatives.  Human beings follow a pattern of behavior based on their knowledge. It is claimed that knowledge is necessarily derived from individual experience combined with social and cultural influences (e.g. Gregory, 1992), and this knowledge can be seen as a basis for the individual's value judgment. From Burrell and Morgan's (1979) point of view, individuals always hold a particular world view (a so-called 'paradigm'), according to which they perceive reality. This world view is derived from their learning experience and personal beliefs. Although an individual's world view might shift, he/she cannot hold two different world views at the same time. Thus, at a particular point in time, an individual can only interpret anything according to his/her current state of awareness. The question therefore arises, how can we escape from our own value assumptions (ideological traps) and socio-cultural judgments? Moreover, what can we do to deal with different social judgments and individuals' personal assumptions, in order to handle social conflict? Commonly, the people affected by decisions to use particular methods are not involved in the intervention process. Those who are affected are often unable to tell the method-users which method they think will be suitable. This means that we should not predetermine what method will be applied without first understanding the current situation, especially who is included in and excluded from the method choice procedure. Many critical systems thinkers (e.g. Ulrich, 1983; Midgley, 1992, 1997a) have already acknowledged this problem, as have the authors of Total Systems Intervention (Flood and Jackson, 1991; Flood, 1995). This paper is concerned with the underlying assumptions made by method-users, candidate methods (expressed in the writing of their authors), and stakeholders in and beyond the organization. It argues that methods should not be classified into fixed categories. Instead, a method should be interpreted according to the current organizational context and method-users' assumptions. The process of interpretation should be critical, in that assumptions should be subject to review and, as far as possible, be made transparent to, and open to change by, those who will be affected by the intervention.  
The significant question that needs to be addressed is, who should be considered as stakeholders of a method evaluation process? Answering this question indicates whose views (and associated ideologies) might need to be considered when it comes to applying the process of method evaluation. The stakeholder concept "enables an organization to identify all those other organizations and individuals who can be or are influenced by the strategies and policies of the focus organization" (Fill, 1995, p.23). This paper firstly discusses the nature of participation before identifying three groups (and sub-groups) of stakeholders who are involved in, or affected by, intervention, and so need to contribute their views about the candidate method. It then argues that the three (or more) perspectives on the candidate method that are provided by these stakeholders give a more complete picture of the suitability of the candidate method than a method-user could generate without stakeholder participation. Having reviewed some key assumptions concerning the need for ideology-critique, and the importance of considering the perspectives of the method-user, the candidate method, and both organizational and environmental stakeholders, it is now possible to draw these assumptions together to create a new process for method evaluation. This is to be called Participative Method Evaluation (PME), and it provides a framework to review and evaluate the suitability of a candidate method for intervention in a particular social circumstance. PME provides a learning process which allows participants, and particularly method-users, to recognize and appreciate other world views. This paper also aims to introduce the main ideas in PME, which is a methodology in the sense defined by Midgley (1995b, 1997b). Midgley clearly distinguishes between method and methodology, saying that the former means "a series of techniques applied to some end", while the latter is "a theory of research practice that explains why particular method(s) should or should not be considered valid or appropriate for given circumstances." A methodology is a set of underlying value-judgments which guides method-users to choose a set of methods to gain understanding and knowledge, or to solve social problems.  Participation is an important issue in organizational problem solving because, as Churchman (1979) argues, the more perspectives we bring to bear, the more comprehensive a view of the problem we have. There is an enormous literature on participation: e.g., Arnstein (1969), Oakley (1991) and Mumford (1993). They all emphasise different levels or types of participation. According to Arnstein (1969), there are three types of participation: citizen power, tokenism and non-participation. Arnstein's theory of participation shows that some levels of participation involve people participating in working procedures, but they are not invited to share ideas. People at these levels are seen merely as tools. However, Arnstein (1969) also realises that full participation that involves everyone is not always possible: representative participation is sometimes necessary and more realistic. This will depend on practical circumstances and the resources available to projects. Oakley (1991) argues that one major form of differentiation is to distinguish between participation as a means or an end. 
Participation as a means is to use participation to achieve some predetermined goals or objectives; participation as an end is, on the contrary, a dynamic form of participation which enables people to play an increasing role in development activities. Oakley (1991) argues that participation improves development projects in terms of efficiency, effectiveness, and self-reliance. In his view, participation in a development project means understanding what the affected people need rather than what the designer desires the project to be. Thus, participants need to share different values and find solutions through the participation process. Mumford (1993) argues that a participative approach helps people to decide their own destinies and produces organizational commitment that avoids morale and job satisfaction problems. Mumford (1993) also indicates that traditional participation is concerned with decision-making processes and the representation of different interests and points of view in this process.  Clearly, the aim of participation is to promote the involvement of many relevant stakeholders in projects at different levels. However, it might be difficult and unrealistic to involve every relevant stakeholder in every situation. Moreover, the question can be asked whether participation is just a means to achieve predetermined goals or whether it is an end that sweeps many interests into decision-making/problem-solving processes. This paper is primarily concerned with the latter and intends to create a forum for various stakeholders to express their views on the evaluation of method(s).

 

Are Real Estate and Stock Markets Related? The Evidence from Taiwan

Dr. Ning-Jun Zhang, Southwestern University of Finance and Economics (SWUFE), P.R. China

Dr. Lii Peirchyi, Tamkang University, Taiwan, Republic of China

Yi-Sung Huang, Southwestern University of Finance and Economics & Ling Tung University, Taiwan

 

ABSTRACT

This paper studies the long-run relationship between real estate and stock markets, using both the standard cointegration tests of Johansen and Juselius (1990) and Engle and Granger (1987) and the fractional cointegration test of Geweke and Porter-Hudak (1983), in the Taiwan context over the 1986Q3 to 2001Q4 period.  The results from both kinds of cointegration tests indicate that these two markets are not cointegrated with each other.  In terms of risk diversification, the two assets may therefore be included in the same portfolio. Knowing and testing the long-run relationship between real estate and stock markets is very important for portfolio investors who want to diversify across these two asset markets.  If asset markets are found to have a long-run relationship, this would suggest that there may be little long-run gain, in terms of risk reduction, from holding such assets jointly in a portfolio.  Previous empirical studies have employed cointegration techniques to investigate whether there exist such long-run benefits from international equity diversification (see Kwan et al., 1995; Masih and Masih, 1997).  According to these studies, asset prices from two different efficient markets cannot be cointegrated.  Specifically, if a pair of asset prices is cointegrated, then one asset price can be forecast (is Granger-caused) by the other asset price.  Thus, such cointegration results suggest that there are no gains from portfolio diversification, in terms of risk reduction.  This study attempts to make some contributions to this line of research by exploring whether there exist any long-run benefits from asset diversification for investors who invest in Taiwan’s real estate and stock markets.  In this study, we test for cointegration using both the standard cointegration tests of Johansen and Juselius (1990) and Engle and Granger (1987) and a fractional cointegration test.  The results from the three tests all suggest that these two asset markets are not pairwise cointegrated with each other.  The finding of no cointegration can be interpreted as evidence that there were no long-run linkages between these two asset markets and thus that there exist potential gains for investors from diversifying across these two asset markets over this sample period.  These results are valuable to investors and financial institutions holding long-run investment portfolios in these two asset markets.  The remainder of this study is organized as follows.  Section II presents the review of some previous literature. Section III presents the data used.  Section IV presents the methodologies used and discusses the findings.  Finally, Section V concludes. The relationship between stock prices and real estate prices has been the subject of substantial debate in both the academic and practitioner literature.  The current literature on the relationship between equity and real estate markets tends to show conflicting results.  Much of the empirical evidence seems to support the notion that the two markets are segmented.  For example, Goodman (1981), Miles et al. (1990), Liu et al. (1990), and Geltner (1990) have documented the existence of segmentation within various real estate markets and stock markets.  However, Liu and Mei (1992) and Ambrose et al. (1992), along with Gyourko and Keim (1992), have produced contrary results indicating that real estate and stock markets are integrated.  It therefore remains unclear whether the real estate and stock markets are segmented or integrated.  
The primary objective is to ascertain whether any significant relationship exists between these markets and what implications this may have for active market traders.  A simple motivation for our study is that it can yield a number of insights that may aid investors and speculators in forecasting future performance from one market to the other. The data sets used here consist of quarterly time series on a stock price index (lstkp) and a real estate price index (lresp) covering the 1986Q3 to 2001Q4 period.  To avoid omission bias, we also incorporate the real interest rate (liret) into our study.  The stock price index and real interest rate were obtained from the AREMOS database of the Ministry of Education of Taiwan.  The real estate price index was collected and constructed by Hsin-Yi Real Estate Inc. Descriptive statistics for both real estate and stock market returns are reported in Table 1.  We find that the sample mean of real estate price returns is positive (1.67%), whereas the mean of stock price returns is negative (-0.161%).  Both the skewness and kurtosis statistics indicate that the distributions of both markets’ returns are approximately normal. The Ljung-Box statistics for 4 lags applied to returns and squared returns indicate that no significant linear or non-linear dependencies exist in either market. Studies have pointed out that the standard ADF test is not appropriate for variables that may have undergone structural changes.  For example, Perron (1989, 1990) has shown that the existence of structural changes biases the standard ADF tests towards non-rejection of the null of a unit root.  Hence, it might be misleading to conclude that the variables are nonstationary just on the basis of the results from the standard ADF tests.  Perron (1990) developed a procedure to test the hypothesis that a given series has a unit root with an exogenous structural break occurring at a known point in time.  Zivot and Andrews (1992, hereafter ZA) criticized this assumption of an exogenous break point and developed a unit-root test procedure that allows an estimated break in the trend function under the alternative hypothesis.  In this study, therefore, it seems most reasonable to treat the structural break as endogenous and to test the order of integration by the ZA procedure. Model A allows for a change in the level of the series, Model B allows for a change in the slope of the trend function, while Model C combines changes in the level and the slope of the trend function of the series.  The sequential ADF test procedure estimates a regression equation for every possible break point within the sample and calculates the t-statistic for the estimated coefficients.  This tests the null hypothesis of a unit root against the alternative hypothesis of trend stationarity with a one-time break in the intercept and slope of the trend function at an unknown point in time.  The null of a unit root is rejected if the coefficient on the lagged level term is significantly different from zero.  The selected break point for each series is that for which the t-statistic for the null is minimized.  Since the choice of lag length k may affect the test results, the lag length is selected according to the procedure suggested by Campbell and Perron (1991).  Start with an upper bound k_max for k.  If the last included lag is significant, then choose k = k_max.  If not, reduce k by one until the last lag becomes significant.  We set k_max = 4 for our quarterly data series.
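As a rough illustration of the testing sequence just described, the following Python sketch (not the authors' code; the two series are simulated placeholders standing in for the log stock price index lstkp and the log real estate price index lresp) runs an ADF unit-root check on each series and then an Engle-Granger test of the null of no cointegration using statsmodels.

# Illustrative sketch only: unit-root and cointegration checks of the kind
# described above. Data are simulated stand-ins for 62 quarters (1986Q3-2001Q4).
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(0)
lstkp = np.cumsum(rng.normal(0.0, 0.05, 62))   # placeholder log stock price index
lresp = np.cumsum(rng.normal(0.0, 0.03, 62))   # placeholder log real estate price index

# Step 1: ADF unit-root tests on the levels; non-rejection is consistent with I(1).
for name, series in [("lstkp", lstkp), ("lresp", lresp)]:
    stat, pval = adfuller(series, maxlag=4, autolag=None)[:2]
    print(f"ADF on {name}: statistic={stat:.2f}, p-value={pval:.2f}")

# Step 2: Engle-Granger residual-based test of the null of no cointegration.
eg_stat, eg_pval, _ = coint(lstkp, lresp, maxlag=4, autolag=None)
print(f"Engle-Granger: statistic={eg_stat:.2f}, p-value={eg_pval:.2f}")
# Failure to reject the null here is the pattern consistent with the paper's
# finding that the two markets are not cointegrated.

A Zivot-Andrews test with an endogenous break can be run analogously (recent versions of statsmodels provide a zivot_andrews function in statsmodels.tsa.stattools), with the lag length chosen by the Campbell-Perron rule described above.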

 

The Effect of Convertible Debt Issuance on Product Market Competition

Jie Yang, Huazhong University of Science and Technology, Wuhan, PRC

Dr. Xinping Xia, Huazhong University of Science and Technology, Wuhan, PRC

 

ABSTRACT

This paper investigates the effect of convertible debt issuance on the Cournot game outcome. Under the assumption of no default risk, compared with standard equity or debt financing, the conversion feature of convertible debt serves as a commitment device for a conservative stance in the normal case of returns, and thus encourages an aggressive stance by the rival firm. This strategic disadvantage of convertible debt can explain the long-run underperformance after issuance. This paper investigates the effect of convertible debt issuance on strategic output market behavior. The relationship between a firm’s financing policy and its strategy in the product market has been recognized since the innovative study of Brander and Lewis (1986). They point out that Cournot firms subject to market uncertainty will use the limited-liability effect of debt to commit to increasing output in an attempt to gain a strategic advantage. The basic point is that shareholders will ignore reductions in returns in bankrupt states, since bondholders become the residual claimants in these states. Since then, research in this area has grown steadily. Maksimovic (1988) extends Brander and Lewis' model of the strategic effects of the limited liability of debt by considering multiple periods of interaction. Showalter (1995) analyses the optimal strategic debt choice in Bertrand (price) competition. Glazer (1994) distinguishes between short- and long-term debt when analyzing the relationship between capital structure and product markets. These works helped establish the principle that a firm’s financial decisions and its product market strategy interact. The previous studies, however, have restricted attention to a subset of the feasible instruments, such as the simple mix of debt and equity. A natural question, then, is how more sophisticated financial instruments, such as various kinds of convertibles, affect the product market outcome. In this work, we analyze how the conversion feature of convertible debt changes the market strategies of the rival firms in a Cournot game. We point out that, under the presumption of no default risk and in the normal case of returns (that is, better states of the natural world lead to higher marginal profits), any aggressive stance by the issuing firm to increase output will induce convertible debt holders to convert to common stock so as to share earnings with shareholders in good states, and to keep it as straight debt and receive the fixed repayment in bad states. Thus managers acting on behalf of current shareholders’ wealth maximization will not take an aggressive stance, and the conversion feature included in convertible debt serves as a commitment device for a conservative output stance in product market competition. The foresighted rival firm anticipates this, which encourages its adoption of an aggressive strategy to increase output. As a combination of straight debt and contingent equity, convertible debt is usually interpreted as a hybrid security that reduces the information and agency costs of external finance. Green (1984) shows that a mix of convertible securities and debt is superior to straight debt because the conversion option reduces the inclination of the entrepreneur to engage in risky projects. Myers (1998) suggests that convertible debt can mitigate both the over-investment problem and the under-investment problem at the same time, based upon the conflict between the shareholders (owners) and management. 
Isagawa (2000) proves that convertible debt is superior to common debt and equity in controlling managerial opportunism under certain conditions. These researchers predict that an appropriately designed convertible debt issue will help restore investment incentives so that managers will make efficient capital expenditure decisions. In these studies the role of the product market in which the firm operates is to provide an exogenous random return that is unrelated to financial policy. However, recent empirical evidence finds poor long-run operating performance and stock price underperformance following convertible debt issuance (Lee and Loughran, 1998; Lewis et al., 2001), which does not support these theoretical predictions. Our work suggests that while the conversion feature of convertible debt can help reduce the agency costs of outside finance, its negative effect on product market behavior may have been neglected by previous studies, leading to the inconsistency with those models.  The contribution of the paper to the literature is that it points out the effect of the conversion feature of convertible debt on the market outcome. When a firm issues convertible debt to avoid agency costs, the negative effect on market strategy may be foreseen by its rival firm, and thus constitutes a strategic disadvantage for the issuing firm. Therefore, an issuing firm may make its financial decision based on the tradeoff between agency costs and the strategic disadvantage in product market competition. Those which finally decide to issue convertible debt might be the ones with extreme agency problems. The rest of the paper is organized as follows. The model is outlined in Section 2, while the results of the choices of conversion decision, output and conversion price are discussed in Section 3. A summary follows in Section 4. For simplicity, the model used here is similar to that of Brander and Lewis (1986), except that convertible debt, not straight debt, is included in the issuer’s financial structure. Rival firms i and j produce competing products in a Cournot game. There are three time periods (0, 1, 2): At time 0, firm i issues convertible debt, which can be converted to common stock at the conversion price p, to finance a new project. It is common knowledge that p can be observed by i and j simultaneously. At time 1, the rival firms select output levels qi and qj respectively, taking as given the conversion price determined at time 0. At time 2, the rival firms respectively realize the returns Ri(qi, qj, z) and Rj(qi, qj, z) (in the following analysis, superscripts denote firms, and subscripts i, j, z denote partial derivatives with respect to qi, qj and z). The random variable z reflects the effect of an uncertain environment on the fortunes of firm i, with z distributed according to the density function f(z). The convertible debt holders choose whether or not to convert to common stock according to the realized returns of the issuing firm. From the above description, we can see that the equilibrium concept is the sequentially rational Nash equilibrium in the conversion price, output levels, and the conversion decision of convertible debt holders. The model for this paper is based on the following assumptions.  Assumption 1: the outside investors are risk-neutral, and the discount rate is zero. Assumption 3: the issuing firm i is an all-equity company with initially n shares of common stock. 
The new project that i owns requires a fixed amount of capital I, which is financed exclusively with convertible debt (this assumption is not as restrictive as it sounds, since straight debt and common stock are special forms of convertible debt, as will be shown shortly). The face value of the convertible debt, which will be paid back at the end of the period unless it is voluntarily converted to common stock, is $1, and the debt can be issued at a discount or premium. Assuming the total number of shares that the convertibles can be converted into is a fixed amount denoted by k, the total number of convertibles issued, denoted by D, can be expressed as D = kp. Let α denote the fraction of current shares in the total shares outstanding after conversion; then α = n/(n + k).  Assumption 4: for simplicity, there is no default risk. Assumption 5: there is no conflict between managers and current shareholders. The objective of the firms is therefore current shareholders’ wealth maximization.
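To make the conversion mechanism concrete, the following minimal numeric sketch (our illustration, not part of the paper) encodes the holders' decision rule implied by these definitions: with n existing shares, k conversion shares, and total face value D = kp, holders convert only when their post-conversion share of the realized return exceeds the fixed repayment.

# Hypothetical illustration of the convertible holders' decision described above.
def holders_convert(R, n, k, p):
    """Return True if converting pays more than redeeming at face value.

    R : realized return available to claimants
    n : existing common shares
    k : shares received on full conversion
    p : conversion price, so total face value D = k * p
    """
    D = k * p                      # total face value of the convertible issue
    equity_share = k / (n + k)     # fraction of post-conversion equity held
    return equity_share * R > D    # convert only in sufficiently good states

# With n=100, k=25, p=1.0 (so D=25), holders convert whenever R > 125.
print(holders_convert(R=150, n=100, k=25, p=1.0))   # True: good state, convert
print(holders_convert(R=100, n=100, k=25, p=1.0))   # False: bad state, keep as debt

This is exactly the asymmetry exploited in the argument above: an aggressive output stance that raises returns in good states is shared with convertible holders on conversion, which weakens the issuing firm's incentive to be aggressive.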

 

Creating and Sustaining Competitive Advantages of Hospitality Industry

Hui-O Yang, Swinburne University of Technology, Melbourne, Australia

Dr. Hsin-Wei Fu, Leader University, Tainan, Taiwan

 

ABSTRACT

This study provides a meaningful framework for the assessment of competitive advantage.  The main purpose of this study is to explore how to create and sustain competitive advantages in the hospitality industry by scanning the business environment, including the external environment and the internal environment.  External environment factors contain country, industry, stakeholder, competitor, strategic networks, differentiation, and branding.  Internal environment factors include resource-based factors, human resources, and information technology. Hospitality is the welcoming of strangers as guests into one's home to dine or lodge.  It provides both tangible and intangible goods to customers, such as products and services.  Adding value for customers, employees, and owners has become a central theme in strategic management for hospitality companies.  To create value for these stakeholders, a firm should achieve a competitive advantage over its competitors by adapting itself to the uncertain industry environment, understanding the changing needs of customers, and responding to new market entries (Byeong and Haemoon, 2004).  Achieving competitive advantage has been recognized as the single most important goal of a firm.  Without achieving competitive advantage, a firm will have few economic reasons for existing and will finally wither away (Porter, 1980).  It is generally accepted in the strategic management literature that executives who are able to scan their business environment, including the external environment and internal environment, more effectively will achieve greater success (Olsen, Murthy and Teare, 1994).  This success will come if they are able to match the threats and opportunities in that environment with appropriate strategies.  Hospitality executives must analyze both external factors and internal resources to develop a strategic plan and obtain competitive advantages (Harrison, 2003a and 2003b).  Figure 1 is a framework of environmental scanning which is used to explore the external environment and internal environment in order to investigate how to create and sustain competitive advantages for the hospitality industry. Country analysis is a kind of general macro-environment analysis.  Firms must analyze large amounts of demographic, economic, cultural, social, political, religious, and legal data to determine the markets that are most receptive to their product and service offerings.  Country analysis can be used to identify an appropriate location and to tailor the offerings as much as possible to the tastes of people in that location (Crook, Ketchen and Snow, 2003). Porter (1980) provides a framework that models an industry as being influenced by five industry forces, known as Porter’s five-forces approach (PFA).  The PFA adopts an outside-in approach to understanding competitive advantage in that it views competitive advantage as stemming from these five industry forces.  This approach is based on an assumption that firms within an industry possess identical or similar resources.  As a result, a firm’s success depends greatly on how it reacts to market signals and how accurately it predicts the evolution of the industry structure (Byeong and Haemoon, 2004). The threat of new entrants refers to the prospect that new players will enter an industry.  New entrants generally lead to an erosion of industry profits if the entry barriers are low.  
However, the likelihood of new entry is low if the entry barriers are high; these include anything that discourages new competitors from entering the industry, such as product differentiation, the threat of severe retaliation against newcomers, exclusive contracts, high capital requirements, saturated distribution channels, large economies of scale, and restrictive government regulations (Crook, Ketchen and Snow, 2003; Harrison, 2003a and 2003b).  When entry barriers are high, existing firms enjoy a measure of protection that can inhibit rivalry and enhance profits.  In the hospitality industry, entry barriers are not particularly high (Harrison, 2003a and 2003b).  Firms must also consider the viability of substitutes.  The threat of substitutes is one of the major factors that intensify competition in the lodging industry (Byeong and Haemoon, 2004).  For example, teleconferences using video equipment or the telephone can affect lodging operators by reducing business travelers’ room nights.  When close substitutes are available, firms must devise ways to make their products or services more attractive than the substitutes (Crook, Ketchen and Snow, 2003). Competitors have economic power based on their ability to compete.  Competitors with disproportionately strong resource bases can be aggressive and create strong rivalry (Smith, Ferrier and Ndofor, 2001).  It is important to define the nature of rivalry in each market, as well as in the industry as a whole (Harrison, 2003a and 2003b).  When the intensity of competitive rivalry is high, profits suffer.  Rivalry is heightened when industry growth is low, because growth-minded companies must steal customers from other firms to meet growth objectives.  Also, if customers can easily switch among providers, or if there is a lack of differentiation among providers, firms must compete on price to attract customers (Crook, Ketchen and Snow, 2003).  However, competitive pricing is the least desirable type of competitive strategy for the hospitality industry, because it is of real benefit only to the lowest-cost producer and can be easily copied, resulting only in short-term gains (Wong and Kwan, 2001).

 

Backward Integration and Risk Sharing in a Bilateral Monopoly: Further Investigation

Dr. Yao-Hsien Lee, Chung-Hua University, Taiwan

Yi-Lun Ho and Sheu-Chin Kung, Chung-Hua University, Taiwan

Tsung-Chieh Yang, Chung-Hua University, Taiwan

 

ABSTRACT

This paper investigates the implications of the first-order conditions a la Lee et al. (2006) to show that the principal’s ordered quantity and profit-sharing ratio (i.e., backward integration) can affect the agent’s cost-reducing effort.  We also state the intuitions behind the propositions in the paper. A considerable agency-theoretic literature has developed recently that addresses the procurement of goods and services as often being characterized by bargaining and contracting between the government (principal) and a single supplier (or several suppliers, i.e. agent(s)). Papers focusing on this theme (see Baron and Besanko (1987, 1988), Laffont and Tirole (1986) and McAfee and McMillan (1986)) study the purchase of a particular good within a framework in which uncertainty, asymmetric information, and moral hazard are simultaneously present.  In the context of bilateral monopoly contracting practices with uncertainty and asymmetric information, Riordan (1984) establishes necessary and sufficient conditions for the existence of contracts that are efficient and incentive compatible. More recently, Riordan (1990) shows that some backward integration by the risk-neutral principal (downstream firm) is optimal if it increases the risk-neutral agent's (upstream firm) production, and that backward integration increases with the degree to which the agent's investment is sunk.  Although risk sharing, moral hazard, and asymmetric information have been studied extensively in the above models, there has been almost no investigation of the extent or precise nature of their effects on a bilateral monopoly that maintains a long-standing relationship, for instance, between business partners.  Lee et al. (2006) extend Riordan's (1984) bilateral contracts model to include moral hazard and backward integration in a framework of long-term business partnerships with stable and mutual relationships among trading partners.  Their model moves toward the study of uncertainty, asymmetric information, moral hazard, and risk sharing in a procurement contracting framework by introducing backward integration into the model of vertical shareholding interlocks previously examined in the above models.  Unfortunately, they did not go further to explore the implications of the first-order conditions, which can be used to examine the responsiveness of the agent’s cost-reducing effort to changes in the principal’s ordered quantity. The main purpose of this paper is to use the model of Lee et al. (2006) to discuss the implications of the first-order conditions obtained in their model.  In the process, we demonstrate the impacts of changes in the principal’s ordered quantity on the agent’s cost-reducing effort.  The remainder of the paper is organized as follows. Section 2 reviews the basic results in the model of Lee et al. (2006).  Section 3 analyzes the implications of the first-order conditions.  Section 4 concludes the paper. In what follows, we call the transfer an effort subsidy if it is positive and an effort tax if it is negative.  It is easy to see that Assumption 1 is satisfied as long as we choose a proper specification of the parameters. Stated another way, the problem of determining what quantities should be produced can be solved. Although this is an essential problem for the principal, most previous studies have ignored this aspect.  This also allows us to analyze the effect of fluctuations in the quantity ordered by the principal on the principal's backward integration and effort subsidy and on the agent's cost-reducing effort.  
Assumption 2 simply puts a positive upper bound on the agent's marginal information cost (or hazard rate).  This also indicates that the marginal information cost for the agent of overstating its true cost cannot be too large.  Now, solving the system of equations by a simple algebraic calculation yields the equilibrium solution.  It is easy to see that the agent’s cost-reducing effort, the principal’s profit-sharing ratio, and the principal’s effort subsidy are all positive.  Since the transfer is positive, for expositional ease we shall refer to it as an effort subsidy. Furthermore, the positive cost-reducing effort implies that it is always best for the agent to engage in cost-reducing effort activity. This is consistent with the implications of individual rationality and incentive compatibility.  Equation (13) suggests that regardless of whether the agent has truthfully reported its production cost, the principal should choose profit-sharing and effort subsidy strategies to enforce its contracting mechanism, although the agent will be better off if it uses a truthful reporting strategy because of the property of double separation.

 

An Analysis of Contingent Contracting in Acquisitions

Dr. David R. Beard, University of Arkansas at Little Rock, Little Rock, AR

 

ABSTRACT

The literature has identified various motives for the use of earnout contracting in the acquisition of target firms.  In particular, Kohers and Ang (2000) and Datar, Frankel, and Wolfson (2001) contend that earnouts are relegated to mergers where problems of informational asymmetry and agency are so detrimental that this costly type of contracting must be employed to protect the interests of bidder shareholders and target firms.  This research examines a sample of acquisitions in which earnouts are used and contrasts it with a sample of “traditional” acquisitions to explore specific hypotheses concerning agency, informational asymmetry, and the use of an earnout as a means of financing. Numerous contracting technologies have evolved to reduce some of the problems inherent in merger transactions.  For example, each party has incentives to propose a contract that overvalues itself and undervalues its opponent, thereby gaining a larger share of any benefits to the merger.  Another possible problem is that informational asymmetries between the two parties may be such that a quality target may not be identified or, if identified, may not be able to credibly reveal its value to the bidding firm.  Among the contracting solutions to these conflicts are the joint venture, the partial acquisition, and the earnout.  The third technique, the earnout, mitigates informational asymmetries by shifting some of the risk of misvaluation to the target firm.  Briefly, in an earnout, the bidder agrees to pay the target an initial amount for the acquisition plus predetermined future payments contingent on the target’s achievement of performance milestones within a specified time period.  In earnouts, the acquired assets can be those of either an entire firm or a subsidiary of a firm.  If a bidder misvalues a target, the contingent payment portion of the deal will be reduced, possibly to zero.  The earnout contract also provides the target with the ability to signal its quality.  Only high-quality targets will agree to have a larger portion of the deal paid as a contingent claim based upon future milestones of the combined firm.  The earnout is a relative newcomer among contracting technologies in mergers and acquisitions.  The literature contends that the use of this technique in acquisitions mitigates bidder misvaluation resulting from informational asymmetries between the parties and alleviates the adverse selection problems associated with the significant informational asymmetries and agency problems in these transactions.  Yet another reason for the use of this acquisition vehicle is that it facilitates retaining valuable human capital from the acquired firm.  The contingent nature of this type of contracting method can be arranged such that owner/operator knowledge is retained, non-compete constraints are placed on these individuals, and the retained human capital has the incentive to put forth optimal effort in order to maximize the contingent payments associated with the earnout.  On the other hand, earnouts impose the costs of inefficient risk sharing, increased contractual complexity, increased administrative costs, and litigation risk, potentially offsetting any informational benefits.  Nonetheless, the use of contingent payments in mergers and acquisitions is growing.  The increased use of earnouts despite their costs and complexity implies that the benefits associated with this acquisition vehicle outweigh its costs.  
That is, the gains an earnout creates or the problems it solves must be of some significance in order to outweigh the pitfalls that the use of this contracting technology entails.  The relevance of this study stems from this idea. Bidders propose earnout contracts for a variety of reasons, ranging from reduction of the problems associated with asymmetry of information to reduction of problems associated with agency.  It is well known that successful bidders in competitive auctions, including mergers, are likely to overbid, whether due to overoptimism and hubris (Roll, 1986) or as a form of winner’s curse resulting from incomplete or uncertain information (Eckbo, et al., 1990).  The latter is especially likely when the target is a private firm, a firm with few assets in place, or when the value of the target is dependent upon the knowledge of the managers or clientele relationships that can easily be “pocketed” and taken to another firm.  In the absence of competition (explicit or implicit) for the target firm, however, the bidder is likely to protect itself against overbidding resulting from incomplete information about the target and offer a lower price. In cases where the target firm is informationally opaque, managers of the target firm are unable to credibly convey their favorable private information to the bidder.  The earnout mitigates this problem through the contingent payments associated with the contract.  The bidder will be able to adhere to its valuation of the target by structuring the upfront payment and the contingent payments in such a way that its valuation is verified if the target performs as the bidder predicts.  The bidder and the target agree on contingent payments tied to various milestones concerning future performance and structured to reflect the payoffs each believes appropriate to compensate the target.  If the future milestones are met and exceeded, the target owners will receive higher payouts, which will compensate them in a way that is more in line with their own valuation. The target can also use the earnout agreement as an opportunity to signal its quality to the bidding firm.  The proportion of the transaction value that the target is willing to take contingent on future performance serves this purpose.  In effect, the situation is the same as the model presented by Leland and Pyle (1977).  In their model, an entrepreneur signals the quality of his future opportunities by the amount of ownership he retains in his firm.  By accepting a deal that has a greater proportion of the transaction value contingent on future performance, the target is signaling a high quality of future prospects to the bidding firm.  This signal is costly to replicate for low-quality firms because these firms will not be able to achieve the future performance milestones required for the contingent payments to be made.  Knowing this, low-quality firms would want to receive the highest upfront payout possible.  The earnout, as mentioned earlier, also helps to mitigate problems associated with agency.  If a target firm is in a service or hi-tech industry, for example, proprietary knowledge and the existing human capital are necessary to the continued success of the firm.  Existing clientele relationships are extremely portable, and these relationships are also necessary to the firm’s success.  
The contingent payments are, in effect, an equity claim on the post-merger performance of the target (the earnout is not necessarily an equity claim on the combined firm, however).  The contingent payoffs should be based on the post-merger performance of the target only (Slovin, Sushka, and Polonchek, 2003). The earnout can also facilitate financing the acquisition.  If a high-growth firm is acquiring another high-growth firm, the bidder can use an earnout agreement in order to postpone some of the payment necessary to secure the deal.  This type of agreement is superior to an issuance of stock to finance the deal, because the target will not be able to share in the future prospects that the bidding firm already has in place prior to the acquisition.  With the earnout, the only future prospects that the target will share in are those that result from the target’s operations.  I expect that earnout contracts will be used in deals that involve targets that operate in multiple industries, have few assets in place, have low information disclosure, high growth opportunities, and valuable human capital relative to firm assets.  These firms are difficult to value.  By using an earnout, some of the risk associated with misvaluation is shifted from the bidder to the target.  These types of target firms tend to be found in the hi-tech and service industries.  Bidders acquiring private and subsidiary targets, which have little or no publicly disclosed information, will also benefit from employing an earnout agreement in the transaction.  An earnout enables the managers of these types of targets (hi-tech, service, private, subsidiary, and multi-line) to credibly signal their quality to the bidding firm.  I also expect that bidders who have had prior experience in acquisitions will have greater expertise in target valuation.  Similarly, if a target is within the same industry classification as the bidding firm, the acquirer will have greater accuracy in the valuation of the target firm.  Therefore, as the expertise of the bidding firm’s management increases, as measured by the number of prior acquisitions and intra-industry mergers, I expect to observe a decrease in the use of earnout contracting.
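The payment mechanics discussed above can be summarized in a short sketch (hypothetical figures, not data from this study): the target's former owners receive an upfront payment plus contingent payments triggered only when post-merger milestones are met.

# Hypothetical earnout structure: upfront payment plus milestone-contingent payments.
from dataclasses import dataclass

@dataclass
class Milestone:
    threshold: float   # e.g. a post-merger revenue or earnings target
    payment: float     # contingent payment made if the threshold is reached

def earnout_payout(upfront, milestones, realized):
    """Total consideration received by the target's former owners."""
    total = upfront
    for m, outcome in zip(milestones, realized):
        if outcome >= m.threshold:   # milestone met: contingent payment is made
            total += m.payment
    return total

# Example: $40m upfront, up to $30m contingent over two milestone periods.
plan = [Milestone(threshold=100.0, payment=15.0), Milestone(threshold=120.0, payment=15.0)]
print(earnout_payout(40.0, plan, realized=[105.0, 118.0]))   # 55.0: only the first milestone met

Because a low-quality target expects to miss the thresholds, accepting a larger contingent share is costly for it to imitate, which is the signaling logic described above.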

 

Capital Structure in Taiwan’s High Tech Dot Companies

Dr. Hsiao-Tien Pao, National Chiao Tung University, Taiwan, ROC

 

Abstract

This study investigates the important determinants of the capital structures of high tech dot companies in Taiwan using a large panel covering the years 2000-2005. Three time-series cross-sectional regression models (a variance-component model, a first-order autoregressive model, and a variance-component moving average model) and one multiple regression model with 10 independent variables (seven firm-specific factors and three macro-economic factors) are employed. The variance-component model has the smallest root mean square error. This indicates that the time-series and cross-sectional variations in firm leverage are very important factors in model fitting. The major difference in determinants for high tech dot companies is business risk, which has a positive and significant impact on capital structure. Because high tech is a more speculative industry, greater speculation is associated with higher risk and higher investment opportunity. Firms with higher investment opportunity have a higher demand for capital to sustain their investment. Therefore, business risk is positively related to the debt ratio. Managers can apply these results in the dynamic adjustment of capital structure to achieve optimality and maximize firm value. Regarding the qualitative aspects of capital formation within the high tech dot companies of the 90s, we find that beginning about 1995 a mob mentality set in within the investment community. Essentially, no rational reason could be quantified for the ability of the dot coms to attract large amounts of investment capital. That is, on the surface, there seemed to be irrational behavior within the investment community. If we dig deeper into the information, however, it would be quite rational for the venture capitalists to fund the dot coms to the extent that they did.  Examining the phenomenon of the high tech dot coms, several factors come into play. Firstly, the general economy was doing well and the allure of high tech business was irresistible to stock purchasers. The thought that much of the world's business would be internet/computer oriented took root and became the glamorous hot issue of the day. Venture capitalists read the fervor and proceeded to fund startup companies in record numbers. As a result, the capital structure, or the determinants of the capital structure, of the high tech industry seems to be significantly different from that of other industries. Ever since Myers' article (1984) on the determinants of corporate borrowing, the literature on the determinants of capital structure has grown steadily. Part of this literature materialized into a series of theoretical and empirical studies whose objective has been to determine the explanatory factors of capital structure. Titman and Wessels’ article (1988) on the determinants of capital structure choice examined such attributes of firms as asset structure, non-debt tax shields, growth, uniqueness, industry classification, size, earnings volatility and profitability, but found only uniqueness to be highly significant. Harris and Raviv (1991), in their similar article on the subject, pointed out that the consensus among financial economists is that leverage increases with fixed costs, non-debt tax shields, investment opportunities and firm size, and that leverage decreases with volatility, advertising expenditure, the probability of bankruptcy, profitability and uniqueness of the product. 
Moh’d, Perry, and Rimbey (1998) employed an extensive time-series and cross-sectional analysis to examine the influence of agency costs and ownership concentration on the capital structure of the firm. The results indicated that the distribution of equity ownership is important in explaining overall capital structure, and that managers do reduce the level of debt as their own wealth becomes increasingly tied to the firm. Moreover, Mayer (1990) indicated that financial decisions in developing countries are somewhat different. Rajan & Zingales (1995) took asset structure, investment opportunities, firm size and profitability as the determinants of capital structure across the G-7 countries. They found that leverage increases with asset structure and size, but decreases with growth opportunities and profitability, and that firm leverage is fairly similar across the G-7 countries. Booth, Aivazian, Demirguc-Kunt, and Maksimovic (2001) took tax rate, business risk, asset tangibility, firm size, profitability, and market-to-book ratio as determinants of capital structure across ten developing countries. They found that long-term debt ratios decrease with higher tax rates, size, and profitability, but increase with tangibility of assets, and that the influence of the market-to-book ratio and the business-risk variables tends to be subsumed within the country dummies. In time-series tests, Shyam-Sunder and Myers (1999) showed that many of the current empirical tests lack sufficient statistical power to distinguish between the models. As a result, recent empirical research has focused on explaining capital structure choice by using time-series cross-sectional tests and panel data. Recently, some studies have explored capital structure policies using different models in different countries (Francisco 2005; Dirk, Abe & Kees 2006; Fattouh, Scaramozzino & Harris 2005; Chen 2004; Pao & Chih 2006). Furthermore, Kisgen (2006) examined credit ratings and capital structure, and Jan (2005) developed a model to analyze the interaction of capital structure and ownership structure. Although this body of research is rich, few articles explore capital structure across different industries. The focus of this study is on answering three quantitatively oriented questions and proposing a qualitative comment on the rise and fall of high tech dot com companies: 1. Are those determinants of capital structure important for the high tech dot companies? 2. Are those determinants inconsistent among financial economists? 3. Is the time-series cross-sectional regression with panel data analysis better than the multiple regression model? The rest of the paper is organized as follows. Section 2 presents the data used in the investigation and four linear models for the debt ratio.  Section 3 presents the empirical results on the determinants of capital structure choice and an attempt to rationalize the observed regularities. Section 4 offers concluding remarks. The high tech corporations include electronics, telecommunications, computer hardware, software, networking, information systems, and other related corporations. The leading one hundred corporations with sound financial statements are selected to create a database for the high tech industry. The data set includes a total of 720 firm-year panel observations on publicly traded high tech corporations in Taiwan from 2000 to 2005. Each corporation contributes one dependent variable and ten independent variables. All variables are compiled from the Taiwan Economic Journal (TEJ).  
Four linear models, one multiple regression and three time-series cross-sectional (TSCS) regression models, are used to explain firm debt in the high tech industry. The total debt ratio (TDR) is treated as the dependent variable, and firm size (SIZE), growth opportunities (GRTH), profitability (ROA), tangibility of assets (TANG), non-debt tax shields (NDT), dividend payments (DIV), and business risk (RISK) are treated as the independent variables describing the firm. Three external macro-economic factors, the capital market factor (MK), the money market factor (M2), and the inflation level (PPI), are included as control variables in each model. In order to test the relationship between capital structure and its determinants, a multiple regression equation is proposed for the panel data, where N is the number of cross sections (N = the number of corporations) and T is the length of the time series for each cross section (T = the number of time periods). The estimation procedure involves two steps. In step one, each variable is normalized by subtracting its mean value and dividing by its standard deviation, so that all variables have zero mean and unit variance.  As a result, we do not have an intercept in our results, and we can determine the relative importance of each independent variable in explaining variations in the dependent variable based on its estimated coefficient. A variance inflation factor (VIF) is estimated for each independent variable to identify causes of multicollinearity. Depending on the results of step one, model one is re-estimated in step two by deleting variables with insignificant coefficients or excessive VIF values one at a time (stepwise); VIF > 20 implies that the R-squared from regressing variable j on the other regressors exceeds 0.95, i.e., independent variable j is highly correlated with the other independent variables of the model. Based on the error structure u_it in Eq. 1, there are three time-series cross-sectional models discussed in the following section.
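The two estimation steps described here (standardizing every variable and then screening regressors by VIF) can be sketched as follows; this is a reconstruction under the stated rules using simulated placeholder data, not the author's code.

# Sketch of the standardization and VIF screening steps described above.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def standardize(df):
    """Zero-mean, unit-variance columns, so the fitted model needs no intercept."""
    return (df - df.mean()) / df.std()

def vif_table(X):
    """VIF for each regressor; VIF > 20 signals severe multicollinearity."""
    vals = [variance_inflation_factor(X.values, j) for j in range(X.shape[1])]
    return pd.Series(vals, index=X.columns)

# Placeholder firm-year panel with the paper's regressor names.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(720, 10)),
                 columns=["SIZE", "GRTH", "ROA", "TANG", "NDT",
                          "DIV", "RISK", "MK", "M2", "PPI"])
Xz = standardize(X)
print(vif_table(Xz).round(2))   # drop the worst offender, re-estimate, repeat (stepwise)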

 

Using Consumer Panel Participants to Generate Creative New Product Ideas

Jenny Clark, TNS

Clive Nancarrow, University of the West of England, UK

Dr. Lex Higgins, Murray State University, Murray, Kentucky

 

ABSTRACT

Organizations continue to attempt to identify new products and product features that will provide competitive advantage.  Globalization and technology development continue, and many companies struggle to keep up with this rapid rate of change.  Creativity is often cited as a prerequisite to organizational success, and the management of most organizations realizes the present environment is one of ‘innovate or die.’  One’s cognitive style or ‘creativity’ style has been used many times as a tool to help categorize and better understand creative thinking.  We report the use of creativity style in a large consumer panel in Europe to generate new creative ideas.  Participating panel members tended to adopt one of four cognitive styles when assigned creative problem solving tasks.  We provide a description of these cognitive styles and describe a categorization of the styles we identified in panel participants. Virtually everyone agrees that organizations must continually improve products and services to compete in today’s dynamic business environment.  Brook and Mills (2003) have pointed out that successful innovation within the organization requires an “aggressive and relentless thrust” toward new ideas for products and services.  However, the manner in which organizations can systematically envision and develop ideas for new products is often not clear.  It is well established that ideas for new products or services that appear within organizations are often ignored or unintentionally suppressed by organizational culture.  In fact, many writers on creativity offer lists of ways that managers intentionally or unintentionally stifle creativity.  One such list can be found in Couger (1995) and is surely familiar to anyone who has tried to introduce a new idea within an organization.  Thus, for every new idea offered up, there is a reasonable-sounding argument against its implementation.  Why do organizations seem to discourage new ideas while often requesting more employee creativity?  First, organizational processes often discourage creative thought and encourage “getting along to get ahead.”  Many organizations systematically discourage the creation and adoption of new ideas without necessarily meaning to.  Osborn has pointed out that “a fair idea is better than a good idea kept on the polishing wheel” (Osborn, 1963).  Thus, organizations often fail to have the competence to implement ideas as originally proposed.  Through countless ‘check points’ and ‘management review gates,’ organizations manage to alter new ideas into something that isn’t any better than the solution that was being employed previously. Why do organizations so often seem to resist any way of doing things that appears to be different from the present way?  Most of us naturally resist change associated with the adoption of different ways of doing things.  Thus, the creative person with a new idea about how to do things often becomes frustrated by the unwillingness of the organization to adopt his or her new idea and ultimately gives up.  Although the person with a new idea may be completely committed to his or her idea, others are almost always reluctant to adopt different ways of doing things.  Why do people so often resist change or new ideas?  Miller (1987) has offered a list of reasons people resist change.  First, some may believe that the change is not for the highest good of everyone involved.  
Second, it is natural for an individual to fear change based on a possible negative impact on them personally, including threats to their job and status.  Third, a lack of understanding of the need for the change will cause people to become suspicious of it.  Fourth, people might resist change in the organization if they perceive they have suffered somehow from changes in the past.  Also, and this is particularly relevant for today’s workplace, people may be concerned that they may not have adequate time to prepare for the coming change.  Finally, all of us can understand that change often brings increased stress to one’s job performance.  Thus, it is easy to understand why people resist change in general, and often this resistance provides almost insurmountable barriers to the adoption of new ways of doing things.  However, one excellent method for an organization to avoid internal ‘idea killing’ is to subscribe to an existing consumer panel.  In this way, many of the troublesome challenges of internally producing new product ideas are avoided.  Growing in popularity, recruited internet samples are a research tool used by many organizations (McDaniel and Gates, 2006).  Consumer members of these panels are able to provide valuable information to the sponsoring organization because of the organization’s ability to select respondents for tasks based on specific demographic or psychographic characteristics.  Use of consumer panels is a well-established research method in marketing research and has been advanced considerably through the use of new information technology associated with the internet.  TNS, a global leader in marketing research and consumer panels, has been able to recruit large numbers of consumers to serve on panels across several countries in Europe.  Our data were drawn from the TNS ‘creative minds’ panel, which contains approximately ten thousand consumers in several different European countries (see www.tns-global.com and query creativity). The authors wish to thank TNS for their support of this research.  We particularly note that the “Four P’s” model of creative effort seems to be an important consideration in the design of any studies exploring cognitive style and personal creativity (Shouksmith 1970, p. 103).  The Four P’s model identifies four dimensions that surround the creative act.  One “P” is the person, that is, the one who engages in the creative act.  The panel members are the ‘creative persons’ in our study.  Secondly, we know that engaging in a creative act involves some creative process.  The Creative Problem Solving Techniques that we assigned to the panel members can be considered a framework for the creative process in our study.  The third “P” is the product, or output of the creative activity.  The product is the result of engaging in the creative process.  In our study, we define ‘idea fluency’ as the creative product.  That is, we assess the creative output of our panel members (Intuitives, Analyticals, Integrators, and Moderates) by comparing the number of ideas produced when using different Creative Problem Solving Techniques.  Finally, the environment in which the creative act occurs can be called the fourth ‘P’ of creativity.  ‘Press’ is used in the field of education to describe the learning environment, and thus press constitutes the fourth ‘P’ (see Rhodes 1961).

 

The Two-stage Optimal Matching Loan Quality Model

Chuan-Chuan Ko, National Chiao Tung University, Taiwan

Dr. Tyrone T. Lin, National Dong Hwa University, Taiwan

Chien-Ku Liu, Jin-wen University of Science & Technology, Taiwan

Hui-Ling Chang, Ming Chuan University, Taiwan

 

ABSTRACT

This study attempts to optimize the loan quality requirement objectives of the depositor, the financial institution and the investment agent in a two-stage loan market. Assuming that the financial institution may completely or partially fail to discharge its liability when a loan claim occurs at the end of each stage, mathematical analysis is employed to identify the threshold of required loan quality and to optimize the allocation of loan amounts in this two-stage loan market. This study defines the financial institution as an enterprise that relies heavily on manipulating financial leverage via minimum capital investment, and whose operating profit mainly derives from the interest spread earned by lending out its deposit volume; meanwhile, the depositor makes deposits to obtain a steady stream of interest income. However, because of their different lending criteria, the financial institution and the depositor have conflicting interests. The financial institution wishes to increase loan credit, but the loan volume is actually the balance held by the depositor. Therefore, the depositor asks the financial institution to raise the loan credit quality to better guarantee his/her deposit. Furthermore, the securitization of financial assets has also provided the investor with an alternative financial commodity. The manner in which the financial institution re-packages and offers this financial asset securitization, and the manner in which the investor purchases this commodity, will also generate different perspectives on the loan quality of the securitized assets, subsequently represented by the investment agent, among the financial institution, the depositor, and the investor. Lockwood et al. (1996) found that when enterprises begin asset securitization, the wealth of automobile manufacturers increases after securitization whereas the wealth of banks decreases, and that the financial institution should improve its capital structure before securitization and promote its financial health. The financial institution attempts to offer secured loans to protect creditors. Dietsch and Petey (2002) designed an optimized capital placement and lending portfolio by calculating the value of small loans in the investment portfolio risk with an internal credit risk loan model for medium and small enterprises in France. Stiroh and Metli (2003) identified a recent deterioration of loan quality in the US financial industry, concentrated mainly in the loan volumes of large-scale banks and industries, whereas credit defects were focused on small-scale borrower industries.  Lin and Lo (2006) provided three credit risk perspectives (the deposit account, the financial institution, and the rating organization) for evaluating different roles in single-term loans, and their required and matching loan quality models show that developing a method of improving the risk management mechanism is the key for the financial institution in controlling loan quality under the supervision of the rating organization and depositors. Lehar (2005) modeled the measurement method and banking system risk, and estimated the dynamics and correlations among bank asset portfolios. The bank asset portfolios, including loans, tradable securities, and numerous other items, are refinanced by debt and equity. Well-capitalized banks increase equity capital and thus substantially reduce systemic risk. 
Stein (2005) designed a quantitative method, a simple cut-off approach, to make lending decisions more flexible and profitable. The framework can be used to optimize the cut-off point for lending decisions based on the cost function of the lender. Instefjord (2005) investigated the phenomenon of financial innovation possibly increasing bank risk in the credit derivative market, despite the importance of credit derivatives for hedging and securitizing credit risk. Commercial success determines the overall success of new credit derivative instruments.  This study extends the model of Lin and Lo (2006), describes the credit risk for the single-term evaluation model, and discusses the required loan qualities, with multiple objectives, for the deposit account, financial institution, and rating investment agent in the two-stage loan market. Suppose that the financial institution may be fully cleared, partially cleared, or not cleared at all of its debt at the end of each stage, and that the most suitable loan models are sought for the participants over the two stages only.  In the numerical analysis, designing a two-stage loan ratio and discussing the loan placement that is most suitable for the two-stage loan market are also key points. A single financial institution exists in the loan market, one investment agent (the purchaser of the financial asset securitization commodity) operates in this market, and a single depositor provides deposits to this financial institution. The loan decisions of the portfolios held by the financial institution comprise two stages (assuming a fixed period in each stage), and the financial institution's equity is not permitted to provide financing during the second stage of the loan market, but the loan operation may be completely executed in the first stage after the financial institution provides the deposit reserve.  The interest rate for the depositor remains unchanged over the two stages, and the depositor receives fixed deposit interest. The investment agent who purchases the financial asset securitization commodity (issued by the financial institution to guarantee loan credit) may obtain part of the warrant provided by the financial institution.

 

Study on the Motives of Tax Avoidance and the Coping Strategies in the Transfer Pricing of Transnational Corporations

Chen-Kuo Lee, Ling Tung University, Taiwan

Wen-Wen Chuang, Ling Tung University, Taiwan

 

ABSTRACT

As globalized production and management groups, transnational corporations tend to adopt related-party transactions (such as transfer pricing) to reduce the overall tax burden, evade risks, and bypass controls, with tax avoidance as the main objective. Corporations can reduce the tax burden through related-party transactions such as transfer pricing. Therefore, this paper conducts its analysis by establishing a transfer-pricing model and verifying, with cases, the tax-avoidance motives in the transfer pricing of transnational corporations. Finally, we present coping strategies for tax avoidance in the transfer pricing of transnational corporations. After World War II, along with the unprecedented rapid development of business activities, the international trade of transnational corporations became increasingly prominent in the global trading market (Clausing, 2001, 2003). Moreover, a great deal of international trade occurred among the inner member companies of transnational corporations (i.e. the related parties). UNCTAD data from 2001 showed that, by then, the international trade of transnational corporations accounted for more than 70% of world trade; about one-third of world trade took place among transnational corporations, and about 80% of technology transfer fees were paid within the same corporate groups. Given its large scale and its unique forms and features, the internal trade of transnational corporations influences the host country, the home country and the world economy (UNCTAD, 2001), and the international community has therefore paid close attention to it. Some countries have strengthened the management of transnational corporations and their internal trade, based on relevant regulations and policies within the framework of the WTO multilateral trade system and its regional economic organizations (Barry, 2004, 2005). The price adopted in the internal trade of transnational corporations is usually called the transfer price. Because the transfer price plays a core role in the internal trade of the transnational corporation as a whole, it is the mechanism through which the functions of internal trade are achieved. Transfer pricing helps directly realize the adjustment of benefits among the inner member companies of transnational corporations and the related countries. It also ensures the maximization of the overall benefits of transnational corporations across the globe. Furthermore, it hastens the formation of integrated management and economic globalization in transnational corporations, and ensures the possession of monopoly assets and the acquisition of monopolistic profits. Finally, transfer pricing has a direct effect on the economy and benefits of related countries. The transfer price is thus an effective mechanism for realizing the functions of internal trade. In their attitudes toward transnational corporations, developing countries face a dilemma. On the one hand, their economies cannot develop without foreign capital, so great efforts should be made to attract foreign investment; on the other hand, transfer pricing in the investments of transnational corporations deteriorates the foreign-capital environment in host countries. The reduction of state revenues causes the loss of state-owned assets and damages the interests of domestic enterprises. The weighing of these pros and cons determines how governments in developing countries treat transfer pricing in the investments of transnational corporations. 
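A stylized numerical sketch (our own illustration, not a case from the paper) shows the mechanism just described: by adjusting the internal transfer price, a group shifts taxable profit from a high-tax manufacturing affiliate to a low-tax distribution affiliate and lowers its overall tax burden.

# Hypothetical two-affiliate example of profit shifting via the transfer price.
def group_tax(transfer_price, units, unit_cost, resale_price,
              rate_manufacturer, rate_distributor):
    """Total tax paid by the group when the manufacturer sells internally at transfer_price."""
    profit_manufacturer = (transfer_price - unit_cost) * units
    profit_distributor = (resale_price - transfer_price) * units
    return (profit_manufacturer * rate_manufacturer
            + profit_distributor * rate_distributor)

# Manufacturer taxed at 30%, distributor taxed at 15%; 1,000 units, unit cost 50, resale 100.
base = dict(units=1000, unit_cost=50.0, resale_price=100.0,
            rate_manufacturer=0.30, rate_distributor=0.15)
print(group_tax(transfer_price=80.0, **base))   # 12000.0: most profit taxed at 30%
print(group_tax(transfer_price=55.0, **base))   # 8250.0: profit shifted to the 15% affiliate

Arm's-length adjustment rules of the kind discussed below exist precisely to constrain this sort of internal price setting.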
In other countries there have been many research results on transfer pricing, such as the existence of transfer pricing (Copithorne, 1971; Horst, 1971; Kant, 1988), the relation between equity structure and transfer pricing (Svejar and Smith, 1984; Al-Saadon and Das, 1996; Konrad and Lommerud, 1999; Tommy and Guttorm, 1999), the methods of transfer pricing and their influential factors (Tang, 1979, 1980; Wu and Sharp, 1979; Bond, 1980; Yunker, 1982; Borkowski, 1992), and the choices of tax authorities in the adjustment rules of transfer pricing (Copithorne, 1971; Booth and Jensen, 1977; Horst, 1971; Itagaki, 1979; Guttorm and Alfons, 1999). However, nearly all of these researchers approached the problem from the angle of developed countries. Their assumed conditions differed from the actual situation of the investment of transnational corporations in developing countries, so their results cannot be directly copied to solve the transfer-pricing problems raised by such investment in developing countries. A developing country should not only create a good environment for investment, providing proper preferential tax policies to attract foreign capital from transnational corporations, but also carry out suitable adjustment rules of transfer pricing to ensure that its deserved interests are not damaged (Tommy and Guttorm, 1999). With the integration of the global economy, transnational corporations have played an important role as production organizers and have grown rapidly along with the process of globalization. Transnational corporations organize resources worldwide to realize production and exchange. In this process, their transactions are not completed as fair market trades; most of them are related-party transactions, that is, transactions completed among related parties. Related-party transactions help transnational corporations reach a series of objectives, such as reducing the tax burden, transferring benefits, evading risks and bypassing controls. It can be said that related-party transactions have become a necessary business strategy for transnational corporations. Therefore, it is meaningful to analyze further the underlying motives of the related-party transactions of transnational corporations, which is the main motive of this paper. Because the theoretical research of economists on the transfer pricing of transnational corporations is usually based on the analysis of concrete data, it is difficult to separate theoretical from empirical research on the topic. Thus, this paper, from the angle of practice, combines the conclusions obtained by Western scholars in theoretical and empirical research to establish a model analysis of transfer pricing. Meanwhile, the motive to avoid taxes through the transfer pricing of transnational corporations is verified with cases. Finally, coping strategies for the tax avoidance that transnational corporations pursue through transfer pricing are presented. In the field of economics, it is universally recognized that economic research on transfer pricing started with the article, "On the Economics of Transfer Pricing," by Hirshleifer (1956).
According to the principle of profit maximization, Hirshleifer utilized the basic analytical method of neoclassical economics, marginal analysis, to do quantitative research on transfer pricing and to establish pricing models under the conditions of a competitive external market and an uncompetitive external market. Hirshleifer pioneered a new field in which economic methods were used to analyze transfer-pricing behavior. However, his assumed conditions for transfer pricing largely did not accord with actual economic life. He studied the equilibrium-pricing model in an ideal state (complete information, free of transaction costs, tariffs and corporate income tax); in effect, what he studied was the problem of transfer pricing among domestic related parties under complete information. Subsequent scholars applied new theoretical methods and results of economics (such as econometric models, transaction costs, asymmetric information and game theory) to research on the transfer pricing of transnational corporations, and gradually relaxed the restrictions. Thus, the research moved closer to the actual situation and became more practical, and its focus shifted from the analysis of economic behavior to other fields. In terms of output, there have been more research results on transfer pricing in the management and accounting disciplines, while research in the economics field has also gradually attracted attention.
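For reference, the standard Hirshleifer rule that this line of research builds on can be stated compactly (a textbook restatement under his assumptions, not an equation reproduced from the paper): when the intermediate good has no external market, or only an imperfect one, the optimal transfer price equals the supplying division's marginal cost at the profit-maximizing output, pT = MCs(q*); when a perfectly competitive external market exists, the optimal transfer price equals the external market price, pT = pm.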

 

Target Costing: A Snapshot with the Granger Causality Test

Dr. Fernando Zanella, United Arab Emirates University, Al Ain, UAE

 

ABSTRACT

The target costing strategy takes the market price of a product and works all the way back to the initial costs of its production to achieve the desired profit margin. It stands in sharp contrast to the traditional cost-plus margin approach. In this article we use the Granger causality test to identify the price-cost directional vector. Most of the Brazilian firms we analyzed did not show an identifiable pattern between price and cost. Target costing appears in 15% of the firms studied, and in 37.5% of the electric and utilities sector. Two main reasons are supported here. First, the sector works with a single homogeneous product. Second, it is a regulated sector; once its price is set by the regulatory agency, firms can work backwards to reduce costs and achieve certain profit margins. The test used here proves complementary to the more common surveys and case studies. Target costing can be briefly defined as a strategy that takes the market price of an established product—or the estimated price of a would-be product—and uses it as a parameter that defines the feasible cost for a desired profit margin. It is meant to be used during the design and planning phases, i.e., prior to the manufacturing phase. Target costing has several interdependent dimensions that can be explored separately or simultaneously. The two main ones are: a) Target costing adoption. Target costing adoption, or the lack of it, is indicative of the firm's competitive strategy within the industry. During a target costing process, the vector runs from price to cost. This is the opposite of another very common pricing strategy, cost-plus, in which the vector runs from cost to price. During the cost-plus process, a firm adds the desired profit margin on top of the manufacturing cost. If the market does not accept the final price, the firm might shrink its profit margin, try to redo the manufacturing to cut costs or, depending on the feasibility of re-manufacturing and on fixed and sunk costs, simply stop producing the product or shut down operations. b) Institutional environment. The number of firms adopting target costing (or not) is an indicator of the institutional environment of the country. If a particular country shows evidence of a substantial number of firms following one particular strategy, it is indicative of the institutions surrounding the firm. For instance, if we observe a country with a significant portion of its industry operating and profiting within a cost-plus approach, this suggests that institutions are open to rent-seeking, i.e., rents obtained from engaging in extra-market activities or, at least, from benefiting from someone who is involved in extra-market activities. A country with a significant portion of the industry operating by the principles of target costing may suggest a more competitive environment, possibly involving cartel controls (formal or informal), an open economy, and so on. The main objective of this article is to assess the second dimension. The country chosen for the study is Brazil, a country that has evolved from a quite closed economy during the early nineties to a relatively open economy today. More precisely, this article tests the following hypotheses: 1. The selling price determines the production costs. That is, the relationship is single-directional from price to cost. This is the target costing hypothesis (H1). 2. The cost determines the selling price. That is, the relationship is single-directional from cost to price. This is the cost-plus hypothesis (H2). 3.
Previous selling prices determine costs, and costs of production determine selling prices, i.e., there is a feedback mechanism. This is the hypothesis of bilateral causality or interdependence between price and cost (H3). 4. There is no significant statistical relationship between price and cost, inclusive of lagged values. This hypothesis (H4) suggests either independence or an undetermined relationship between the variables. It does not suggest that there is no relationship between price and cost, but only that it was not possible to distinguish a statistically significant pattern. The next section briefly mentions some of the previous studies on target costing and describes the method—the Granger causality test—and the data used in this research. The following section presents all results with comments. The conclusion stresses the positive aspects of this research tool, as well as its limitations. Target costing, despite its underlying market-oriented foundation, has not been extensively studied by academicians. As mentioned in the introduction, its adoption—or omission—provides significant evidence of the competitive environment in a country. Studies have been conducted mainly with the following foci: a) theoretical studies dedicated to stating the process of implementing target costing and its advantages when compared with alternative systems—Cooper and Chew (1996); b) case studies that assess the target costing system—Hibbets, Albright and Funk (2003); and c) surveys that assess the adoption of target costing—Dekker and Smidt (2003). Our study differs from the previous studies by inspecting the actual relationship between costs and prices with a statistical tool. This allows observation—not through self-reported data or desired goals—of the factual practices and results of specific firms. For this purpose, this study focuses on 45 publicly listed Brazilian firms from eight different sectors. The reason for using publicly listed firms is that their accounting and financial statements are public and can be easily inspected. The sector divisions are based on BOVESPA's industry classification. The data were extracted from the EconomaticaTM and ReutersTM databases. The variables are cost of sales and total revenues, extracted from quarterly statements between 1995 and 2005. To obtain individual costs and prices, it would be necessary to divide both variables by the total quantity; in this case, the use of unit or total costs and prices produces the same results in the Granger causality test, which is discussed below. The Granger causality test—see Granger (1969) and Engle and Granger (1987)—tries to establish the causality between two variables. Basically, the test states that X "Granger-causes" Y if previous values of X help predict the present value of Y. The bivariate regressions take the standard form in which each variable is regressed on its own lagged values and on the lagged values of the other variable. The F-statistic (Wald statistic) tests whether the lagged variables are jointly insignificant. The next section categorizes the firms by eight sectors; results are presented and discussed by sector. The three firms analyzed as representatives of the sector "Meat, Poultry and Others" have enough data to run the statistical tests. The first firm, Avipal, does not show any causality between selling price and cost of production. There are some reasons why this might be the case. The simplest possibility is that the firm simply does not have a clear-cut price-cost strategy, i.e., neither target costing nor cost-plus margin.
Another possible explanation is that the test is simply not accurate enough to capture any particular pattern. Nevertheless, the test was sufficient to identify a pattern for the next two firms. Sadia and Seara both show interdependence between costs and prices. This pattern might suggest that these firms sometimes act competitively, and sometimes capture the benefits of a certain market power due to the limited number of competing firms. A particular characteristic of these firms is that they export a considerable portion of their products, which might be driving their price-cost strategy.
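The bivariate Granger regressions described above can be estimated, for example, with statsmodels. The following minimal Python sketch (our own illustration, not the authors' code; the quarterly series are simulated stand-ins for one firm's unit cost and unit price) tests H1 and H2 for a single hypothetical firm.

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical quarterly series standing in for one firm's unit price and unit cost.
rng = np.random.default_rng(0)
price = pd.Series(rng.normal(size=44)).cumsum() + 100.0
cost = 0.7 * price.shift(1).fillna(100.0) + rng.normal(scale=0.5, size=44)
data = pd.DataFrame({"cost": cost, "price": price})

# grangercausalitytests checks whether the SECOND column Granger-causes the FIRST.
h1 = grangercausalitytests(data[["cost", "price"]].values, maxlag=4)  # H1: price -> cost (target costing)
h2 = grangercausalitytests(data[["price", "cost"]].values, maxlag=4)  # H2: cost -> price (cost-plus)

# Small p-values on the F (Wald) tests reject joint insignificance of the lags.
print(h1[4][0]["ssr_ftest"])
print(h2[4][0]["ssr_ftest"])

Rejecting the null in only one direction would correspond to H1 or H2; rejecting it in both directions corresponds to the feedback hypothesis H3, and rejecting it in neither corresponds to H4.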

 

Teaching Tip: Structuring a Rubric for Online Course Discussions to Assess Both Traditional and Non-Traditional Students

Dr. Ruth Lapsley, Central Washington University, Ellensburg, WA

Dr. Rex Moody, Central Washington University, Ellensburg, WA

 

ABSTRACT

Online courses have become increasingly popular.  Students, particularly non-traditional students, appreciate online courses because of their flexibility, including learning outside normal classroom schedule constraints.  This paper discusses how a rubric for an online course was developed to capitalize on the motivated learning style of these non-traditional students.  When the same rubric was used for traditional students taking the class, the assessment did not adequately discriminate among different levels of effort, so modifications were made to the rubric.  It was subsequently used with both traditional and non-traditional students and provided an adequate assessment for both types of learners.  Non-traditional students, in particular, appreciate the flexibility of online courses that provide learning outside normal classroom schedule constraints.  Non-traditional students are usually older, have job and family responsibilities, and prefer flexible curricula that allow them to use their computers and technology to enhance their learning skills (Nellen, 2003; Wooten, 1998; Wynd & Bozman, 1996).  In addition, they tend to be more motivated and produce higher-quality work than traditional students (Nellen, 2003; Wooten, 1998).  This paper discusses how an assessment rubric for an online course was developed to capitalize on the motivated learning style of these non-traditional students, while still being useful for traditional students as well.  Online course developers frequently concentrate on the technological issues surrounding the delivery method instead of the learning objectives and assessment tools (Su, 2005).  In developing an online course, the learning objectives should be the primary guiding factor.  Students in online courses must be able to easily understand the learning objectives, as they are more critical than the medium by which they are delivered (Su, 2005).  In the traditional classroom, the course expectations are spelled out in the syllabus and the instructor typically explains them to students, oftentimes adding more detail throughout the term of the course.  With an online course, by contrast, students do not have ready access to the instructor and must rely more on what is available online to guide their learning.  Lemak et al. (2005) suggest that this type of learning can be considered "limited interactive learning" (p. 152) because it provides some two-way conversation between student and instructor but not the typical classroom interaction.  Students taking online classes are isolated and lose the advantage of interacting with other students.  Because of this isolation, it is important that the course developer emphasize dialogue and feedback (Littlejohn, 2002), and develop methods to involve students in their own learning.  One way for online students to interact on a limited basis with other online students is through discussion boards, either asynchronous or synchronous.  Synchronous discussions are real-time discussions similar to a chat line, and must be monitored to keep students from veering too far from the specified discussion subject.  While these synchronous discussions have the advantage of offering immediate feedback from the instructor and peers, they can be problematic: the major drawback is that not all students are available to participate at a specified time since many online students work or have other schedule conflicts.
To overcome this, the instructor can schedule multiple times each week to synchronously "chat" with students, a method somewhat more appealing to students but not necessarily considerate of instructors' time constraints.  Asynchronous discussion, on the other hand, allows students to interact, but not in real time.  It is somewhat similar to sending an email message, and allows students to choose the time that works best for them to interact.  Hiltz (1986) found that allowing this personal learning time actually increased the effectiveness of learning by empowering students; this more active role creates an ownership in the learning process for the course (Duffy et al., 2004).  When using asynchronous discussions, a grading rubric becomes an important assessment tool for communicating clearly to the student whether learning objectives have been met.  Arbaugh and Hornik (2006) found that communicating high expectations to students resulted in higher perceived student learning and satisfaction with the course.  When instructors communicate with students in a traditional classroom or through an online synchronous discussion, the instructor has the opportunity to directly involve students with questions and can help students formulate ideas and demonstrate that effective learning has taken place.  Furthermore, students in a traditional classroom are exposed to lectures, repetitious terms, and discussions centered around the important topics, and through this emphasis students come to realize what they are expected to glean as outcomes from the course materials.  With asynchronous discussions, this immediate feedback and information exchange is not available, so students need guidance as to whether their online responses are effective.  For effective learning to occur, instructors must spell out in precise detail what their criteria are for student discussion responses (Gopinath, 2004).  This means that, prior to offering an online course, the instructor must develop tools that clearly indicate to students what is expected of them.  An assessment rubric, a structured guide as to what is important in a student's response and how responses will be graded, is one useful tool in this regard.  Through rubrics, instructors can identify for students what constitutes desirable learning outcomes.  Effective assessment rubrics are difficult to develop, and require instructors to record the typical thought process they use in traditional classroom discussions and when grading student papers.  For instance, the instructor may need to identify: "What is the point I want students to master after completing this exercise?  What will tell me the student actually understands?  What constitutes an average ('C') response, and what additional information is needed for an 'A'?  Does grammar matter?  Does terminology matter?  What terms do I expect to see mentioned?"  Most of these decisions are typically subconscious for the instructor and are not normally verbalized or recorded.  A well-formed grading rubric can serve as a guide to students in a virtual classroom, and can encourage online students to become active participants in their own learning.  The online class forces a shift in the instructor's role (Brower, 2003), with the instructor acting as a coach and requiring students to take responsibility for their own learning through sources other than the instructor (Duffy et al., 2004).  An effective rubric channels students' efforts to maximize grading points into maximizing learning as well.
The most effective learning is accomplished through active involvement of students (Alavi et al., 1995; Leidner & Jarvenpaa, 1993; Webster & Hackley, 1997); effective ways to involve students include structured exercises or short cases that emulate real-life situations (Su, 2005).  In seeking solutions to such exercises or cases, students discover the how and why of certain phenomena (Yin, 1994), and are challenged to go beyond the material that is offered in a textbook.  As students delve into learning concepts that are beyond the obvious course material, a rubric serves to direct them to the depth of additional information they should seek.  The rubric also assists the instructor in applying points or grades consistently across assignments and across students, and can be especially useful if two or more instructors are assessing the same coursework (Kryder, 2003).  According to Popham (2002), there are two approaches to assessing written material from students: holistic and analytic scoring.  Holistic scoring gives a single grade or response for an assignment based on the impression the instructor develops when reading the response (Hazari, 2004).  An analytic score is derived by allocating points for subsets of the question, then adding up the subset scores to arrive at an overall or total score; this method identifies strengths and weaknesses in student responses but is more time consuming for the instructor (Hazari, 2004).  Assessment rubrics can be used for either holistic or analytic scoring, as the sketch below illustrates.
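As a concrete illustration of analytic scoring, the following minimal Python sketch (our own illustration, not drawn from the paper; the criteria and point values are hypothetical) allocates points to rubric criteria and sums them into a total score.

# Minimal sketch of analytic scoring with a discussion rubric.
# Criteria and maximum points are illustrative values only.
RUBRIC = {
    "addresses_discussion_question": 4,
    "uses_course_terminology": 3,
    "cites_outside_source": 2,
    "grammar_and_mechanics": 1,
}

def analytic_score(awarded: dict) -> int:
    """Sum the points awarded per criterion, capped at each criterion's maximum."""
    return sum(min(awarded.get(criterion, 0), maximum) for criterion, maximum in RUBRIC.items())

# Example: a post that meets most criteria but cites no outside source.
print(analytic_score({"addresses_discussion_question": 4,
                      "uses_course_terminology": 2,
                      "grammar_and_mechanics": 1}))  # 7 out of a possible 10

A holistic score, by contrast, would record a single overall impression grade without the per-criterion breakdown.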

 

The Exchange Rate Exposure of Chinese and Taiwanese Multinational Corporations

Luke Lin, National Sun Yat-sen University, Taiwan

Dr. David So-De Shyu, National Sun Yat-sen University, Taiwan

Dr. Chau-Jung Kuo, National Sun Yat-sen University, Taiwan

 

ABSTRACT

This paper studies the sensitivity of the cash flows generated by Chinese and Taiwanese firms to movements in a trade-weighted exchange rate index, as well as to the currencies of their major trading partners. To overcome the deficiencies of previous research using variations of the market-based model, this paper adopts the polynomial distributed lag (PDL) model to investigate the relative importance of transaction exposure versus economic exposure by decomposing exchange risk into short-term and long-term components. In contrast to the existing market-based model, we find that the PDL model is better at detecting exposures. Furthermore, our empirical results also indicate a considerable exposure difference between Chinese and Taiwanese corporations under the two types of exchange rate regimes. With China's entry into the WTO, more and more Chinese firms participate in international business, and understanding the exchange rate exposure of their business becomes more important. Meanwhile, most Taiwanese firms investing in China seek to use cheaper input factors and sell manufactured goods back to Taiwan's trading partners. Such investment decisions have established a close link between the two markets. While the China and Taiwan markets are inseparable, the exchange rate systems in the two markets are very different. For example, China has officially maintained a pegging regime since 1994, while Taiwan now follows a floating exchange rate policy (Schena, 2005). Therefore, the features of exchange rate exposure in the two markets are important for corporate managers to know before making risk management decisions, as different types of exchange rate systems breed different sets of risks, especially for emerging markets. Booth (1996) notes that three types of exposure are identified in the literature: translation, transaction, and economic exposure. However, the impact of fluctuating exchange rates on cash flows excludes translation exposure because this exposure does not affect cash flows (Martin and Mauer, 2005). Transaction exposure, which typically has a shorter-term time dimension, arises because the value of the foreign currency may change between the time a transaction is contracted and the time it is actually settled, and it can in most cases be effectively hedged with derivative instruments. Economic exposure, which typically has a longer-term time dimension, arises mainly from changes in sales prices, sales volumes, the cost of inputs and the competitiveness of the firm, and it is unclear whether hedging it is useful. We argue that, when studying exchange risk, the results obtained from a stock-return perspective differ from those obtained from a cash-flow perspective. Using the real performance of operating income, this paper attempts to investigate the impact of fluctuating currencies on the values of Chinese and Taiwanese companies by decomposing a firm's overall exchange rate risk into transaction and economic components. The major contribution of this study is overcoming the deficiency of prior studies, which have had limited success in detecting significant currency exposure. We further bring forth ways of measuring the potential economic exposure that firms are confronted with. Compared with the capital market approach, we find some evidence of the relative strength of cash flows for detecting exposure in the two emerging markets.
Meanwhile, the results indicate a considerable exposure difference between Chinese and Taiwanese corporations under the two types of exchange rate regimes. The existing capital market approach estimates the exposure as the sensitivity of stock returns to movements in a trade-weighted exchange rate index while controlling for market movements: Rt = β0 + βm Rmt + βx Xt + εt (1), where Rt is the stock return for time t; Rmt is the market portfolio return for time t; Xt is the percent change in the exchange rate factor for time t; and βx is the foreign exchange exposure or residual exposure. Using equation (1), Jorion (1990) proposes a two-step estimation procedure in which he first estimates exposure from time series regressions of firm-level stock returns against market returns and a trade-weighted exchange rate. Then he uses the coefficient of the exchange variable as the dependent variable in a cross-sectional regression to be explained by a firm's characteristics. His results show that out of 287 U.S. multinational corporations only 5.23% (15 firms) exhibit significant exposure. Most of the succeeding studies follow this two-step estimation procedure to examine the exposures of firms in different countries (see Table 1). For example, He and Ng (1998) examine the exchange rate sensitivity of 171 Japanese multinationals. They find that 26.32% (45 firms) of firms have a significant response coefficient. Also, Schena (2005) studies 70 Chinese firms with A and B shares; disappointingly, he finds that only 12.86% (9 firms) of the sample have statistically significant exchange rate coefficients at the 10% level, and none at the 5% level. Muller and Verschoor (2006) reveal that 13.95% (114 firms) of 817 European multinationals have significant exposure. An alternative methodology, the cash flow approach, estimates the effects of exchange rate movements on a firm's operating income. Because it uses the real performance of operating income, it can reveal more accurately the economic exposure that firms are confronted with. Martin and Mauer (2003, 2004, 2005) employ this cash flow approach to examine the exchange rate exposure of various industries of U.S. multinational corporations. Their results indicate that cash flow effects are greater for long-term lags than for short-term lags in exchange rate movements for the currencies examined. Moreover, they find that a sample of U.S.-based multinational corporations with heavy involvement in Europe is less frequently exposed to European currency risk than to non-European currency risk. In this study, we aim to measure the foreign exchange exposure of Chinese and Taiwanese companies by using the cash flow approach. Following the previous empirical results, we test three hypotheses. H1: The performance of the cash flow approach is better than that of the capital market approach in terms of detecting exposure. We argue that the results obtained from a stock-return perspective differ from those obtained from a cash-flow perspective. Because stock returns are prone to be influenced by market supply and demand, microstructure, and macroeconomic factors, stock returns may not have a close relationship with the performance of company operations. More seriously, compensation in the form of stock or stock options may induce managers to focus heavily on stock price performance. Under this circumstance, managers may try to support stock prices in order to receive payoffs from granted stock or stock options, and the relation between exchange rate volatility and firm valuation will be further distorted.
Therefore, whereas the capital market approach uses stock returns that may be easily affected by many different factors, the cash flow approach, using the real performance of operating income, can measure exchange rate risk more accurately. H2: Short-term exchange rate exposure tends to increase (decrease) with the level of foreign involvement (firm size). Copeland et al. (2004) argue that multinational corporations are not all giant firms; more than half are small firms with fewer than 100 employees. Even firms and individuals not directly engaged in international business will be affected by the relative values of domestic versus foreign currencies. However, revenues and costs are more directly affected by exposure for firms engaged in international business. Based on existing empirical research on the U.S. and Japanese markets, we expect that firms with higher foreign involvement have higher exchange rate exposure. Larger firms have a motive to hedge because they are better able than small firms to cover the fixed costs of derivatives. Therefore, we expect that larger firms have lower exchange rate exposure.
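As an illustration of the cash-flow approach, the following minimal Python sketch (our own illustration, not the authors' estimation; all series are simulated stand-ins) regresses percentage changes in operating income on current and lagged changes in a trade-weighted exchange rate index. For simplicity it leaves the lag coefficients unrestricted, whereas the paper imposes the polynomial distributed lag (PDL) structure on them.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, lags = 60, 4
dfx = pd.Series(rng.normal(scale=0.02, size=n), name="dfx")             # % change in the FX index
cash = 0.5 * dfx.shift(2).fillna(0.0) + rng.normal(scale=0.01, size=n)  # % change in operating income

# Current and lagged exchange rate changes as regressors.
X = pd.concat([dfx.shift(k).rename(f"dfx_lag{k}") for k in range(lags + 1)], axis=1).dropna()
y = cash.loc[X.index]

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())  # significant lag coefficients indicate exposure

In this framework, significant coefficients on the contemporaneous and short lags would point to transaction exposure, while significant coefficients on the longer lags would point to economic exposure.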

 

Investment Under Uncertainty with Stochastic Interest Rates

Dr. Cherng-Shiang Chang, FRM, China University of Technology, Taipei, Taiwan

 

ABSTRACT

In recent years, real options analysis, developed to cope with an uncertain future, has had a substantial influence on corporate practice because it offers new insights into corporate finance and strategy (Smit and Trigeorgis, 2004).  In applying option pricing theory, unlike work in the financial derivatives area, most studies of corporate finance and investment problems assume that the underlying dynamics follow a geometric Brownian motion and that the discount rates of the expected cash flows are constant, in order to obtain a closed-form solution.  In the real world, however, the dynamics of the underlying usually track a product-specific lifecycle, in contrast to the monotonically trending path characterized by geometric Brownian motion.  Further, it is obvious that discount rates or risk-free interest rates are not constant either.  In this article, we relax these restrictions: first, we employ the Ornstein-Uhlenbeck process for the underlying to match a real product-specific lifecycle; second, we set up the classical Vasicek (1977) model to describe the interest rate dynamics.  The derived partial differential equations (PDEs) are so complicated that a novel finite difference method is then selected and implemented to solve the problem numerically.  Project valuation using real options has been a subject of much research during the last 15 years (Ingersoll and Ross, 1992; Dixit and Pindyck, 1994).  Real options analysis, developed to cope with an uncertain future, is already having a substantial influence on corporate practice because it offers new insights into corporate finance and strategy (Smit and Trigeorgis, 2004).  Grenadier and Weiss (1997) develop a model of the optimal investment strategy for a firm confronted with a sequence of technological innovations.  Pindyck (1993) is probably the first to treat technical uncertainty exogenously, with the project advancing randomly through its stages.  Panayi and Trigeorgis (1998) evaluate an IT infrastructure project in two stages: an initial stage in which the organization develops the information systems needed for its future operation, and a second stage in which it proceeds to expand its network.  Brach and Paxson (2001) model investment in the drug development process using a Poisson real option analysis.  Schwartz and Zozaya (2003) employ a two-factor diffusion model to analyze investment in the IT industry in both acquisition and development projects.  Schwartz (2004) argues that patents and R&D projects can also be regarded as a complex option on variables underlying the value of the project.  In applying option pricing theory, unlike work in the financial derivatives area, most studies of corporate finance and investment problems assume that the underlying dynamics follow a geometric Brownian motion and that the discount rates of the expected cash flows are constant, in order to obtain a closed-form solution.  In the real world, however, the dynamics of the underlying usually track a product-specific lifecycle, in contrast to the monotonically trending path characterized by geometric Brownian motion.  Further, it is obvious that discount rates or risk-free interest rates are not constant either.  In this article, we relax these restrictions: first, we employ the Ornstein-Uhlenbeck process for the underlying to match a real product-specific lifecycle; second, we set up the classical Vasicek (1977) model to describe the interest rate dynamics.
The derived partial differential equations (PDEs) are so complicated that a novel finite difference method is then selected and implemented to solve the problem numerically.  Our basic model is similar to the model in Dixit and Pindyck (1994).  Consider a firm that undertakes an irreversible investment by paying a sunk cost I (> 0).  After the investment for the product is made, the firm receives revenues and incurs costs during the product life cycle.  In the present paper, we employ the Ornstein-Uhlenbeck process for the revenues and costs to match the real product-specific lifecycle.  Let EP*[·] denote the expectation under a risk-neutral measure, conditional on the information available at time t.  The value of the project, V, is then given by Equation (7), which describes the profit rate obtained by discounting the expected cash flows under a risk-neutral measure at the random spot rate.  It is hence necessary to model and calculate the random discount factor Du at each u.  While Equation (10) has a similar form to Equations (1) and (2), the bond price dynamics are not uniquely specified by the geometric processes because the drift and diffusion parameters may be functions of B and are not constant.  The above setup is known as the Vasicek model, after the seminal work of Vasicek (1977).  By applying Ito's Lemma and the Feynman-Kac theorem (Neftci, 2000), Equations (7) and (11) must satisfy the corresponding partial differential equations (PDEs).  The value F in Equation (14) is a type of American call option with strike price I.  Once Equations (12)-(13) are solved, it is straightforward to calculate the value of the option to invest, F.  The models described in the previous sub-section are so complicated that the analysis is best carried out numerically.  Brennan and Schwartz (1977) first introduced finite difference methods to deal directly with PDEs.  Specifically, finite difference methods are suited to American-style claims because of their backward dynamic programming structure.  Chang (2007) presents the finite difference algorithm for a PDE with two state variables in detail.  Though it is illustrated for the two-state-variable case, the methodology can be generalized in a similar way to problems with more state variables.  Equations (12)-(13) were then solved using an "implicit finite difference approximation" (Hull, 2006) on a (t, r) space with a (41 x 41) grid and on a (Ct, Rt, t) space with a (31 x 41 x 21) grid.  The second-order partial derivatives with respect to the state variables are discretized using second-order accurate central difference approximations.  The first-order partial derivatives with respect to the state variables, on the other hand, are discretized using first-order accurate upwind difference approximations.
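To make the assumed dynamics concrete, the following minimal Python sketch (our own illustration, not the paper's finite-difference solver; all parameter values are hypothetical) discretizes the Ornstein-Uhlenbeck cash-flow process and the Vasicek short rate with an Euler scheme and Monte-Carlo-discounts the cash flows, a simulation analogue of the project value V rather than the PDE solution used in the paper.

import numpy as np

rng = np.random.default_rng(42)
T, steps, paths = 5.0, 250, 10_000
dt = T / steps

# Ornstein-Uhlenbeck cash-flow driver: dC = kappa_c*(theta_c - C) dt + sigma_c dW1
kappa_c, theta_c, sigma_c, C0 = 1.2, 8.0, 1.5, 2.0
# Vasicek short rate: dr = kappa_r*(theta_r - r) dt + sigma_r dW2
kappa_r, theta_r, sigma_r, r0 = 0.8, 0.04, 0.01, 0.03

C = np.full(paths, C0)
r = np.full(paths, r0)
discount = np.ones(paths)   # running stochastic discount factor, analogue of D_u
pv = np.zeros(paths)        # accumulated discounted cash flows

for _ in range(steps):
    dW1 = rng.normal(scale=np.sqrt(dt), size=paths)
    dW2 = rng.normal(scale=np.sqrt(dt), size=paths)
    C += kappa_c * (theta_c - C) * dt + sigma_c * dW1
    r += kappa_r * (theta_r - r) * dt + sigma_r * dW2
    discount *= np.exp(-r * dt)
    pv += discount * C * dt

print("Estimated project value V:", pv.mean())

Valuing the option to invest F on top of V would additionally require the early-exercise comparison with the sunk cost I, which is where the implicit finite-difference scheme described above comes in.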

 

A Research Study of Frederick Herzberg’s Motivator-Hygiene Theory on Continuing Education Participants in Taiwan

Dr. Ching-wen Cheng, National Pingtung University of Education, Taiwan

 

ABSTRACT

This study seeks to determine the factors motivating on-campus continuing education participants in Taiwan using Frederick Herzberg's motivator-hygiene theory. Herzberg's motivator-hygiene theory, also referred to as the two-factor theory, is commonly used in the academic area of organization management (Jones, George, & Hill, 2000). Due to the costs involved and the study's limitations, the research sample of the present study included students enrolled in the "2006 Human Capital Investment Plan" continuing education program at National Pingtung University of Education, a government plan that tries to improve Taiwanese laborers' career competency by cooperating with higher education institutions (Bureau of Employment and Vocational Training, 2006). National Pingtung University of Education is a public education institution located in southern Taiwan offering bachelor, master, and doctoral programs. The purpose of this study is to construct a management perspective on adult learning motivation and to provide the students' motivators to the program administrators. The research determined that the major motivators of adult students' participation are personal-advantage creation, personal-need recognition, learning enjoyment, program schedule, the institution's reputation, personal growth, and demand in the new economy. Furthermore, this study also found hygiene needs, including organizational policy, new friends, relationships with subordinates, peer pressure, and workplace management authority, to be significant. Based on the research data analysis, no significant difference exists between male and female adult students' motivation for learning. Finally, this study found no significant difference in motivation among adult students of different age groups. In studying adult learners' motivation, many scholars have tried to develop a theory to explain why adult learners participate in continuing education programs on campus. Houle (1961) stated that adults return to school to learn based on three types of learning motivators: job-related reasons, activity-related reasons, and learning-related reasons. Houle's typology became the first academic document to focus on adult learners' motivations. Based on the field theory of psychological status, Miller (1967) developed another point of view regarding adult learners' motivators, believing that an adult participates in education as the result of a social force influencing the individual's mind. Meanwhile, Boshier (1971) used his congruence model to explain why adults return to school for continuing education programs, asserting that adult learners' motivational strength relates to a congruence between the educational environment and the individual's internal psychological status. As academic research on motivation expanded, so did the theories about adult learners. Tough (1980) constructed the anticipated benefits theory to explain adult learners' participation in continuing education programs. Tough assumed that adult learners understand the reason for participating in such programs and expect specific benefits from the learning process. Meanwhile, Cross (1981) developed her chain-of-response model to describe how adults implement participation in education. According to Cross, an individual's participation in continuing education programs is the result of a chain of responses to several events.
Concerned about the lifespan of adult learners, Cookson (1986) developed the Interdisciplinary, Sequential specificity, Time allocation, and Lifespan (ISSTAL) model to describe the participation of adult learners. The major concept of the ISSTAL model is that an adult participates in an educational program as part of his or her social activities. Unlike earlier theories, the ISSTAL model not only explains the motivation of adult learners, but also tries to predict their future participation. More recently, Henry and Basile (1994) built their decision model to explain why adult learners choose to participate in continuing education programs. According to this model, an adult decides to participate based on the influence of both the learning motivation and the participant block. Although these theories are useful for understanding adult learners' motivation to participate in continuing education programs, they focus on the viewpoints of adult learners, not program managers. For those who administer continuing education programs on campus, a need exists to focus on how to attract more adult students to their programs. In response to this demand, the current researcher tried to rethink the issue of adult learning motivation from another viewpoint. In the field of organizational management, the motivator-hygiene theory was developed to describe the relationship between employees and their organizations (Herzberg, 1968; Herzberg, Mausner, & Snyderman, 1959). According to the study hypothesis of significant similarity between the two situations, the motivator-hygiene theory might be able to describe the relationship between adult learners and their institutions. Therefore, the purpose of this study is to construct a new perspective on adult learning motivation, attempting to discover adult learners' motivators while helping program managers to attract more adult students to continuing education programs. When discussing human needs, an unquestionably significant milestone is the famous hierarchy theory put forth by Maslow (1954), the father of humanistic psychology. The hierarchy theory suggests five levels of human needs: physiological, safety and security, love and belonging, esteem, and self-actualization. Physiological needs include everything needed to keep the human body alive, such as water, food, oxygen, appropriate environmental temperatures, sleep or rest, excretion, and healthcare. Safety and security needs represent those items that can help an individual avoid fear and anxiety, such as financial insurance, a safe neighborhood, career stability, and security protection. Love and belonging needs include those things that eliminate loneliness, such as good friends, offspring, a significant other, and social relationships. In the hierarchy theory, esteem needs include those items that people need to feel noticed and important to their meaning of life; these needs include social status, reputation, appreciation from others, glory, attention from others, achievement, independence, freedom, mastery, and confidence. Finally, self-actualization is the highest level of human needs in the classic hierarchy theory. Once people have completely met all the needs of the lower levels, they can demand self-actualization. Maslow's examples of people who were truly self-actualizing include Abraham Lincoln, Thomas Jefferson, Albert Einstein, Eleanor Roosevelt, Jane Addams, William James, Albert Schweitzer, Benedict Spinoza, and Aldous Huxley.
Thus, self-actualization seems like a standard for being a glorious person, not a human need. Similarly, the ancient Chinese philosopher Confucius said that a great man is a person looking for the triumph of the self without fear or anxiety (Tsai, 2001). Unlike Maslow, Frederick Herzberg tried to look for human needs at the vocational level in another way (Jones, George, & Hill, 2000). Herzberg surveyed 203 engineers and accountants in Pittsburgh and established the motivator-hygiene theory (Herzberg et al., 1959), pointing out that the needs of people at the vocational level can be divided into two separate groups: motivators and hygiene needs (King, 1970). Under the motivator-hygiene theory, Herzberg (1969) suggested that hygiene needs cannot bring true happiness to people in their vocational environment; only motivators can truly stimulate people to work hard and enjoy their jobs (Herzberg, 1976, 1984). Hygiene needs include organizational policy, salary, relationships with co-workers, job benefits, working conditions, traffic during the commute, relationships with subordinates, career stability, relationships with the supervisor, a guaranteed retirement fund, and so on; meanwhile, motivators include personal growth, passion for the job, social responsibility, opportunity for advancement, and the feeling of achievement. One of the biggest arguments surrounding the motivator-hygiene theory is that people usually confuse money with a primary motivator (Daft, 2003); if money only represents buying power, it should not be considered a primary motivator. On the contrary, money could be a primary motivator in the vocational environment if it represents not only buying power, but also a symbol of achievement at work. People seek hygiene needs such as salary in a vocational environment because they are unsatisfied without them; however, these needs cannot truly motivate people to work hard.

 

Tax Burden Convergence in EU Countries: A Time Series Analysis

Dr. Tiia Püss, Tallinn University of Technology, Tallinn, Estonia

Mare Viies, Tallinn University of Technology, Tallinn, Estonia

 

ABSTRACT

Taxes are an important fiscal policy instrument and the main source of revenue for any country, used to regulate and influence the country's economic and social development. The EU has harmonized standards and regulations in numerous areas; however, there has been a lower degree of harmonization in taxation. Significant measures towards harmonization have been raised strategically on the EU agenda. The aim of this paper is to analyze and compare the trends in tax burdens in the European Union countries and to test for convergence in taxation using the time series approach. We use harmonized data on tax revenue and the tax burden in the European Union countries collected by the OECD and Eurostat for the period 1970-2004. The issues of economic convergence have been in the focus of interest of many theoretical and empirical studies over the last two decades. Many concepts of convergence and different econometric methods for empirical analysis have been proposed. There are two main trends in the methodological approach: the cross-sectional approach is based on the neoclassical growth model and studies convergence between countries or regions through the relationship between the initial economic level and average growth (β-convergence); the time-series approach treats convergence as a stochastic process and uses mainly unit root or cointegration tests in empirical analysis. In an open economy, the tax policy of one country may affect economic activity and public revenue in another country. Although lower taxes can yield significant efficiency gains, there is a risk that the financing of public goods and social protection will be shifted to the least mobile tax bases – labor, or that the production of public goods and the welfare systems will be endangered, especially in those countries where income redistribution, social protection and public goods provision are given a high weight in social preferences. The tax harmonization process in the past decade was designed to meet the objectives of improving the economic environment and facilitating development, which are still relevant. Significant measures towards harmonization have been raised strategically on the EU agenda. Our previous analysis indicated that the tax burden increased in most of the countries over the period 1980-2003 (Püss et al., 2006). Particularly fast growth of tax burdens occurred in the 1980s; this growth slowed down in the 1990s and even reversed in 1999-2003. The reasons for these tax burden developments have differed across countries. The EU-10 countries are characterized by much lower tax burdens than the EU-15 countries. Our research also supported, in general, the notion of σ-convergence and β-convergence in tax revenue as a share of GDP in the EU-15 and EU-10 countries. In this paper our analysis is based on the concept of stochastic convergence. We investigate the convergence of tax burdens in a time series framework and use several unit root tests. Using harmonized data on tax revenues and tax burdens collected by the OECD and Eurostat, mainly for the period 1970-2004, we provide an analysis of the main trends in tax burdens in the European Union countries. As data covering that period are available only for the EU-15, we discuss only these countries. According to the economic theory of convergence, the economic development level of less developed countries should approach the level of more advanced countries that have the same economic resources or fundamentals.
Socio-economic convergence is mainly discussed in the context, and on the basis, of two main economic growth theories: the neo-classical and the endogenous. Two main concepts of convergence are used in the classical literature of growth theory: σ-convergence and β-convergence (Quah, 1996; Sala-i-Martin, 1996). One of the simplest methods for estimating socio-economic convergence is the calculation of σ-convergence, which is based on the standard deviation. With this method it is possible to examine how the dispersion of national income levels (or other indicators) has changed, or how the differences in indicators inside groups of countries are changing compared to the average (Baumol, 1986; Dorwick and Nguyen, 1989; Barro and Sala-i-Martin, 1991, 1992a, 1992b). A declining coefficient of variation (standard deviation/arithmetic mean) of an indicator signals a reduction of differences, i.e., the presence of σ-convergence. The test for the presence of β-convergence (Baumol, 1986; DeLong, 1988; Barro and Sala-i-Martin, 1991, 1992a, 1992b; Sala-i-Martin, 1994; Boyle and McCarthy, 1997) posits that β-convergence exists if a poor economy tends to grow at a faster rate than a rich one, so that the poor country tends to catch up in terms of per capita income or product. The literature distinguishes between absolute (unconditional) and conditional β-convergence. Absolute β-convergence pertains to the coefficient β of the bivariate equation and is based on the assumption that all countries in the sample converge to the same steady state. Conditional β-convergence pertains to the coefficient β of the socio-economic level variable in an equation that includes additional explanatory variables reflecting differences across countries, which direct each economy to converge to its own steady state. In both cases, the convergence hypothesis is that the growth rate of a socio-economic indicator will be negatively related to the level of this indicator. A simple but unbiased measure of convergence that is consistent with Sala-i-Martin's (1994) concept of β-convergence is γ-convergence, which is concerned with tracking the mobility of individual countries within the distribution of income levels over time. Taking this interpretation as given, a straightforward and direct assessment of inter-temporal mobility requires an examination of the change in the ranking of income levels. A simple measure that captures the change in rankings is Kendall's index of rank concordance (Boyle and McCarthy, 1997). The time-series approach views convergence as a stochastic process, and in empirical analysis mostly unit root or cointegration tests are used. Following Bernard and Durlauf (1995, 1996), we can distinguish between two definitions of convergence: the weak notion of catching-up and the strong notion of long-run convergence. Both notions are testable in a time series framework. Catching-up implies that the difference between the two series yi and yj is a stochastic variable with a non-zero mean, suggesting that the deviation between the series, even if expected to decrease, would not disappear: E(yi,t+T − yj,t+T | It) < yi,t − yj,t, where It denotes all information available at time t and it is assumed that yi,t > yj,t. A sufficient condition for catching up is the absence of a stochastic trend in the difference between the two variables, while a deterministic trend is allowed.
Long-run convergence, as a more demanding level of convergence, is defined as the equality of long-term forecasts at a fixed time, which means lim(k→∞) E(yi,t+k − yj,t+k | It) = 0. Long-run convergence precludes both stochastic and deterministic trends in the cross-country differences, and the series of gaps should have a zero mean. However, for our analysis the requirement of a zero mean may be too strict. There may be a stable difference between countries' tax burdens that reflects some fundamental differences between countries, mainly due to the economic and social development model of each country. To capture the notion of a stable long-run difference, we distinguish between absolute and conditional convergence: convergence is conditional when the constant C in lim(k→∞) E(yi,t+k − yj,t+k | It) = C differs from zero, and then the two countries i and j converge toward an equilibrium differential. Evans and Karras (1996) used a panel data approach. A convergence process occurs when the deviations of countries i = 1...n from the cross-country average approach individual constant values as time approaches infinity. They distinguished between absolute and conditional convergence on the basis of individual effects μi. Convergence is absolute if μi = 0 for all i = 1...n, or conditional if μi ≠ 0 for some i.
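As an illustration of how stochastic convergence can be tested in this framework, the following minimal Python sketch (our own illustration, not the authors' code; the series is a simulated stand-in for the difference between two countries' tax-to-GDP ratios) runs an augmented Dickey-Fuller unit root test on a pairwise tax-burden gap.

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(7)
years = 35                                    # roughly the 1970-2004 sample length
gap0 = 5.0                                    # hypothetical initial gap in % of GDP
gap = gap0 * 0.9 ** np.arange(years) + rng.normal(scale=0.3, size=years)

stat, pvalue, *_ = adfuller(gap, regression="c")   # constant allowed: conditional convergence
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")

A small p-value rejects a unit root in the gap series; with a constant included this is consistent with conditional convergence toward an equilibrium differential, while restricting the constant to zero would correspond to absolute long-run convergence.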

 

Synergy in Business: Some New Suggestions

Dr. Vojko Potocan, University of Maribor, Maribor, Slovenia

Bostjan Kuralt, University of Maribor, Maribor, Slovenia

 

ABSTRACT

In the global competitive environment, enterprises can only survive (in the long term) by permanently improving their business. They have limited resources and they face very harsh conditions; therefore, they can (significantly) improve their business results if they organize their work better, e.g. by implementing potential/possible synergies. The concept of synergetic working is based (also, or even primarily) on the application of the process approach and of the (dialectically) systemic understanding of the enterprise as a business system (BS). Synergetic working enables BSs to attain the best overall results by harmonizing the activities of all BSs' parts and considering the needs (and requirements) of the BSs' environments. This contribution deals with two theses: 1) Synergy is encountered and studied in various sciences; it is equally important in the business sciences; 2) The process approach and (dialectically) systemic consideration enable us to define synergy in BSs requisitely holistically (in terms of both content and mathematics) and to use it for further consideration of BSs in the business sciences. The new global market pressure makes it increasingly difficult for enterprises to compete rationally (efficiently and effectively), respectably (reasonably from the aspect of their business behavior image), ethically (a morally appropriate and responsible attitude in harmony with their social and natural environment) and innovatively (novel services/products and production systems, and gaining additional benefits from these). Important characteristics of many successful enterprises include using the process approach and systems understanding. This enables the effective creation and operation of their business. On this basis they define their operation as a relatively open and dynamic business system, which mainly depends upon co-operation and the synergy of actions of all areas and levels of their functioning. A brief comment about the process approach: for millennia, work processes used to be simpler and less changeable than today; they were physically demanding, required little creativity, and provided little reward. Thus, bosses had the need to force subordinates to be productive and to coordinate/direct their activities. These actions require a favorable hierarchy to support/direct process aspects. In the next step, bosses became many and existed on many levels. This caused many of them to forget that (1) outcomes result from processes, and (2) the organizational hierarchy is supposed to facilitate processes and to be adapted/subordinated to them. A brief comment about the concept of systems understanding: in Webster's dictionary (Gove, 1987) the notion of system has fifteen groups of meanings/contents; hence, it is unclear. The term "system" has many shades of meaning according to context in systems theory, too (Bertalanffy, 1968; Wiener and Masani, 1976; Mulej, 1979, 2000, 2004; Checkland, 1981; Potocan, 1997, 2002, 2005; Flood, 1999). Our conclusion may encompass the following statements: in order to be understood by the reader, authors must never use the notion "system" without at least an adverb denoting the tacitly or explicitly selected viewpoint, and hence the resulting selection of attributes of the object under consideration. System is supposed to mean holism of consideration, of which requisite holism is the best attainable level, while a fictitious holism is more common because every human is unavoidably a narrow specialist.
An enterprise can be treated as a business, a social, a natural and any other system, depending on the authors' selection of viewpoint/s. Also, any organization or individual can be treated as a business system when a special interest is transformed into a business operation. Organizations as business systems (BSs) are more and more under the impact of synergistic functioning, which enables harmonized activity of all parts of the organization, e.g. concerning their environments (e.g. business, social, natural, etc.). The introduction of synergetic functioning, instead of functioning as a set of independent rather than interdependent divisions, requires enterprises to reconsider and restructure their intra-business associations, their style and content of business objectives and realization, etc. This step may provide a way toward the requisite holism and may result in improvement of business processes, their management and outcomes. But this innovation requires more openness and cooperation from BS members.  The enterprise as a business system (BS) is, therefore, faced with many challenges of how to set up its intra-business associations in its new business environment/s, full of interdependent and unavoidably narrow specialists, who have had very little education for interdisciplinary and inter-functional creative cooperation. Thus, synergy is a strange notion to most of them.  From the whole area of research on synergy and its rules in business we focus our attention on the questions of: the characteristics of present studies of synergy, the role of synergy in business, and a more holistic definition of synergy in business.  The word synergy originates from ancient Greek, as a compound noun made up of “syn”, which means “with, together with, at the same time”, and “ergon”, which means “work, to work, to function” (Gove, 1987; Bowman and Faulkner, 1997; Black, 1997). The dictionary defines synergy as “to collaborate, mutual support and mutual supplementing of two or more forces or organs” (Gove, 1987; Black, 1998). Research has developed a series of different concepts of synergy investigated by numerous and different sciences (for example economic, technical, social, sociological, legal) (Wiener, 1956; Ansoff, 1965; Lange, 1965; Ashby, 1968; Wiener and Masani, 1976; Haken, 1977; Porter, 1985; Kajzer, 1996, 2000; Potocan, 1997, 2002). From business viewpoints, the concept of synergy was introduced by Ansoff (1965) in connection with strategy in his book “Corporate Strategy”. Ansoff states that “synergy refers to the idea that firms must seek a product-market posture with a combined performance that is greater than the sum of its parts”, more commonly known as “2+2=5”. Ansoff has suggested that synergy can be classified into three broad categories: collusive synergy, operational synergy, and financial synergy. Much research has examined the benefits of relatedness among the businesses in multi-business corporations (Rumelt, 1974; Ramanujam and Varadarajan, 1989; Hoskisson and Hitt, 1990; Markides and Williamson, 1996). When operations are managed appropriately, both intangible and tangible synergies will develop that make the yield of corporate strategy more than the sum of the yields from the individual divisions’ strategies separately (Kanter, 1989; Porter, 1985). This synergy will put slack resources that might not have been fully used to work, and scarce resources may in turn be obtained from the market (Teece, 1980; Porter, 1985).
The performance of these linked divisions by means of synergy may be superior to that of other firms (Markides and Williamson, 1996). Firms striving for synergy must look beyond the sole goal of cost coordination and try to develop far superior benefits (Govindarajan and Fisher, 1990; Corning, 1998, 2001; Moggridge, 2006). An example of intra-company synergy is the degree to which the pool of knowledge and experience in a BS can be deployed to reduce the cost or time required to create new strategic assets or to expand the stock of existing ones (Markides and Williamson, 1996). The potential for synergy can be measured by how closely the operations are similar, but actual synergy is not realized until market gains or innovations have produced added profit after the administrative costs have been taken into account (Slusky and Caves, 1991; Corning, 2001; McDonagh and Goggin, 2005; Moggridge, 2006). Also, managers must be heavily involved in this process, since poor implementation and coordination will militate against realizing actual synergies (Corning, 1998, 2001).  For synergy, both commitment and coordination are required, and administrative mechanisms must be in place to encourage these (St. John and Rue, 1991). Coordination is especially complex for encouraging synergy because the environmental conditions, the firm’s strategy, and its structure complicate implementation (Gupta and Govindarajan, 1991).

 

Cross Analysis on the Contents of Children’s Television Commercials in the United States and Taiwan

Yi Hsu, National Formosa University, Taiwan

Liwei Hsu, National Kaohsiung Hospitality College, Taiwan

 

ABSTRACT

The influence of television programs on children cannot be overlooked, TV commercials included.  This study is designed to examine the factors that are decisive for the marketing effects of children’s TV commercials.  The main themes of this research are the discussion of how the contents of Taiwanese children’s commercials differ from those of U.S. commercials and the explanation of the reasons that cause these differences.  Six research hypotheses are tested, and the results of the content analysis show that four of the six research hypotheses are accepted. The hypotheses on uncertainty avoidance and masculinity/femininity are not supported by the statistical examination.  Television is a powerful socializing force for children, reaching them daily with extraordinarily high levels of exposure.  Children between six and fourteen years old watch approximately 25 hours of television weekly and view approximately 20,000 commercials annually (Moore et al., 2000).  In November 1999, the U.S. Kaiser Family Foundation published reports on children’s and teenagers’ exposure to television.  This investigation indicated that American children watched TV for an average of 2 hours and 46 minutes daily.  Since commercials represent around 20 percent of the content of children’s television, television advertising is a pervasive presence in the lives of most American children.  A similar situation exists in Taiwan.  An investigation by a Taiwanese advertising magazine on children in the 4 to 14 year old age group found that television watching was a major recreational activity among this age group in Taiwan.  Furthermore, an investigation by the Foundation of Broadcasting and Television reported that two thirds of Taiwanese children watch TV every day.  Three children’s television channels, Yo Yo TV, the Disney Channel, and the Cartoon Network, broadcast children’s programs and cartoons 24 hours per day. Unlike adults, children are relatively uninformed about product quality and prices, and have a comparatively less-developed awareness of the influence of advertising.  Consequently, children are extremely receptive to the messages from television, including messages from TV programming and advertising (Oates et al., 2002).  Television advertising thus significantly impacts children’s values.  Advertisers know that TV advertising must reflect the values of the audience effectively.  Thus, for children, just as for adults, TV advertising must reflect and influence the cultural values of the audience.  These cultural values are dynamic, and can be associated with family structure, economic development, lifestyle, social mobility, and education (Hofstede, 1991). Content analyses of TV advertisements have generally been well researched (Shao et al., 1999, Callcott et al., 1994, Huang, 1995, Cho et al., 1999, Lin, 2001).  However, only a handful of related studies examine the contents of commercials for children.  Previous research focused entirely on gender stereotyping (e.g. Smith, 1994, Furnham et al., 1997, Browne, 1998, Lin, 2001).  Gender values are not a holistic concept of culture, and thereby can only partially explain cultural values.  This study is designed to describe how the contents of Taiwanese children’s commercials differ from those of U.S. commercials and to explain the reasons that cause these differences.
According to pertinent literature on communication arts, communication content is the consequence of antecedent conditions or contextual factors, including the cultures that have shaped its construction (Riffe, Lacy, and Fico, 1998).  A basic hypothesis of this study is that children’s commercials are a reflection of cultural conditions.  In other words, this type of commercial is developed and broadcast in a society with its culture embedded.  Therefore, differences between Taiwanese and U.S. children’s commercials can be predicted and explained by the dissimilar parts of the cultures of these two countries.  Under this assumption, this research discusses how Taiwan differs from the United States in each antecedent condition.  Then, research hypotheses are proposed regarding how the contents of children’s commercials might be expected to differ between the two countries as a result of each antecedent condition and social environment.  To test the hypotheses, the major research method adopted in this study is to record children’s commercials in the two countries for in-depth analysis.   The following sections identify differences in cultural values related to children’s advertising in Taiwan and the U.S., describe the content analysis procedures, discuss the findings, consider their marketing implications, and make suggestions for future research. Cultural variations have increasingly attracted attention from marketing scholars, who have recently begun to employ Hofstede’s culture model as a framework for studying cross-cultural differences (Albers-Miller and Gelb, 1996; Ji and McNeal, 2001).  Hofstede’s (1980) four cultural dimensions, power distance, collectivism/individualism, uncertainty avoidance, and masculinity/femininity, were designed as work values, and have been applied to various marketing topics (Albers-Miller et al., 1996).  Hofstede’s dimensions offer a good basis for understanding the influence of national culture on organizations’ self-representation but miss the actual practice of social activities (Harvey, 1997).  Therefore, this study proposes a fifth dimension, the social factor.  The social factor also influences the development of advertising and is a part of cultural values. Power Distance.  Power distance describes the extent to which less powerful members of institutions and organizations within a culture expect and accept an unequal distribution of power (Hofstede, 1991).  TV advertising reflects social differences with respect to the power distance norm (Huang, 1995).  In societies with greater power distance, advertising employs older people more frequently than in other societies, with the aim of enhancing credibility.  Additionally, in advertising in such societies, values such as tradition, history and prestige are stressed.  Lin (2001) examined cultural values as reflected in American and Chinese advertising.  The results indicated that the youth/modernity appeal that reflects the trend of westernization and modernization in China appears equally prominent in both Chinese and U.S. commercials.  Ji et al. (2001) examined children’s advertising and obtained analytical results that differed from previous research, failing to support the notion that Asian people have a stronger respect for authority and hence are more prone to accept authority than western people (Yau, 1988; Bond, 1991).  The findings of the research conducted by Ji et al. reveal that children’s television commercials from the U.S.
employ more adult voiceovers, models, characters and spokespersons than those from the People’s Republic of China.  This difference probably occurs because people in China are highly focused on serving the needs of children owing to the One Child Policy (Ji et al., 2001).  Unlike China, the Taiwanese government has encouraged an increase in the birth rate since 2002.  Taiwan has a longer history of being a democratic society than China but still emphasizes Confucius’s five cardinal relations between sovereign and minister, father and son, husband and wife, old and young, and friends.  According to Hofstede’s research in 1980, compared with Americans, who have low power distance, Taiwanese people show a stronger respect for authority.  Therefore, it is more natural for children to accept authoritative opinion leaders, including older people and family elders who recommend products or services in commercials (Ji et al., 2002).  Therefore, using adults as the spokespersons in a commercial should appeal more in Taiwan than in the U.S. H1a: Children’s TV commercials in the U.S. use an adult as the spokesperson less frequently than do advertisements in Taiwan. Traditionally, Taiwanese personality characteristics such as inner harmony, concern for others, and submissiveness to authority influence children (McDaniel et al., 1981).  Taiwanese children probably receive less respectful treatment because of the authoritarian family tradition wherein the parents make the decisions and the children are expected to obey (Chiu, 1989).  Taiwanese students perceived more influence from their parents in making decisions than American students did (Wolansky et al., 1991).  Taiwanese parents more strongly endorsed traditional Chinese values and exerted more parental control than American parents (Jose et al., 2000).  Traditionally, Taiwanese parents are much more authoritative than parents in the U.S. in terms of the influences on their children’s decision-making (Chiu, 1989).  Consequently, it is assumed that the major target audiences of most children’s TV commercials in Taiwan are adults rather than children.
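To make the hypothesis-testing step concrete, the short Python sketch below shows how a hypothesis such as H1a could be checked with a chi-square test of independence on spokesperson counts. The figures and the scipy-based procedure are illustrative assumptions of this edit, not the study's actual data or analysis.

from scipy.stats import chi2_contingency

# Hypothetical counts of sampled commercials, by country and by whether an
# adult appears as the spokesperson (rows: U.S., Taiwan).
observed = [[34, 86],   # U.S.: with adult spokesperson, without
            [61, 59]]   # Taiwan: with adult spokesperson, without
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
# A small p-value, together with a higher Taiwanese proportion of adult
# spokespersons, would be consistent with H1a.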

 

The Technology Disruption Conundrum

Von Johnson, Woodbury University, Burbank, CA

Pierre Ollivier, Ecole Polytechnique, Paris, France

 

ABSTRACT

During the 20th Century, the filmed entertainment business evolved from a regional studio factory system into the global media and entertainment industry we know today.  Once controlled by a handful of powerful and creative entrepreneurs (the studio ‘moguls’), the seven major studios are owned today by multi-national corporations, transformed into finance, marketing and distribution entities, content owners and licensors, network operators, recording companies and producers, Internet portals, game producers and publishers.  Some companies are vertically integrated in many or all of these businesses (e.g., Walt Disney, Time Warner, Viacom, NBC-Universal and News Corp.).  Although they may differ in scale and corporate culture, the common thread among them is an economic model that relies on the ability to control levels of presentation quality, and when, where and how consumers access entertainment content.  Traditional peripheral constituents supporting this supply model are the post-production and distribution service providers, and the systems/equipment suppliers that comprise the industry ‘ecosystem’ for content production, preparation and distribution.  This paper presents an argument that evolving media consumption habits are fueling technologies and innovations that threaten the ecosystem by disrupting the studios’ control and empowering consumers.  Moreover, the authors contend that technology disruptions are becoming increasingly insidious and occurring at faster intervals.  Information lags create missteps and confusion among the constituents, which negatively impact the entertainment industry’s ability to manage change.   To eliminate the effects of the “Technology Disruption Conundrum”, the authors call for greater transparency and shared strategic planning between studios and their suppliers along the post-production and distribution value chain.  Historically, the entertainment industry enjoyed relatively long periods of business stability between short periods of upheaval.  In the early to mid 20th century, the economics of the filmed entertainment business were largely under the control of a few “moguls” who owned or controlled practically every component along the value chain, from script to theater.  The studio moguls built their own production factories staffed with creative and craft resources at controlled wages and terms.  Even the most popular (and profitable) actors were held to exclusive, long-term contracts that dictated terms and wages favorable to the moguls.  To complete the monopoly in the United States, the major studios also owned extensive national theater chains that bore their names (Warner, Paramount, Fox, etc.).  Over time, this “studio factory system” collapsed under the weight of government regulation, organized labor and disruptive technologies.  Two outstanding examples of government intervention are the 1948 ‘consent decree’, which divorced the major studios from their domestic theater chains (Aberdeen, 2000); and the embracing of television production by the major studios in the 1970s, when the original three U.S. networks (NBC, CBS and ABC) were banned by the Federal Communications Commission’s Financial Interest and Syndication Rules (Fin-Syn) from financial interest in their productions, beyond first-run network broadcasts.  The rise of professional creative unions such as the Screen Actors Guild and the Directors Guild of America fundamentally changed the economics of movie and television production and distribution.
Actors, directors and other creative artisans organized to negotiate performance and participation terms that significantly differed from the previous long-term individual contracts.  In the early days of the filmed entertainment industry, professional advancements to the technical tools of the trade (e.g., color, audio, optics) were commonly developed through industry-centric companies such as Technicolor, Deluxe, Todd AO and Panavision.  These improvements were not disruptive because they enhanced the moviegoing experience to the benefit of the studios. Technologies create disruption when they empower consumers with products that a) reproduce creative experiences of a quality similar to those offered by studios (e.g., video cameras, editing software) or, b) challenge the studios’ control over when, where and how consumers access content (television, VCRs, Internet).  It is interesting to note that professional innovations in the filmed entertainment industry have almost always migrated from expensive professional products into low-cost consumer products; for example, photography, the phonograph, film projection, radio, computers, video cassette recording, etc.  The major filmed entertainment companies react negatively to disruptive technologies until they better understand how to incorporate the technology into their business model and regain stability (see Figure 1).  In fact, the entertainment industry has typically and significantly prospered by adopting common technical and business standards and practices around new technologies and innovations that initially threatened their control over content consumption.  The best and most recent example is the multi-billion-dollar windfall from catalog sales and sell-through due to VHS machines and DVD players.  During the first half of the twentieth century, and still in a monopolistic context, the cycles resulting from successive disruptions characterize a relatively benign and stable environment.  It is possible to illustrate this using Figure 2, a ‘systems map’ inspired by Dr. Peter Senge’s book The Fifth Discipline, which employs the notion of system dynamics to examine the interactive actions and reactions occurring between constituents in a dynamically complex social system (Senge, 1990). A high-functioning system continually exchanges feedback among its various parts to ensure that they remain closely aligned and focused on achieving the goal of the system.  If any of the parts or activities in the system seems weakened or misaligned, the system makes necessary adjustments to more effectively achieve its goals (McNamara, 2006). In this first example, we use systems mapping to visualize the adaptation and improvements following a potentially disruptive technological invention in the early to mid 20th century.  The wheel represents the event of a disruption and the subsequent reactions from a single constituent, a major motion picture studio.  By moving the system at a certain speed, one can represent the pace at which the reactions occur.  Set in the context of monopoly, the ‘impact’ and ‘panic’ stages occur when studio chiefs realize that a technology or innovation could disrupt their control over the consumer’s interest or access to their entertainment products.  As they came to understand the technology over time, the studio chiefs adapted their business models to embrace and manage the technology to their advantage.
For the better part of the 20th century, technologies and innovations that disrupted the filmed entertainment industry occurred far enough apart to enable reasonable, studied analysis and business adjustments.

 

Improved Algorithms for Lexicographic Bottleneck Assignment Problems

Dr. Zehai Zhou, University of Houston-Downtown, Houston, TX

 

ABSTRACT

The lexicographic bottleneck problem is a variant of the bottleneck problem, which is in turn a type of the traditional cost (or profit) minimizing (or maximizing) assignment problem. In this paper, the author presents two polynomial algorithms for the lexicographic bottleneck problem. The suggested algorithms solve the lexicographic assignment problem with 2n nodes by scaling and/or sequentially solving a set of classical assignment problems, and both algorithms run in O(n^5). These algorithms improve on the previous ones, devised by R.E. Burkard and F. Rendl [3], by a factor of log n. In the special case where all the cost coefficients are distinct, an algorithm with run time O(n^3.5 log n) is presented. A numerical example is also included in the paper for illustration purposes. The cost (or profit) minimizing (or maximizing) assignment problem has been extensively treated in the literature and many polynomial algorithms have been devised to solve it [1, 8]. Several variations of the classical profit (cost) maximizing (minimizing) assignment problem have also been analyzed by researchers and efficient algorithms are readily available. For instance, Garfinkel [4] and Ravindran and Ramaswany [10] studied bottleneck assignment problems, Martello et al. [7] discussed balanced assignment problems, Berman et al. [2] studied several different constrained bottleneck assignment problems, Seshan [11] discussed generalized bottleneck assignment problems, Geetha and Nair [5] presented a polynomial algorithm to solve an assignment problem with an additional ‘supervisory’ cost, and Hall and Vohra [6] discussed an on-line assignment problem with random effectiveness and cost information, etc. In this paper, the author discusses yet another variant of the bottleneck problem, the lexicographic bottleneck problem. In the regular bottleneck problems, the maximum cost (or minimum profit) is to be minimized (or maximized). This is, however, sometimes too crude, or the solution is not as desirable as some other alternatives in some real world applications. For example, when one tries to assign n jobs to n workers, one may not only want the maximum completion time to be minimized but may also hope to have the second longest completion time, the third longest completion time, etc., minimized. There are many other real-life instances where one wishes to solve a lexicographic version of the assignment problem instead of the bottleneck assignment problem. This problem was first studied by Burkard and Rendl [3]. They presented two different algorithms for solving this problem. The first was based on scaling of the cost coefficients and the second was an iterative approach. The fundamental idea behind both algorithms is to reduce the lexicographic bottleneck problem to a traditional sum optimization problem by redefining the cost coefficients and then to solve the sum optimization problem so constructed. The computational complexity of both algorithms is O(n^5 log n) in the worst case.  In this paper, the author presents two polynomial algorithms for the lexicographic bottleneck problem. Our algorithms solve the lexicographic assignment problems with 2n nodes by scaling and/or sequentially solving a set of classical assignment problems and both run in O(n^5). In the special case where all the cost coefficients are distinct, we present an algorithm with run time O(n^3.5 log n). Consider the traditional cost minimizing assignment problem (i.e.
linear sum assignment problem or LSAP), where c_ij is the cost of assigning worker i to job j. Let X be the set of all feasible solutions satisfying constraints (2) through (4), and let α_1 ≥ α_2 ≥ ... ≥ α_n be the n cost coefficient values, arranged in non-increasing order, corresponding to x_ij = 1. Obviously, the solution of the LBAP is determined by the order of the cost coefficients rather than the actual values of these coefficients. Let m be the number of different values in the cost matrix C and let t_1 < t_2 < ... < t_k < ... < t_m denote these values. Instead of dealing with the original cost coefficients, new cost coefficients are defined as d_ij = k - 1 whenever c_ij = t_k. It is easy to see that the new cost coefficients satisfy d_ij ∈ {0, 1, ..., m-1}. It is also clear that this transformation keeps the relative order of solutions of the (LBAP) with cost matrix C. It now needs to be shown how to solve the LBAP with cost matrix E by an LSAP, where E will be chosen appropriately.  For ease of exposition, let us assume that all the c_ij are distinct. When all the c_ij are distinct, there are n^2 different values in matrix C, which means that D consists of 0, 1, ..., n^2 - 1. Set p_1 = 1, define p_k = 1 + (p_(k-n) + ... + p_(k-1)) for k > 1 (summing over the at most n immediately preceding values), and let e_ij = p_(d_ij + 1).  Proposition 1.  Let c_ij ∈ N be the cost coefficients of an LBAP. If the cost matrix E is obtained by scaling the original costs c_ij as described above, then a solution F is optimal for the LBAP if and only if F is optimal for the LSAP with cost matrix E.  Proof.  In any feasible solution, there are n variables (among n^2 variables) taking the value 1 and all other variables equal 0. The cost coefficients in E are obtained in such a way that element p_k is 1 more than the summation of the values of the n elements immediately smaller than p_k (i.e. elements p_(k-n) through p_(k-1), where k - n > 0). Now assume that F is an optimal solution for the LSAP. First of all, the largest cost value α_1 is minimized, since its scaled value is 1 more than the summation of any n elements whose orders are lower than the one corresponding to α_1 in cost matrix D. Suppose that there exists a solution with a lower α_1 value; then F is not an optimal solution for the LSAP, which contradicts the assumption. This argument also holds for all α_i, 2 ≤ i ≤ n. So we conclude that F is an optimal solution for the LBAP if it is an optimal solution for the LSAP. On the other hand, if F is not an optimal solution for the LSAP, then there must exist an optimal solution with a lower total cost sum. This indicates that there exists a solution with lexicographically smaller bottlenecks (α_1, α_2, ..., α_n), because the cost coefficients in E are obtained in such a way that element p_k is 1 more than the summation of the values of the n elements immediately smaller than p_k (i.e. elements p_(k-n) through p_(k-1), where k - n > 0). This implies that F is not an optimal solution for the LBAP, which completes the proof.
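As an illustration of the scaling construction just described, the following Python sketch (an assumption of this edit, not the author's implementation or the algorithms analyzed in the paper) ranks the cost coefficients, builds the p_k values, and hands the scaled matrix to an off-the-shelf LSAP solver; because the p_k grow exponentially, it is suitable only for small instances.

import numpy as np
from scipy.optimize import linear_sum_assignment

def lbap_by_scaling(C):
    # Solve a small lexicographic bottleneck assignment problem by scaling:
    # replace each cost by p_(d_ij + 1) and solve an ordinary LSAP.
    C = np.asarray(C)
    n = C.shape[0]
    values = np.unique(C)                        # t_1 < t_2 < ... < t_m
    D = np.searchsorted(values, C)               # d_ij in {0, 1, ..., m-1}
    p = [1]                                      # p_1 = 1
    for k in range(1, len(values)):
        p.append(1 + sum(p[max(0, k - n):k]))    # 1 more than the (at most) n preceding p's
    E = np.asarray(p, dtype=float)[D]            # scaled cost matrix
    rows, cols = linear_sum_assignment(E)        # optimal LSAP on the scaled costs
    alphas = sorted((int(C[i, j]) for i, j in zip(rows, cols)), reverse=True)
    return list(zip(rows.tolist(), cols.tolist())), alphas

C = [[4, 2, 7],
     [3, 6, 5],
     [8, 1, 9]]
assignment, alphas = lbap_by_scaling(C)
print(assignment)   # e.g. [(0, 0), (1, 2), (2, 1)]
print(alphas)       # lexicographically smallest bottleneck vector (alpha_1 >= ... >= alpha_n)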

 

Do Employees Trust 360-Degree Performance Evaluations? (A research on the Turkish Banking Sector)

Dr. Harun Demirkaya, Kocaeli University, Kocaeli, Turkey

 

ABSTRACT

Recently, performance evaluation has been emphasized and systematized in businesses because the strategic importance of human resources in creating a high-performance organization is well understood. However, this has led to arguments, since both the evaluated and the evaluator are humans. The 360-degree evaluation, which was designed to settle those arguments and to provide an objective evaluation, is now widespread in Turkey.  Trust is one of the determinants of the effectiveness of performance evaluation. It is even more crucial in the 360-degree performance evaluation because so many evaluators are involved. This study tests the degree to which the employees of a bank trust the 360-degree performance evaluation.  A business corporation is a socio-technical system in which many people of different abilities, dreams, and creative skills come together. Three types of behaviors are required to make the system work well. First, people must be convinced to join and remain in the organization. Second, employees must perform their job responsibilities reliably. Third, they must voluntarily dedicate their creative and innovative skills beyond any sense of duty (Werner, 2000:4). This third expectation is indispensable for organizations aspiring to high performance.  On the other hand, in these organizations, management becomes much more complex and multi-dimensional (Peterson, 2003:243). Performance evaluation provides input for almost all functions of HRM applications, and the system outputs are taken into consideration as objective data in decisions and applications.   Developed in the American army (Cadwell, 1995:23) and eventually spreading into businesses, a performance evaluation system evaluates employee performance as a sub-system of performance management (Milkovich and Boudreau, 1991:91). Measuring achievement and creating a high performance organization are crucial for businesses. They spend huge amounts of effort and money on them (Stiffler, 2006:17). Any improvement in individual performance may lead to much greater developments within an organization (Milkovich and Boudreau, 1991:92). Therefore, the first step in creating a high performance organization is recruiting high performance employees. This understanding adds strategic dimensions to performance evaluation and HRM. Despite the importance attributed to performance evaluation, this issue has unfortunately not been treated as a system in many corporations, and therefore a performance evaluation system that is suitable to the corporate strategy and culture could not be structured. Employees’ trust in the organization’s performance evaluation system plays a significant role in this respect. Trust directly influences the success of a performance evaluation system. The purpose of this study, conducted at a bank with a large presence in Turkey, is to test the level of trust in the performance evaluation system in general and in the 360-degree performance evaluation in particular.  Performance means “achievement” or “effectiveness.” Performance evaluation measures achievement or effectiveness as a function of human resources, and is becoming indispensable to the public and private sectors (Clement and Stevens, 1984:43). In this context, performance evaluation includes activities that determine an employee’s efficiency. The employee’s work, activities, weaknesses, strengths, competences and deficiencies are thoroughly examined (Fındıkçı, 2003:297).
Performance evaluation is “the process of evaluating job achievement of employees by measuring and comparing with predefined standards” (Palmer, 1993:9). This process must be managed well in order to be an effective tool of competition (Robertson, 2004:24).   However, this effectiveness is only possible if performance evaluation is integrated into the system. Otherwise, it will inevitably face certain problems. Performance evaluation is such a rapidly developing process that Kirkpatrick has increased the four levels that he initially proposed to seven (2006:5-8). Despite this rapid development, each organization has distinct conditions. Organizations determine the efficiency of their own applications according to their conditions, purposes and expectations. Therefore, a well-designed performance evaluation system should be a process of managing, evaluating, compensating, rewarding and developing performance in a way that contributes to the shared efforts to reach organizational targets (Barutçugil, 2002:125). A performance evaluation that accommodates the constantly developing and changing conditions within a system will prove highly useful for the evaluator, the evaluated, and the organization. In this context, its benefits for supervisors are as follows:  It enhances the planning and control functions of a supervisor. It strengthens a supervisor’s relationship and interaction with subordinates (Gliddon, 2004:28). It helps a supervisor in coaching (London and Smither, 2002:6) and mentoring (Holmes, 2004:3) employees more effectively. It guides a supervisor in determining which aspects of subordinates need to be improved. It makes it possible for a supervisor to evaluate his or her own performance. It facilitates delegating. It develops supervising skills. It facilitates teamwork. The benefits of performance evaluation for employees: They become aware of the organization’s expectations of them. They get a chance to see their weaknesses and strengths. They comprehend their roles and responsibilities. It meets their need to be noticed and recognized.  They learn how their achievement is measured. Their relationships and interaction with coworkers and superiors improve. Job satisfaction, self-confidence, and corporate loyalty develop. Individual career planning becomes more consistent. The benefits of performance evaluation for an organization: It clearly defines the corporate targets and purposes, and allows the organization to monitor the achievements of individuals, teams, and departments. It provides resources for reporting and an information system. It has a positive influence on organizational efficiency and profitability. It enhances the quality of goods and services (Philip, 1990:9). It allows for clear determination of employees’ potential and direction of development. It is useful in relating HR planning, wage management, career planning and training planning to performance outcomes (Gliddon, 2004:28-29), and thus ensures consistency. It enhances the consistency of the corporate training budget. It helps in inspecting the HR systems. It ensures flexibility in meeting the need for a short-term workforce. Determining the purposes and targets of performance evaluation is the first step towards creating a system (Drucker, 1989:67). It is necessary to define the corporate purposes clearly in order to set up a good system. One of the difficulties in performance evaluation in most cases arises when the corporate purposes are not clearly defined (Anonymous, 1998:12).
Besides, the purposes of the performance evaluation system and the purposes of the organization must be harmonized.  A performance evaluation that is harmonized with the organization’s purposes starts with preliminary studies. In this context, a coherent action plan is designed to answer the following questions: Who will be evaluated? When will the evaluation be done? How often will the evaluation be done? What is the method? After the tasks and responsibilities are delegated, a comprehensive training regimen can be created. Subsequently, the application is carried out by the delegates. The HR unit provides advisory services at each stage of the performance evaluation by coordinating, checking and filing the outcomes for future reference.

 

Factors Influencing Sales Partner Control and Monitoring in Indirect Marketing

Dr. Roland Mattmüller, International University Schloss Reichartshausen, Germany

Dr. Ralph Tunder, International University Schloss Reichartshausen, Germany

Dr. Tobias Irion, International University Schloss Reichartshausen, Germany

 

ABSTRACT

When selling their products, manufacturers of consumer goods are heavily reliant on the support of their sales partners, as they act as gatekeepers for the end customers and thereby determine the extent and quality of the goods available to the customer. In order to ensure the desired market presence of their products, manufacturers are increasingly creating institutionalised structures for the continuous monitoring and control of their sales partners. Using a survey of 130 managers in various consumer goods sectors in Germany, the following investigation will clarify and empirically substantiate what the fundamental parameters are that shape the control of a consumer goods manufacturer’s sales partners, and which factors influence the intensity of sales partner control.  In contrast to the intraorganisationally orientated organisational and sales management literature, there are only a few empirical studies relating to interorganisational sales partner monitoring.   So even today, Frazier, for example, still has to be endorsed, who, in a meta-analysis of the contributions to knowledge in the field of distribution research, comes to the following conclusion:  “Despite the importance of monitoring, to the best of my knowledge, Bello and Gilliland (1997) are the first to explicitly examine it in a major channel study. Clearly much more has to be done. What needs to be monitored across different channel relationships and contexts is an important question. Behaviors as well as performance outcomes will need attention in many cases.” (Frazier, 1999). The indicated need for research is manifested particularly in a lack of theoretically founded and empirically demonstrated knowledge with regard to construct formation and measurement of the monitoring construct as well as with regard to central influencing factors on sales partner control and monitoring in the consumer goods industry. This study addresses this research deficit. Its aim is to make theoretically founded and empirically demonstrated comments on the monitoring construct and the causal relationships between the central influencing factors (antecedent variables) and the intensity of sales partner monitoring (dependent variables). Initially, the construct of sales partner monitoring is conceptualised and operationalised; building on this, the study model is created, in which the causal relationships between selected influencing parameters and the intensity of sales partner monitoring are derived, taking into account theoretical reference points, made specific in hypothesis form, and tested by way of a survey of 130 consumer goods manufacturers.  In order to derive precise hypotheses relating to the direction of action of the influencing factors, a clear understanding of the dependent variables in the study model is necessary. Below, sales partner monitoring is defined as the process of gathering and processing information on the basis of formal principles by a manufacturer after concluding a contract with a sales partner, in order to verify the extent to which the sales partner is meeting the obligations imposed on him. Apart from the required conduct (abilities and activities), the contractual obligations also include the resulting performance outcomes and are thus compared, as normative specified values, with the actual values achieved by the sales partner in order to be able to identify and analyse any discrepancies.
In the literature, on a theoretical level, sales partner monitoring is in principle broken down into the two dimensions of outcome-based control and behaviour-based control (Mattmüller, 2004; Celly/Frazier, 1996; Anderson/Oliver, 1987; Oliver/Anderson, 1994; Ouchi/Maguire, 1975). Whereas outcome-based control relates to the result of a completed implementation process and thus the level of achievement of quantitative specific targets, behaviour-based control covers the implementation process itself. In order to be able to precisely formulate the causal relationships between the identified influence factors and sales partner monitoring, the causal relationship between outcome-based control and behaviour-based control is also important.  In connection with this, the findings of the literature analysis show an extremely non-uniform picture, as with the substitutionality thesis (existence of one construct with two negatively correlated dimensions), the complementarity thesis (existence of two different, positively correlated constructs) and the independence thesis (existence of two different, uncorrelated constructs) there are three competing, logically mutually exclusive modelling approaches present.  The substitutionality thesis was developed within the framework of the intra-organisational sales management literature and relates to the formulation of an employee management system. In addition to employee monitoring, such a system includes reward and incentive formulation and, exceptionally, has neither an outcome-based nor a conduct-based focus (Anderson/Oliver, 1987; Oliver/Anderson, 1994; Krafft, 1999). In an intraorganisational context this assumption of a one-dimensional continuum is fully justified, as all conceivable possibilities of employee management can be shown on one continuum. However, the argument against substitutionality in an interorganisational context is the fact that neither of the two forms of monitoring can provide such comprehensive information about the sales partner that the other one could be dispensed with. Consequently, the indicated formulation possibilities for sales partner monitoring cannot be shown on a one-dimensional spectrum.   Although the complementarity thesis presumes the existence of two different constructs, it sees these as positively correlated at the same time (Celly/Frazier, 1996). Modelling of this type implies that an intensification of outcome-based control is always concomitant with an increase in behaviour-based control and vice versa. This assumption is not, however, plausibly justified in causal terms and is therefore also rejected.  Consequently, the arguments in this study are based on the independence thesis.  The conduct- and outcome-based control of a sales partner are modelled as two independent multi-factorial constructs which are assumed to have no systematic, reciprocal, significant causal dependencies.  The outcome-based control factors cover cost and yield monitoring, while behaviour-based control is broken down into the factors of capabilities and activities monitoring. Variations of the constructs ‘behaviour-based control’ and ‘outcome-based control’ are interpreted below as causal effects of variations of the relevant factors or their indicators.   Thus, the intensity of behaviour-based control increases with an increase in a capability or activity monitoring indicator.
As even a variation in one item leads to corresponding construct variations, without the other indicators necessarily having to vary too, this construct consequently has a formative structure (Diamantopoulos/Winklhofer, 2001).  As part of the scale development of both constructs, and taking into account the (reflective) scales of Celly/Frazier, Challagalla/Shervani and Krafft, two separate multi-factorial, compiled indices were drawn up, which accordingly show the intensity of the conduct- and outcome-based control of a manufacturer via the factors (Celly/Frazier, 1996; Challagalla/Shervani, 1996; Krafft, 1999). In accordance with Diamantopoulos/Winklhofer, the items are condensed directly at construct level (Diamantopoulos/Winklhofer, 2001). To summarise, the following basic hypotheses are postulated with regard to the conceptualisation and operationalisation of sales partner monitoring:  HVP1: The intensity of behaviour-based control has no influence on the intensity of the outcome-based control of the sales partner by the manufacturer.  HVP2: The intensity of outcome-based control has no influence on the intensity of the behaviour-based control of the sales partner by the manufacturer.  HVP3: The intensity of outcome-based control and the intensity of behaviour-based control of the sales partner by the manufacturer are not correlated.  The subject of this study is the business relationship between the manufacturer and the sales partner, whereby economic exchange processes can be described using five constituent elements: “characteristics of the provider, the customer, the object of exchange, the exchange relationship and the framework conditions” (Mattmüller, 2004; Ahlert, 1996; Kotler/Bliemel, 1992). Against this background it must be assumed that variables in all five categories can represent potential influence factors on sales partner monitoring and thereby initially form the starting point of the systematisation of all influence factors to be studied. In this study a total of 9 influence factors on sales partner monitoring from 4 categories were identified which met the fundamental selection and requirement criteria (Merchant, 1988). All the independent variables and/or influence factors used in the model are summarised in fig. 1 and will be briefly explained below: [FIG. No. 1]
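To make the idea of compiled formative indices and the independence thesis HVP3 more tangible, the short Python sketch below builds two indices from invented indicator items and checks their correlation. All item names, scales and data are hypothetical assumptions of this edit, not the study's survey material or estimation procedure.

import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n = 130   # hypothetical number of surveyed manufacturers

# Invented 7-point items for the four monitoring factors described above
items = pd.DataFrame({
    "capabilities_1": rng.integers(1, 8, n), "capabilities_2": rng.integers(1, 8, n),
    "activities_1": rng.integers(1, 8, n), "activities_2": rng.integers(1, 8, n),
    "cost_1": rng.integers(1, 8, n), "cost_2": rng.integers(1, 8, n),
    "yield_1": rng.integers(1, 8, n), "yield_2": rng.integers(1, 8, n),
})

# Formative indices compiled directly at construct level (here: unweighted means)
behaviour_based = items[["capabilities_1", "capabilities_2",
                         "activities_1", "activities_2"]].mean(axis=1)
outcome_based = items[["cost_1", "cost_2", "yield_1", "yield_2"]].mean(axis=1)

r, p = pearsonr(behaviour_based, outcome_based)
print(f"r = {r:.2f}, p = {p:.3f}")   # a non-significant r is what HVP3 expects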

 

Perceptions Affecting Employee Reactions to Change: Evidence from Privatization in Thailand

Dr. Chaiporn Vithessonthi, University of the Thai Chamber of Commerce, Bangkok, Thailand

 

ABSTRACT

The focus of the present study is to test whether perceived participation in the decision-making process and perceived change in power influence employee reactions to change (i.e., resistance to change and support for change) in a sample of 197 employees at a large state-owned organization in Thailand in the context of planned privatization. The results of multinomial ordered probit regressions provide some support for the proposed hypotheses. More specifically, the level of perceived participation in the decision-making process is negatively associated with the level of resistance to change, whereas the level of perceived increase in power resulting from the change is negatively associated with the level of resistance to change and positively associated with the level of support for change. During the past decade, most state-owned enterprises in both developing and developed countries have come under increasing pressure to significantly improve their performance. In addressing this challenge, many state-owned enterprises have sought to go through a process of privatization, necessitating the implementation of organizational change. It has been argued that employee reactions to change, e.g., resistance to change and support for change, have critical implications for the outcomes of organizational change (Kotter, 1995). In a broader context, the attitudes and behaviors of employees have an impact on strategic implementation and firm performance (Becker, Huselid and Ulrich, 2001). A central research question in this context is: How can organizations minimize employee resistance to change and promote employee support for change? The aggregate result of a series of actions taken by the firm in the change processes tends to cause employee resistance to change (e.g., Judson, 1991; Kotter, 1995). The emphasis in the change management literature so far has been on what organizations should undertake to promote support for change and reduce resistance to change. This literature highlights that employees orient their reactions to change towards the actions of organizations. In spite of a large body of normative techniques for managing change, there is a lack of empirical studies of their application to suggest whether the techniques proposed in those models will in fact influence employee reactions to change. Dent and Goldberg (1999) challenged the conventional wisdom that people resist change and argued that people do not resist change, per se, but rather resist losses of status, pay or comfort. They posited that these are not the same as resisting change. This view has been supported by several studies suggesting that certain factors influence resistance to change, and these include, for example, fear of real or imagined consequences (Morris and Raben, 1995), fear of unknown consequences (Mabin, Forgeson, and Green, 2001), a threat to the status quo (Hannan and Freeman, 1988; Spector, 1989), and different understandings or assessments of the situation (Morris and Raben, 1995). Hence, it is plausible that employees react to the consequences of organizational change. During the last decade, significant attention has been devoted to understanding perceptions that are expected to influence the formation of employee reactions to the decisions of organizations (e.g., Eisenberger, Fasolo, and Davis-LaMastro, 1990; Eisenberger, Huntington, Hutchison, and Sowa, 1986). In this view, the tendency of employees to form reactions to change is greatly influenced by certain perceptions.
For instance, it has been argued that perceived organizational support is related to a wide array of work-related attitudes and outcomes (Eisenberger et al., 1986). The question that now arises is: Do perceived participation in the decision-making process and perceived change in power resulting from organizational change affect an employee’s reactions to change?  The notion that certain perceptions influence the decisions of employees has been widely accepted in the research. Accordingly, this paper advances and tests an argument that perceptions influence employees’ reactions to change. Well-understood effects of perceptions may actually promote more comprehensive, effective and pragmatic change management models designed for promoting employees’ support for change and/or for reducing employees’ resistance to change. Taking a step in that direction, this paper attempts to fill a gap in current empirical research by empirically examining the links between employees’ perceptions about the change processes and the consequences of change, and employees’ reactions to the planned privatization pursued by a large state-owned organization in Thailand.  Resistance to change has been an important construct in a number of fields, including, for example, psychology, organizational development, and organizational change. In the literature on change management, researchers generally agree that resistance to change is a key variable affecting change decisions and outcomes, and it is a negative and undesired response for organizations because it might lead to a failure of organizational change (Reger, Mullane, Gustafson, and DeMarie, 1994). Hence, it is not surprising that much research has been devoted to examining the ways in which resistance to change can be minimized. What is resistance to change? Despite a large body of research on resistance to change, it is difficult to find a definition of resistance. According to Lewin (1951), who was one of the first authors to use the notion of resistance to change, the status quo represents the equilibrium between the forces supporting change and the barriers to change. Some difference between these forces is therefore required to generate the “unfreezing” that initiates change. To make the change permanent, “refreezing” at the new level is required. In this sense, resistance, which is a system phenomenon, is part of the change process. Many studies have posited that resistance to change is negative and should be removed or minimized. For instance, Coch and French (1948: 521) defined resistance to change as “a combination of an individual reaction to frustration with strong group-induced forces.” Agócs (1997: 918) defined institutionalized resistance as “the pattern of organizational behavior that decision makers in organization employ to actively deny, reject, refuse to implement, repress or even dismantle change proposals and initiative.” Interestingly, Lewin (1951) suggested that it is easier to lower resistance to change than to increase support for change. Consequently, one may argue that a factor that lowers resistance to change will not necessarily increase support for change. In this sense, resistance to change and support for change should not be treated as the same construct.  Several researchers in management science have indicated that fairness of organizational policies and procedures exerts an impact on people in organizations (e.g., Adams 1965; Gopinath and Becker, 2000; Thibaut and Walker, 1975).
The literature dealing with (1) how employees react to inequitable processes and outcomes and (2) how they try to establish equitable conditions suggests that perceptions of fairness of organizational decision-making processes have significant effects on employees’ attitudes and behaviors. For instance, empirical research has shown that perceived justice of the organization’s decision-making process exerts an effect on employees’ reactions to strategy implementation (Kim and Mauborgne, 1993) and pay raise decisions (Folger and Konovsky, 1989). Perceptions of fairness have been found to be positively associated with employees’ organizational commitment (McFarlin and Sweeney, 1992) and job satisfaction (Conlon and Fasolo, 1990). There are two forms of justice in the social psychology literature: procedural justice and distributive justice. Procedural justice refers to decision control and process control in determining fairness (Thibaut and Walker, 1975) and deals with the fairness of a procedure or set of procedures (Tyler, 1994), whereas distributive justice refers to the perceived fairness of resource allocations or outcomes (Moorman, 1991). Distributive justice theory has been used to study various organizational phenomena such as the conflict resolution process (Karambayya and Brett, 1989) and work group incentive pay plans (Dulebohn and Martocchio, 1998). In addition to its roots in the social psychology literature, distributive justice is also grounded in equity theory, which suggests that outcomes are fair when individuals form beliefs that the outcomes are consistent with their inputs (Folger and Cropanzano, 1998).
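As a concrete, hedged illustration of the kind of model reported above, the Python sketch below fits an ordered probit with statsmodels on simulated data; the variable names, sample values and coefficients are invented stand-ins for the study's constructs, not its actual data or estimation code.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 197   # same sample size as the study, but the data here are simulated

df = pd.DataFrame({
    "participation": rng.normal(size=n),   # perceived participation in decision-making
    "power_change": rng.normal(size=n),    # perceived change in power
})
# Simulate an ordinal resistance-to-change score: higher participation and
# perceived power gains lower the latent resistance
latent = -0.5 * df["participation"] - 0.7 * df["power_change"] + rng.normal(size=n)
df["resistance"] = pd.cut(latent, bins=[-np.inf, -1, 0, 1, np.inf], labels=[1, 2, 3, 4])

model = OrderedModel(df["resistance"], df[["participation", "power_change"]], distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())   # negative coefficients indicate lower resistance to change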

 

A Study of the Relationship of the Perception of Organizational Conflicts and Organizational Promises among Faculty and Staff Members in the Technical and Vocational Colleges

Hui-Chuan Tang, Far East University, Taiwan

 

ABSTRACT

Colleges under the pressure of educational revolution need to constantly adjust their inner organization to cope with changes in society. In the process of transformation, many organizational conflicts occur in both the instructional and administrative units. The effects of such conflicts may be positive or negative; the former can help enhance members’ promises to the organization. Therefore, the primary purpose of this study is to understand technical and vocational college faculty and staff members’ perception of organizational conflicts and promises, their relationship, and their differences. Subjects are college faculty and staff members from technical and vocational colleges in central and southern Taiwan. A total of 720 questionnaires were sent out, with 462 copies returned, of which 51 were invalid. Therefore, the return rate is 64%, with 411 valid copies. The collected questionnaires are analyzed through one-way ANOVA and canonical correlation analysis.  The findings of the study are:  1. Role duty conflict is perceived most obviously, compared to the others, when organizational conflicts are discussed. As for the organizational promises, promises to work hard get the highest value, compared with the value promises and position promises. 2. For those with different background variables, it is found that younger people, administrative staff, and teachers with lower ranks perceive higher conflicts. It is also discovered that teachers with higher ranks, with director duties, and aged more than 40 have higher loyalty to the organization. 3. The more conflicts are perceived by the members within an organization, the lower the position-stay promises will be for the organization; the higher the perception of ideal organizational operation, the lower the effort promises and position-stay promises. The policy of opening the door of higher education brings a series of revolutions to technical and vocational colleges, which have to reform their inner organization to cope with the changes in the outer environment. In the transformational process, more conflicts will occur within the operation of instructional or administrative units. Without understanding the conflicts and searching for solutions, there may be negative influences on long-term school development and management. School organization revolution comes from the concepts of organizational management and behaviorism. It is a long-term and continuous operation in which many different people of the school organization are involved, such as the school president, administrative staff members, teachers, and students. In order to achieve common goals, an organization is an organism produced by its members, its structure, and the interaction between them (Liao, 1995). The educational revolution may elicit inner conflicts in the school organization, because of different individual values, overloaded duties, and different roles played. These conflicts are not necessarily negative, however. They can also bring the school energy, upgrade the instructional quality, open more space for the discussion of questions, and increase the problem-solving abilities of members in the organization. Through the discussion of problems, new ideas can be found to become the power of revolution and increase faculty and staff members’ loyalty so that they are willing to contribute more to the schools. Therefore, achievement motivation can be exerted and loyalty increased.
Hong (1995) pointed out that school organizational conflicts have both positive and negative influences on schools. On the positive side: (1) they can encourage members' creativity and bring about organizational change; (2) they can elicit more suggestions through benign interaction; (3) they can inspire members to put more effort toward organizational goals; (4) they can build and improve the organizational structure and atmosphere. On the negative side: (1) more of the organization's time and resources are consumed; (2) members develop negative psychological feelings; (3) confidence and support between members are destroyed; (4) members become unwilling to participate in work and may even go on strike; (5) turnover becomes too high and the organization grows loosely organized; (6) prolonged low instructional quality damages the school's reputation. Lin (2002) divided organizational conflicts into four levels: objective realization conflict, role duty conflict, organizational operation conflict, and habit change conflict. These four levels cover personality conflict, value conflict, inter-group conflict, and professional requirement conflict, and they involve members' psychological, objective, cognitive, affective, and behavioral conflicts. Lin further pointed out that organizational conflict has a direct or indirect negative influence on organizational commitment. School presidents play the key role in pushing educational reform; their leadership should be an effective means of decreasing teachers' pressure and organizational conflict and, in turn, strengthening organizational commitment. Lin and Chen (2005) found that administrative staff in technical and vocational colleges perceive the frequency of organizational conflicts to be above average. Among the different types of conflict, cognitive conflict is the most frequently found, followed in order by value conflict, material conflict, benefit conflict, and feeling conflict; cognitive conflict, however, is found more often in instructional units than in administrative ones. Lan's (2004) findings show that volunteers' perceived conflict is negatively correlated with organizational commitment: the stronger the perceived conflict, the lower the commitment. Educational organizations should therefore define job duties clearly to avoid such conflicts, and conflict control should be taken into consideration so that members have sufficient time to engage in conflict management (Lu, 2004). All in all, organizational conflicts arise from a wide variety of causes: individual differences, dissimilar interpretations of information, poor communication, differing roles, divergent goals, competition for insufficient resources, and the changes brought by organizational reform. With numerous staff on campus, differing viewpoints produce diverse conflicts, especially when teachers in different professional fields insist on their own views of educational reform; this becomes even more serious when the school has limited resources, and poor communication only makes the situation worse. People are an organization's most important asset, and the success or failure of its reform relies on the cooperation of its members. 
School managers and administrative authorities should thoroughly understand staff members' perceptions of the working environment if effective organizational reform is to be implemented. In coping with successive reforms and measures, and owing to objective realization conflict, role duty conflict, organizational operation conflict, and habit change conflict, it is an open question whether members will remain willing to keep their commitment to the school organization. Furthermore, is there a correlation between faculty and staff members' perceptions of organizational conflict and their organizational commitment? These questions motivated this investigation. Accordingly, the subjects of this study are faculty and staff members of technical and vocational colleges in central and southern Taiwan. As exploratory research, the study has the following purposes: (1) to describe the current state of perceived organizational conflict and organizational commitment among college faculty and staff; (2) to identify differences in organizational conflict and commitment among subjects with different background variables; and (3) to examine the correlation between perceived organizational conflict and organizational commitment.

 

Mass Customization Manufacturing (MCM): The Drivers and Concepts

Dr. Muammer Zerenler, Selçuk University, Konya, Turkey

Derya Ozilhan, Selçuk University, Konya, Turkey

 

ABSTRACT

Today’s business environment is characterized by extremely tight competition. Companies are forced to constantly reduce costs and outperform rivals in the pursuit of efficiency, while at the same time striving for the effectiveness needed to retain customer loyalty. Combining these two aspects is difficult at best and requires a reasonable trade-off between the variety, functionality, and price of products and services. Mass customization refers to the ability to provide individually designed products and services to every customer through high process flexibility and integration, and it has been identified as a competitive strategy by an increasing number of companies. This paper surveys the literature on mass customization manufacturing. Enablers of mass customization and their impact on the development of production systems are discussed at length, and approaches to implementing mass customization are compiled and classified. The market and technology forces that shape today’s competitive environment are changing dramatically. Mass production of identical products, the business model for industry in the past, is no longer viable for many sectors: market niches continue to narrow, customer preferences shift overnight, and customers demand products with lower prices, higher quality, and faster delivery that are also customized to match their unique needs. To cope with these demands, companies are racing to embrace mass customization, “the development, production, marketing, and delivery of customized products and services on a mass basis,” according to a definition popularized by Joseph Pine, a leading spokesman for the concept. Mass customization means that customers can select, order, and receive a specially configured product, often choosing from among hundreds of product options, to meet their specific needs. In today’s economy, continuous competition and the dynamic global market have pushed manufacturers to move from mass manufacturing techniques toward flexible, rapid-response methods that let them deliver products quickly while keeping costs down. This can mean embarking on an approach called Mass Customization Manufacturing (MCM). The goal of MCM is to build customized products, even if the lot size is one, and to achieve a balance between customization and cost (Pine, 1993). Today’s customers will not accept Henry Ford’s dictum, “You can have any colour car you want as long as it's black” (Pine, 1993). The idea that every customer is unique has challenged manufacturing companies: fulfilling every customer’s individual needs has been, if not impossible, a service reserved for only the most affluent customers. Mass customization (MC) strategies aim to fulfil individual needs cost-efficiently. The manufacturing trend toward producing smaller quantities of a wider variety of products forces enterprises to adopt differentiation strategies that offer customers more product choices. Such variety strategies often make the interwoven constraint relationships among products even more complicated, which is one of the characteristics of a customization manufacturing environment (Jiao et al., 2003; Salvador & Forza, 2004). Using computers as a case study, Fohn et al. (1995) demonstrated that approximately 30–85% of product information was wrong and that such mistakes cause errors in engineering design and a substantial burden to an enterprise. 
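As a minimal, hypothetical sketch of the “interwoven constraint relationships” described above, the following Python snippet shows a toy product configurator that filters out invalid option combinations; the option names and rules are invented for illustration only and do not come from the paper.

```python
# Toy configurator: as option variety grows, rules about which combinations
# can actually be built multiply, which is the constraint problem MC faces.
from itertools import product

OPTIONS = {
    "colour": ["black", "red", "white"],
    "engine": ["1.6L", "2.0L", "electric"],
    "trim":   ["base", "sport"],
}

# Each (hypothetical) rule forbids one combination of choices.
FORBIDDEN = [
    {"engine": "electric", "trim": "sport"},   # e.g. sport trim not offered on the electric engine
    {"colour": "white", "engine": "1.6L"},
]

def is_valid(config: dict) -> bool:
    """A configuration is valid if it matches no forbidden combination."""
    return not any(all(config.get(k) == v for k, v in rule.items())
                   for rule in FORBIDDEN)

# Enumerate the feasible configurations a mass customizer could actually offer.
all_configs = [dict(zip(OPTIONS, values)) for values in product(*OPTIONS.values())]
feasible = [c for c in all_configs if is_valid(c)]
print(f"{len(feasible)} of {len(all_configs)} configurations are buildable")
```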
Mass customization, once considered a paradox to be resolved in the future, has become an everyday reality for many manufacturers. The term “mass customization” was coined by Davis in Future Perfect (Davis, 1987) and then popularized with the publication of Pine’s Mass Customization: The New Frontier of Business Competition (Pine, 1993). Kotler (1989) posited that market segmentation had progressed to the era of mass customization, in which computer technologies and automation capabilities allow companies to produce cost-effective, individualized versions of products. Mass customization, however, aims to address customer requirements not only effectively but also efficiently: the costs of mass customized products should be low enough that their prices are not considerably different from those of comparable standard products manufactured on mass production principles. Thus, mass customization is a strategy that contradicts the stuck-in-the-middle hypothesis postulated by Porter (1998). This hypothesis, which states that product differentiation and cost leadership are incompatible strategies, has been the subject of long-running debate within the scientific community. More recent research suggests that advances in manufacturing, information technology, and management methods since the publication of Future Perfect in 1987 have made mass customization a standard business practice (Kotha, 1995; Pine, 1993). The confluence of these advances allows producers to customize at low cost and customers to reap the benefits of customized products at relatively low prices. MC, a recently popularized concept, has been advocated as the 21st-century manufacturing paradigm. It is seen as the winning strategy for manufacturers seeking dramatic performance improvements in order to become national and international leaders in an increasingly competitive market of fast-changing customer requirements. This paper identifies the drivers of mass customization and discusses the portfolio of competitive advantages that have emerged over time as a result of the changing requirements of manufacturing. The need to achieve these competitive advantages in synergy and without trade-offs is fundamental to the MC paradigm. To further the understanding of mass customization, the paper reviews its meaning from different perspectives and suggests a comprehensive definition that practitioners can adopt as a working definition. Four underlying concepts of mass customization emerge from this working definition, and the paper presents a representation of these concepts and their interactions. Finally, the paper highlights some of the key enablers of mass customization and identifies potential future research. MC relates to the ability to provide customized products or services through flexible processes, in high volumes and at reasonably low cost. The concept emerged in the late 1980s and may be viewed as a natural follow-up to processes that have become increasingly flexible and optimized with regard to quality and cost. In addition, mass customization offers companies an alternative means of differentiation in a highly competitive and segmented market. 
Kaplan and Haenlein (2006) define mass customization as “a strategy that creates value by some form of company-customer interaction at the fabrication/assembly stage of the operations level to create customized products with production cost and monetary price similar to those of mass-produced products.”
