The Journal of American Academy of Business, Cambridge
Vol. 10 * Num. 1 * September 2006
The Library of Congress, Washington, DC * ISSN: 1540-7780
Online Computer Library Center * OCLC: 805078765
National Library of Australia * NLA: 42709473
Peer-Reviewed Scholarly Journal
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double blind peer review process.
The primary goal of the journal is to provide opportunities for academicians and professionals from the various business-related fields, in a global realm, to publish their papers in one source. The Journal of American Academy of Business, Cambridge will bring together academicians and professionals from all business-related fields and related disciplines to interact with members inside and outside their own particular disciplines. The journal provides opportunities for researchers to publish their papers as well as to view others' work. All submissions are subject to a double-blind peer review process. The Journal of American Academy of Business, Cambridge is a refereed academic journal which publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal will meet the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide our authors with venues that are recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread/edited before submission; after the manuscript is edited, you must send us the certificate. You can use www.editavenue.com or another professional proofreading/editing service. The manuscript should be checked with plagiarism detection software (for example, iThenticate/Turnitin, Academic Paradigms, LLC Check for Plagiarism, or Grammarly Plagiarism Checker) and the certificate sent with the complete report.
The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. E-mail: firstname.lastname@example.org; Journal: JAABC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright 2000-2019. All Rights Reserved
The Impact of Frequency of Use on Service Quality Expectations: An Empirical Study of Trans-Atlantic Airline Passengers
Dr. Kien-Quoc Van Pham, Pacific Lutheran University, Tacoma, Washington
Dr. Merlin Simpson, Pacific Lutheran University, Tacoma, Washington
While the academic debate continues over the conceptual validity and reliability of the SERVQUAL model for assessing service quality, the paucity of empirical studies addressing service quality antecedents indicates the need to revisit the causal relationships between these antecedents and the “corollary” service quality assessment. In today’s globally competitive marketplace, the fostering of customer loyalty reigns undisputed as the most important goal for all commercial enterprises, with repeated use or purchase as one of the primary indicators of customer satisfaction; yet frequency of use has not been addressed in terms of its impact on the means to achieve and maintain such customer loyalty. The airline industry, with its frequent-flyer mileage reward system and the success of that and similar promotional programs in promoting loyalty (bankcard purchase cash rebates, Starbucks patrons’ cards), is a natural venue in which to further investigate this conceptual antecedent (past experience) of service quality. The economic paradigm shift from industrial value to customer value has made service a focal point of all corporate efforts to improve profitability (Albrecht, 1992). The U.S. economy, as is the case with other developed economies, has become a predominantly "service economy" (Albrecht and Zemke, 1985), in which virtually all organizations compete to some degree on the basis of service (Zeithaml, Parasuraman and Berry, 1990). Service-based companies are therefore compelled to provide excellent service in order to prosper in increasingly competitive domestic and global marketplaces. Service quality has become the significant strategic value-adding/enhancing driver in achieving a genuine and sustainable competitive advantage in a global marketplace (Devlin et al., 2000).
While many "quality-focused" initiatives have often failed to enhance overall corporate performance, customer-perceived service improvements have been shown empirically to improve profitability (Buzzell and Gale, 1987).
Whistleblowing: International Implications and Critical Case Incidents
Dr. Steven H. Appelbaum, Professor, Concordia University, John Molson School of Business, Quebec, Canada
Kirandeep Grewal, Concordia University, John Molson School of Business, Quebec, Canada
Hugues Mousseau, Concordia University, John Molson School of Business, Quebec, Canada
This article will examine the following: (1) motivation of whistleblowers; (2) international implications; (3) consequences for the individual and organization; (4) selected mini-case studies; and (5) solutions for organizations. An employee’s decision to report individual or organizational misconduct is a complex phenomenon that is based upon organizational, situational and personal factors. Recommendations include: employees should be encouraged to communicate their ethical concerns internally; employees need to believe that their concerns will be taken seriously; and employees need to feel that they will not suffer any retaliation for their action. According to Miceli and Near (1985), “whistle blowing is the disclosure of illegal, immoral, or illegitimate practices under the control of their employers, to a person or organizations that may be able to effect action” (Vinten, 1995). “Whistle blowing is the voice of conscience” (Berry, 2004). Whistle blowing is a new name for an ancient practice. The first time the term whistle blowing was used was in the 1963 publicity in the USA surrounding Otto Otepka, an American public servant who had given classified documents to the chief counsel of the Senate Subcommittee on Internal Security, documents that could pose a threat to the government administration (Vinten, 1995). Mr. Otepka’s disclosure was severely punished by the then Secretary of State, who dismissed him from his functions for conduct unbecoming. The term whistle blowing is sometimes perceived negatively, while it is also very often viewed in a positive, even heroic fashion. In fact, this perception is highly influenced by the perspective from which one looks at it and by the circumstances surrounding the disclosure by an employee. The main reason why whistle blowing is such an important issue, amongst other elements, is that many public and corporate wrongdoings are never disclosed.
Most people agree that estimating the percentage of situations in which the whistle is blown, in comparison to when it is not, would be a very hazardous undertaking, for obvious reasons. However, it can be said with conviction that “the majority of employees who become aware of individual or corporate wrongdoing never report or disclose their observations to anyone” (Qusqas and Kleiner, 2001). A study conducted in the United States by the Ethics Resource Center and reported in the January 2005 edition of Strategic Finance pointed out “that 44% of all non-management employees don’t report misconduct they observe. The top two reasons for not reporting were a belief that no corrective action will be taken and fear that the report will not be kept confidential” (Verschoor, 2005). “Another reason why employers are reluctant to hire whistleblowers is because their action is seen as a breach of loyalty” (Qusqas and Kleiner, 2001).
Sales Growth versus Cost Control: Audit Implications
Dr. Ray McNamara, Bond University, Australia
Dr. Catherine Whelan, Georgia College & State University, GA
Concerns raised by regulators, investors, and researchers over the independence implications of audit firms providing both auditing and consulting services have led some firms to discontinue their consulting activities. The resulting decline in expertise may impair the ability of audit firms to adequately audit the revenues of listed firms. This research investigates the moderating effect of sales growth and cost control on the value-relevance of earnings and book value. The results demonstrate that the market responds differently to the revenue and cost components of earnings. In particular, the market perceives enhanced earnings quality in the presence of both sales growth and cost control. Consequently, audit procedures should provide assurance of both completeness and existence of revenue and expense items on the income statement. The approach of the new millennium saw an increasing emphasis on the audit profession’s need to broaden its activities into a range of assurance services (Elliot 1998). One area of potential profitability was revenue and cost assurance in those industries with large customer bases, complex revenue schema, and advanced revenue cycle technology, such as the telecommunications and health care industries (Connexn 2003). Firms such as Price Waterhouse Consulting (PWC) became the leaders in the revenue and cost assurance area because of their accumulated auditing, information systems, and statistical analysis expertise (Cullinan 1998). This expertise resided in the audit and consulting arms of the firm. The audit division’s focus was on assuring that the reported revenue was not overstated and on internal controls that reduce the likelihood of overstatement. The consulting arm focused on the understatement of revenue and the implementation of internal control methods and procedures to recover unrecorded revenue (Connexn 2003).
The existence of dual relationships with clients has raised the question of audit independence, particularly as the assurance services may provide a greater revenue stream than traditional audit services. However, the additional insights gained through the revenue assurance process would undoubtedly contribute to audit quality. This trade-off between independence and audit quality is of concern to all market participants.
E-Local Government Strategies and Small Business
Dr. Stuart M. Locke, University of Waikato, New Zealand
The New Zealand Government, like governments in many countries, recognises the importance of small business in the economic and social structure of the country. It has implemented a number of policies in recent years to assist small and medium enterprises (SMEs). The extent to which these initiatives are successful, in terms of generating the outcomes purported as the rationale for their implementation, typically does not receive detailed scrutiny. This paper reports upon an investigation into one element of government programmes: the promotion of greater broadband internet coverage and the encouragement of the adoption of internet technologies. In particular, the E-Government single access portal for central government and a similar e-local government strategy have been promulgated. An empirical investigation of the progress made by the territorial local government authorities in implementing the e-local government strategy, and of the impact upon SMEs, is presented. It is observed, first, that at the policy formulation stage the nexus between policy and SME outcomes is not made explicit and, second, that the monitoring of policy is lacking, which has potentially negative implications for SMEs. It is suggested that the level of public administrative accountability as it relates to the monitoring of this policy is inadequate, and to the extent that this observation is generalisable, SMEs may not be reaping the gains that could be achieved. In March 2001, central Government launched an e-government strategy, aiming to create a public sector, including local government, which will meet the needs of New Zealanders in the information age. At the local government level, under the umbrella of the local government association, a range of objectives in terms of the breadth of services and timing of e-delivery development are proposed in the e-local government strategy document.
The majority of the objectives have tangible targets and time periods associated with them. These make suitable reference points for evaluating progress made toward the implementation of the policy. The importance of high level information communication technology penetration into the business and household sectors of New Zealand has been stressed in successive government reports, culminating in a digital strategy (MED 2004a).
The Effects of Humor and Goal Setting on Individual Brainstorming Performance
David W. Roach, Ph.D., Arkansas Tech University, AR
L. Kim Troboy, Ph.D., Arkansas Tech University, AR
Loretta F. Cochran, Ph.D., Arkansas Tech University, AR
The efficacy of goal setting is widely accepted by researchers, managers, and the “man-on-the-street.” Given this agreement, the simple maxim to “set goals” seems obvious. However, individual, task, and context characteristics affect the characteristics of goals that lead to high performance. The primary purpose of this study is to examine the effects of goal characteristics and a specific context variable, humor, on an individual brainstorming task. With respect to goal characteristics, we examine the effect of goal specificity (vague goals, specific attainable goals, and specific stretch goals) on individual brainstorming performance. With respect to humor, we examine the effect of the presence or absence of humor, and the interaction of humor and goal characteristics, on individual brainstorming performance. We found that performance on a brainstorming task was highest when goals were both specific and challenging (stretching). While humor did not affect performance under specific goals, humor did improve performance under vague goals and radically improved performance under stretch goals. The research results suggest that humor may be an effective managerial lever for certain tasks and contexts. This paper reports on a study that examines the effects of goal characteristics and a specific context variable (humor) on an individual brainstorming task. The literature abounds with research on this topic, so we cite just a few specific studies on the relationships among goal setting (specificity and difficulty), performance, and humor. Next, we present the procedures, methods, and results of our study. Finally, we discuss the implications and limitations of this research and present ideas for future research in this area.
The impact of goal setting on performance is well established in organizational behavior and management research (Ambrose and Kulik, 1999; Locke, 2004; Latham, 2004). Performance is higher for specific, difficult goals than easy goals, "do your best" goals, or no goals (Locke, Shaw, Saari, & Latham, 1981). Reviewing extant literature, Locke et al. (1981) found that 99 out of 110 studies empirically demonstrated the effect of goal-setting on task performance. Specific, clear goals establish and communicate expected performance levels. When people know what is expected, they can focus their efforts on the target (Latham, 2004). Moreover, knowing performance expectations reduces anxiety concerning the performance appraisal process (Latham, 2004). Goal difficulty moderates the relationship between goal setting and performance (Wright, 1990; Ambrose and Kulik, 1999; Campbell and Furrer, 1995). People are motivated to exert more effort over time when presented with difficult goals (Latham, 2004).
A Comparison of the Solicited and Independent Financial Strength Ratings of Insurance Companies
Dr. Martin Feinberg, University of Texas-Pan American, Edinburg, TX
Dr. Roger Shelor, College of Business, Ohio University, Athens, OH
Dr. Mark Cross, Miami University, Oxford, OH
Axel Grossmann, University of Texas – Pan American, Edinburg, TX
This study provides a comparison of the life/health and property/casualty insurance company ratings of a solicited ratings agency, A.M. Best, versus those of an independent ratings agency, Weiss Ratings Inc., for the time period 1998-2001. Financial strength ratings assess a company’s overall claims-paying ability. The results provide further evidence that A.M. Best ratings are higher than Weiss ratings. Although previous studies have indicated this result, they did not fully account for any lack of correspondence between ratings and possible sample selection bias, as this study does. The finding of no difference in rating changes with respect to timing is inconsistent with previous research. The results add evidence to the argument that consumers should be concerned about the closeness and unique nature of the relationship between the solicited rating agency and the insurance company being rated. The results remain consistent across both life/health and property/casualty insurers. Insurer financial strength ratings provide the rating agency’s assessment of overall financial strength and the insurer’s ability to meet policyholder obligations. Consumers, insurance agents and brokers, corporate risk managers, regulators and investors use financial strength ratings to assess insurers’ insolvency risk. Individual consumers utilize the financial strength ratings to determine which companies are preferable, and insurers often utilize those ratings in their advertising. Insurance agents and brokers typically are reluctant to recommend coverage with insurers that are either unrated or poorly rated. In addition, many corporate insurance buyers require a good rating. Ratings also help regulators in assessing the financial strength of insurers. In addition, strong financial ratings give insurers better access to capital markets and help them to lower their firm’s cost of capital.
An insurer’s financial strength rating is an important part of the selection process, but not the only factor to be considered.
The U.S., Japan in the Global Semiconductor Industry
Dr. Farahmand Rezvani, Montclair State University, NJ
Dr. Ahmat Baytas, Montclair State University, NJ
Dr. Serpil Leveen, Montclair State University, NJ
As we enter the 21st century, the global electronics market is an approximately $1 trillion industry and is expected to double during the early years of this century. Technologically-related industries currently account for more than a ninth of the U.S. domestic product, as compared to just a twentieth of U.S. domestic product only ten years ago (Simons 1995). Even though the invention of the transistor and the integrated circuit (IC), as well as the equipment to manufacture them, were almost entirely the product of U.S. innovation (Spencer 1993), during the 1980s the U.S. ceded its global domination of the field to the Japanese. However, by the 1990s the U.S. market share had managed to rebound significantly, and this favorable trend appears to be continuing. This paper will explore the factors that resulted in the loss of world domination by the U.S. semiconductor industry and will also analyze the factors that contributed to its revival and the gradual American regaining of semiconductor leadership. The birth of the modern semiconductor industry was marked by the invention of the transistor at Bell Laboratories in 1947. The transistor consisted of tiny silicon crystals which were classified as “semiconductor” because an electric current could pass through them in one direction but not the other. Soon after, in 1958, the integrated circuit was invented at Texas Instruments; it represented a significant breakthrough because all of the functions that previously required distinct devices were now integrated into an under-layer of the semiconductor itself. The so-called miniaturization of circuitry today has reached a point where a complete road map of all of Manhattan can be placed on a chip the size of the head of a pin (Standard & Poor’s 1995).
The Relationship of Personal Characteristics and Job Satisfaction: A Study of Nigerian Managers in the Oil Industry
Dr. John O. Okpara, Briarcliffe College, New York
The purpose of this study was to examine the effect of personal characteristics on job satisfaction of Nigerian managers employed in the oil industry. Stratified sampling techniques were used to select the managers for this research. A total of 550 questionnaires were distributed, and 364 were returned, representing a 66.18% response rate. The key finding of this study was that job satisfaction is strongly associated with the personal characteristics of the managers surveyed. Results also show that older managers were overall more satisfied than their younger counterparts. Experience and education affect satisfaction with present job, pay, promotions, supervision, and coworkers. Findings of the study provide management and human resources professionals with key information that would assist them in recruiting, rewarding, promoting, and retaining their workers. This paper offers realistic suggestions to the management of oil companies on how to enhance the job satisfaction of their most valuable workers, thus improving their efficiency and effectiveness. It also offers tools for establishing a comparable pay policy, creating equal opportunity for promotion, and providing a favorable work environment for all workers. Job satisfaction has been a major research area for scholars, practitioners, and organizational specialists, and it is one of the most frequently researched areas of workplace attitude. The consequences of job dissatisfaction include high turnover, lateness, absenteeism, poor performance, and low productivity. According to Al-Ajmi (2001), excessive turnover, absenteeism, and low productivity result in a waste of human power and unnecessary loss in productivity and profit. Studies conducted in the West have shown that many individual variables influence job satisfaction (Ang et al., 1993; Hulin & Smith, 1964; Lee & Wilbur, 1985).
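The reported response rate follows directly from the figures given in the abstract; a minimal check:

```python
# Figures from the study: 550 questionnaires distributed, 364 returned.
distributed = 550
returned = 364

response_rate = returned / distributed * 100
print(f"Response rate: {response_rate:.2f}%")  # Response rate: 66.18%
```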
A Fuzzy Logic Approach to Explaining U.S. Investment Behavior
Dr. Tufan Tiglioglu, Alvernia College, Reading, PA
This paper uses a non-linear fuzzy logic procedure to empirically investigate the links between the real interest rate and aggregate investments in the United States from 1959 to 2000. In an interesting paper, “A fuzzy design of the willingness to invest in Sweden,” Tomas Lindström utilized a fuzzy logic approach to explain willingness to invest in Sweden during the period 1950-1990. I examine whether or not his results in Sweden can be replicated in the United States, focusing on both the real interest rate and its variability. The paper provides a brief overview of fuzzy set theory and logic, then discusses the Lindström model and results. It concludes with the results of this approach using interest rate, real output, and investment data from the United States. Fuzzy logic has been widely used by scientists, mathematicians and engineers, among others, as a means of designing decision and control systems where “rules of thumb” are easier to conceptualize and implement than precisely delineated decision making criteria. This practice may result from the inherent complexity of the decision problem at hand, which makes analytical modeling difficult. In this vein, a highly complex system gives rise to considerable (non-stochastic) uncertainty, since the complexity itself makes it too difficult or costly to specify exact relationships among critical variables. Confronted with the necessity of making a decision, decision makers in these circumstances may opt to simplify the process into a series of rules of thumb. Economic decision makers are often faced with a high level of complexity and thus uncertainty relevant to their decision-making problem. Moreover, variables such as price or output can be thought of as low or high, without precisely defined lines of demarcation. Perhaps the concept of reservation price could be fruitfully treated in a fuzzy model.
Fuzzy logic is well suited to modeling human processes of decision making in the context of complexity and/or lexical uncertainty. Applications of fuzzy logic to economic decision-making would thus appear to be worth investigation. In an interesting paper, “A fuzzy design of the willingness to invest in Sweden,” Tomas Lindström utilized a fuzzy approach to explain willingness to invest in Sweden, focusing on both the real interest rate and its variability, during the period 1950-1990. In this paper, I investigate whether or not his results in Sweden can be replicated in the United States.
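The rules-of-thumb idea described above can be made concrete with a minimal fuzzy-rule sketch. This is not Lindström's actual specification; the membership functions, the 0-8% rate range, and the two rules are hypothetical choices for illustration only:

```python
def low(rate):
    """Membership in the fuzzy set 'rate is low' (hypothetical:
    fully low at 0%, no longer low at 4% or above)."""
    return max(0.0, min(1.0, (4.0 - rate) / 4.0))

def high(rate):
    """Membership in the fuzzy set 'rate is high' (hypothetical:
    fully high at 8% or above)."""
    return max(0.0, min(1.0, rate / 8.0))

def willingness_to_invest(rate):
    """Two rules of thumb, combined by weighted-average defuzzification:
       IF rate is low  THEN willingness is high (1.0)
       IF rate is high THEN willingness is low  (0.0)"""
    w_low, w_high = low(rate), high(rate)
    if w_low + w_high == 0.0:
        return 0.5  # no rule fires: stay neutral
    return (w_low * 1.0 + w_high * 0.0) / (w_low + w_high)

print(willingness_to_invest(0.0))  # 1.0 (rate fully "low")
print(willingness_to_invest(8.0))  # 0.0 (rate fully "high")
```

A rate of 2% is partly "low" (0.5) and slightly "high" (0.25), so the rules blend to an intermediate willingness rather than a hard threshold, which is the point of the fuzzy treatment.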
Union Leaders' Value Systems: The Lack of Change Over Time and Scope
Dr. David Meyer, Central Connecticut State University, CT
Dr. William E. Tracey, Jr., Central Connecticut State University, CT
Dr. G. S. Rajan, Concordia University, Montreal, PQ
Vincent Piperni, Concordia University, Montreal, PQ
Two studies of union leaders’ value systems were conducted two years apart. One was of local union leaders, the other was of national union leaders. England's Personal Value Questionnaire (PVQ) (1967) was modified to focus on the union as the organization being studied. A measure of the priority of each value was also obtained. Analysis revealed no significant differences across time or scope of leadership. Union leaders are very pragmatic and much more socially concerned than managers. England, Agarwal, and Trerise (1971), comparing union leaders' value systems with managers' value systems, concluded that union leaders were moralistically oriented, whereas managers were pragmatically oriented. They suggested that union leaders occupying higher level positions would be more pragmatic than their counterparts at lower levels. Since then, a number of articles have focused on managers' value systems. Whitely and England (1977) and England (1978) compared managers' value systems across cultures and countries. Lusk and Oliver (1974) measured the change in managers' values from 1966 to 1972 in order to test England’s supposition that values are stable over time. Their study supported that contention. The selection and development of union leaders is different from managers and possibly results in changing or evolving values. Herman (1998) points out that “union members usually distrust leaders who have not come up through the ranks” (p. 91). Union leaders were described by Holley, Jennings and Wolters (2001) as trying to “achieve something I personally valued” (p. 119) and believing in the goals of the union. One of the questions that cries out for study is whether union leaders will show the same stability of values over time that managers have shown. The environment that most affects union leaders was discussed by Miles and Ritchie (1968). 
They found that leadership values were more strongly affected by the union leaders' jobs than by the theoretical ideals of democracy and participation within the union. The importance of the job in forming a person's values was also discussed by England (1973, p. 2): "the requirements and constraints that the job of managing places upon the managers." We expect that as the requirements and constraints of the job change, the values of the person holding that job will change also.
Collaborative Systemic Training (CST) Model in Los Angeles for Adult Learners
Dr. Deborah LeBlanc, National University, Los Angeles, CA
Enrollment management is vital to the survival of institutions of higher education. While enrollment management is not a direct part of the scope, duties and responsibilities of faculty, faculty can play a critical and significant indirect role in the process of enrolling adult learners into colleges and universities. This study demonstrated a collaborative approach in Los Angeles which accomplished three major achievements during the 2003-2005 academic years: (1) SOBM-LA better served adult learners; (2) SOBM-LA provided greater collectivity between faculty and the student services unit; and (3) SOBM-LA enhanced faculty opportunities in the provision of quality community service using a collaborative method. The goal of the study, to provide greater internal (inreach) and external (outreach) opportunities for faculty and staff to better serve adult learners, was attained. Lastly, this CST study has shown that adult learners require activities that are meaningful and relevant. The topics that faculty presented on were as follows: Sports Management, Career Management, and Time Management. Self-reports from those who attended the sessions revealed: (1) empowerment through the collaboration of faculty; and (2) positive group interactive presentations and discussions. National University is distinguishing itself ‘through its leadership in the field of adult learning through continued growth in improved effectiveness of operations, student support and academic quality’ (NU, 2010, Strategic Direction One). New approaches in student enrollment management are essential to continued academic program growth, development and vitality in meeting the needs of adult learners.
This study was developed to provide a descriptive analysis of a team-building approach, utilized through collaboration, designed to increase student enrollment and enhance services for adult learners within the School of Business and Information Management at National University in Los Angeles during the 2003-2005 academic years. Findings and recommendations from this study can be useful in the following three areas: (1) to better serve adult learners; (2) to provide greater collectivity between faculty and the student services unit; and (3) to enhance faculty opportunities in the provision of quality community service using a collaborative method. Chapter one included the following sections: background of the study; statement of the problem; purpose; research questions; assumptions; delimitations; definitions; and summary.
On Financing with Convertible Debt: What Drives the Proceeds from New Convertible Debt Issues?
Dr. Camelia S. Rotaru, University of Texas – Pan American
Despite extensive research, it is not clear why companies issue convertible securities and what drives the variation of proceeds on the convertible market. In this paper I use a sample of 509 convertible securities issued between 1980 and 2003 to show that companies do not issue convertible securities to mitigate adverse selection costs. Rather, the variation of convertible proceeds suggests that managers time the convertible issue for periods when investors are optimistic. This is consistent with the findings of Loughran et al. (1994) and Lowry (2003), which show that IPOs are issued during periods of high investor optimism. The dollar volume of outstanding convertible debt securities has grown tremendously over the past several years. (1) By 2001, the size of the US convertible market had reached $200 billion. However, not all companies issue convertible securities, and despite extensive research, we still do not know why only some companies choose to finance through convertible securities. Some authors suggested that companies issue convertible securities in order to reduce bondholder-stockholder agency costs (Green, 1984; Brennan and Kraus, 1987), or to hedge against the impact of uncertain risk (e.g., Brennan and Schwartz, 1987), while others argued that convertible issuers are trying to reduce adverse selection costs and financial distress (Stein, 1992), or to take advantage of time-varying selection costs (Choe et al., 1993; Bayless and Chaplinsky, 1996; Lewis et al., 2003). This paper’s contribution to the existing body of literature is to show that the convertible market is driven by investor sentiment, rather than by adverse selection. The impact of investor sentiment on the IPO market has been extensively analyzed, but previous research on convertible securities ignores investor sentiment as a potential factor driving the convertible market. For the IPO market, Lee et al.
(1991) conclude that changes in investor sentiment significantly affect IPO volume over time. For SEOs,
Do the Stars Foretell the Future?: The Performance of Morning Star Ratings
Philip S. Russel, Philadelphia University, PA
The mutual fund industry has emerged as a major player in the financial system, with net assets of over $6 trillion and nearly 100 million investors. The latest institutions to capitalize on the popularity of mutual funds are the mutual fund rating agencies. Naïve investors are increasingly relying on "star ratings" provided by mutual fund rating agencies to guide their selection of mutual funds. However, do mutual fund ratings provide any information of value to investors? We investigate this question by evaluating the performance of the premier mutual fund rating agency, Morningstar. Mutual funds have become a popular avenue for investors, and the net assets of mutual funds have grown exponentially from a mere $17 billion in 1960 to over $8 trillion in 2005. The number of mutual funds has grown to more than ten thousand, exceeding the number of stocks listed on the organized exchanges and making the selection of mutual funds an onerous task for the average investor. In response to investor demand for a simple strategy to screen the numerous funds, several independent agencies have started offering ratings on mutual funds, Morningstar being the most prominent among them. Morningstar rates mutual funds on a scale of 1 to 5 stars, with 5 stars being the best. Other organizations (such as Lipper and Value Line) also provide similar rating services. While the rating methodology varies from organization to organization, the ultimate purpose is the same: to simplify investors' decision-making process by providing a composite measure of mutual fund performance. Mutual fund ratings, though based on seemingly complex analysis, are not necessarily credible in forecasting future performance. Indeed, they might be misguiding investors, as five stars do not necessarily guarantee superior future performance.
While Morningstar does not claim to forecast performance, anecdotal evidence suggests that naive investors are increasingly relying on mutual fund ratings to make their investment decisions.
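One simple way to probe whether star ratings carry forward-looking information is to rank-correlate each fund's rating with its subsequent return. The sketch below uses invented fund data and a plain Spearman rank correlation; it illustrates the idea of such a test, not the paper's actual data or Morningstar's methodology.

```python
# Hedged sketch: do higher star ratings line up with higher subsequent returns?
# All fund numbers below are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def ranks(xs):
    # 1-based ranks, with tied values receiving their average rank
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(stars, future_returns):
    # Spearman rho = Pearson correlation of the two rank vectors
    return pearson(ranks(stars), ranks(future_returns))

stars = [5, 4, 3, 2, 1, 5, 3, 2]                      # hypothetical ratings
next_year_return = [0.06, 0.08, 0.05, 0.07, 0.04, 0.03, 0.09, 0.05]
rho = spearman(stars, next_year_return)
```

A rho near zero, as this invented sample would produce, is the pattern one would expect if stars summarize past performance without predicting future performance.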
Mobility of Technology Positioning with Change in Patent Portfolio
Dr. Shann-Bin Chang, Ling-Tung University, Taiwan
Dr. Kuei-Kuei Lai, National Yunlin University of Science and Tech., Taiwan
Shu-Min Chang, National Yunlin University of Science and Tech. & Nan Kai Institute of Tech., Taiwan
Patents are an important indicator of R&D performance. A researcher can determine the technological competence of an enterprise by examining the patents it holds, and so determine its technological position. Over time, and as the environment changes, a firm may change strategies and improve its technological position. The purpose of this study is to discuss the impact of changes in the patent portfolio on the mobility of technology positioning. This study examined 37 firms that are representative of the business method patents of US Class 705. These firms all experienced the three stages of the Internet life cycle. Five technology groups were formed based on cluster analysis. This study also examined possible group movements among the 37 companies during the three stages. This study combined these five technology groups into three technology orientations: postage metering technology, information and Internet technology, and business model development technology. Furthermore, this study discusses the trend of technology group development and makes suggestions regarding how to develop "co-petition" strategies between or within technology groups. Patents are important indicators of a firm's R&D performance. Some patent-analysis studies focus on particular industries, such as machine-tool manufacturing and electronics (Ernst, 1997, 1998). Others use patent data to evaluate a firm's capacity for technological development and innovation, so that companies can plan their technology strategies accordingly (Mogee, 1991; Archibugi & Pianta, 1996). Not only can a corporation's technological competence be analyzed through its patents, but patents also affect the mobility of position and strategy groups in high-tech industries, e-commerce, and business methods (Stuart, 1998; Lai, Chang and Wu, 2003). Indeed, a company's objectives may be adjusted because of a change in environment, which also influences its technology strategy.
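Cluster analysis of the kind used to form such technology groups can be sketched in a few lines. The firm profiles below (patent counts across three hypothetical subclasses of US Class 705) are invented, and the paper does not disclose its exact clustering procedure; this is a minimal k-means illustration of the technique, not a reproduction of the study's method.

```python
# Hedged sketch: grouping firms into technology clusters from their
# patent-count profiles with a tiny k-means. Data are invented.
import random

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initialize centers from the data
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: each firm joins its nearest center (squared distance)
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centers[c])))
        # update step: each center moves to the mean of its members
        new_centers = []
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                new_centers.append(tuple(sum(col) / len(members)
                                         for col in zip(*members)))
            else:
                new_centers.append(centers[c])
        if new_centers == centers:           # converged
            break
        centers = new_centers
    return assign, centers

# Hypothetical patent counts per firm across three subclasses of US Class 705.
profiles = [(12, 0, 1), (10, 1, 0), (0, 9, 8), (1, 11, 7), (0, 1, 12)]
groups, _ = kmeans(profiles, k=2)
```

With well-separated profiles like these, the first two firms (postage-metering-heavy) end up in one group and the remaining three in the other; tracking how group membership shifts between life-cycle stages is then a matter of re-clustering each stage's portfolio snapshot.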
Managing Corporate Legitimacy: Nonmarket Strategies of Chinese Firms
Dr. Zhilong Tian, Huazhong University of Science &Technology, Wuhan, PRC
Haitao Gao, Huazhong University of Science &Technology, Wuhan, PRC
In recent years, Chinese firms have met many nonmarket obstacles on the global market, which indicates a deficiency in the legitimacy management of Chinese firms. However, in spite of a hostile institutional environment, Chinese private firms have changed their political status successfully and expanded their living space. This article identifies nine legitimacy-building strategies that Chinese private firms have employed in China's transitional economy, compares them with the legitimacy strategies of Western firms, and discusses the implications for Chinese firms going abroad. On September 17, 2004, leather shoes belonging to Wenzhou shoe firms were burned by local extremists in Elche, Spain, causing a loss of about $984,000. It was reported that many similar events had happened during the internationalization of Chinese firms over the past several years. The incident drew public attention. It is widely believed that the Elche incident happened because a few extremists sought to express their hatred through violence, hoping the government would pay more attention to them, as the conventional shoe industry in Spain was facing great pressure from fierce international competition. That is true, but the news reports suggest another explanation for the conflict: the shoe businessmen from Wenzhou did not conform to prevailing norms, such as evading duties, selling fake goods, exploiting workers, and not keeping to schedules, which led to the disgust and resentment of the local people. Moreover, they did not communicate with the local people. Thus, the incident happened. The incident raises an unnoticed but very fundamental topic: corporate legitimacy. The fact that Chinese firms have met many nonmarket obstacles on the global market indicates a deficiency in their legitimacy management. But things are different for private firms in China.
In spite of a hostile institutional environment, Chinese private firms have changed their political status successfully and expanded their living space. Rather than solely establishing their products and reputations in the marketplace, as Western firms do, private businesses in China mainly pursue a legitimate status in the political arena. In a capitalist country with a market economy, this is not a major issue worth discussing: private business is both legal and in harmony with the prevailing ideologies of such countries, and therefore enjoys full legitimacy.
Effort Analyzed by the Balanced Scorecard Model
Jui-Chi Wang, Hsing-Wu College, Taiwan
The Balanced Scorecard (BSC) model requires corporations to evaluate their organizational performance from four different perspectives: financial, customers, internal business processes, and learning and growth. Its utility lies in the prioritization of key strategic objectives that can be allocated to these four perspectives and the identification of associated measures that can be used to evaluate organizational progress in meeting the objectives (Kaplan & Norton, 1992, 1993). Through subsequent modifications and improvements, researchers and business specialists have found that the BSC can be used as an effective strategic management tool. More specifically, by determining the existence of strategic linkages between the strategic objectives and measures of the four perspectives, managers can take into account both the organizational objectives and the business processes in their creation of a BSC. Therefore, the BSC can be used not only to evaluate the organization's performance, but also to manage business processes within the organization (Cobbold & Lawrie, 2002). A case study analysis of the business re-engineering efforts of two high-tech companies, Compaq and Acer, was conducted. The business re-engineering efforts of the corporations, which led to the alignment of their work processes with organizational goals, were analyzed within the context of the four perspectives of the BSC. It was evident that the BSC could be used as a strategic management tool. The presentation of the business re-engineering efforts within this model offered a clear overview of the performance of the two corporations and showed how they overcame their problems by forging connections between their organizational goals and their business processes.
Many corporations have begun to focus their attention on integrated strategic management tools that link performance measurements to organizational management due to increasing competition and globalization (Hannula, Kulmala, & Suomala, 1999).
An Empirical Study of Using Derivatives on Multinational Corporation Strategies in Taiwan
Yi-Wen Chen, Hsing Wu College, Taiwan, R.O.C.
This study is very much a preliminary one, arriving at its conclusions through open-ended interviews conducted in conjunction with an analysis of the literature on the derivatives market. At the same time, it sets up a solid basis for (a) laying out the parameters and boundaries of the given field of study and (b) stimulating further studies. It also agrees with most of the major studies in the literature regarding the importance of the derivatives market around the world and its effect on firms of a particular size (Bryan, 1993; Steinherr, 1999; Fornari & Jeanneau, 2004). The study makes fairly clear what must be done in Taiwan to improve the chances of its firms competing globally: improve the financial derivatives market through further deregulation and a less hands-on attitude on the part of the government. In this way, Taiwan's economy can become more integrated with that of the rest of the world (Chelley-Steeley, 2003). While past performance is no indication of future trends, it is a well-known fact that traditional financial services have undergone massive changes in the last 30 years, thanks in part to rapidly developing technologies, demographic shifts, economic globalization, the opening of dormant and emerging markets, and increased competition among financial institutions. These forces have led to dramatic "change in how financial firms make money; in how and by whom they are regulated; in where they raise capital; in which markets they serve; and in what role they play in society" (Bryan, 1993, p. 59). There is little doubt that derivatives, defined as "financial contracts whose values depend on—and are derived from—the value of an underlying asset, reference rate, or index" (Bullen and Portersfield, 1994, p. 18), have become extremely important in the world financial markets. In fact, some argue that they have become indispensable.
It is generally argued that markets, countries, regions, and individual firms looking to capitalize on globalization and the freer flow of financial instruments can ill afford to ignore the derivatives market if they want to take full advantage of all the opportunities to maximize their profit and expansion potential. As with any financial instrument, even one designed to spread risk more evenly, there are potential downsides, downsides that can quickly become exponential if care is not taken. Risks fall into two categories: those experienced by individual firms and those experienced by the financial system as a whole. Individual risks include credit, default, legal, market, liquidity, and management risk. Systemic risk has to do with increased competition, greater linkages across the board, and less disclosure of financial information through the use of so-called off-balance-sheet transactions (a la Enron and WorldCom et al.; see Barreveld (2002);
Market Entry Patterns: The Case of European Asset Management Firms in the U.S.
Dr. Jafor Chowdhury, University of Scranton, Scranton, PA
The process theory of internationalization posits that the foreign expansion process of firms follows an incremental and sequential path. However, the entry patterns of a fairly large sample of European asset managers establishing a presence in the U.S. market over the last two decades show that the entrants took a predominantly acquisitive approach, skipping the intermediate steps in the expansion process. The objective of this study is mainly two-fold: first, to describe the firms' entry patterns with regard to their choice of market entry vehicles; second, to explain the entry patterns in terms of the internal factors prompting the firms to deviate from the process theory's predicted path. The implications of the findings of this study for both theory and research are explored. The process theory of internationalization posits that the foreign expansion process of firms generally follows an incremental and sequential path (Johanson & Wiedersheim-Paul, 1975; Johanson & Vahlne, 1977). However, the entry behaviors of a sample of 54 European asset management firms engaging in 152 entry incidents in the U.S. market during the period 1984-2004 show that the entrants took a predominantly acquisitive approach. To a degree, the entrants employed all three available entry modes: build (internal expansion), buy (acquisition), and partner (strategic alliance). However, buying was by far the most frequently utilized entry mode, accounting for nearly three-fourths of all entry incidents. In terms of deal value and assets involved, the acquisition incidents represent significantly larger transactions than those involving either building or partnering. In addition, partnering was used mostly as an adjunct to buying for tapping into certain location-specific resources and capabilities that the entrants needed for bolstering their European and global competitive positions.
The preponderance of acquisitions among the incidents implies that the entrants had skipped some intermediate steps in their expansion process to accelerate the speed of their market entry. Overall, the observed entry patterns are not consistent with the predictions of process theory. This study makes no attempt to test process theory empirically. Instead, by assuming that the theory explains the “pure” or “most basic” case, a research question is posed that has largely been neglected in the extant literature: What factors enable the entrants to deviate from the process theory’s predicted path? This paper focuses solely on the internal (i.e., firm-specific) factors prompting the European firms to pursue a largely acquisitive approach in accessing the U.S. market. The asset management industry is a vast field in terms of number of firms competing in the market, size of investor assets it handles, and the range of critical value-added services it offers to investors.
The Discussion of Media Selection and Accessible Equity in Distance Education
Dr. Jack Fei Yang, Hsing-Kuo University, Taiwan
Is the role of media in distance education important? Is the medium the message? The impact of media on instructional outcomes continues to be debated. The cost of new distance-system development and training is a major challenge for institutions in developing countries that want to be competitive in the global society. Adopting the newest distance media does not always result in a proportional increase in student learning outcomes and achievements. Choosing a high-quality medium for a few people may miss the challenge of serving the greatest number with an effective delivery approach within economic reach. Media influence learning by introducing different levels of learning objectives, learning activities, and learning outcomes. However, beyond media considerations, factors such as instructional methods, learning styles, and teaching strategies need to be high priorities for policy makers. The less economically fortunate people of developing countries need the concern of the high-technology world to ensure that large populations are not excluded from the distance learning society. Because the development of distance education across the world is influenced by economics, politics, technology, and societal issues, it is important for distance program designers to be aware that technology applications in distance education may provide quality education for mass populations, radically increasing equal access to opportunity. Clear educational purposes and careful decision-making and program design are key factors in developing a good distance education program to serve an increasing mass population. When determining instructional media in distance education, elements such as access, cost, and level of interaction are key factors that need to be considered. Jones Shoemaker (1998) pointed out that in order to evaluate the sources and impacts of change in continuing education, political, economic, social, physical, and technology issues need to be considered.
When economic and political conditions change in developing countries, demand for higher education and more educational opportunities will result in the expansion of higher education. As Harry and Perraton (1999) indicated, "Distance education at the end of the twentieth century reflects international economic, political and related ideological change and is shaped by technology opportunity" (p. 2). Many factors influence media selection strategy: learning objectives, subject content, teaching methods, learner expectations, time, facilities, and the societal expectations that drive the education market (Romiszowski, 1988; Strauss & Raymond, 1999). In developing countries, it is all the more critical to adopt appropriate distance media and methods within available resources because of limited economic support. However, cheap or simple distance media should not automatically be consigned to poor nations, and expensive, fancy distance media do not necessarily produce the best educational quality.
The Value Added Tax applied in the Member States of European Union: The Case of Spain
Dr. Maria Luisa Fernandez de Soto Blass, San Pablo-CEU University, Madrid, Spain
The value-added tax (VAT) was introduced in the European Economic Community in 1970 by the First and Second VAT Directives and was intended to replace the production and consumption taxes which had hitherto been applied by the Member States and which hampered trade. The following text summarises a consolidation of the existing Directives in this field. The present paper introduces figures and formulas not previously seen in tax textbooks, analyses the concept of value added tax, makes a brief approach to the history of the VAT, studies the elements of this tax such as the beneficiary, taxable person, territoriality, basis of assessment, and exemptions, explains the basic mechanism of VAT (net VAT, output tax, and input tax), and covers deductions, the taxable base, the VAT rates, the place of taxable transactions, the chargeable event and chargeability of the tax, special schemes, the VAT invoice, and collections, with examples for the European Union and the case of Spain. This paper is the result of three research projects that I am carrying out at the Institute for Fiscal Studies, Ministry of Economy and Finance, Spain, and the University of San Pablo-CEU, Madrid, Spain, from 2003 to 2006, and at the University of Leeds, United Kingdom, from 1 July to 1 September of 2004, 2005 and 2006, work that will continue at the same times and places. European action in the area of indirect taxation has its legal basis in articles 90 and 93 of the Treaty establishing the European Community (EC Treaty). It is subject to the unanimity rule and has always been governed by the subsidiarity principle: its aim is to harmonise national systems of indirect taxation, not to standardise them. In other words, its aim is to ensure that national systems are not only mutually compatible but also comply with the objectives of the EC Treaty (European Commission, 2006).
In 1977, the Sixth VAT Directive 77/388/EEC harmonised this tax. It introduced a common basis of assessment for VAT and represented a body of law laying down Community definitions of important concepts. It also paved the way for subsequent measures working towards a goal set as early as the First VAT Directive: the abolition of tax frontiers. Further amendments to the Sixth VAT Directive in 1991 and 1992, Directives 91/680/EEC and 92/111/EEC, concerned the abolition of tax frontiers and were intended to adapt VAT to the requirements of the new single market. Exemptions under the Directive include the supply of buildings or parts of buildings and the land on which they stand, and the supply of building land (Fernández de Soto Blass, M.L., 2006).
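The netting mechanism described above (output tax charged on sales, less deductible input tax paid on purchases) can be shown in a few lines. The 16% rate and the amounts below are illustrative only, not a statement of any Member State's actual rate or a figure from the paper.

```python
# Minimal sketch of the VAT netting mechanism:
# a trader charges output tax on sales, deducts input tax paid on purchases,
# and remits the difference to the state. The 16% rate is illustrative.

RATE = 0.16

def vat_net(sales, purchases, rate=RATE):
    output_tax = sum(sales) * rate      # VAT charged to customers
    input_tax = sum(purchases) * rate   # VAT paid to suppliers (deductible)
    return output_tax - input_tax       # payable to (or refundable by) the state

due = vat_net(sales=[10_000, 5_000], purchases=[8_000])
# 15,000 * 0.16 - 8,000 * 0.16 = 2,400 - 1,280 = 1,120
```

Because each trader deducts the tax already paid upstream, the tax ultimately collected along the whole chain equals the rate applied to the final consumer price, which is the defining property of a value-added tax.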
Causes and Consequences of High Turnover by Sales Professionals
Dr. Phani Tej Adidam, University of Nebraska at Omaha, Omaha, NE
Retention of sales professionals is becoming one of the most challenging issues facing sales managers, especially since the cost effects of high turnover on the corporate bottom line are devastating. Against this background, this paper investigates the costs of and reasons behind high sales-professional turnover and offers some suggestions on how to increase sales-professional retention, thereby lowering the turnover rate. Emphasis must clearly be placed on recruiting the right kind of sales professionals by offering realistic job previews, providing appropriate training and developmental opportunities, engaging salespeople by developing high trust and commitment levels, establishing reasonable and equitable sales-quota-setting procedures, designing appropriate compensation structures, and individualizing the motivational incentives for each sales professional. One of the most important issues facing businesses is finding and keeping good sales professionals. After all, sales professionals are the most valuable organizational resource, and good sales professionals should be thought of as investments needing frequent rewards. Sales human resource professionals find themselves trying almost anything to retain their best salespeople, especially when those salespeople are being lured away by competitors in a tight labor market. Retaining top salespeople may indeed be hard. It requires being alert to organizational problems and difficulties which may drive salespeople out the door (Brashear, Manolis, and Brooks, 2005). It also means being sensitive to their hopes and dreams, needs and desires, and managing the sales force in a manner that lets them achieve their own goals (Schwepker, 1999). Savata (2003) opines that "losing staff is always a part of doing business." He also says that turnover higher than 20% is unnecessary and wasteful.
He indicates that employees' personal reasons for leaving are beyond a firm's control, but a firm can often do something about the work-related issues that cause staff to move on. The US Department of Labor (www.bls.gov) provides numbers on the total employee turnover rate, and Nobscot Corporation (www.nobscot.com), the pioneer in exit-interview management software, offers the average voluntary employee turnover rate in the US. Some turnover in a firm is even desirable, since new salespeople bring new ideas, approaches, abilities, and attitudes and keep the organization from becoming stagnant (Holmes and Schmitz, 1996). However, high turnover sends a very clear signal that something is wrong somewhere in the organization.
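The 20% benchmark quoted above presupposes a definition of the turnover rate; a common one, sketched here with invented numbers, is separations during the period divided by average headcount over that period.

```python
# Hedged sketch of a standard turnover-rate calculation.
# All headcount figures are invented for illustration.

def turnover_rate(separations, headcount_start, headcount_end):
    # average headcount over the period, a common denominator choice
    avg_headcount = (headcount_start + headcount_end) / 2
    return separations / avg_headcount

rate = turnover_rate(separations=18, headcount_start=80, headcount_end=100)  # 0.20
excessive = rate > 0.20  # against Savata's (2003) 20% threshold
```

At exactly the threshold, this hypothetical firm would not yet be flagged; replacing even one of its 18 leavers can cost a multiple of that salesperson's salary once recruiting, training, and lost-sales effects are counted, which is why the paper treats the rate itself as a bottom-line variable.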
Two-Stage Residual Income Model for Evaluating the Intrinsic Value of Stock-Listed Firms: An Empirical Analysis of the Electronic Information Industry of the Taiwan Fifty Index
Tao Huang, Ming Hsin University of Science and Technology, Taiwan
Shih-Chien Chen, Takming College, Taiwan
Dr. Jovan Chia-Jung Hsu, Kun Shan University of Technology, Taiwan
Along with the diversification of Taiwan's securities market in recent years, as well as the disorder in the market, corporate valuation has become a critical issue for numerous public investors. In this paper, we carry out our study by means of the Residual Income Model proposed by Ohlson and try to understand the intrinsic value of the stock market through an analysis of the electronic information industry of the Taiwan Fifty Index, thereby providing a reference for investors. The conclusions of this study are as follows: Ohlson's Residual Income Model is a good reference for forecasting medium-term and long-term industrial rates of return, while the book-to-market price ratio (B/P) has relatively better predictive power in both the short term and the long term. The earnings-to-price ratio (E/P) is a good index in the short term. The sales-to-market price ratio (S/P), however, is not a suitable reference for short-term rates of return in the electronic information industry. Since its inception in 1962, the Taiwan securities market has accumulated a history of 43 years. Excellent performances have been repeatedly achieved on the Taiwan stock market since the Taiwan Stock Exchange Weighted Stock Price Index rose above 1,000 points in 1986. Through the so-called fast-growing period (1987-1990) and collapse period (1990-1991), the domestic market has gradually matured in structure (1991-present). Throughout this process, several fluctuations have taken place owing to the inadequately healthy and complete order of this market, indicating that stock prices can differ greatly from real corporate values.
Our observation of previous transactions on the Taiwan stock market shows an intense climate of short-term trading: when making investments, the public commonly lacks concepts of investment value, causing the stock market to degenerate into a venue for speculation and gambling. In view of this reality, such a climate can be eradicated, fostering both the healthier development of the securities market and more solid security for public investors, only when a stock valuation model particularly applicable to the Taiwan stock market is established through research to serve as a reference for companies, the government, and the public.
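Ohlson's residual income model values a firm as current book value plus the discounted stream of residual income, i.e. earnings minus a capital charge on beginning book value. The two-stage sketch below uses invented inputs and a flat-perpetuity terminal stage; the paper's exact specification (forecast horizon, terminal assumption, cost of equity) may differ.

```python
# Hedged sketch of a two-stage residual-income valuation in the spirit of
# Ohlson's model. Stage 1 discounts explicitly forecast residual income
# RI_t = E_t - r * B_{t-1}; stage 2 capitalizes the final RI as a perpetuity.

def residual_income_value(book0, earnings, r, payout=0.0):
    """Intrinsic value per share.

    book0    : current book value per share
    earnings : forecast earnings per share over the explicit horizon
    r        : cost of equity capital
    payout   : dividend payout ratio (clean surplus: B_t = B_{t-1} + E_t - D_t)
    """
    value = book0
    book = book0
    ri = 0.0
    for t, e in enumerate(earnings, start=1):
        ri = e - r * book                    # residual income for year t
        value += ri / (1 + r) ** t           # stage 1: discounted explicit RI
        book += e * (1 - payout)             # clean-surplus book value update
    horizon = len(earnings)
    value += (ri / r) / (1 + r) ** horizon   # stage 2: terminal perpetuity of last RI
    return value

# Invented inputs: book value 10, three years of forecast EPS, 10% cost of equity.
v = residual_income_value(book0=10.0, earnings=[1.5, 1.6, 1.7], r=0.10, payout=0.4)
```

Note the model's key property: if forecast earnings just cover the capital charge (ROE equal to r), residual income is zero and intrinsic value collapses to book value, which is why B/P emerges as a natural companion predictor.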
The Determinants of Working Capital Management
Dr. Jeng-Ren Chiou, National Cheng Kung University, Taiwan
Li Cheng, National Cheng Kung University, Taiwan
Han-Wen Wu, China Development Financial Holding Corporation, Taiwan
This paper investigates the determinants of working capital management. We use net liquid balance and working capital requirements as measures of a company's working capital management. Results indicate that the debt ratio and operating cash flow affect the company's working capital management, yet we lack consistent evidence for the influence of the business cycle, industry effects, company growth, company performance, and firm size on working capital management. Corporate finance can be categorized into three main domains: capital budgeting, capital structure, and working capital management. The raising and management of long-term capital belong to the domains of capital budgeting and capital structure. The source and use of long-term capital are traditionally aspects of much concern in finance, while management of the working capital that sustains the operation of an enterprise draws relatively little attention. Working capital, comprising current assets and current liabilities, is the source and use of short-term capital. In addition to company characteristics, working capital is also related to the financial environment, especially fluctuations in business indicators. Since the poor performance of the global economy during the late 1990s, financial institutions have in general adopted tighter credit policies to lower their deposit/loan ratios. Thus enterprises have had to manage their working capital more prudently to adapt to the changing financial environment. Kargar and Blumenthal (1994) demonstrated that many enterprises go bankrupt despite healthy operations and profits owing to mismanagement of working capital, so it is a topic that deserves increased investigation. The existing literature on the management of working capital is limited in scope, and most prior studies use variables such as the current ratio, quick ratio, and net working capital to evaluate enterprises' management of short-term working capital.
This study uses the net liquid balance (hereafter NLB) (1) and working capital requirements (hereafter WCR) (2), both used by Shulman and Cox (1985), as proxies for working capital management. We investigate determinants of working capital management including business cycle indicators, industry effects, debt ratio, growth opportunities, operating cash flow, firm performance, and firm size, using 35 quarters of data from the first quarter of 1996 to the third quarter of 2004. The study reveals that debt ratio and operating cash flow affect the management of working capital, whether NLB or WCR is used as the proxy.
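The two proxies can be stated directly. One common formulation of Shulman and Cox's (1985) decomposition splits current accounts into a financial component (NLB) and an operating component (WCR); the line items and numbers below are an illustrative sketch, and the study may scale both measures, for instance by total assets.

```python
# Hedged sketch of the Shulman-Cox (1985) decomposition of working capital.
# Line items and amounts are illustrative, not the paper's variable definitions.

def net_liquid_balance(cash, marketable_securities, short_term_debt):
    # NLB: purely financial current accounts (liquidity cushion)
    return (cash + marketable_securities) - short_term_debt

def working_capital_requirements(receivables, inventory,
                                 payables, accrued_liabilities):
    # WCR: operating current accounts tied up by the operating cycle
    return (receivables + inventory) - (payables + accrued_liabilities)

nlb = net_liquid_balance(cash=120, marketable_securities=30, short_term_debt=90)
wcr = working_capital_requirements(receivables=200, inventory=150,
                                   payables=180, accrued_liabilities=40)
```

The split matters for the paper's question: a firm can show the same net working capital with a large WCR funded by a negative NLB (operating needs financed by short-term borrowing) or the reverse, and the two configurations respond differently to debt ratio and operating cash flow.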
Exploring Customer Satisfaction, Trust and Destination Loyalty in Tourism
Heng-Hsiang Huang, Ching Kuo Institute of Management & Health, Taiwan
Chou-Kang Chiu, Ching Kuo Institute of Management & Health, Taiwan
As the concept of relationship marketing has motivated the management of travel agencies to seek fresh and creative ways of establishing long-term relationships with their tourist customers, it is important to explore tourists' destination loyalty given a competitive tourism market around the globe. This study proposes a model of tourists' satisfaction, trust, and destination loyalty in tourism. In the proposed model, perceived cultural differences, perceived safety, and convenient transportation indirectly influence destination loyalty through the mediation of relationship quality, which comprises satisfaction and trust. Finally, a discussion of the proposed model and its limitations is also provided. Tourists' decisions in choosing destinations and spots have been one of the significant issues discussed by researchers (Ajzen and Driver, 1991; Chen, 1998; Fesenmaier, 1988; Um and Crompton, 1990). Such decisions have also been linked to the topics of decision rules, decision-making processes, and choice factors (Chen and Gursoy, 2001). Despite the substantial contributions of previous research on decisions to choose tourist destinations (Crompton, 1992; Crompton and Ankomah, 1993; Fesenmaier, 1990; Woodside and Carr, 1988), research on the linkages between decisions to choose a tourist destination and tourists' destination loyalty from a relationship marketing perspective is rather limited, and it deserves the close attention this study attempts to provide. The concept of relationship marketing has prompted the management of travel agencies to seek fresh and creative ways of establishing relationships of mutual benefit with their customers. Specifically, customer loyalty has become critical to many service industries, including tourism.
Travel agents are busy searching for customers by offering highly competitive services in order to achieve customer loyalty towards a specific destination, but such loyalty relies on achieving relationship quality with that destination so that tourists willingly visit the same destination in the future. Previous research has discussed the importance of relationship marketing in some service industries and its impact on firm profitability and customer retention (e.g., Crosby, Evans and Cowles, 1990; Tam and Wong, 2001), but the modern approach to relationship quality and loyalty in the service sector borrows heavily from the marketing theory and science that has been in use for decades in general industry (Lin and Ding, 2005, 2006).
The Effects of Individual and Joint Gift Giving on Receipt Emotions
Dr. Shuling Liao, Yuan Ze University, Taiwan
Yu-Huang Huang, Yuan Ze University, Taiwan
This paper employs the concepts of hedonic framing and mental accounting to interpret the effects of individual and joint gift giving on receivers' emotional responses. The moderating effects of situational factors, including the distance of social relationships and the type of gift, are also investigated. The results show that receivers respond more positively to joint gift giving. Gifts from close members produce better affect. When gifts come from intimate relationships, receivers respond no differently to nonmonetary gifts from an individual or a group, but respond negatively toward someone who sends a monetary gift alone. By contrast, individual gift giving from distant relationships generates the worst emotions for a nonmonetary gift. The findings provide insights into appropriate gift-giving behavior and practice. Gift giving is a common behavior in daily life and is heavily promoted by business marketing activities. People are motivated to give gifts for the purposes of social exchange, economic exchange, and love sharing (Belk & Coon, 1993). People also love to receive gifts when the gifts are appropriate to the perceived interpersonal connection (Neisser, 1973; Shurmer, 1971). However, reciprocal exchange between people involves more than the exchange of tangible goods; the implicit psychological factors and influences behind this interaction are even more intricate and thought-provoking. For this reason, past research on gift-giving behavior has integrated perspectives from different domains of the social sciences. Among them, the gift-giving theory developed by Sherry (1983) has received wide attention for its conceptual completeness. Sherry (1983) originally incorporated concepts from anthropology, sociology, and psychology to sketch out gift-giving behavior. A number of subsequent studies on gift giving stem from Sherry's work and have brought abundant and varied explorations to the stream of gift-giving research.
Nevertheless, past research has mostly focused on the reasons for gift giving (e.g., Belk, 1976, 1979; Caplow, 1982; Cheal, 1988; Brunel, Ruth & Otnes, 1999), the giver's motivation (e.g., Murray, 1964; Belk, 1979; Caplow, 1982; Solomon, 1990; Goodwin, Smith & Spiggle, 1990; Wolfinbarger & Yale, 1993; McGrath, 1995; Wolfinbarger, 1990), or the timing of gift giving (e.g., Mauss, 1967; Lowes et al., 1968; Belk, 1993; Sherry, 1983).
A Study of Implementing Six-Sigma Quality Management System in Government Agencies for Raising Service Quality
Dr. Li-Hsing Ho, Chung Hua University, Taiwan
Chen-Chia Chuang, Chung Hua University, Taiwan
In an age of thin profit margins, corporations are diligently looking for ways to differentiate themselves from competitors, beat the competition, expand market share, create quality differences, and even achieve zero quality defects. Regardless of industry type, continuous quality improvement is an irreplaceable part of the entire production activity. Although there are many ways to solve problems in product quality, the six sigma quality management system can effectively solve the core issues in production quality. This quality management system is highly integrated, contains detailed problem-solving procedures, and has been tested by multinational corporations like Motorola and General Electric. It is also a system that emphasizes fundamental education, changes in organizational culture, and the quantification of effective productivity through discussion of core problems. Taiwan's government agencies have realized the importance of the six sigma quality management system, and by implementing it they are able to increase the quality of the services they provide. With improved government service quality, the general public will have greater confidence in the government. The main tasks of the health administrative agencies are providing vaccinations, controlling drug usage, examining and testing food quality, providing smoking-hazard prevention and public health education, managing medical administrative tasks, and so on. Therefore, the quality of the services provided by the health bureaus has an immediate impact on the health and safety of the public. How to effectively implement the six sigma quality management system in order to improve health service quality and promote a healthy general public should therefore be a key topic for the health administrative agencies to actively and aggressively discuss.
The six sigma quality management system provides a brand new vision, concept, and methodology for corporate management strategy. Six sigma quality corresponds to no more than 3.4 defects out of every one million operations. In order to achieve this near-perfect goal, corporations must undertake a series of improvement measures toward their goals.
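The arithmetic behind the six sigma target can be illustrated briefly. The following is a minimal sketch (not taken from the paper), assuming the conventional 1.5-sigma long-term shift, of how observed defect counts map to defects per million opportunities (DPMO) and then to a sigma level:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Short-term sigma level, applying the conventional 1.5-sigma shift."""
    yield_fraction = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + 1.5

# Hypothetical example: 34 defects observed in 10,000 units,
# one defect opportunity per unit -> 3,400 DPMO, roughly 4.2 sigma.
d = dpmo(34, 10_000, 1)
level = sigma_level(d)
```

Under these conventions, the canonical six sigma figure of 3.4 DPMO maps back to a sigma level of 6.0.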
Influence of Audio Effects on Consumption Emotion and Temporal Perception
Dr. Chien-Huang Lin, National Central University, Taiwan
Shih-Chia Wu, National Central University, Taiwan
Consumer expectations of retail stores exceed their selling functions. Audio arrangements and the atmosphere of the shopping environment have become key influences on consumer satisfaction. Previous studies used actual retail stores as investigation sites; this study adopted a computer graphic design tool to avoid the previously uncontrolled variables present in actual stores. A virtual reality shopping environment was built to identify the effects of audio on all aspects of consumers' shopping behavior. The findings of this study demonstrate that consumers' consumption emotion and temporal perception in a shopping environment are significantly influenced by low music volume and by music type, and that audio effects exert a substantial influence on consumption emotion and temporal perception. Past research on the retail environment focused on actual retail stores, where variables including climate, shopper traffic, the attitudes of salespersons, and special events are difficult for researchers to control. Moreover, the validity of the findings is easily impeded by imprecise variables. Recently, with rapid advances in technology, virtual reality has created more shopping opportunities for consumers and has also become an effective research tool. This study attempted to achieve variable control by using computer graphics design software, named "Space Magician V2.0," to create a virtual electronic appliance store, similar to a real bricks-and-mortar store, as an experimental and test venue. In this study, objects in the virtual electronic appliance store can be adjusted and moved based on individual needs in terms of color, location, and size. Subjects can also manipulate the music broadcasting, for example its volume and timing. Emotion has been shown to influence consumption perceptions, assessment, and behavior, and has also been identified as a key mediator in assessing service quality (Taylor, 1994).
Thus, by precisely controlling these variables, this study attempts to identify the impact of customers' familiarity with the music, music type, radio broadcast programs, and in-store volume on consumption emotions and temporal perception. Seidman (1981) explored the use of music in movies and educational media, and found that music significantly influences cognition and attention; the findings have already been adopted in the movie and TV production industries. Manfred (1982), a scholar specializing in both music and neurophysiology, indicated that music structure stimulates nerves in the brain and thus provokes emotional responses. Previous literature has also revealed that the relationship between individual music preference and the complexity of music follows a U shape, and that the complexity of music gradually increases to suit listeners. In a study of the effect of music on the emotions, Wang (1992) identified that emotion is connected to the speed of rhythm, and that an allegro tempo is associated with a livelier and happier effect. Field research in department stores showed that music with a faster tempo tended to produce more positive emotions.
Multi-Criteria Analysis of Offset Execution Strategies in Defense Trade: A Case in Taiwan
Dr. Chyan Yang, National Chiao Tung University, Taiwan
Tsung-cheng Wang, National Chiao Tung University, Taiwan
In international trade, offset practices have received increased attention over the past twenty years. In the coming ten years, the Taiwanese government may expend roughly US$16 billion on purchasing military equipment through the Foreign Military Sales (FMS) program, and can achieve US$8 billion in offset credits. Consequently, this paper discusses Taiwan's optimal offset execution policy and proposes a framework for drawing on offset credits in the future. In order to help decision makers determine the optimal offset strategy, the TOPSIS method, incorporating the AHP method, is applied to determine the best compromise offset execution strategy. The potential applications and strengths of Multi-Criteria Decision-Making (MCDM) in assessing offset strategies are highlighted. Offset is an alternative marketing strategy recently introduced in the international marketplace. Offset, the military counterpart of countertrade, is a commitment associated with a sale whereby the seller provides the buyer with an offsetting agreement to purchase other products. The basic philosophy of an offset agreement or countertrade is to structure the commitment so that the seller fulfills a contract that rewards the buyer. This reward may take the form of potential economic, social, or technological growth, or increased sales of other domestic goods in exchange for the buyer's purchase. Such a contract increases the competitive value of the seller's product. In theory, this agreement allows the buyer to purchase additional units, since the sale is more economically, socially, or politically attractive with the offset agreement, making the product more affordable or competitively attractive. This philosophy allows arrangements to create a multiplier effect. Many methods satisfy offset requirements, including co-production, direct offset, indirect offset, technology transfer, et al. However, only one or a few practical methods may be adopted or implemented in a single government procurement program.
In the coming ten years, the Taiwanese government may expend roughly US$16 billion on purchasing Patriot-III missiles, P-3 long-range anti-submarine planes, and diesel-engine submarines from the United States through the Foreign Military Sales (FMS) program, and can achieve US$8 billion in offset credits, the largest procurement in Taiwanese history (MND, 2005).
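The abstract names TOPSIS, with AHP-derived weights, as the ranking method. The following is a minimal sketch of the standard TOPSIS procedure, with purely hypothetical strategy scores and weights rather than the paper's data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix:  alternatives x criteria score matrix
    weights: criterion weights (e.g. derived from AHP pairwise comparisons)
    benefit: True where larger is better, False for cost criteria."""
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    v = m / np.linalg.norm(m, axis=0) * w
    # Ideal and anti-ideal solutions, respecting criterion direction.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # closeness coefficient: higher is better

# Three hypothetical offset strategies scored on cost, technology
# transfer, and domestic job creation (cost is a "smaller is better" criterion).
scores = [[8, 7, 6], [6, 9, 8], [9, 5, 7]]
cc = topsis(scores, weights=[0.5, 0.3, 0.2], benefit=[False, True, True])
best = int(np.argmax(cc))
```

The strategy with the highest closeness coefficient is the compromise choice; in the paper the weights would come from AHP judgments by defense-procurement decision makers.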
Convergence of Learning Experiences for First Year Tertiary Commerce Students – Are Personal Response Systems the Meeting Point?
Brian Murphy, University of Wollongong
Dr. Ciorstan Smark, University of Wollongong
This paper reflects on the need for interactivity in first year lectures. This need is suggested to arise from first year students' diminishing tolerance for passive learning, and also from the increasing accessibility of Personal Response Systems (PRS) in terms of cost, user-friendliness, and students' level of technological savvy. The ways in which PRS can enhance interactivity, and the importance of increased interactivity for first year students' learning outcomes, are discussed in terms of factors supporting good learning and enhancing the overall learning experience. Many authors have argued that there has been a fundamental shift in the outlook of commerce students coming into universities today compared with first year students ten years ago (for example, Tapscott, 1998; Friedlander, 2004; Davis, 2005). This shift in outlook is related to the fact that the bulk of first year students entering Australian university courses in 2006 are familiar with technology and (in a related development) are reluctant to suffer passive learning environments silently. This shift has been accompanied (at least in the field of commerce) by generally increasing student numbers (Freeman and Blayney, 2005) and a realization that the large lecture format of instruction is less draining of resources than smaller forums such as tutorials and seminars. The result is that, at a time when our students demand more interactivity, Australian universities are anxious to provide a teaching environment (large lectures) which has traditionally allowed little interactivity (Draper and Brown, 2004, 81). This paper argues that judicious use of Personal Response System (PRS, or "clicker") technology can help to promote the intellectual engagement of first year students in lectures. PRS can engage the "Net-Generation" or "Millennial" student through interactivity.
The importance of interactivity to people accustomed to the two-way conversation of the internet (as opposed to the one-way broadcasting of knowledge in the traditional lecture format) is mentioned by several authors (Biggs, 2003; Tapscott, 1998; Mazur, 1997; Hake, 1998). Tapscott (1998, 22) refers to those born between 1977 and 1997 as the Net Generation (or N-Geners) and argues that their exposure to the internet in their formative years has made this group the antithesis of the couch-potato generation that preceded them. They are used to interactive, participatory, investigative enquiry. They have a very limited tolerance for knowledge transmission systems which require them to be passive observers (such as traditional university lectures). 'The students like active learning, not passively listening to a teacher drone on. They absorb a variety of information from different multimedia.'
The Influence of Cultural Factors on International Human Resource Issues and International Joint Venture Performance
Dr. Lung-Tan Lu, Fo Guang University, Taiwan, ROC
The number of international joint ventures (IJVs) has rapidly increased during the past decade, providing multinational enterprises the opportunity to stay competitive and manage complex international business activities. This paper proposes a conceptual model linking culture theory, international human resource (IHR) issues, and IJV performance. An IJV mixes the IHR activities of parent firms from different nations and therefore poses more complex management challenges than other entry modes in foreign markets. Many IJV failures, according to previous research, point to the importance of IHR activities. Theoretical perspectives are used to generate four propositions concerning cultural factors and IHR issues in IJV performance. Several suggestions involving culture, management styles, role stress, and conflict resolution strategies are made for future research. The issue of cultural factors for multinational firms investing in foreign countries has increasingly attracted academic attention in the field of international business (Buckley 2002). Cultural background has been suggested to be influential in the entry mode decision of internationalizing firms (the choice, for example, between wholly-owned and joint venture activity), and cultural distance between firms of different nationalities has been argued to influence co-operative strategy and the success or otherwise of international joint ventures and other co-operative modes. A cooperative strategy (of which an international joint venture, IJV, is one example) offers many advantages to a company, since the host country partner can supply familiarity with the host country's culture and market. An IJV mixes the management styles of at least two parent companies. It therefore poses more complex management challenges than domestic managerial activity, or than the transposition of domestic routines into a foreign wholly-owned subsidiary (WOS).
An IJV manager faces a difficult juggling act, trying to cope with the different management styles of the two parents, and trying to meet the possibly conflicting criteria for success that these parents impose.
Measuring Goodwill: Rationales for a Possible Convergence between the Excess Profits Estimate and the Residual Value Approach
Dr. Marco Taliento, University of Foggia, Italy
This study examines the two principal approaches to goodwill valuation that are widely accepted in the academic literature and in best accounting practice: (i) the discounted excess earnings technique and (ii) the residuum estimation procedure. The specific aim of this study is to verify whether, and under which technical conditions and limits, it is possible to achieve a quantitative convergence between the results of the two methods. In particular, since the former ('direct') method finds its explanatory variable in the firm's net income power, any observed economic/financial convergence must be regarded as unlikely, or merely coincidental, if the other ('indirect') estimation approach is incoherently based on some alternative value driver (e.g. cash flow, unless the direct goodwill measurement itself involves some kind of 'abnormal' cash streams or similar performance indicators). Therefore, attention is paid to the determination of the above-normal earnings capacity (excess earnings) and, on the other hand, to the allocation of the value of the firm, assessed as a whole, first to every identifiable asset and liability of the business enterprise, and then, as a residual value, to goodwill. Against this backdrop, numerical illustrations and methodological remarks on time periods and discount rates are provided. Goodwill is a crucial business concept (1) that is not easy either to qualify/quantify or to account for correctly (2) (3). In general terms, it represents a significant intangible asset which reflects the transferable and sustainable competitive advantage of a firm. Financial statements usually exhibit the value of goodwill only after a business combination occurs (e.g. mergers, acquisitions, takeovers, demergers, etc.); nevertheless, it is not uncommon today to see firms estimate their own earning power, future economic performance, or capacity to create business wealth over time.
In fact, a growing number of firms nowadays utilize suitable metrics focused on the 'goodwill' concept/measure (or on some equivalent notion such as market value added (4)) in order to adopt, and then control, effective and efficient management decisions (M&A projects, future investments, capital budgeting, corporate restructuring, appropriate financing, etc.), pursue valid corporate/business strategies, and support or improve modern value-oriented disclosure schemes, new information sets for equity investors and other stakeholders, and so on (5). Within this context, it is worthwhile drawing attention to a specific issue: the question of choosing among the various methods of goodwill estimation. It is common knowledge that, with regard to the longstanding academic controversy between the direct procedure for determining goodwill, founded upon anticipated 'above normal' earnings, and the indirect 'differential' or 'residuum' model, which is instead based on the excess of the price of a (hypothetical or real) acquisition over the fair (market) value of the entity's net assets, the prevailing opinion seems to lean towards the latter approach, whereas the former is de facto regarded as a mere tool for validating the other.
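The convergence question can be illustrated numerically. The following is a minimal sketch with purely illustrative figures (not the author's), showing that the direct excess-earnings estimate and the residuum estimate coincide when firm value is defined consistently:

```python
def goodwill_direct(expected_earnings, normal_rate, net_assets_fair_value,
                    discount_rate, years):
    """Discounted excess-earnings (direct) estimate: present value of the
    earnings exceeding a normal return on the fair value of net assets."""
    excess = expected_earnings - normal_rate * net_assets_fair_value
    return sum(excess / (1 + discount_rate) ** t for t in range(1, years + 1))

def goodwill_residual(firm_value, net_assets_fair_value):
    """Residuum (indirect) estimate: whole-firm value less the fair value
    of identifiable net assets."""
    return firm_value - net_assets_fair_value

# Illustrative figures: expected earnings of 120, an 8% normal return on
# net assets with a fair value of 1000, discounted at 10% over 5 years.
gw_direct = goodwill_direct(120, 0.08, 1000, 0.10, 5)
# The two results converge only when firm value is set consistently,
# i.e. net assets plus the same discounted excess-earnings stream.
gw_indirect = goodwill_residual(1000 + gw_direct, 1000)
```

If the indirect method instead starts from a firm value driven by, say, discounted cash flows, the two estimates diverge, which is precisely the incoherence the study warns against.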
Research Discussion of Independent Mechanism in an Industrial Area Developed by the Government
Dr. Li-Hsing Ho, Chung Hua University, Institute of the Management of Science and Tech., Taiwan
Chao-Lung Hsieh, Chung Hua University, Institute of the Management of Science and Tech., Taiwan
Taiwan has regulated investment since 1960 (the 49th year of the Republic of China), through the joined strength of the government and the people in creating investment and production space together. Over the past forty years, the country has developed industrial areas amounting to more than 13,000 hectares in total, with a great contribution to the development of Taiwan. The conditions for sales in the industrial areas have been poor in recent years, primarily as a result of economic recession, and funding sources for the management of industrial area development have been reduced. The government has invested in order to fulfill manufacturers' demand for industrial development. Investments are made to promote the relevant coaching and services and to continuously decrease the costs of administration and management. However, the development and management of the industrial areas is in a difficult phase. An effort to bring new life to the government's organization and management of the industrial areas, with regard to the way they operate, has already been initiated, and an idea for independent operation of the industrial areas has been generated. This would enable the industrial areas to follow today's trends with regard to service efficiency, pluralism in customer service functions, and a decrease in the financial shortfall of development and management. From the historical background outlined above, it is clear that the government needs to reform its organization in order to improve the operational and financial issues related to the management of the industrial areas. A so-called "responsibility center system," as well as "public services outsourced to the private sector," have been proposed among other strategies. However, these strategies seem unable to completely solve all the existing problems. In recent years, the financial costs of developing and managing the industrial areas have expanded.
One of the reasons is the high cost of human labor: the management maintenance fees and wastewater processing fees collected cannot cover these expenses. The National Asset Commission points out that there is a problem in the funding of development and operation in the industrial areas. This is described in its "general list of recognized parts for the improvement of revision results and the non-operational special fund deposit of the central government," which states: "there are 60 sites within the industrial jurisdiction of the development funds for industrial areas, 47 sites have service centers and 36 sewage treatment plants…whose functions are limited to only maintaining the environment inside the industrial areas, the functions are insufficient, besides the bad fund finances...".
Financial Management in the Nonprofit Sector: A Mission-Based Approach to Ratio Analysis in Membership Organizations
Dr. Anne Abraham, University of Wollongong, Australia
Nonprofit organisations (NPOs) are melting pots combining mission, members and money. Given that the mission of a nonprofit organisation is the reason for its existence, it is appropriate to focus on financial resources in their association with the mission and with the individuals who are served by that mission (Parker 2003; Wooten et al 2003; Colby and Rubin 2005). Measurement of financial performance by ratio analysis helps identify organisational strengths and weaknesses by detecting financial anomalies and focusing attention on issues of organisational importance (Glynn et al 2003). Questions have been raised that relate the performance of NPOs to their financial resources, their mission and their membership. Addressing these questions is the key to the analysis and measurement of financial and operational control (Turk et al 1995) and provides an appropriate analysis of past performance which will help an organisation chart its future direction. This paper analyses financial performance by concentrating on ratio analysis in order to identify anomalies and focus attention on matters of significant concern to NPOs. It discusses the centrality of mission in the use of financial ratio analysis and extends previous financial performance models to develop one that can be applied to individual NPOs, thus ensuring that financial performance analysis is not carried out in isolation from consideration of an organisation's mission. The paper concludes by identifying the limitations of such an analysis and makes suggestions for further application of the model. Nonprofit organisations (NPOs) are melting pots combining mission, members and money. Mission is the central thrust of an NPO, the very reason for its existence (Drucker 1989; Oster 1994).
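As a hedged illustration of mission-oriented ratio analysis, two ratios commonly computed for membership NPOs are sketched below; the specific ratios and figures are generic examples, not necessarily those of the paper's model:

```python
def program_expense_ratio(program_expenses, total_expenses):
    """Share of total spending devoted directly to the mission
    (program services) rather than administration or fundraising."""
    return program_expenses / total_expenses

def months_of_spending(available_funds, monthly_expenses):
    """Liquidity: how many months the NPO could operate on the
    funds currently available."""
    return available_funds / monthly_expenses

# Hypothetical membership-NPO figures.
per = program_expense_ratio(800_000, 1_000_000)   # 0.8 of spending is mission-directed
runway = months_of_spending(250_000, 83_333)      # roughly 3 months of liquidity
```

The point of a mission-based approach is that such ratios are interpreted against the organisation's mission and membership profile, not against for-profit benchmarks.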
Chinese Currency Forecasts and Capital Budgeting
Dr. Ilan Alon, Rollins College, Winter Park, Florida
Dr. Ralph Drtina, Rollins College, Winter Park, Florida
Forecasting the Renminbi (also called RMB or Yuan) is crucial for any type of capital budgeting or long-term investment assessment in China. This article makes a dual contribution by first showing how to use exchange rates in a capital budgeting model and, second, by forecasting the 10-year RMB-US dollar exchange rate with the help of scenario analysis and using this forecast in a capital budgeting model. We believe that the RMB will appreciate against the dollar in the decade to come, making the net present value of long-term invested capital higher in US dollar (USD) terms. China's urban middle class, estimated at 50 million, is growing rapidly and has the disposable income to buy consumer products favored in developed economies worldwide (Browne, 2006). Foreign direct investment in China reached $60.3 billion in 2005, and this number is expected to grow. For many companies, investing is not an option but a necessity, because the opportunity cost of not investing can be quite high. China is now the leading host of foreign direct investment in the world. Investing in China, however, does require a company to take risks, among them political, economic, country, and project-related risks. Changes in the exchange rate, in particular, are relevant to almost any type of foreign direct investment, and with the liberalization of the Chinese Yuan, currency conversion is now a growing concern in China. This paper makes a dual contribution by first presenting a model for including changes in the exchange rate in the capital budgeting process and, second, developing a forecast for the Chinese Yuan. Given the importance of China to today's world investment and the changes that are occurring in the currency, the article discusses a salient global management issue. For the capital budgeting process, foreign companies investing in China must make an estimate of the long-term expectation for conversion of the Chinese Yuan.
Such an estimate is needed in order to calculate the profitability, required return, and present value in the home currency. Currency conversion is particularly critical in developing capital budgets since calculations depend on the accuracy of forecasting the amount and the timing of project cash flows. In this article, we offer a practical means for evaluating the potential for long-term revaluation of the currency when making capital investments in China.
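The mechanics of folding a currency forecast into a capital budget can be sketched as follows. The figures are purely hypothetical and the constant annual appreciation rate is an assumed simplification of the article's scenario analysis:

```python
def npv_home_currency(cny_cash_flows, spot_cny_per_usd, annual_appreciation,
                      usd_discount_rate, initial_outlay_usd):
    """NPV in the home currency: convert each year's CNY cash flow at a
    forecast exchange-rate path, then discount at the USD required return."""
    npv = -initial_outlay_usd
    rate = spot_cny_per_usd
    for t, cf in enumerate(cny_cash_flows, start=1):
        rate /= (1 + annual_appreciation)   # fewer CNY per USD as the RMB appreciates
        usd_cf = cf / rate                  # convert the year's CNY flow to USD
        npv += usd_cf / (1 + usd_discount_rate) ** t
    return npv

# Hypothetical project: CNY 10M per year for 5 years, spot rate 8.0 CNY/USD,
# 3% assumed annual RMB appreciation, 12% USD required return, $4M outlay.
npv = npv_home_currency([10e6] * 5, 8.0, 0.03, 0.12, 4e6)
```

Running the same project with a flat exchange rate shows the article's point: an appreciating RMB raises the USD value of local-currency cash flows and hence the home-currency NPV.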
A Study on the Information Transparency of the Involvements by Venture Capital—Case from Taiwan IT Industry
Dr. Dwan-Fang Sheu, Takming College, Taiwan
Hui-Shan Lin, Deloitte, Taiwan
A series of financial scandals involving Tung Lung Hardware, Central Bills, Goldsun, and Taichung Commercial Bank occurred in Taiwan due to unsound corporate governance, especially weak information transparency. This study explores the role of venture capital in the information transparency of IPO companies in the IT industry from 2001 to 2003. Regression analysis is used to explore the relation between the information transparency of the invested companies and variables including whether venture capital is involved, the shareholding rate of venture capital, the number of venture capital firms, and whether venture capitalists are appointed as directors and supervisors. The analysis is conducted year by year based on the "Criteria Governing Information to be Published in Annual Reports of Public Companies" as amended by the Securities and Futures Bureau in 2001, 2002, and 2003. Empirical results indicate that information disclosures are significantly positively correlated with the investments of venture capital, their shareholding rate, and the number of venture capital firms in all samples of 2001, 2002, and 2003. Only in the samples of 2001 and 2002 is venture capital service as directors and supervisors significantly positively correlated with information disclosures. Because of their involvement, venture capital companies have the ability to ask the invested companies to disclose more relevant information, strengthen information transparency, and minimize the possibility of the concealment of information by insiders. The financial crisis caused by the US Enron bankruptcy at the end of 2001 shook the confidence of the US stock market. A series of subsequent frauds at large companies such as WorldCom and Merck involved ever larger amounts. Investors became aware of the discrepancy between the reliability of financial statements and the values of companies. As a result, not only did known financial crises of individual companies emerge, but investor confidence also crashed.
In Taiwan, a series of scandals in 1998, such as Tung Lung Hardware, Central Bills, Goldsun, and Taichung Commercial Bank, indicated that the alert function of financial examination did not work, with the result that the responsible persons of enterprises utilized their affiliates and group financial institutions to manipulate capital, obtain excessive loans, and oversell company assets. These fraudulent practices not only caused financial crises for the companies, but also brought worry to the financial market. In particular, after the Enron case, investors suddenly became aware of the importance and value of a company's information transparency, in which a strict and fair accounting system should play an important role. When the financial information a company discloses to the public cannot completely satisfy investors' desire for awareness, how to increase the transparency of a company through "value reporting" becomes an issue in implementing corporate governance. Venture capital (VC) firms are promoters of the high-tech industry in Taiwan, mainly because of their aggressive and strategic "post-investment management" functions. In addition, the management of venture capital firms focuses on promoting corporate governance principles such as regulatory compliance, information disclosure, and transparency, in order to protect their rights and further increase the value of the invested companies. Barry et al. (1990), Megginson and Weiss (1991), and Lin (1996) suggested that venture capital companies perform the functions of supervision and verification. The supervision function argues that aggressive involvement and participation in the decision-making of the invested company will be recognized by market participants, resulting in better performance of the invested company and thus reducing the information asymmetry between issuer and investors.
The verification function means that outside economic entities, because of two features (understanding information about the quality or prospects of a company, and maintaining their reputation for perpetual operation and development), have the ability and the incentive to faithfully disclose and reflect the true value of a company; therefore, the interested parties of a company can acquire private information about its quality or prospects through these entities' economic acts, as if such information were assured by them. This study intends to explore the effect of venture capital involvement on the information transparency of the invested company. IPO (Initial Public Offering) companies in the IT industry are selected as the objects of this study. Megginson and Weiss (1991) suggest that a company invested in by venture capital will better attract a reputable underwriter, and that the time and cost of listing will be decreased. The existence of venture capital can reduce the information asymmetry between the listed company and financial specialists, and between the listed company and investors. Admati and Pfleiderer (1994) indicate that, lacking insider investors such as venture capital, a company will not disclose all private information, such as the formation of new securities, the underwriting price, and the selection of the underwriter, resulting in agency problems of over- or under-investment. This study argues that the involvement of venture capital in a company will enhance the soundness of its corporate governance mechanism and its level of information transparency. Therefore, the following hypotheses are developed: information transparency of a company with venture capital involvement is better than that of a company without; the higher the shares held by venture capital, the better the information transparency of the invested company; and the greater the number of venture capital firms, the better the information transparency of the invested company.
The Effect of Corporate Identity Changes on Firm Value An Empirical Investigation
Dr. Ceyhan Kilic, New York Institute of Technology, New York, NY
Dr. Turkan Dursun, New York Institute of Technology, New York, NY
The objective of this study is to examine the value creation effects of company name change announcements. Additionally, the wealth effects of the company type (consumer versus industrial goods companies) and of the type of name change (partial versus complete) are investigated. An event-study methodology with a multivariate regression model is employed. The final sample included 44 name change announcements made by U.S. companies. The empirical results indicate that name changes add to firm value. Furthermore, it was found that name changes made by industrial goods companies with a monolithic identity reduce shareholders' wealth significantly, whereas name changes made by consumer goods companies with a branded identity do not affect investors' perception of firm value. In terms of the type of name change, partial name changes generate positive and significant stock returns. Managerial implications and future research suggestions are also provided. A corporate name change is a major strategic decision. The last two decades have witnessed a continued increase in name-changing activity among U.S. corporations. Every year hundreds of companies confront the challenge of changing their names and carrying out the additional activities associated with corporate name changes. Name changes have often resulted from friendly or hostile takeovers, corporate acquisitions, spin-offs, restructuring, mergers, and new strategic directions (Morris and Reyes 1991; Morley 1998). According to statistics from the Schecter Group Name Change Index, the number of name changes by publicly traded companies rose to 197 in 1992, 28.9% higher than in 1991, with financial services leading in name changes (Marketing News 1993, p.1; Slater 1989). The Wall Street Journal reported that in 1995 a total of 1,153 companies changed their names, up 4% from 1994. 
More than half of these companies changed their names because they were involved in some kind of merger, restructuring, or acquisition activity. The name and logo are the two basic elements of corporate identity. Corporate identity is "a visual statement of who and what a company is" (Gregory and Wiechmann 1991, p.61). The effect of a name change, or identity change, can be dramatic for the company's shareholders in both positive and negative ways (Ferris 1988). The potential effect of identity change via name change on firm value therefore needs to be investigated. A stream of research on company name changes has focused on the impact of a name change on the company's stock price (e.g., Ferris 1988; Horsky and Swyngedouw 1987; Madura and Tucker 1990; Morris and Reyes 1991). These studies have utilized an event-study methodology to investigate the possible association between various aspects of a change in a company's name and the performance of its common stock in financial markets. Horsky and Swyngedouw (1987) examined the effect of a new name on the firm's profit performance, and what type of firm is more likely to benefit, using a sample of 58 companies that made name changes. Ferris (1988) views a corporate name change as a signal sent by the firm's management to its current and potential owners/shareholders about future firm profitability; corporate name change announcements between 1983 and 1985 were used to examine the impact of a name change on firm performance. Madura and Tucker (1990) attempted to determine how the stock market reacted to savings institutions that removed the "savings and loan" designation from their names. Their sample included only 12 savings institutions that engaged in mainly "cosmetic" name changes from 1987 through 1989. 
Morris and Reyes (1991) explored whether there is a significant difference between the excess stock returns of companies whose new names reflect the five functional characteristics of a well-chosen name suggested by the relevant literature (distinctive, flexible, memorable, relevant, and positive). A random sample of 28 firms that undertook "pure" name changes during the period 1979-1985 was selected. The general objective of this study is to investigate the announcement effect of corporate identity changes through company name changes on firm value, using more current data over a relatively longer time period. More specifically, this study attempts to identify whether the reaction of the financial market differs by type of company (industrial versus consumer goods companies) and by name characteristics (partial versus complete name changes). Neither the management nor the marketing literature has produced substantial conceptual and empirical research on the effects of corporate name changes. In this area of research, it is important to conduct replication studies to establish the consistency, or inconsistency, of the empirical results of similar studies over time. So far, this need has not been fulfilled by the current literature, and this study partially aims to fill it as well. Ferris (1988) argues that one crucial aspect of agency theory is the asymmetry of information available to a firm's owners and its managers. Agency theory proposes that the principal contracts with an agent to conduct the necessary transactions of the business in an integrated fashion. The transactions are regarded as the foundation for the profitable conduct of the firm, and they must be based on decisions rationally made to accomplish the objectives of the principal (Winfrey and Austin 1996). 
Ferris (1988) characterizes a firm as one "where the shareholders are the owners and the managers are agents hired to serve the owners, but are motivated by their self-interest" (p.41). The typical agency model deals with a principal and an agent in clearly superior-subordinate roles, but the traditional roles played by the partners may be reversed, making the agency relationship reciprocal. According to Winfrey and Austin (1996), the reciprocal nature of this relationship may cause information asymmetry and monitoring problems. In a firm, the owners (shareholders) cannot directly control or observe all the transactions of a manager, and each party acts in its own self-interest. The agents have an interest in eliminating or reducing the conflict between principal and agents, given the asymmetry of information between the two parties (Ferris 1988). According to the signaling hypothesis, economic information exclusively possessed by management is conveyed to shareholders via various signals, which might reduce the problem of information asymmetry. A corporate name change can be regarded as such a signal to current shareholders and potential investors in the financial market. A name change can communicate a variety of messages to the financial community; it may simultaneously carry both negative and positive messages, and each member of the financial market perceives, evaluates, and responds to this information differently.
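The event-study logic these papers share can be sketched in a few lines. The example below uses simulated returns (all figures hypothetical): it estimates the market model over an estimation window, then computes abnormal returns (AR) and the cumulative abnormal return (CAR) over a three-day window around a name-change announcement.

```python
import numpy as np

# Simulated daily returns for the estimation window (120 trading days).
rng = np.random.default_rng(0)
r_m_est = rng.normal(0.0005, 0.01, 120)                  # market returns
r_i_est = 0.001 + 1.2 * r_m_est + rng.normal(0, 0.005, 120)  # firm returns

# Market model R_i = alpha + beta * R_m, fitted by OLS.
X = np.column_stack([np.ones_like(r_m_est), r_m_est])
(alpha, beta), *_ = np.linalg.lstsq(X, r_i_est, rcond=None)

# Event window: days -1..+1 around the announcement (hypothetical values).
r_m_evt = np.array([0.002, -0.001, 0.003])
r_i_evt = np.array([0.010, 0.004, 0.006])
ar = r_i_evt - (alpha + beta * r_m_evt)   # abnormal returns
car = ar.sum()                            # cumulative abnormal return
print(round(car, 4))
```

A positive and statistically significant CAR across the sample would indicate that the announcements create shareholder value; significance testing and the multivariate regression on company-type dummies are omitted here for brevity.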
Estimating Costs of Joint Products: A Case of Production in Variable Proportions
Dr. Ying Chien, University of Scranton, Scranton, PA
Dr. Roxanne Johnson, University of Scranton, Scranton, PA
The purpose of this paper is to offer two different joint cost allocation models to add to the traditional cost allocation methods with which we are most familiar. Cost accounting, of which this concept is an essential component, is becoming more and more important as the need for the control of costs, no matter how marginal such controls may be, is recognized within the business community. The importance of cost accounting is evident in that cost information is essential for production planning, budgeting, product pricing, and, inevitably, inventory valuation for financial reporting purposes. As in all matters significant to the preparation and dissemination of financial statements, one must recognize the use of estimates, arbitrary allocations, and alternative methods of constructing the final information that constitutes the building blocks of these statements. This is nowhere more evident than in the various methodologies used to attribute joint costs to product lines for purposes as varied as decision-making, planning, and inventory valuation. In this paper, two new methods, using a multiplicative total joint cost model and an additive total joint cost model respectively, are proposed to estimate production costs for joint products produced in variable proportions over a period of time. The two models, their characteristics, and the procedures for estimating production costs for individual joint products are described, and numerical examples are presented to illustrate their application. Cost information is vital for production planning, cost control, product pricing, inventory valuation, and, ultimately, financial reporting. When a group of different products is produced simultaneously by a single production process or a series of production processes, joint costs occur up to the point where the joint products are separated. 
Joint costs as a component of cost accounting are becoming more and more important as companies in a variety of industries join preexisting firms in manufacturing joint-cost-based products such as oil, beef, and chemicals. Previously, the purpose of joint cost allocation has been to attribute joint costs to major product lines in order to meet financial reporting requirements. The purpose of this paper is to add two new approaches to the preexisting joint cost allocation techniques currently in use, approaches that are at once sophisticated and straightforward to apply. Traditional cost allocation methods, such as the current sales method, the physical units method, and the relative sales value method, serve mainly the purposes of inventory costing and product pricing [Barton and Spiceland, 1987; Biddle and Steinberg, 1984; Horngren et al., 2006, pp. 565-573]. All assume a linear database, and all restrict cost allocation techniques to "how it's always been done." Thus, most of these traditional methods attempt to assign costs to the individual joint products in relation to their relative revenue-generating power, or simply to get the allocation done. The calculations are based on a single observation of cost data at a given point in time. In a rapidly changing world, this assumption cannot be maintained as an accurate basis for decision-making, planning, and cost control, even though it will still meet the requirements of financial reporting. The purpose of this paper is therefore to propose new, more appropriate methods of estimating production costs for joint products produced in variable proportions over a period of time. There are many situations in which a firm has the ability to vary, at least to some extent, the proportions in which joint products are produced. 
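As a concrete instance of one of the traditional single-observation methods mentioned above, the relative sales value method splits the joint cost in proportion to each product's sales value at the split-off point. The figures below are hypothetical:

```python
# Relative sales value allocation at a single point in time (hypothetical).
joint_cost = 9000.0
sales_value = {"gasoline": 12000.0, "fuel_oil": 6000.0}  # at split-off

total = sum(sales_value.values())
allocation = {p: joint_cost * v / total for p, v in sales_value.items()}
print(allocation)   # {'gasoline': 6000.0, 'fuel_oil': 3000.0}
```

Because the allocation tracks revenue-generating power at one instant, it says nothing about how total cost responds when output proportions change, which is precisely the limitation the paper's time-series models address.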
For example, a refinery manager may regulate the quantities of petroleum products, such as gasoline and fuel oil, produced from a given amount of crude oil. As a result, a joint cost comprising crude oil cost and manufacturing cost must be allocated among the resulting petroleum products in order to derive cost information, such as individual average costs and marginal costs, for production planning and pricing decisions. The very word "allocation" alludes to the inevitable, but heretofore accepted, inaccuracies of the resulting calculations. The value of this attribution for decision-making, production planning, and cost control is therefore dubious, as the requirements of financial accounting do not necessarily meet the needs of cost accounting. However, any joint cost allocation method considered under the cost accounting umbrella should be subjected to cost-benefit analysis when deciding how many resources to dedicate to a particular technique: the more complex the technique becomes, the more carefully it must be weighed against the additional accuracy it is perceived to gain. When a firm is able to vary the proportions of production of the joint products over time, it is possible to measure production costs for each of the joint products by examining the mathematical relationship between joint costs and the production of the joint products.
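The idea of estimating product costs from observations over time can be sketched as follows. This is our illustration of an additive total-joint-cost specification, not the paper's exact model: with output proportions varying across periods, fitting C_t = b0 + b1*q1_t + b2*q2_t by least squares yields b1 and b2 as estimated marginal costs of the two joint products. The refinery figures are hypothetical and constructed to fit the model exactly.

```python
import numpy as np

# Hypothetical per-period data: quantities of two joint products (q1, q2)
# and the total joint cost, with proportions varying across periods.
q = np.array([
    [100, 50],
    [120, 40],
    [ 90, 70],
    [110, 60],
    [130, 30],
], dtype=float)
cost = np.array([2500., 2700., 2550., 2750., 2750.])  # = 500 + 15*q1 + 10*q2

# Fit the additive model C = b0 + b1*q1 + b2*q2 by least squares.
X = np.column_stack([np.ones(len(q)), q])
(b0, b1, b2), *_ = np.linalg.lstsq(X, cost, rcond=None)
print(b1, b2)   # estimated marginal costs of products 1 and 2
```

Here the fit recovers marginal costs of 15 and 10 per unit; with real data the residuals would be nonzero and the estimates would carry sampling error.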
Board Control and Employee Stock Bonus Plans: An Empirical Study on TSEC-Listed Electronic Companies in Taiwan
Chiaju Kuo, MingDao University, Taiwan
Dr. Chung-Jen Fu, National Yunlin University of Science & Technology, Taiwan
Yung-Yu Lai, The Overseas Chinese Institute of Technology, Taiwan
This study examines the correlation between board control and employee stock bonuses in TSEC-listed electronic companies in Taiwan, from both corporate governance and regulatory perspectives. In addition, the soundness of regulations regarding board control is examined. This empirical research differs from previous studies in that the different roles played by board directors and supervisors are identified more clearly, and more accurate data on board control are used. The evidence supports our argument that, owing to their different characteristics, it is inappropriate to combine directors' ownership with supervisors' ownership, or to combine the number of directors and supervisors into a single explanatory variable, as previous studies have done. The main contribution is that we examine the influence of directors and supervisors separately, taking into account the two-tier structure of the corporate governance system in use in Taiwan. In Taiwan, in order to assist companies listed on the Taiwan Stock Exchange Corporation (TSEC) and the GreTai Securities Market (GTSM), collectively referred to as "TSEC/GTSM-listed companies", to establish sound corporate governance systems, and to promote the integrity of the securities market, the TSEC and GTSM jointly issued the "Corporate Governance Best-Practice Principles for TSEC/GTSM Listed Companies" on October 4, 2002, to be followed by TSEC/GTSM-listed companies. Many executives credit employee stock bonus plans with recruiting innovative employees and helping Taiwan's high-tech companies become globally competitive. While a number of studies have examined the relationships between employee stock bonus plans and variables such as firm performance, corporate value (e.g., Sue 2004; Wu 2004), and compensation contracts (e. 
g., Chang 2004; Li 2003; You 2003) in Taiwan, relatively few studies have investigated the correlation between board characteristics and employee stock bonus plans. As such, this study focuses on the correlation between board control and employee stock bonus plans from corporate governance and regulatory perspectives. Under the taxation rules in Taiwan, the total amount of cash compensation is taxed at the personal income tax rate, yet bonus shares are taxed at par value even when market prices are higher than par. A high level of employee bonus grants will benefit employees at the expense of stockholders' wealth, considering that the distribution of employee stock bonuses dilutes the firm's EPS. Furthermore, in Taiwan, the qualification requirements for employees entitled to receive a dividend bonus, including employees of subsidiaries of the company meeting certain specific requirements, may be specified in the articles of incorporation. The arguments about the ways dividend bonuses are distributed and the amounts paid concern the transparency of the decision-making process, the independence of the related decision-makers, and the rationality of the amounts distributed. Because plans for surplus earnings distributions are proposed by boards of directors, this study develops and then tests theoretical hypotheses relating the level of employee stock bonus plans to the governance effects of directors and supervisors. With the Introduction as the first section, this paper is further organized into five sections: Section 2 develops the hypotheses; Section 3 describes the sample selection and empirical design; Section 4 presents the empirical results, mainly concerning the association between the percentage of employee stock bonuses granted and the board and ownership structure variables; and Section 5 contains sensitivity tests to determine the robustness of the results to alternative specifications. 
A summary and conclusion are then provided in Section 6. Agency theory has been one of the most important theoretical paradigms in finance and accounting during the past two decades. It explicitly deals with conflicts of interest, incentive problems, and mechanisms for controlling agency costs. Compensation contracting is one of the mechanisms used to induce employees to act in the best interests of the firm. Possible factors that may influence the percentage of employee stock bonus grants can be divided into five categories: 1. firm performance; 2. industry; 3. firm size; 4. risk; and 5. board control. Since the research focuses on high-tech firms in Taiwan, we do not further control for industry, whereas we control for firm performance, firm size, and risk by treating them as control variables in the regression model. Taking into account that the board of directors proposes the earnings distribution, which shareholders then ratify at the shareholders' meeting in Taiwan, board control may be the most important factor influencing the percentage of employee stock bonuses granted. In this research, board control is divided into eight essential elements based on the research of Ittner et al. (1997):
Measuring the Performance of ERP System --- from the Balanced Scorecard Perspectives
Mei-Yeh Fang, Chihlee Institute of Technology, Taipei, Taiwan, R.O.C.
Dr. Fengyi Lin, Chihlee Institute of Technology, Taipei, Taiwan, R.O.C.
Enterprise resource planning (ERP) systems are commercial software systems that can be defined as customizable, standard application software integrating business solutions for the core processes and the main administrative functions of an enterprise. Traditionally, ERP performance measures focus on financial indicators, which tend to reflect past performance; this study therefore proposes a Balanced Scorecard (BSC) approach, a framework that provides a comprehensive set of key perspectives for simultaneously evaluating overall ERP system performance. Moreover, we empirically investigated Taiwanese public companies that have implemented ERP systems to explore whether different corporate ERP objectives affect post-ERP performance and whether the approach can translate a company's vision and strategy through all levels of the organization. Adopting the Balanced Scorecard increases the completeness and quality of ERP implementation reports and raises awareness of the relevant factors. Based on the research findings, we provide a regression model to measure the performance of ERP systems and find that financial perspectives are closely related to non-financial perspectives. ERP systems integrate business solutions for the core processes (e.g., production planning and control, warehouse management) and the main administrative functions (e.g., accounting, human resource management) of an enterprise (Rosemann and Wiese, 1999; Skok and Legge, 2002). Companies that implement ERP systems have the opportunity to redesign their business practices using templates embedded in the software (DeLone and McLean, 2003; Chesley, 1999; Huang et al., 2004). 
Many companies implement ERP packages as a means of achieving strategic objectives such as reengineering existing processes, performing supply chain management, supporting e-commerce, integrating ERP with other business information systems, reducing inventory costs, replacing existing legacy systems, meeting the demands of multinational enterprise competitiveness, enhancing the enterprise image, and evolving toward e-business (Minahan, 1998; Mirani and Lederer, 1998; Pliskin and Zarotski, 2000; Davenport, 2000). Because of ERP's broad functionality, a company can typically replace much of its legacy systems with ERP applications, providing better support for these new business structures and strategies. However, the advanced IT is implemented not simply for more and faster data processing, but as part of a management philosophy that can address the measurement of an organization, allow feedback, and facilitate communication between all management levels. In order to structure the management of ERP software, the related tasks can be divided into the process of implementing ERP software and the operational use of ERP software. For the evaluation of both tasks, the Balanced Scorecard, a framework for structuring the relevant key performance indicators for performance management (Kaplan and Norton 1992; Kaplan and Norton 1993), can be applied (Walton 1999; Reo 1999; van der Zee 1999; Rosemann and Wiese 1999; Brynjolfsson and Hitt 2000). The Balanced Scorecard enables translation of a company's vision and strategy into a coherent set of performance measures that can be automated and linked at all levels of the organization. Organizations have come to realize the importance of a strategic feedback and performance measurement/management application that enables them to more effectively drive and manage their business operations (Edwards, 2001). 
Besides the traditional financial measures, the Balanced Scorecard accounts for a wider range of ERP effects (Maloni and Benton, 1997), as it consists of four perspectives: financial, internal process, customer, and innovation and learning. Thus, it also includes non-financial and less tangible aspects such as implementation and response time or the degree of ERP-supported business functions. This study selected ERP performance measures from the related literature (DeLone and McLean, 2003; Mirani and Lederer, 1998; Mabert et al., 2000). We propose a Balanced Scorecard approach to measure implemented ERP system performance from the four abovementioned perspectives. The primary objectives of this research are (1) to examine ERP system performance using the Balanced Scorecard approach; (2) to explore whether different corporate objectives for ERP implementation affect post-ERP performance, so as to provide insight into ERP implementation; and (3) to study the relationship between financial and non-financial performance measures of ERP systems. The rest of the study is organized as follows. Section 2 reviews the relevant literature on ERP systems and discusses how the Balanced Scorecard approach can be used to evaluate the implementation of ERP software. The research methodology and the analysis of the performance measures of ERP implementation with the Balanced Scorecard are given in Section 3. Section 4 presents the findings of our study. ERP systems can push an organization towards generic processes even when customized processes may be a source of competitive advantage (Davenport, 1998).
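The claimed link between financial and non-financial perspectives can be illustrated with a small sketch. The survey scores below are simulated, not the paper's data: a financial-perspective score is generated so that it tracks a composite of the three non-financial BSC perspectives, and the correlation between the two is then measured.

```python
import numpy as np

# Simulated BSC survey scores on a 1-5 scale for 40 hypothetical firms.
rng = np.random.default_rng(42)
n = 40
customer = rng.uniform(1, 5, n)
internal = rng.uniform(1, 5, n)
learning = rng.uniform(1, 5, n)

# Composite non-financial score, and a financial score assumed (for this
# illustration) to depend on it plus noise.
nonfin = (customer + internal + learning) / 3
financial = 0.8 * nonfin + rng.normal(0, 0.2, n)

r = np.corrcoef(nonfin, financial)[0, 1]
print(round(r, 2))
```

A high correlation in real survey data would support the paper's finding that financial and non-financial perspectives move together; the full study instead estimates a regression model across the four perspectives.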
Endless Surpluses: Japan’s Successful International Trade Policy
Dr. James W. Gabberty, Pace University, NY
Dr. Robert G. Vambery, Pace University, NY
In 1991, Akio Morita, chairman of Sony, and Shintaro Ishihara, a member of the Japanese Diet, published a book titled "A Japan That Can Say No". The book called for Japan to take a much more self-assertive and aggressive attitude toward the rest of the world (especially the U.S.) in its diplomatic and business relations. Its caustic tone, with chapter titles such as "America Itself is Unfair", "American Barbaric Act!", and "Let's Become a Japan that Can Say No", caused much consternation in the West. Nonetheless, before and after the publication of this book, the U.S.-Japan trade deficit was (and still is) enormous [Morita, Shintaro]. Consequently, Americans who express concern about large and persistent trade deficits are not engaged in Japan bashing; rather, they may be strong supporters of free trade who are analyzing the effects of large-scale adverse economic phenomena that may need remedy [Lincoln]. In the 1950s and 1960s, the U.S. was the world's leading export powerhouse. The Marshall Plan helped provide the capital needed to rebuild Europe and Japan, and fueled tremendous demand for U.S. exports. During this period, the U.S. ran a substantial trade surplus of about one percent of gross domestic product. The U.S. also benefited initially from strong export demand in a wide range of industries, from low-tech textiles and apparel to sophisticated aircraft and machine tools. Since the 1970s, the U.S. has moved from a trade surplus to a deficit position, as Europe and Japan began to compete effectively with the U.S. in a range of industries. Now China has come online as a major trading partner, and it has not only become a surplus trading partner with the U.S. but has been so successful at penetrating the U.S. market that it has eclipsed Japan in its trade surplus position. The notion of these two countries now intensely selling their exports into the U.S. 
market unabated is alarming, as noted economists perpetually warn that this trade deficit position is simply untenable. The longer the deficit problem is ignored, the harder it becomes to curtail the deficit's growth effectively. For many, the trade imbalance seems always to have been a feature of Japan-U.S. trade relations. Indeed, it is an almost historical reflex for U.S. consumers to cite the higher quality of Japanese products as the root cause of the trade imbalance, buttressed by the lack of similar-quality products produced by U.S. manufacturers and made available to Japanese customers. Although partly a false perception, it nonetheless helps to perpetuate the trade deficit with Japan. A glance back at the early days of the trade imbalance helps put the current deficit into perspective. The relationship evolved as follows. In the 1970s, when Japanese dominance in certain industries became evident, the quality of certain U.S. products, in industries such as automobiles, began to slip. The domestic automobile producers at that time enjoyed a market share of approximately 90%, and there was no meaningful foreign competition to drive improvements to their products. Many U.S. producers thus fell into a false sense of market supremacy, and this self-absorbed attitude let production quality slip. The new Japanese automotive entrants of the same period lacked major quality or technological sophistication, but they had fewer defects and were cheap to produce, a chilling hallmark of the Chinese imports into the U.S. witnessed today. U.S. automobile producers all but ignored repeated calls by the marketplace to address the worsening frequency of quality defects in their own products. 
Similarly, American consumers, unwilling to live with pervasive quality problems, shifted their purchasing patterns away from the nationalist tendency to "buy American" and began to purchase the less expensive, less complex vehicles produced in Japan that were beginning to appear on U.S. showroom floors. The Japanese won market share in the automobile sector at a not-so-gradual pace. Within a few short years, the popularity of the Japanese automobile increased radically, and the reputation for Japanese quality carried over to other products in the minds of U.S. consumers. Awareness of the alternative products coming from Japan led these consumers to purchase other Japanese imports during the 1980s, when the video game, personal computer, video recorder, and handheld electronics industries began to flourish. U.S. consumers snapped up whatever Japanese goods were available for import while U.S. manufacturers watched in awe at this flurry of economic activity and massive increase in (mostly import) trade [Porter]. By the time U.S. automobile producers became seriously concerned about the slippage of their market share, it was too late: too much time had passed, and the Japanese automobile product range had increased dramatically. The devastation of the domestic automobile industry some twenty years ago has taken nearly a quarter of a century to repair; it was extremely costly, in terms of lost income and jobs, for the U.S. automobile industry, and the repair has been only partially successful in countering the prevailing mindset of U.S. consumers about Japanese product supremacy. Although U.S. automobile manufacturers now produce products that consistently rank near or on par with their Japanese counterparts, the quality stigma remains. Moreover, attempts made by other U.S. 
manufacturing firms to compete against Japanese products as late entrants into the consumer electronics industry have proved futile, as witnessed by the attempt of U.S. television manufacturers to move into the burgeoning flat-panel display sector. Though this account is not a comprehensive history of the reasons trade with Japan produced so widely acknowledged an imbalance, Japan's trade surplus with not only the U.S. but also the world has other roots as well. It is necessary to look back a little further than the 1970s and 1980s. It was during the 1950s that similar events helped destroy former U.S. supremacy in certain industries, and some further discussion helps broaden the understanding of how U.S. manufacturing supremacy began its downturn.
Economic Convergence in the Old and the New Economies of the OECD
Dr. Bala Batavia, DePaul University
Dr. P. Nandakumar, Indian Institute of Management, Kozhikode, and Sodertorn University of Stockholm
Dr. Cheick Wague, Indian Institute of Management, Kozhikode, and Sodertorn University of Stockholm
The optimistic belief that per capita incomes will converge in the course of time, voiced by a number of economists and soothsayers, has been belied by developments of the last few decades. Instead of catching up with the affluent West, the less developed countries of the southern hemisphere have fallen still further behind in income per resident. In this paper, we address this issue for the OECD group of countries and analyze the impact on the income catch-up process of certain new factors that have emerged or become relevant recently. In particular, a distinction is made between the convergence process in the traditional sectors of the economy and in the 'new economy', i.e., the sectors that use significant inputs of information technology. The notion that relatively backward countries, with comparatively low income per capita, will grow faster than the richer nations, thus effectively closing the income gap, has been prevalent for several decades. Such a process of income catch-up clearly occurred in the aftermath of the war years, when barriers to trade and capital flows fell rapidly, heralding a golden age for international commerce that lasted well into the 1960s. In fact, this may have been the most expansive era for trade since the classical age spearheaded by Great Britain. The income catch-up hypothesis postulates that countries with lower per capita incomes will grow faster than the leader with the highest income per capita in a group of trading nations, with the rate of growth related to the income gap relative to the leading nation. This hypothesis has even been invoked in the analysis of the productivity slowdown in the OECD countries since the early 1970s (Lindbeck, 1983). 
Generally speaking, the hypothesis has been considered relevant only in explaining the catch-up process within the group of industrialized nations, which are said to belong to a ‘convergence club’ in the words of Baumol (1986). Baumol et al. (1994) have also postulated that the convergence process among the OECD countries may have run its full course by now, after an extended process of strong convergence in the post-war decades, and this echoes, in this respect, the study by Lindbeck (1983). Testing of the catch-up hypothesis has not been limited to the use of the variable income per capita. The convergence processes with respect to labor productivity as well as total factor productivity have been the subject of scrutiny in recent years, and are important in their own right as indicators of international competitiveness. Normally, convergence in income per capita would imply catch-up also in productivity terms, but there need not be a one-to-one correspondence. It may be noted (OECD, 2002) that total factor productivity (TFP) has been growing at different rates in the OECD countries and that the convergence process in TFP has been quite strong even during periods of relatively weak income convergence. Also, the process may differ between the aggregate economy and different sectors. The importance of such a distinction can be seen in Pilat et al. (2002), who show that labour productivity growth in IT-producing and IT-using sectors has been greater than in other sectors for virtually every country in the OECD area (with some nineteen countries included in their sample). But this also means that the degree of catch-up can vary between sectors, depending on their intensity of IT input usage. This point is further developed in the next section, with supporting data.
It seems worthwhile to emphasize, while dwelling at length on the role played by IT inputs in pushing up productivity growth, that other factors have also played key roles in the growth process in OECD countries in the past decades. Thus, to get a complete picture or model of economic growth, one may have to adopt a growth accounting approach - which may have to be extended in an appropriate manner to include factors other than just the traditional inputs. In this paper, the catch-up processes of both income per capita and labor productivity are modeled. As noted in the previous section, there will not be a one-to-one correspondence between these processes. To see this, we may write the expression for output growth as

y = a·i + (1 − a)·k + a·g + (1 − a)·h    (1)

In (1), output growth y is decomposed into a weighted average of the growth rates of labor, i, and of capital, k, with a representing the wage share. g (= y − i) and h (= y − k) are the rates of productivity growth of labor and capital respectively. From (1) it can be seen that while labor productivity growth increases the rate of growth of output, this effect can be reduced by a fall in capital accumulation or in the rate of growth of the productivity of capital. The productivity growth of labor and capital are also affected by technological change. Disembodied technical progress will also serve to bring about differential developments of income per capita growth and labor productivity growth.
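The growth-accounting decomposition described in the text - output growth as the wage-share-weighted sum of input growth rates plus the correspondingly weighted labor and capital productivity growth rates - is an identity, which can be checked numerically. All rates below are hypothetical, chosen only to illustrate the bookkeeping:

```python
# Hypothetical annual growth rates (decimals)
y = 0.030          # output growth
i = 0.010          # labor input growth
k = 0.040          # capital input growth
a = 0.65           # wage (labor) share

g = y - i          # labor productivity growth, as defined in the text
h = y - k          # capital productivity growth, as defined in the text

# Weighted input growth plus weighted productivity growth recovers output growth
rhs = a * i + (1 - a) * k + a * g + (1 - a) * h
```

Note how a rise in g can be offset on the right-hand side by slower capital accumulation (lower k) or falling capital productivity (lower h), which is the wedge between income and labor productivity catch-up that the paper exploits.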
An Evaluation of Investment Performance and Financial Standing for Life Insurers in Taiwan
Dr. Shu-Hua Hsiao, Leader University, Taiwan
Dr. Shu-Hui Su, Fortune Institute of Technology, Taiwan
Life insurers in Taiwan should set a goal of higher efficiency of investment performance and profitability because the whole market structure changed after the insurance market opened in 1987. Facing increasingly intense competition, insurers may become insolvent when investment performance is inefficient. Hence, to achieve a solvency objective and a competitive advantage, life insurers should maintain their relative investment efficiency and performance. The main purpose of this study is to determine capital investment efficiency based on data envelopment analysis (DEA) results and the Malmquist productivity index (MPI). Hypotheses were created to test whether there is a statistically significant difference among the original domestic life insurers, new entrant domestic life insurers, and foreign branches of life insurers. Results showed that there is no significant difference among those three groups for the MPI. Nan Shan and Hontai are found to have efficient investment performance in terms of overall efficiency and scale efficiency. In addition to Nan Shan and Hontai, Cathay, American, and Manulife are efficient in pure technical efficiency.
As the insurance market structure has changed, a more competitive environment will affect financial profitability. It is important to study the profitability and investment performance of life insurers, because companies may become insolvent when failure leads to declining profit, and even to serious interest spread loss. Evaluating the efficiency of investment performance and guiding companies toward financial improvement are therefore important. In Taiwan, the main source of a life insurer’s profit, financial receipts, depends on investment performance. Premiums received only cover commission and business expenses, although this amount is about eighty percent of total income (Yen, Sheu, & Cheng, 2001). Thus, whether investment performance is efficient or not is a key factor in the overall performance of business management.
To achieve these objectives and competitive advantages, a life insurer should maintain its relative financial efficiency and performance. DEA has been used frequently to measure the performance of banks (Asmild, Paradi, Aggarwall, & Schaffnit, 2004; Krishnasamy, Ridzwa, & Perumal, 2004), insurers (e.g., Hewlitt, 1998), hospitals (e.g., Hu & Huang, 2004), and investments (e.g., Chen & Zhu, 2004). Prior studies mainly focus on measuring business performance using Data Envelopment Analysis (DEA). However, fewer papers have used DEA to evaluate the investment performance of life insurers. Lin (2002) applied DEA to measure efficiency scores and to examine how life insurers in Taiwan have faced the new market structure after deregulation. Results showed no overall efficiency change, no pure technical efficiency change, and no scale efficiency change after deregulation. The findings also suggested, for incumbents, that innovation is the most important factor leading to productivity improvement. Furthermore, Brockett, Cooper, Golden, Rousseau, & Wang (2004) applied DEA to examine the effect of solvency on efficiency for insurance companies. Output variables of that study involved solvency, claims-paying ability, and return on investment. Barr, Siems, & Thomas (1994) used DEA to predict bank failure. Hu & Huang (2004) used both the Mann-Whitney test and Tobit (censored) regression to find the effects of environmental variables on efficiency scores. Apart from DEA, the MPI can further provide a measurement of productivity changes. The main studies which focus on investment issues are Chen & Zhu (2004), Sathye (2002), Ramanathan (2004), and Asmild, Paradi, Aggarwall, and Schaffnit (2004). Ramanathan (2004) applied the MPI to provide further investment improvement through technical efficiency change.
Asmild, Paradi, Aggarwall, & Schaffnit (2004) assessed the productivity changes of banks by MPI and concluded that “the shift of the best practice frontier over time are typically due to changes in technology.” Sathye (2002) analyzed the productivity change of Australian banks from 1995 to 1999, and found that technical efficiency and the Total Factor Productivity (TFP) index declined by 3.1% and 3.5% respectively. Measuring the relative efficiency and investment performance of life insurers in Taiwan by DEA and MPI is the main purpose of this study. The DEA and MPI models were developed by Charnes, Cooper, and Rhodes (1978) and by Fare, Grosskopf, Lindgren, and Ross (1989) respectively. The MPI provides information on technical efficiency change, technological change, pure technical efficiency change, scale efficiency change, and total factor productivity (TFP) change, with which life insurers can revise their input and output factors. In addition, the investment performance of life insurers was compared among the original domestic, new entrant, and foreign companies. Finally, the results can provide information on strategies for raising their competitive ability. The participants of this study, based on an annual report of life insurers in Taiwan, were classified into the following groups: eight old domestic companies, nine new domestic companies, and nine foreign life insurers. Kuo Hua Life Insurance was eliminated because of missing data or incompleteness in its financial annual report. The annual report of life insurers was published by the Republic of China in conjunction with the Life Insurance Association of the Republic of China. This database contains records obtained from insurers’ statutory annual statements.
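DEA efficiency scores of the kind discussed above are obtained by solving one small linear program per decision-making unit. A minimal sketch of the input-oriented CCR envelopment model via scipy - the two-insurer dataset at the bottom is entirely hypothetical, not the authors' data, and is included only to exercise the function:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of decision-making unit j0.
    X: inputs, shape (n_dmus, n_inputs); Y: outputs, shape (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta
    c = np.zeros(1 + n)
    c[0] = 1.0
    # Input constraints: sum_j lambda_j * x_ij <= theta * x_i,j0
    A_in = np.hstack([-X[j0].reshape(m, 1), X.T])
    # Output constraints: sum_j lambda_j * y_rj >= y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[j0]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Hypothetical data: two insurers, one input (capital employed), one output (return)
X = np.array([[2.0], [4.0]])
Y = np.array([[1.0], [1.0]])
scores = [ccr_efficiency(X, Y, j) for j in range(2)]
```

Here the second insurer uses twice the input of the first to produce the same output, so its score is 0.5 while the first is fully efficient; the MPI then compares such scores across periods to separate frontier shift from catch-up.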
Dolorous Songs and Blessings of the Curses
Dr. Kazi Saidul Islam, University of Wollongong, Australia
Dr. Kathie Cooper, University of Wollongong, Australia
Dr. Jane Andrew, University of Wollongong, Australia
The latest trend in accounting arises from the spate of pathetic exoduses of sprinkling stars from the corporate sky around the globe. The direct and domino effects of the corporate collapses are dreadful. The neurotic curses arising out of these collapses remind us of the fact that there is another side to the coin. Severity in accounting scandals and commonality in the nature of collapses have brought in a number of blessings by triggering global consciousness and consensus to root out the diagnosed disease, setting celestial attributes in the governance process, bringing harmony as well as transparency in the disclosure regime, and building a strong knowledge base through continuous education to be provided by the higher educational institutions and professional bodies. Regulatory changes, the emergence of corporate governance codes, mandatory compliance with accounting standards for greater transparency, and thus the emergence of a new accounting order would not have been possible so rapidly without such severity in the corporate ruins.
Songs reflect the mind. Birds and people sing to express their jovial feelings. Again, the songs of the cuckoo in the spring recall the sorrows of losing her pair. Sometimes the songs of people cause tears. Songs on drums cannot be as pathetic as those on a violin or piano. So songs can be dolorous or delightful. Blissful or brutal events determine the sweetness of songs. Our concern is with the songs in the corporate world. Corporate bodies are artificial entities governed and surrounded by many people. Management, regulators and stakeholders are the birds who live on the branches and leaves of the corporate entity to care for their interests and eat apples. When a company runs well, the sweet wind touches everybody living on the tree. The management throws a complacent smile for effective efforts, the regulators for good control, and the stakeholders for having their shares of the company assets and profit.
Their songs are then played on drums, followed by dances or champagne. To the contrary, when a company runs badly and ultimately collapses for a range of reasons, the high-sounding drums are replaced by buzzwords and the shocking songs are played on violins or pianos. The story of songs - dolorous or delightful - in the corporate world can be traced throughout history. Our paper embarks on a story based on the scandal games in the collapse tournament of the new millennium. There is a plethora of studies that have inquired into the causes of accounting scandals, the impact of corporate collapses on society, and remedies for these. But those studies do not properly address the fact that every cloud has a silver lining. The present paper aims to evaluate the two sides of the coin with special emphasis on the blessings of the curses arising out of scandals and collapses. Specific points to be addressed are: the curses attributed to accounting scandals and corporate collapses; the dolorous songs - the negative impact of the curses; and the blessings - the positive impact of the curses. Because of the chronological emergence of the events, these points can be shown with the help of a diagram. Literally, the term “curse” is characterized by nuisance, blight, annoyance or irritation. Perspective determines the meaning and magnitude of the curse. Starvation, health hazards and deprivation are curses in the least developed countries. Corruption, ethical failure, terrorism, military aggression and death are the curses of the day to mankind. This paper deals with accounting scandals and corporate collapses caused by corporate corruption and ethical failure. Scandals are catastrophes nobody wants to endure. Scandals refer to human characteristics that create anarchy and breed irregularities in a social system. These are events that happened in the past and caused harm to self-image and others.
There are many faces of scandals, such as political scandals (the Watergate scandal), organized crime (Mafia and Yakuza), money laundering, sexual harassment, racism, embarrassing emails, outrageous extravagance, and accounting, financial or corporate irregularities. Accounting or corporate scandals are not new. Accounting is as old as civilization. Related scandals and collapses are also as old as accounting. Shakespeare’s ‘Merchant of Venice’ (written in 1596-1598) depicts the greed and scandalous business environment of his time. Johnston of Nabarro Nathanson identified 400 years of financial scandals (http://www.nabarro.com). There are at least two centuries of corporate panics and collapses in Australia (Sykes, 1998). The world witnessed many scandals and collapses in the 80s and 90s. The corporate collapses of the new millennium give testimony to the curses brought about by spectacular accounting scandals caused by the incompetence or greed of directors, auditors and CEOs, who adopted brilliant, creative and illegal means of creating money. While the vicious circle of poverty appears to be the prime curse on the fate of the people of underdeveloped countries, and terrorism and atomic plants appear to be the prime threats to mankind, the appalling accounting scandals and spectacular corporate collapses appear to be the dreadful curses dealing a disastrous blow to the economies of the first-world countries.
Process and Quality Model for the Production Planning and Control Systems
Dr. Halim Kazan, Gebze Institute of Technology, Turkey
Dr. Ahmet Ergulen, Nigde University, Turkey
Dr. Haluk Tanrıverdi, Sakarya University, Turkey
Over the last decades, many industrial sectors have been experiencing profound changes involving both the business environment and the internal organisation. This process has been so deep and radical as to suggest that a new operations management paradigm has emerged. In this new competitive and turbulent environment, effective production planning and control systems have become extremely important to drive improvement efforts. We consider production planning models from a different perspective, assuming that both production and quality are decision variables. Within this class of models, we consider various degrees of control on the part of the producer - including quality, process technology and the control system - to determine how the system is designed, implemented, run, improved and measured with respect to the quality of its outputs. Our intent is to provide an overview of an applicable process and quality model; we present briefly how quality is identified, designed, implemented, run, improved and measured in terms of the appropriate quantity, the appropriate time, and the appropriate level of quality. In particular, the purpose of PP&C is to ensure that manufacturing runs effectively and efficiently and produces products as required by customers. In this article we focus on a process and quality model for production planning and control systems. We have organized the article into two major sections. In the first section we present a framework for the process technology and system. In the second section we discuss the control system and quality models for production planning.
Over the last decades, many industrial sectors have been experiencing profound changes involving both the business environment and the internal organisation. In particular, today’s changing industry dynamics have influenced the design, operation and objectives of production planning and control systems since CAD, CAM and CIM systems came into industrial use.
These systems affected production planning and control by increasing the emphasis on integrated information technology and process flows, flexibility of product customization to meet customer needs, improved quality of products and services, reduced costs, planned and managed movement, reduced cycle time, and improved customer service levels (Bardi, Coyle, & Langley, 1996). On the other hand, typical decisions include work force level, production planning and control, assignment of overtime, and sequencing of production runs. Process models are widely applicable for providing decision support in this context. In this article we focus on a process and quality model for production planning and control systems. We have organized the article into two major sections. In the first section we present a framework for the process technology and system. In the second section we discuss the control system and quality models for production planning. The purpose of PP&C is to ensure that manufacturing runs effectively and efficiently and produces products as required by customers. We do not cover detailed scheduling or sequencing models (e.g., Graves, 1981), nor do we address production planning for continuous processes (e.g., Shapiro, 1993). We consider only various degrees of control on the part of the producer - including quality, process technology and the control system - to determine how the system is designed, implemented, run, improved and measured with respect to the quality of its outputs. Nor do we include continuous-time models such as those developed by Hackman and Leachman (1989). Our intent is to provide an overview of applicable process models; we present briefly how quality is identified, designed, implemented, run, improved and measured in terms of the appropriate quantity, the appropriate time, and the appropriate level of quality.
This section covers the process and quality model (PAQM), production planning & control and competitive advantage, effective PP&C, and the steps in setting up an effective PP&C system (Bardi, Coyle, & Langley, 1996). Production planning and control technology combines the physical and information flows to manage the production system. As with any complex entity, PP&C has several distinct elements. In figure 1 we superimpose these elements on the physical flow of a production system. We position these elements at different places along the physical flow route. Interaction between the elements is not shown. The PP&C function integrates material flow using the information system. Integration is achieved through a common database. Interaction with the external environment is accomplished by forecasting and purchasing. Forecasting customer demand starts the production planning and control activity. Purchasing connects the production system with input provided by the external suppliers. Extending production planning and control to suppliers and customers is known as supply chain management. Some elements are associated with the production floor itself. Long-range capacity planning guarantees that future capacity will be adequate to meet future demand, and it may include equipment, people, and even material.
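The planning decisions described above - how much to produce each period so that forecast demand is met within capacity, at minimum production and holding cost - are often supported by a small aggregate-planning linear program. A minimal sketch in that spirit (the demand, capacity, and cost figures are hypothetical, not from the article):

```python
import numpy as np
from scipy.optimize import linprog

demand = np.array([100, 150, 120, 180])   # hypothetical monthly demand
cap, c_prod, c_hold = 160, 5.0, 1.0       # capacity/period, unit costs
T = len(demand)

# Decision variables: production p_0..p_{T-1}, then inventory I_0..I_{T-1}
c = np.concatenate([np.full(T, c_prod), np.full(T, c_hold)])

# Inventory balance each period: I_t = I_{t-1} + p_t - d_t
A_eq = np.zeros((T, 2 * T))
b_eq = -demand.astype(float)
for t in range(T):
    A_eq[t, t] = -1.0              # -p_t
    A_eq[t, T + t] = 1.0           # +I_t
    if t > 0:
        A_eq[t, T + t - 1] = -1.0  # -I_{t-1}

res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, cap)] * T + [(0, None)] * T)
plan = res.x[:T]
```

Because period-four demand (180) exceeds capacity (160), the optimal plan pre-builds 20 units in period three and carries them as inventory - exactly the kind of trade-off between capacity and holding cost that long-range capacity planning is meant to anticipate.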
Analysis of Financial Performance by Strategic Groups of Digital Learning Industry in Taiwan
Wen-Long Chang, Shih Chien University, Taiwan
Kevin Wen-Ching Chang, Abacus Distribution System Ltd., Taiwan
Jasmine Yi-Hsuan Hsin, Taiwan Federation of Industry, Taiwan
The research focuses on digital learning providers in Taiwan. The digital learning providers are categorized into different strategic groups depending on the strategic dimensions they adopt. Factor analysis is applied to determine the measurement index for the financial performance of these providers, and to further examine the divergence among their financial performance. As a result of the research, digital learning providers in Taiwan can be divided into four strategic groups: the ‘leading group with integration of marketing and sales abilities’, the ‘leading group with external relation management and research and development abilities’, the ‘leading group with human capital and financial management abilities’, and the ‘leading group with niche market management and product innovation abilities’. Among the four, the leading group with integration of marketing and sales abilities shows the best profit-earning ability.
The digital learning (also known as e-learning) industry has been expanding with rapid acceleration in recent years. While people rely on the internet to read, to shop, to talk, and to learn, many countries have embraced barrier-free digital learning as one of their competitive essentials. Through digital learning, knowledge can be obtained more easily, more quickly, and more cheaply. It is believed that digital learning will ultimately provide us with a life-long learning experience of great efficiency and quality. Since digital learning was first initiated in Taiwan in 1998, much research has been conducted on its management models (Barron, 2002; Close, Humphreys and Ruttenbur, 2000), key success factors (Rosenberg, 2000), and system regulations (Anido and Llamas, 2001). Today, the major task for digital learning providers is to design the best competitive strategies, including studying their financial performance.
Past research on the digital learning industry has not included topics regarding financial performance because there were not enough data from the small number of digital learning providers, and many of the providers did not want to share their financial data in their germination stage. Now, these providers are in steady growth, with more new providers joining the market. This year, there are 135 digital learning providers registered with the Industrial Development Bureau, Ministry of Economic Affairs, and some of them are already listed on Taiwan’s stock market. It is much easier to gain access to their financial performance and other business performance information now. As the growth of the internet and fiber communication industries has slowed down since 2000, digital learning providers have been in a four-year self-adjustment period, and today competition has intensified. Therefore, it is the perfect time to study the strategic groups of digital learning in Taiwan: their formation, financial performance, resource allocation, and best strategy. Based on the industry background mentioned above, this paper intends to achieve the following objectives: 1. Discover the characteristics and differences between different strategic groups through analysis of their strategic dimensions. 2. Suggest future investment trends through comparison of the financial performance of different strategic groups. A strategic group refers to business providers with the same or similar strategies (Harrigan, 1985; Hitt and Hoskisson, 2001; McGee and Thomas, 1986; Peteraf and Shanley, 1997; Thomas and Venkatraman, 1988; Peng, Tan and Tong, 2004). The structures of strategic groups change over time and further lead to expansion and growth in the industry; therefore, different strategic groups show different financial performance according to the competitive strategies they adopt (Asker, 1995; Cool, 1985; Fiegenbaum and Thomas, 1990; Hunt, 1972; Newman, 1973, 1978; Schendel and Patton, 1978).
Understanding the formation of strategic groups helps business providers act upon the most suitable strategy and better allocate their limited resources (Asker, 1995; Cool and Schendel, 1987; Galbraith and Schendel, 1983). The value of strategic groups comes from strategy choices, which are regarded as the effective allocation of strategic dimensions or strategic variables. Strategy is the combination of strategic variables (Hunt, 1972). Strategic dimensions are a way to describe different business providers. With the description and measurement of strategic dimensions, the characteristics of business providers and their resource allocation can be identified. Moreover, the differences between business providers can be classified to study the competition model within the industry (Porter, 1980). Because the choice of strategic dimensions can directly influence the study of strategic groups, it has to be made with consideration of industry characteristics and possible growth rates in order to measure the true existence of strategic groups (Houthoofd and Heene, 2002). There have been two analytical models of strategic dimensions in the past. The industrial organization model is based on the perspective of industrial economics. Porter (1980) is one of the exponents of this model; he believes that the environment - including industry and market - is the major determinant of strategy for any business provider. The other is the resource-based model. Its exponents believe that long-term advantage cannot be achieved if there is any environmental restriction. Favorable profit performance can be achieved if business providers choose resources as their major strategy. Grant (1991) and Barney (1991) are the two exponents of this model. With some mergers and some co-opetition, digital learning providers in Taiwan have started to think about the integration of their dominant resources and abilities to achieve long-term competitive advantages (Chang W. L., 2006).
Given a steady external environment, this research paper claims that the resource-based model is most suitable for analyzing the strategic choices made by strategic groups within the digital learning industry. The model will examine the present development of the digital learning industry in Taiwan in order to reach a thorough understanding. Based on the developmental trend mentioned above, the study proposed the first hypothesis:
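The factor-analysis step used to derive the financial performance measurement index can be sketched as follows. The data here are randomly generated stand-ins for providers' strategy and finance indicators, and the dimensions (30 providers, 6 indicators, 4 latent factors) are purely illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
# Hypothetical matrix: 30 providers x 6 strategy/finance indicators
X = rng.normal(size=(30, 6))

# Extract 4 latent factors, mirroring the four strategic dimensions in the paper
fa = FactorAnalysis(n_components=4, random_state=0)
scores = fa.fit_transform(X)      # each provider's score on the 4 latent factors
loadings = fa.components_         # (4, 6): how each indicator loads on each factor
```

In a study of this kind, providers would then be clustered on the factor scores to form the strategic groups, and the loadings would be inspected to label each group (marketing and sales, R&D, human capital, niche management).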
Duality of Alliance Performance
Dr. Noushi Rahman, Pace University, New York, NY
While alliance research has proliferated and branched out to several areas in the past decade, alliance performance remains a misunderstood and little-studied area. A review of alliance performance suggests that it comprises two elements: goal accomplishment and relational harmony. Both are necessary to ensure alliance performance. This paper reviews four theoretical streams in organization research that are relevant to alliance performance. Apparently, extant research has attended to alliance relationship management much more than it has attended to alliance goal accomplishment. This review highlights the need to extend existing theoretical streams in certain directions to further explain alliance performance.
The literature on strategic alliances has flourished tremendously over the past decade. Strategic alliances are enduring, yet temporary, interfirm exchanges that member firms join to jointly accomplish their respective goals. In his review of the state of the alliance literature, Gulati (1998) wrote about five avenues in which the alliance literature has spread out: formation, governance, evolution, performance, and performance consequences. Of these five paths, research on alliance performance has received the least attention: “the performance of alliances remains one of the most interesting and also one of the most vexing questions” (Gulati, 1998: 309). Strategic management research is generally geared toward better performance of the firm. While conceptualizing and measuring firm performance is quite straightforward, the involvement of more than one firm and the permeable boundary of the alliance entity (with the exception of joint ventures) make conceptualizing and measuring alliance performance a messy and daunting task. Performance of an alliance is conceptualized as the extent to which member-specific goals are accomplished by the alliance. However, alliance members may find it difficult to work with each other for a lack of trust and the threat of opportunism.
Consequently, an alliance may fail to perform despite its ability to accomplish alliance-specific goals. Given the importance of maintaining a good working relationship between partner firms, many studies have focused on relational issues arising within alliances. Ironically, as will become evident toward the end of the paper, the current state of strategic management research seldom focuses on the goal-accomplishing or task-oriented aspect of alliance performance. The purpose of this article is to review how major theoretical streams in organization management research explain alliance performance and how these theories can be extended to further our understanding of alliance performance. The paper is divided into four parts. First, I delineate the nature of alliance performance. Second, I review major theoretical streams in organization management as they pertain to alliance performance. Third, I discuss the research implications of this paper. Finally, I describe how alliance managers can benefit from the theoretical conclusions drawn here. Alliances are unique in that they are the only form of economic organization that requires maintaining a relationship, in addition to concentrating on performance issues. Independent firms or firms engaged in spot transactions do not have to maintain relationships. This peculiarity of alliances has drawn tremendous research attention to the topic.
Therefore, it is not surprising that lately the majority of research seems to be focusing on relational angles of alliances, such as trust (Gulati, 1995; Perry, Sengupta and Krapfel, 2004), relational risk (Delerue, 2004; Nooteboom, Berger and Noorderhaven, 1997), opportunism (Parkhe, 1993; Provan and Skinner, 1989; Brown, Dev and Lee, 2000), commitment (Gundlach, Achrol and Mentzer, 1995; Perry et al., 2004), reciprocity (Kashlak, Chandran and Di Benedetto, 1998; Wu and Cavusgil, 2003), relational capital (Heide, 1994; Kale, Singh and Perlmutter, 2000), and relational quality (Arino, de la Torre and Ring, 2001). While the relational issues are critical to alliance effectiveness, another critical element of alliance performance is goal accomplishment. Existing theoretical streams explain alliance performance in terms of either relationship maintenance or goal accomplishment. Of course, conceptualizing alliance performance is different from measuring alliance performance, which can take various paths as well. To avoid the mess of explaining relational and goal-based conceptualization of alliance performance, scholars have adopted alliance satisfaction as a measure of alliance performance (Habib and Barnett, 1989; Killing, 1983; Lui and Ngo, 2005). Alliance satisfaction is, however, reflective of more than just alliance performance. In the words of Hatfield, Pearce, Sleeth and Pitts (1998: 368): “Because the respondents were those individuals in the partner firm who were closest to the joint venture operation, the positive relationship between partner satisfaction and JV survival may reflect a bias for maintaining one’s sphere of influence and power.” Hatfield et al. (1998) argue in favor of goal accomplishment as the preferred measure of alliance performance.
Does Cooperative Learning Enhance the Residual Effects of Student Interpersonal Relationship Skills?: A Case Study at a Taiwan Technical College
Kai-Wen Cheng, National Kaohsiung Hospitality College, Taiwan, R.O.C.
The relative effectiveness of cooperative learning instruction and traditional lecture-discussion instruction was compared for Taiwan technical college students to determine the residual effects on interpersonal relationship skills in accounting courses. A pretest-posttest control group experimental design involving two classes was used. The experimental group students (n=53) received cooperative learning instruction, and the control group students (n=45) received traditional lecture-discussion instruction. The “Interpersonal Relationship Skills Test (IRST)” was used as the research instrument. A multivariate analysis of covariance (MANCOVA) suggested that students taught using cooperative learning instruction scored significantly higher than did students in the traditional lecture-discussion group. The research results showed that cooperative learning indeed enhanced the residual effects of student interpersonal relationship skills and that cooperative learning could serve as an appropriate and worthwhile reference that schoolteachers could apply in their teaching.
Cooperative learning instruction plays an important role in contemporary teaching. Many teachers and researchers have used cooperative learning to enhance learning effectiveness and interaction in the classroom during the last few decades. Cooperative learning incorporates five basic elements: positive interdependence, face-to-face interaction, individual and group accountability, collaborative skills, and group processing (Johnson & Johnson, 1999). Positive interdependence is structured once group members understand that they are linked together for the same goal. Face-to-face interaction means that group members need to be collaborative in fulfilling the assigned tasks. They need to encourage each other’s efforts.
Individual and group accountability means that the whole group must be held accountable for achieving its goal, and each group member must be held accountable for making his or her own contributions to the group and the goal. Collaborative skills mean that teachers should incorporate various social, decision-making, and communication skills into their instruction. Group processing means that group members are allowed to discuss together which group decisions are helpful. As a result, if teachers adopt cooperative learning appropriately, their students' collaborative skills and interpersonal relationship skills are likely to improve. In the competitive arena of modern society, two vital attributes required by businesses in their efforts to outperform their competitors are pervasive team spirit and cohesive force, both of which require employees to have excellent interpersonal relationship skills to facilitate communication. Interpersonal relationships play such an important role primarily because in modern society most jobs rely on the cooperative efforts of groups; few jobs now can be accomplished by individuals alone (Olson & Zanna, 1993). However, in the context of Taiwan's educational institutions, where traditional independent learning is the rule, it is difficult for students to cultivate the excellent interpersonal relationship skills they will need. Under the circumstances of such an imbalance between the supply from Taiwan's academic units and the demand from enterprises, a radical change in teaching instruction is the best way to solve the problem. Therefore, it is worthwhile to explore the relative efficiency of cooperative learning and traditional teaching instruction in terms of their residual effects on student interpersonal relationships in typical classroom settings. The purpose of this study was to document and investigate such a comparison.
Cooperative learning means having students learn by cooperating in small groups; collaborative skills and social skills are listed among the learning targets, and evaluations are made based on the performance of the group. Hence, in cooperative learning with a small group, students acquire collaborative skills and develop the notion of cooperative learning (Vaughan, 2002). The study of cooperative learning has flourished since the 1970s, and based on the theory of cooperative learning, different scholars have created different teaching methods. Among them, the most often adopted method is the Student Team Achievement Division (STAD). STAD was developed by Slavin in 1978. The content, standard, and method of evaluation it employs are similar to those in traditional teaching, so STAD is the easiest change to implement. In addition, its range of application is the broadest and its effect is outstanding. Consequently, this research adopted the STAD method of cooperative learning in the experimental design. Despite constant support for implementing cooperative learning in schools, research results on the comparative efficacy of cooperative learning versus traditional instruction continue to appear in the relevant literature. Most research shows that students' learning effectiveness and interaction favor cooperative learning over the instruction typical of lecture-discussion classrooms (Lazarowita, Baird, & Bowlden, 1996; McManus & Gettinger, 1996; Ciccotello, D'Amico & Grant, 1997; Gillies & Ashman, 1998; Gillies, 1999; Mueller & Fleming, 2001; Gillies, 2002; Vaughan, 2002). However, there is very little research available on the long-term effects of cooperative learning (Gillies, 1999; Gillies, 2002). In particular, only one study has reported on the residual effects of cooperative learning, and that study was conducted in Australia (Gillies, 2002).
Thus, the purpose of this study was not only to compare cooperative learning and traditional teaching instruction, but also to investigate their effects on student interpersonal relationship skills at the end of the semester following the initial experimental semester.
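The pretest-posttest control-group analysis described in the abstract can be illustrated with a simplified sketch: a single-outcome ANCOVA fitted on synthetic data (the study's MANCOVA covariate-adjusts several interpersonal-skill measures jointly, which this one-variable sketch does not attempt). All scores, effect sizes, and noise levels below are hypothetical; only the group sizes (53 and 45) come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: only the group sizes (53 and 45) come from the study
n_exp, n_ctl = 53, 45
pre = rng.normal(70, 8, n_exp + n_ctl)            # pretest IRST scores
group = np.r_[np.ones(n_exp), np.zeros(n_ctl)]    # 1 = cooperative learning
# assumed treatment effect of +5 points on the posttest, plus noise
post = 10 + 0.8 * pre + 5.0 * group + rng.normal(0, 4, n_exp + n_ctl)

# ANCOVA as a linear model: posttest ~ intercept + group + pretest covariate
X = np.column_stack([np.ones_like(post), group, pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

# F-test of the group effect against the covariate-only model
X0 = np.column_stack([np.ones_like(post), pre])
b0, *_ = np.linalg.lstsq(X0, post, rcond=None)
rss_full = np.sum((post - X @ beta) ** 2)
rss_red = np.sum((post - X0 @ b0) ** 2)
F = (rss_red - rss_full) / (rss_full / (len(post) - X.shape[1]))
print(f"adjusted group effect = {beta[1]:.2f}, F = {F:.1f}")
```

Using the pretest as a covariate removes baseline differences between the intact classes before testing the group effect, which is the point of the covariance adjustment in the study's design.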
Measuring Efficiency and Productivity Change in Taiwan Hospitals: A Nonparametric Frontier Approach
Ching-Kuo Wei, Oriental Institute of Technology, Taiwan
This research investigated the productivity of hospitals in Taiwan (550
decision-making units in total) and its changes during 2000-2004, applying
Data Envelopment Analysis together with the Malmquist Productivity
Index to evaluate annual productivity changes. The results showed
that the return-to-scale of medical centers was overly large, and there should
be room for downscaling. As evident from the MPI analysis, from 2003 to 2004
the productivity of hospitals at all levels grew significantly, due to
improved technical efficiency. Furthermore, this research also found
out that after the first year of implementing the National Health Insurance
Global Budget System, the productivity of all hospitals showed deterioration. In
recent years, the management of hospitals in Taiwan has suffered major impacts
from changes in the macro environment, among which the change in the National
Health Insurance Payout Scheme was the most influential. In the past, with
fee-for-service payouts, the hospital could increase its service quantity to
increase its income, but after implementation of the Global Budget System in
July 2002, which intended to control the rise of medical fees through budgets,
the operational efficiency of hospitals was greatly impacted. Many hospital
managements were faced with the crisis of losses or
bankruptcy. Thus, the efficiency of hospital management has become a problem
worth exploring. Based on ownership, Taiwan hospitals can be divided into three
categories: public hospitals, proprietary hospitals, and private hospitals.
Public hospitals are not profit oriented. Proprietary hospitals are a kind of
private hospital, but they are not profit oriented either. Private hospitals, on
the other hand, are mainly profit oriented. In terms of accreditation level,
hospitals can be categorized into three main types: medical centers, regional
hospitals, and local hospitals. Medical centers are large-scaled hospitals
mainly responsible for education, research, training, and highly complicated
medical treatments. Regional hospitals are medium-sized hospitals responsible
for education, training, and complicated medical treatments. Local hospitals are
small-scaled hospitals mainly for training and ordinary medical treatments.
Many studies have applied Data Envelopment Analysis (DEA) models to study
hospital efficiency (such as Sherman, 1984; Ferrier and Valdmanis, 1996;
Chang, 1998; Puig-Junoy, 2000). These studies show that DEA is an excellent
analytical tool for evaluating a hospital's operational efficiency. However, most
studies have focused on cross-sectional data analysis, and seldom discussed the
impact on hospital efficiency before and after implementing a major policy. In
general, DEA studies consider performance analysis at a given point in
time. However, extensions to the standard DEA procedures, such as the Malmquist
Productivity Index (MPI) approach, have been reported to provide performance
analysis in a time-series setting (Charnes et al., 1994). This paper employs
both DEA and MPI models to analyze hospitals' efficiency and productivity
change, and compares the discrepancies before and after implementation
of the Global Budget System. DEA is a non-parametric linear
programming model for frontier analysis of multiple inputs and outputs of
decision-making units (DMUs, e.g., hospitals), developed by Charnes et al. (CCR
model) (Charnes et al., 1978) and extended by Banker et al. (BCC model) (Banker
et al., 1984). A detailed introduction to DEA theory is provided by Cooper et al.
(2000). The CCR model assumes constant returns to scale (CRS), while the
BCC model allows for variable returns to scale (VRS). The input-oriented linear
programming formulation of the CRS model is: minimize θ − ε(Σᵢ sᵢ⁻ + Σᵣ sᵣ⁺),
subject to Σⱼ λⱼxᵢⱼ + sᵢ⁻ = θxᵢₒ for each of the m inputs,
Σⱼ λⱼyᵣⱼ − sᵣ⁺ = yᵣₒ for each of the s outputs, and λⱼ ≥ 0. Through the CRS
model, a DMU's technical efficiency θ can be calculated, where λ is the weight
vector, s⁻ and s⁺ are the slack and surplus variables respectively, ε is a
non-Archimedean constant, x is the input (there are m inputs) and y is the
output (there are s outputs). Banker et al. (1984) proposed the VRS
model, which calculates pure technical efficiency, to decompose
technical efficiency into pure technical efficiency and scale
efficiency. Banker (1984) proposed the most
productive scale size (MPSS) to examine the production scale of inefficient
units. Banker & Thrall (1992) proved with a theorem that when the sum of weights
(λ) of a certain DMUo's reference set equals 1, that is, when Σλ = 1, indicating
that the input of one unit of a production factor produces one unit of output,
the returns to scale remain constant. When Σλ < 1, the DMU is in a situation
of increasing returns to scale, meaning the input of one extra
unit of a production factor can produce more than one unit of output. Therefore,
in order to promote the organization's operational efficiency, the facility scale
should be expanded with more input so as to gain more output. Conversely,
if Σλ > 1, the DMU is in a situation of decreasing returns to
scale, meaning the input of one unit of a production factor will produce less than
one unit of output. Therefore, input should be cut down and the facility scale
should be adjusted to reach the most productive scale size.
According to Fare, Grosskopf and Lovell (1994), the input-oriented Malmquist
productivity change index can be written as M = E × T, where the efficiency
change E = D^{t+1}(x^{t+1}, y^{t+1}) / D^{t}(x^{t}, y^{t}) and the technical
change T = {[D^{t}(x^{t+1}, y^{t+1}) / D^{t+1}(x^{t+1}, y^{t+1})] ×
[D^{t}(x^{t}, y^{t}) / D^{t+1}(x^{t}, y^{t})]}^{1/2}, with D the input distance
function. If the Malmquist Productivity Index and its components are
greater than 1, equal to 1, or less than 1, they indicate progress, no change,
or regress, respectively.
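As a rough illustration of the input-oriented CCR (CRS) model discussed above, the following sketch solves the efficiency linear program for each decision-making unit with scipy.optimize.linprog. The three hypothetical hospitals, with one input (beds) and one output (patients treated), are invented for illustration; the paper's actual model uses multiple inputs and outputs.

```python
import numpy as np
from scipy.optimize import linprog

def dea_crs_input(X, Y):
    """Input-oriented CCR (CRS) efficiency theta for each DMU.
    X: (n, m) array of inputs; Y: (n, s) array of outputs."""
    n, m = X.shape
    s = Y.shape[1]
    thetas = []
    for o in range(n):
        # decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]                  # minimize theta
        # inputs:  sum_j lambda_j x_ij <= theta * x_io
        A_in = np.hstack([-X[[o]].T, X.T])
        # outputs: sum_j lambda_j y_rj >= y_ro  ->  -sum <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n,
                      method="highs")
        thetas.append(res.x[0])
    return np.array(thetas)

# Three invented hospitals, one input (beds) and one output (patients treated)
X = np.array([[100.0], [200.0], [150.0]])
Y = np.array([[50.0], [100.0], [60.0]])
print(dea_crs_input(X, Y))  # frontier hospitals get theta = 1
```

Adding the convexity constraint Σλ = 1 to the same program would give the BCC (VRS) model, and the ratio of the two efficiencies gives scale efficiency.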
Capital Structure: Asian Firms Vs. Multinational Firms in Asia
Dr. Janikan Supanvanij, St. Cloud State University, MN
The study analyzes the financing decisions of Asian firms and multinational firms investing in Asian countries during 1991-1996. The results show that some factors are correlated with firm leverage similarly in both groups. Overall, leverage increases with tangibility and size in Asian firms. For MNCs in Asia, the explanatory variables are related to the short-term financing decision, not the long-term decision. Empirical work in the area of capital structure is largely based on firms in the US and G-7 countries. Without testing the robustness of these findings elsewhere, it is difficult to determine whether these empirical regularities can support the theory. Very few studies extend the test to Asian countries because of data limitations. This paper is the first study that compares the financing decisions of Asian firms to those of MNCs investing in Asia. It analyzes whether capital structure in Asian firms is related to factors similar to those appearing to influence the capital structure of US firms, and whether the financing choice is similar to that of MNCs investing in the area. The determinants of capital structure choice are investigated by analyzing the financing decisions of firms across industries in Japan and other Asian countries including Hong Kong, Singapore, Korea, Thailand, Malaysia, Taiwan, and the Philippines during 1991-1996. This section presents a brief discussion of the attributes that different theories of capital structure suggest may affect the firm's debt-equity choice. Harris and Raviv (1991) find evidence that leverage increases with fixed assets, nondebt tax shields, investment opportunities, and firm size; and decreases with volatility, advertising expenditure, the probability of bankruptcy, profitability, and uniqueness of the product. Booth et al. (2001) examine the capital structure determinants in ten developing countries during 1980-1990 and provide evidence that the determinants are similar to those in developed countries.
In this study, I focus on five factors: tangibility, investment opportunities, firm size, profitability, and volatility. The reasons are that: 1) these factors have shown up most consistently as being correlated with leverage in previous studies; and 2) the data severely limit the ability to develop proxies for the other factors. Rajan and Zingales (1995) also note that the magnitude of nondebt tax shields other than depreciation is not available, and advertising expenditure and R&D expenditure are rarely reported separately. Myers and Majluf (1984) suggest that firms may find it advantageous to sell secured debt. Since there may be costs associated with issuing securities about which the firm's managers have better information than outside shareholders, issuing debt secured by property with known values can avoid these costs. Hence, firms with assets that can be used as collateral may be expected to issue more debt to take advantage of this opportunity. Harris and Raviv (1991) find that leverage increases with fixed assets. Tangible assets are easy to collateralize and thus reduce moral hazard and the agency costs of debt (Wald, 1999). If a large fraction of a firm's assets is tangible, those assets can serve as collateral, diminishing the risk that the lender will suffer the agency costs of debt. Tangible assets should also retain more value in liquidation. Therefore, the greater the proportion of tangible assets on the balance sheet, the more willing lenders should be to supply loans, and leverage should be higher (Rajan and Zingales, 1995). Tangibility is measured by the ratio of fixed assets to total assets. Highly levered firms are more likely to pass up profitable investment opportunities (Myers, 1977). When firms have more growth assets, the market value and firm risk are more easily changed to benefit the shareholders.
Firms that expect high future growth or have valuable growth opportunities should issue equity when they raise external funds (Jung, Kim, and Stulz, 1996; Rajan and Zingales, 1995). Smith and Watts (1992) and Lang, Ofek, and Stulz (1996) also provide supportive evidence and report a negative relationship between leverage and firm growth. Thus, expected future growth should be negatively related to long-term debt levels because the cost associated with the agency relation is likely to be higher for firms in growing industries, which have more flexibility in their choice of future investments (Titman and Wessels, 1988). The market-to-book value of assets is a proxy for growth opportunities and is expected to be negatively related to leverage (Myers, 1977; Gaver and Gaver, 1993; Rajan and Zingales, 1995). Firm size should have a positive impact on the supply of debt (Harris and Raviv, 1991). Larger firms tend to be more diversified and fail less often. Titman and Wessels (1988) cite evidence from Warner (1977) and Ang, Chua, and McConnell (1982) suggesting that direct bankruptcy costs appear to constitute a larger portion of a firm's value as that value decreases. Therefore, size may be an inverse proxy for the probability of bankruptcy. As suggested by Titman and Wessels (1988) and Rajan and Zingales (1995), I use the natural logarithm of net sales as an indicator of size. Myers and Majluf (1984) predict a negative relationship between leverage and profitability because firms prefer financing with internal funds over debt. Pecking Order Theory suggests that firms prefer raising capital first from retained earnings, second from debt, and third from issuing new equity. Rajan and Zingales (1995) report a negative relationship between profitability and leverage.
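The sign predictions above can be illustrated with a hedged sketch: an OLS regression of leverage on the proxies described (tangibility, ln(net sales), market-to-book, and profitability) over synthetic firm data. The data-generating coefficients are invented solely so the regression recovers the hypothesized signs; they are not estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical firm-year observations

# Proxy construction as described in the text (all values synthetic)
tang = rng.uniform(0.1, 0.8, n)            # fixed assets / total assets
size = np.log(rng.uniform(10, 5000, n))    # ln(net sales)
mtb = rng.uniform(0.5, 3.0, n)             # market-to-book (growth proxy)
prof = rng.normal(0.08, 0.05, n)           # profitability

# Illustrative signs only: + tangibility, + size, - growth, - profitability
lev = (0.1 + 0.3 * tang + 0.02 * size - 0.04 * mtb - 0.5 * prof
       + rng.normal(0, 0.05, n))

# OLS: leverage ~ intercept + tangibility + size + market-to-book + profit
X = np.column_stack([np.ones(n), tang, size, mtb, prof])
beta, *_ = np.linalg.lstsq(X, lev, rcond=None)
print(np.round(beta, 3))
```

With enough observations the fitted coefficients recover the assumed signs, which is exactly the kind of cross-sectional evidence the cited studies report.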
Influence of Instructors in Enhancing Problem Solving Skills of Administrative and Technical Staff Candidates
Dr. Nesrin Özdemir, University of Bahcesehir, Turkey
Dr. Ozge Hacifazlioglu, University of Bahcesehir, Turkey
Mert Sanver, University of Bahcesehir / (Stanford Master Candidate), Turkey
Communication problems seem to be among the main problems encountered at educational institutions. This is mostly observed at institutions where future middle-management employees are trained. Vocational schools in this respect are of fundamental importance in training the necessary technical and administrative staff for companies. The classroom atmosphere, the democratic attitude and leadership qualities of the instructor, and a program enhancing the creativity of students form the basis of a successful model. The purpose of this study is to determine the perceptions of students about the problem solving and communication skills practiced by their instructors in a classroom atmosphere. A questionnaire was devised as a tool for data collection. 422 students chosen from the Vocational School of Education constitute the sample of the study. SPSS (Statistical Package for Social Sciences) was used in the analysis of the data. Recommendations were made regarding the communication and problem solving process. Problem solving skills help the individual to accommodate effectively to the environment in which he or she is living. It can be said that all generations have felt the need to learn problem-solving techniques in order to adapt to their environment. Some problems have certain right answers and precise solutions. It is possible to reach the result for such problems by carrying out certain strategies. However, solutions to some problems are not as straightforward. They do not have one right answer. Interdisciplinary knowledge and creativity should be used to solve them. The ultimate goal of educational programs is to teach students to deal with problems in their major subjects in school and with problems they will face in life (Gagne, 1985). Problems are challenges for which we need to spend effort in order to reach a goal, and we also need to define sub-goals in the process (Chi & Glazer, 1985). Problem solving is an activity in which both knowledge and appropriate mental strategies are utilized. The most important aspect of problem solving is determining the tools needed for the purpose. Some problems are one-dimensional; they generally have one right answer and certain strategies that allow finding it, and they are specific to one knowledge area. However, there also exist multidimensional problems, which require multidimensional thinking and do not have a certain path for deriving the solution. In light of these explanations, our study investigates the strategies used by instructors in technical schools and measures the effectiveness of these strategies from the students' perspective. The problem should be presented in a way that complies with the mental schemas of students, and then the other steps are followed. As long as a problem is correctly understood, the solution efforts will be more satisfying. Mayer (1987) stated that the greatest difficulty for students in the problem solving process is understanding the verbal description of the problem. Students often cannot separate useless information from the problem itself. Newell and Simon (1972) grouped problem solving strategies under four headings. Extract the useful information. Rearrange and illustrate the problem through schematizing or creating a mental picture of the problem. Making a tool-purpose (means-ends) analysis is the first step in problem solving. In other words, it is determining the purpose of the problem and expressing the possible ways to reach the solution. The following question should be answered in order to solve the problem: "What is the difference between where I am and where I want to be, and what can I do to reduce this difference?" In the analysis, determining the given constraints and the expected outcomes allows one to decide what needs to be done.
Practicing with problems requiring creative thinking skills is needed to learn how to understand a problem. Extracting the useful information: ordinary problems of daily life are not as clear and organized as the problems in textbooks, so separating relevant from irrelevant information greatly reduces problem-solving effort. Rearranging the problem increases its understandability, and the solution becomes easier to reach; using illustrations like pictures and diagrams is helpful in this context. As stated above, tool-purpose analysis, extracting the critical information, and rearranging the problem are the basic preliminary steps of problem solving. Students experienced in problem solving apply these steps much more easily than students with weaker experience. In the first stage, where tool-purpose analysis is performed, thinking aloud, motivating students, promoting cooperation, and concentrating on the process rather than the result are proven methods for supporting the emergence of creative solution ideas. In the next stage, the ideas planned in the previous steps are carried out and the results are obtained; in other words, the problem is solved. In this stage, it is essential that the instructor and students think aloud, which also helps others to gain problem solving skills. Finally, the students should be encouraged to reflect on the process of problem solving.
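The tool-purpose (means-ends) analysis described above ("What is the difference between where I am and where I want to be, and what can I do to reduce this difference?") can be illustrated, under strong simplifying assumptions, as a greedy difference-reduction search. The numeric state, goal, and operators below are purely hypothetical stand-ins for a real problem's states and actions.

```python
# Minimal means-ends sketch: repeatedly apply the operator that most
# reduces the "difference" between the current state and the goal.
def means_ends(state: int, goal: int, operators):
    """Greedy difference reduction; returns the sequence of states visited.
    Note: a greedy strategy can fail on problems that require detours."""
    path = [state]
    while state != goal:
        # pick the operator whose result is closest to the goal
        nxt = min((op(state) for op in operators), key=lambda s: abs(goal - s))
        if abs(goal - nxt) >= abs(goal - state):
            break  # no operator reduces the difference; give up
        state = nxt
        path.append(state)
    return path

# Hypothetical operators: "add one" and "double"
ops = [lambda s: s + 1, lambda s: s * 2]
print(means_ends(1, 10, ops))  # -> [1, 2, 4, 8, 9, 10]
```

The sub-goals the text mentions correspond to the intermediate states in the returned path, each chosen because it reduces the remaining difference.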
Relationships Among Public Relations, Core Competencies, and Outsourcing Decisions in Technology Companies
Dr. Chieg-Wen Sheng, Chihlee Institute of Technology, Taiwan
Ming-Chia Chen, Da-Yeh University, Taiwan
Though the importance of public relations (PR) is rising, PR activities remain distinct from traditional management functions. It becomes a strategic decision to determine whether it is necessary (or possible) to internally direct or to outsource PR activities. The objective of this research is to examine PR activities in Taiwan's technology industries and to determine through an informal survey and content analysis how this decision relates to core competencies. We provide theoretical analysis and develop four models quantifying relationships among core competency, PR functions needed, outsourcing channels, and outsourcing success. In addition, we discuss policies concerning outsourcing decisions and evaluate key decision-making criteria before and after outsourcing. With rapid technological change, the global economy is quickly becoming one based on knowledge. Furthermore, regional commercial and financial activities are increasingly intertwined, and the context of economic exchange can no longer be characterized as a closed system. As members of the ever-expanding open system, enterprises must be increasingly aware of their environment (including particular production and value chains, the natural environment, and the surrounding community). In light of this changing reality, a company's success hinges not only on economic profits; it also must demonstrate social responsibility [Hagen, Hassan & Amin, 1998; Sheng & Hsu, 2000]. Most moderate- to large-sized enterprises now depend on public relations (PR) departments to manage their corporate position and to produce a desirable image in the eyes of key community entities (including consumers, governmental bodies, and competitors). In this sense, the importance of PR activities is increasing [Hsu & Sheng, 2001]. Though the importance of PR is growing, PR activities remain distinct from traditional management functions.
It becomes a strategic issue for firms to determine whether it is necessary (or possible) to internally direct or to outsource PR activities. According to some research [Mascarenhas, Baveja & Jamil, 1998], enterprises face a short-term incentive to manipulate external relationships in order to develop their core competencies but need to maintain a socially responsible image for long-term survival. In this context, enterprises considering outsourcing some activities, excluding core competencies, seek to strike a challenging balance. Sheng points out that many specialized PR companies can handle "necessary trivial things," including managing customer and public sentiment, for these firms. This observation helps to explain why PR recently became the fastest developing business in America [David, 2000]. It is illustrative to focus on technology industries. Even though specialized technology PR companies can handle "necessary trivial things" for their customers, this is not the main reason that technology companies outsource these activities. For example, according to Lee, PR activities are difficult to direct smoothly, the main difficulty being obtaining the proper balance between traditional and non-traditional professional knowledge and customer opinion concerning company behavior. The implication is that technology companies may prefer to outsource PR activities to concentrate on core competencies while at the same time enjoying a high-quality professional image. Sheng also argues that core competency characteristics in PR companies are determined by whether they agree with their customers' preferences and characteristics, including attitudes toward professionalism, creativity, and innovation.
Considering this and related viewpoints [Mascarenhas, Baveja & Jamil, 1998; Lee, 1995; Sheng, 1999], we find that technology firms outsource PR activities mostly based on whether the candidate PR company's core competencies complement or substitute for those of the technology firm. This finding motivates quantitative description of the relationships among technology enterprises and PR companies based on harmony of core competencies, types of PR activities, and outcomes of cooperation. Our research sample targeted technology industries in Taiwan. We collected data through semi-structured interviews and performed data validation by consulting secondary data. We performed theoretically driven content analysis and corrected the framework to construct a new model based on empirical data. Our research focuses on core competencies as we analyze PR outsourcing strategies in technology industries. In this section, we describe relevant literature concerning PR strategies, core competencies, and outsourcing results. First, we discuss literature concerning PR activities. Sheng argues that organizations directing their own PR activities mainly seek to manage public opinion through press releases. For these purposes, organizations determine their PR strategies based on how they understand public opinion and the press release mechanism. Understanding public opinion is divided into two steps: characterizing current public sentiment and identifying (positive or passive) channels for its management. Press releases are classified into two types: messages with content and form, and releases that serve as a reminder of presence and continued mass media access. Messages with content and form, in turn, are separated into four categories.
Compliance with Disclosure Requirements by Four SAARC Countries—Bangladesh, India, Pakistan and Sri Lanka
Dr. Majidul Islam, Concordia University, Montreal, Canada
The purpose of this study is to empirically investigate the compliance with disclosure requirements by some South Asian Association for Regional Cooperation (SAARC) countries and to explore the possibility of standardization of accounting practice in the SAARC region. The reports of ten companies from the manufacturing sector of each of four SAARC countries—Bangladesh, India, Pakistan and Sri Lanka (BIPS)—were collected. The reports were examined against 124 information-item requirements of the standards, company acts and listing rules of the stock exchanges, which are commonly observed by the BIPS companies in their respective countries. The compliance with the obligatory information items was measured using a relative index for those four countries. The result shows that Sri Lanka had, on average, the highest compliance with the requirements of the standards, acts and rules, followed by Bangladesh, Pakistan and India. The South Asian countries are gradually making contributions to world trade, and because these countries are dependent on aid and investments from beyond their borders, the accounting development processes and accounting systems need to be such that they satisfy the investors and donors and, at the same time, create an environment for useful reporting for user groups within the countries. The purpose of this paper is to investigate and assess empirically the degree of compliance with disclosure requirements of the listed Bangladeshi, Indian, Pakistani and Sri Lankan (BIPS) companies. As these countries belong to emerging capital markets (ECM) (Standard and Poor's, 2001), it is particularly relevant for them to comply with financial reporting requirements of the standards. Accounting information plays an important role in emerging economies, especially if the countries are dependent upon foreign investment.
Financial statements of companies reflect the information aspirations of their users; however, many players influence the quality of financial reporting and bring strengths and weaknesses to the accounting and reporting process (Gavin 2003). Ahmed and Nicholls (1994) argued that there are many incentives for disclosure in emerging economies; there are also considerable reasons for not complying with mandatory disclosure requirements. In their strategic policy formulation, large and multinational companies are focusing on the global economy. Globalization of trade and economies is changing economic growth and the world trading system. Globalization, in its turn, emphasizes the necessity of having standards to harmonize accounting practices, which would reduce diversity and improve comparability of financial reports prepared by companies from different countries. BIPS, being dependent on foreign investment and foreign assistance, should try to respond to the demands of the cross-border as well as domestic users of the information. By analyzing the financial statements of the BIPS sample companies, this paper will focus on some salient features that induce companies to, or not to, comply with disclosure requirements. Also, the paper will identify key issues for accounting practices and standards development in BIPS by analyzing economic, social and cultural backgrounds. The paper is organized as follows: the following sections review the reporting environment of BIPS and accounting standardization and its implications for developing and SAARC countries, followed by research design and methodology, results and analysis, discussion, conclusion and limitations of the study. Accounting principles allow the preparers of financial reports to increase the utility of information to external users, and they allow users to have confidence in the accounting information.
But accounting rules often differ from country to country, and even from company to company within the same country. This creates variations in financial reports that are based on the same economic transactions, thereby reducing their credibility and deterring international business investment and cross-border flows of capital. At the initial stages of the professional development of developing countries like BIPS, harmonization might be a viable alternative to establish credibility in financial reporting, which may be achieved through standard practices, because they limit the freedom of management to choose between alternative accounting methods favourable to management. The reasons for harmonizing the practice are to enhance overall market efficiency and reduce the cost of capital for companies. Provision of different figures in different environments is confusing for investors and for the public (The European Commission 2001). This is all the more true for the BIPS environments. The implementation of recognized standards gives credibility to accounting reports and is extremely important for developing countries. It is imperative for BIPS to have a body of accounting principles that governs the measurement of transactions and the disclosure and presentation of financial information. Currently, however, compliance with the IAS in BIPS is optional, and the financial statements of different companies are not comparable. The level of development of local standards and adoption and implementation of the IAS is not consistent, but it is growing. Of 41 IAS, Bangladesh has adopted 30; India, 18; Pakistan, 32; and Sri Lanka, 31. Adherence to international standards as well as local standards is in the best interests of both the users and the preparers of financial statements. External users will have more confidence in reports that are easy to analyze and understand. Internal users will have information that will help them make better investment and managerial decisions. 
Well-devised accounting systems and controls inspire investor confidence, stimulating the flow of capital. The systems and controls then ensure that this capital is used efficiently. The Financial Accounting Standards Board (FASB) states that “a reasonably complete set of unbiased accounting standards that require relevant, reliable information that is decision useful for outside investors, creditors and others who make similar decisions would constitute a high-quality set of accounting standards” (FASB 1998). Bangladesh, India, Pakistan and Sri Lanka were once British colonies, and all four achieved independence at around the same time (1947-1948).
How Firms Integrate Nonmarket Strategy with Market Strategy: Evidence from Mainland China
Dr. Yuanqiong He, Huazhong University of Science & Technology, China
Integrating nonmarket strategy with market strategy is a new trend in the field of strategic management. However, the existing literature says little about how market and nonmarket strategies are integrated, especially in an emerging-economy setting such as mainland China. Therefore, based on 438 usable questionnaires and in-depth interviews with 10 top managers from mainland China, this research examines the integration of nonmarket and market strategies among firms with various ownership structures. The study represents a promising step toward this new trend in strategic management and offers Chinese firms implications for dealing with stakeholders in today’s Chinese business environment. Although Baron (1995a, 1995b) advocated integrating nonmarket strategy with the economic “market” strategies of the firm, the two areas have generally been treated as separate subjects in the academic literature. Previous empirical research on nonmarket strategy has drawn mainly on samples of American firms, so evidence from an emerging economy such as China is lacking. In fact, being in a transitional period from a command economy to a market economy, Chinese firms adopt nonmarket strategies ubiquitously and integrate them with market strategies in their business operations (Tian et al., 2003). Because the institutional background of China differs from that of developed countries such as the United States, the nonmarket strategies of Chinese firms, and their integration with market strategies, have their own characteristics. This paper therefore attempts to fill this gap through an empirical examination based on evidence from mainland China. The paper is structured as follows. The next section describes the nonmarket environment in China’s transitional period and proposes the hypotheses.
Then the research methodology, along with data collection procedures and measurement of the constructs, is introduced. The results of the empirical study are discussed in section four. Finally, we conclude by noting the managerial implications of the study’s findings and providing directions for future research. The nonmarket environment in China differs from those of advanced Western countries in many ways (Nee, 1992; Hitt et al., 2002). The most salient difference lies in its authoritarian political system. Throughout the economic reform process that began in 1978, the Chinese government has remained the dominant policy designer and implementer. Government bureaux at all levels are powerful special-interest groups and key stakeholders in business firms’ nonmarket environment (He & Tian, 2005), and must be watched closely by managers and scholars alike. Apart from making direct investments in state-owned assets, the government also controls firms’ investments through numerous approval mechanisms. The “Administrative Approval Law” (implemented on July 1, 2004) has significantly reduced the scope of government project-checking duties. Now only four responsibilities are explicitly stated: (1) approval, (2) check and admission, (3) registration and (4) certification and qualification, but the process may still look cumbersome to outside observers. During the process of marketization, the business environment has become more and more regulated by laws and rules, which has challenged the traditional ways of establishing relationships with Chinese government officials. For example, 30,690 laws and regulations (including laws issued by the National People’s Congress, administrative regulations issued by the State Council, regulations issued by local governments and industrial regulations issued by national departments) were issued in China in 1999, while 141,173 were issued in 2005, an increase of 360% over 1999.
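The 360% growth figure quoted above follows directly from the two counts given in the text; a minimal arithmetic check (the variable names are illustrative, not from the paper):

```python
# Regulation counts quoted in the passage above.
issued_1999 = 30_690   # laws and regulations issued in 1999
issued_2005 = 141_173  # laws and regulations issued in 2005

# Percentage increase of the 2005 count over the 1999 count.
growth_pct = (issued_2005 - issued_1999) / issued_1999 * 100
print(f"Increase over 1999: {growth_pct:.0f}%")  # ~360%
```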
Beyond what is mentioned above, the Chinese business environment is under the heavy influence of Confucianism, which has endured as the basic social and political value system for over 1,000 years (Hwang, 1987; Yum, 1988). Favor, trust and reciprocity are common features of relationships in China (Tong & Yong, 1998; Wong & Chan, 1999). Porter’s generic strategies focus on market components consisting of customers, competitors and suppliers, while the nonmarket strategy addressed by Baron highlights nonmarket components. Market strategy and nonmarket strategy differ in several respects, including environmental focus and the strategy-making process; see Table 1. Although distinct, market and nonmarket strategies are structurally similar and coordinated with each other, which is the basis of their integration. The approaches to integrating market and nonmarket strategy can be classified into buffering and bridging, based on Meznar and Nigh’s (1995) argument about the roles of the boundary-spanning function. A “bridging integrated strategy” integrates nonmarket issues (such as environmental protection, public policy, etc.) into the process of making market strategy. A “buffering integrated strategy” involves trying to keep the environment from interfering with internal operations while trying to influence the external environment. For example, in China a firm actively influences its environment through such means as lobbying, membership in the CPPCC (National Committee of the Chinese People’s Political Consultative Conference) or the NPC (National People’s Congress), and providing government officials with industrial reports (Tian et al., 2003). Blumentritt (2003) concluded that firms’ characteristics, including technology, size and economic spillovers, can influence the choice of buffering or bridging.
Hansen and Mitchell (2001) likewise used ownership as a predictor of nonmarket strategy, because ownership serves as a proxy for firms’ characteristics.
Relationship between Organizational Socialization and Organization Identification of Professionals: Moderating Effects of Personal Work Experience and Growth Need Strength
Xiang Yi, Western Illinois University, IL
Jin Feng Uen, National Sun Yat-Sen University, Taiwan
“Organizational identification” (OID) has significant implications for managing professionals in fast-changing organizations. This study focuses on the relationship between socialization tactics and the OID of professionals. Work experience and personal “growth need strength” (GNS) were included as moderators. Three main results were found. (1) Ignoring the moderating effects, the serial tactic has a significantly positive effect on OID. (2) Considering the moderating effect of work experience, collective and fixed tactics are helpful to OID for professionals with formal work experience. (3) Formal and sequential tactics have positive impacts on OID regardless of the professionals’ GNS, but high-GNS professionals generally show higher OID than low-GNS ones. Rapid advances in science and technology have changed the world. The knowledge and skills of employees are key sources of productivity in knowledge-based economies. For this reason, how to attract and retain high-quality professionals will continue to be a critical issue for researchers as well as practitioners. Employees with knowledge and skills are essential to an organization’s success. Professionals distinguish themselves from traditional employees in that their work requires highly complex general and firm-specific knowledge and skills, whereas traditional employees tend to perform clearly demarcated jobs or jobs needing a high degree of supervision (e.g., Xu, 1996). Professionals can solve difficult, non-routine problems for their organizations. Therefore, professional employees can, and often do, have more bargaining power when they negotiate with the employer. This puts more pressure on a firm to find ways both to promote professionals’ productivity and to bind them to the organization.
Another feature of professionals is that they must continually learn new knowledge and skills and keep up with the latest developments in their specialized areas if they are to maintain their own employability (Xu, 1996). Therefore, good professional employees usually value the opportunities organizations provide for acquiring new knowledge and expertise related to their career development. Researchers have suggested numerous ways to retain professional employees and reduce turnover by promoting loyalty and commitment. For example, King and Sethi (1998) and Baroudi (1990) found that reducing stressors such as role ambiguity and role conflict in the role-adjustment process could help reduce employees’ intention to leave the organization. Arnett and Obert (1995) found that motivating constructive employee behavior could foster organizational loyalty. Others (e.g., Gillian, 1994) found that emphasizing teamwork, focusing on morale and stress management, and expanding career development could decrease turnover. In essence, keeping professional employees means finding ways to promote their organizational commitment and professional fulfillment. Research has demonstrated that employees are more committed to organizations if they feel they are treated well (Jandeska & Kraimer, 2005). One way to achieve this commitment is to establish organizational identification through appropriate socialization tactics as a professional enters an organization (Ashford & Saks, 1996). Our study explores organizational socialization theory, which firms in high-technology and knowledge-based industries may use to influence the formation of organizational identification among newly hired professionals. Additionally, we discuss how organizational socialization tactics interact with the work experience and growth need strength of new professionals to influence their organizational identification.
The results indicate that some socialization tactics are especially effective in building new professionals’ organizational identification, and that personal work experience and growth need strength are meaningful moderators of the relationships between socialization tactics and organizational identification. Organizational socialization is the process in which a newcomer to an organization learns his or her roles and adapts to the new environment (Van Maanen & Schein, 1979). The original definition of organizational socialization was for a newcomer to “know the rules,” but it has since evolved into the process of making a person understand his or her roles and the organization’s values, philosophy and social networks (Louis, 1980). This process of becoming acclimated to an organization is crucial to an employee’s future success. Research has shown that the content of the information absorbed by new employees positively correlates with job satisfaction and organizational commitment (Cooper-Thomas & Anderson, 2002). Van Maanen and Schein (1979) proposed that organizations can use six dimensions of socialization tactics, each of which is defined by a continuum of institutionalized versus individualized socialization, as demonstrated in Figure 1 (Jones, 1986). The first socialization tactic is collective (vs. individual) socialization: putting new employees together so that they undergo a similar experience and receive the same information. The second tactic is formal (vs. informal) socialization. Here, rather than commixing newcomers with current employees, programs or activities are created exclusively for new employees during a specific period of time. The third tactic is sequential (vs. random) socialization. This method requires a newcomer to go through a specified sequence of stages that leads to adjustment to the new job roles. The fourth tactic, fixed (vs. variable) socialization, refers to a set schedule for assimilation into the organization.
The fifth tactic is serial (vs. disjunctive) socialization, a process in which an experienced role model, usually a senior colleague, is used to help the new employee. Serial socialization is opposed to disjunctive socialization, in which no consistent help is provided by others. Finally, investiture (vs. divestiture) socialization affirms the personal characteristics, ideas and experiences of the new employee, as opposed to denying and breaking down those personal characteristics and focusing on building entirely new experiences of and attitudes toward the new organization.
A Typology of Brand Extensions: Positioning Cobranding As a Sub–Case of Brand Extensions
Dr. Costas Hadjicharalambous, Long Island University, NY
This article treats cobranding as a sub-case of brand extensions and presents a typology of brand extensions. The underlying research is briefly outlined to clarify the meaning and intent of the terms of the typology. Brand extensions are classified (1) according to the number of brands involved in the extension and (2) according to the purpose of the extension. The rationale for and benefits of treating cobranding as a brand extension are presented. The paper concludes by discussing the importance of the typology, which can serve as an organizer of thought on the subject and a stimulus for future research. A relatively new phenomenon that has caught the attention of academic researchers is cobranding: the use of two or more brands to name a new product. According to some estimates, recent cobranding and other cooperative brand activities have enjoyed 40% annual growth (Spethmann and Benezra 1994). The basic premise behind cobranding strategies is that the constituent brands help each other achieve their objectives. Marketers have recognized that, at least in some cases, using two or more brand names to introduce new products offers a competitive advantage. For example, ConAgra and Kellogg joined efforts to market Healthy Choice adult cereals. In another cobranding effort, ConAgra agreed to allow Nabisco to use the Healthy Choice brand on a new line of low-fat, low-cholesterol and low-sodium snacks. The purpose of this double appeal is to capitalize on the reputation of the partner brands in an attempt to achieve immediate recognition and a positive evaluation from potential buyers. From a signaling perspective (Wernerfelt 1988; Rao, Qu and Ruekert 1999), the presence of a second brand on a product reinforces the perception of high product quality, leading to higher product evaluations and greater market share. Associating one brand with another, however, involves risks that need to be addressed.
Cobranding may affect the partner brands negatively. One need only consider the problems experienced by Dell and Gateway when it was reported that the design of Intel Pentium processors was defective (Fisher 1994).
The Dynamics of Corporate Takeovers Based on Managerial Overconfidence
Dr. Xinping Xia, Huazhong University of Science and Technology, Wuhan, PRC
Dr. Hongbo Pan, Huazhong University of Science and Technology, Wuhan, PRC
Using a game-theoretic real-options framework, this paper presents a dynamic model of takeovers based on the stock market valuations of the merging firms. The model incorporates managerial overconfidence about merger synergies and competition, and determines the terms and timing of takeovers by solving option-exercise games between bidding and target firms within the same industry. The model explains merger waves, abnormal returns to the stockholders of the participating firms around the time of the takeover announcement, and the impact of competition on the timing, terms and abnormal returns. The model’s implications for shareholder abnormal returns, and for the impact of competition on those returns, are consistent with the available evidence. The model also generates new predictions relating shareholder abnormal returns to the industry characteristics of the participating firms and the level of managerial overconfidence. Mergers and acquisitions have been the subject of considerable research in financial economics. Yet, despite the substantial development of this literature, existing merger theories have had difficulty reconciling the stylized facts about mergers with the payment of cash (1). Two of the most important stylized facts about mergers are these: first, the combined returns to stockholders are usually positive (see the recent survey by Andrade, Mitchell and Stafford (2001)); and second, acquirer returns are, on average, not positive (Andrade, Mitchell and Stafford, 2001; Fuller, Netter and Stegemoller, 2002). These two stylized facts are difficult to reconcile theoretically. The main aim of this paper is to provide a theoretical explanation for these two stylized facts and to examine the impact of competition on the abnormal returns of participating firms around the takeover announcement.
The basic elements of our theory are the following. First, a takeover deal is an efficient response to an industry shock, which usually results in positive merger synergies. A substantial academic literature finds that mergers concentrate in industries in which a regime shift of a technological or regulatory nature can be identified, making mergers an efficient response (e.g. Mitchell and Mulherin (1996),
Copyright: All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, including photocopying and recording, or by any information storage and retrieval system, without the written permission of JAABC journals. You are hereby notified that any disclosure, copying, distribution or use of any information (text, pictures, tables, etc.) from this web site or any other linked web pages is strictly prohibited. Request permission / Purchase this article: email@example.com
Copyright 2000-2019. All Rights Reserved