The Journal of American Academy of Business, Cambridge

Vol. 10 * Num. 1 * September 2006

The Library of Congress, Washington, DC   *   ISSN: 1540-7780

Most Trusted.  Most Cited.  Most Read.


The primary goal of the journal is to provide business-related academicians and professionals from various fields, in a global realm, with the opportunity to publish their work in one source. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. All submissions are subject to a double-blind peer review process. The Journal of American Academy of Business, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide our authors with venues that are recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format, and all manuscripts should be professionally proofread before submission. You can use www.editavenue.com for professional proofreading/editing.

The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. E-mail: jaabc1@aol.com; Journal: JAABC. Requests for subscriptions, back issues, and changes of address can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.

Copyright 2000-2017. All Rights Reserved

The Impact of Frequency of Use on Service Quality Expectations: An Empirical Study of Trans-Atlantic Airline Passengers

Dr. Kien-Quoc Van Pham, Pacific Lutheran University, Tacoma, Washington

Dr. Merlin Simpson, Pacific Lutheran University, Tacoma, Washington

 

ABSTRACT

While the academic debate continues over the conceptual validity and reliability of the SERVQUAL model for assessing service quality, the paucity of empirical studies addressing service quality antecedents indicates the need to revisit the causal relationships between these antecedents and the "corollary" service quality assessment. In today's globally competitive marketplace, the fostering of customer loyalty reigns undisputed as the most important goal for all commercial enterprises, with repeated use or purchase as one of the primary indicators of customer satisfaction; yet frequency of use has not been addressed in terms of its impact on the means to achieve and maintain such customer loyalty. The airline industry, given the success of its frequent-flyer mileage reward programs and of similar loyalty promotions (bank-card purchase cash rebates, Starbucks patron cards), is a natural venue in which to further investigate this service quality antecedent (past experience) construct. The economic paradigm shift from industrial value to customer value has made service a focal point of all corporate efforts to improve profitability (Albrecht, 1992). The U.S. economy, as is the case with other developed economies, has become a predominantly "service economy" (Albrecht and Zemke, 1985), in which virtually all organizations compete to some degree on the basis of service (Zeithaml, Parasuraman and Berry, 1990). Service-based companies are therefore compelled to provide excellent service in order to prosper in increasingly competitive domestic and global marketplaces. Service quality has become the significant strategic value-adding/enhancing driver in achieving a genuine and sustainable competitive advantage in a global marketplace (Devlin et al., 2000). While many "quality-focused" initiatives have often failed to enhance overall corporate performance, customer-perceived service improvements have been shown empirically to improve profitability (Buzzell and Gale, 1987). Service quality is considered to be an attitude resulting from a comparison of expectations versus performance (Parasuraman, Zeithaml and Berry, 1988). While certain authors contend that evaluations of service quality should be based on performance assessment only (Cronin and Taylor, 1992; Teas, 1993), the prevailing view supports a disconfirmation paradigm (Oliver, 1980), i.e., customers comparing the perceived service with their expectations of service (Parasuraman, Zeithaml and Berry, 1988; Parasuraman, Berry and Zeithaml, 1985; Brown and Swartz, 1989; Grönroos, 1984; Congram, 1987). This paradigm is the conceptual foundation of the widely used SERVQUAL model for the measurement of service quality (Parasuraman, Zeithaml and Berry, 1988). Satisfying customers thereby depends critically on understanding what customers expect (Parasuraman, Berry and Zeithaml, 1991), expectations being construed as "predictions" (Oliver, 1997; Bridges, 1993; Cadotte, Woodruff and Jenkins, 1987). Companies that exceed customer expectations without impairing profit margins have frequently been found to have developed a solid foundation of customer loyalty based on segmented service (Drucker, 1964; Porter, 1980; Porter, 1985; Farber and Wycoff, 1991). Beginning with Oliver (1980), it is also widely accepted that expectations provide a base reference for the determination of levels of customer satisfaction, a concept related to, but not identical to, service quality.
Expectations can be defined as the desires or "wants" of customers, i.e., what the service provider should offer (Parasuraman, Zeithaml and Berry, 1988; Zeithaml, Parasuraman and Berry, 1990), or as what the service provider will provide (Zeithaml, Parasuraman and Berry, 1990). "Should" expectations are described as "desired" expectations, i.e., what customers believe they "deserve," while "will" expectations can be equated to "predictions," i.e., what customers believe they will experience the next time they encounter the service provider (Boulding et al., 1993). The service quality literature depends significantly on expectation theory, with expectations serving as a "prediction" of future events. Expectations are also described in terms of desired (equating to "should") or ideal (equating to "will") standards, i.e., "normative" expectations of future events (Boulding et al., 1993). In the early 1990s service managers began to understand, as had previously been discovered in manufacturing, that quality does not improve unless it is measured (Reichheld and Sasser, 1990). Some authors assert that expectations are difficult to operationalize and measure (Brown, Churchill and Peter, 1993; Carman, 1993); while acknowledging that expectations are challenging to measure empirically, others nonetheless recognize their theoretical importance (Devlin et al., 2002). According to Bell and Zemke (1987), the customer's experience must meet expectations in terms of both process and outcome; these are embodied in the SERVQUAL model by the assurance, tangibles, empathy and responsiveness dimensions (process) and the reliability dimension (outcome) (Parasuraman, Zeithaml and Berry, 1988).
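To make the disconfirmation paradigm concrete, the sketch below computes SERVQUAL-style gap scores (perception minus expectation) for each of the five dimensions named above. This is a minimal illustration, not the authors' instrument: the dimension names follow Parasuraman, Zeithaml and Berry (1988), while the ratings and the unweighted averaging are hypothetical assumptions.

```python
# Minimal sketch of SERVQUAL gap scoring under the disconfirmation paradigm:
# service quality gap = perceived service - expected service, per dimension.
# Dimension names follow Parasuraman, Zeithaml and Berry (1988); all numeric
# ratings below are hypothetical 7-point Likert means, not study data.

DIMENSIONS = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]

def gap_scores(expectations: dict, perceptions: dict) -> dict:
    """Return the perception-minus-expectation gap for each dimension.

    A negative gap means the service fell short of expectations."""
    return {d: round(perceptions[d] - expectations[d], 2) for d in DIMENSIONS}

if __name__ == "__main__":
    # Hypothetical mean ratings for one passenger segment (1 = low, 7 = high).
    expected = {"tangibles": 5.8, "reliability": 6.5, "responsiveness": 6.0,
                "assurance": 6.2, "empathy": 5.5}
    perceived = {"tangibles": 5.9, "reliability": 5.8, "responsiveness": 5.6,
                 "assurance": 6.1, "empathy": 5.7}
    gaps = gap_scores(expected, perceived)
    overall = sum(gaps.values()) / len(gaps)  # simple unweighted mean gap
    print(gaps)
    print(f"overall service quality gap: {overall:+.2f}")
```

A frequency-of-use study like the one described above would compare such expectation profiles across passenger segments (e.g., frequent versus infrequent flyers) before computing the gaps.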

 

Whistleblowing: International Implications and Critical Case Incidents

Dr. Steven H. Appelbaum, Professor, Concordia University, John Molson School of Business, Quebec, Canada

Kirandeep Grewal, Concordia University, John Molson School of Business, Quebec, Canada

Hugues Mousseau, Concordia University, John Molson School of Business, Quebec, Canada

 

ABSTRACT

This article will examine the following: (1) motivation of whistleblowers; (2) international implications; (3) consequences for the individual and organization; (4) selected mini-case studies; and (5) solutions for organizations. An employee's decision to report individual or organizational misconduct is a complex phenomenon that is based upon organizational, situational and personal factors. Recommendations include: employees should be encouraged to communicate their ethical concerns internally; employees need to believe that their concerns will be taken seriously; and employees need to feel that they will not suffer any retaliation for their action. According to Miceli and Near (1985), "whistle blowing is the disclosure of illegal, immoral, or illegitimate practices under the control of their employers, to a person or organizations that may be able to effect action" (Vinten, 1995). "Whistle blowing is the voice of conscience" (Berry, 2004). Whistle blowing is a new name for an ancient practice. The first use of the term came with the 1963 publicity in the USA surrounding Otto Otepka, an American public servant who had given classified documents to the chief counsel of the Senate Subcommittee on Internal Security, documents which could pose a threat to the government administration (Vinten, 1995). Mr. Otepka's disclosure was severely punished by the then Secretary of State, who dismissed him from his functions for conduct unbecoming. The term whistle blowing is sometimes perceived negatively, while it is also very often viewed in a positive, even heroic fashion. In fact, this perception is highly influenced by the perspective from which one looks at it and by the circumstances surrounding the disclosure by an employee. The main reason why whistle blowing is such an important issue, amongst other elements, has to do with the fact that many public and corporate wrongdoings are never disclosed. Most people agree that estimating the percentage of situations in which the whistle is blown, in comparison to when it is not, would be a very hazardous undertaking, for obvious reasons. However, it can be said with conviction that "the majority of employees who become aware of individual or corporate wrongdoing never report or disclose their observations to anyone" (Qusqas and Kleiner, 2001). A study conducted in the United States by the Ethics Resource Center and reported in the January 2005 edition of Strategic Finance pointed out "that 44% of all non-management employees don't report misconduct they observe. The top two reasons for not reporting were a belief that no corrective action will be taken and fear that the report will not be kept confidential" (Verschoor, 2005). "Another reason why employers are reluctant to hire whistleblowers is because their action is seen as a breach of loyalty" (Qusqas and Kleiner, 2001). In fact, "an employee's decision to report individual or organizational misconduct is a complex phenomenon that is based upon organizational, situational and personal factors" (Miceli et al., 1987). Berry outlines many questions an employee witnessing an illegal or immoral wrongdoing may ask him or herself before deciding to blow the whistle, such as: Will anyone believe me? Who will listen to me? Can I make a difference? Will I ever be heard? What will happen if I go forward? Will anyone support me? Is it worth it? What if I am wrong? What can I afford to lose?
(Berry, 2004) This is just a simple illustration of why whistle blowing is such a complex and key issue in the area of organizational behavior. According to Hugh Kaufman, a well-known whistleblower: "if you have God, the law, the press and the facts on your side, you have a fifty-fifty chance of defeating the bureaucracy" (Qusqas and Kleiner, 2001). This article conducts a contemporary in-depth literature review to assess the importance and scope of whistle blowing in the North American corporate world, since it is only now being researched seriously. The analysis will discuss the common characteristics of whistleblowers and describe how the professional and personal lives of whistleblowers are affected by the act of disclosure itself. Research published in 1989 by Glazer and Glazer is quite lucid: "sixty-eight per cent of whistleblowers will have difficulty finding employment in the public sector [...] because the work done by [them] is not easily replaced and they are put on a blacklist" (Vinten, 1995; Qusqas and Kleiner, 2001).

 

Sales Growth versus Cost Control: Audit Implications

Dr. Ray McNamara, Bond University, Australia

Dr. Catherine Whelan, Georgia College & State University, GA

 

ABSTRACT

Concerns raised by regulators, investors, and researchers over the independence implications of audit firms providing both auditing and consulting services have led some firms to discontinue their consulting activities. The resulting decline in expertise may impair the ability of audit firms to adequately audit the revenues of listed firms. This research investigates the moderating effect of sales growth and cost control on the value-relevance of earnings and book value. The results demonstrate that the market responds differently to the revenue and cost components of earnings. In particular, the market perceives enhanced earnings quality in the presence of both sales growth and cost control. Consequently, audit procedures should provide assurance of both the completeness and the existence of revenue and expense items on the income statement. The approach of the new millennium saw an increasing emphasis on the audit profession's need to broaden its activities into a range of assurance services (Elliot 1998). One area of potential profitability was revenue and cost assurance in industries with large customer bases, complex revenue schema, and advanced revenue cycle technology, such as the telecommunications and health care industries (Connexn 2003). Firms such as Price Waterhouse Consulting (PWC) became the leaders in the revenue and cost assurance area because of their accumulated auditing, information systems, and statistical analysis expertise (Cullinan 1998). This expertise resided in both the audit and consulting arms of the firm. The audit division's focus was on assuring that reported revenue was not overstated and on internal controls that reduce the likelihood of overstatement. The consulting arm focused on the understatement of revenue and the implementation of internal control methods and procedures to recover unrecorded revenue (Connexn 2003). The existence of dual relationships with clients has raised the question of audit independence, particularly as the assurance services may provide a greater revenue stream than traditional audit services. However, the additional insights gained through the revenue assurance process would undoubtedly contribute to audit quality. This trade-off between independence and audit quality is of concern to all market participants. The Sarbanes-Oxley Act (SOX) was introduced to restore investor confidence in corporate leadership and to protect shareholders' interests. Guidelines were established to ensure the accuracy of reported financial statements and to enhance control over the processes and information on which the statements are based. Sections 302 and 906 establish rules for executive certification that financial reporting is complete and accurate. Section 404 requires corporate executives to file an internal control report along with the annual financial report. "Section 302 specifically states that application of GAAP alone may not fulfill the intent of presenting a materially accurate and complete portrayal of the company's financial results…They must disclose financial information that is informative and reasonably reflects the underlying transactions and events" (Geiger and Taylor 2003). SOX prohibits audit firms from undertaking consulting activities for firms that they audit. It is this prohibition that has led to the disposal of the consulting arms of many audit firms. Consequently, audit firms may no longer have the expertise to undertake revenue assurance.
To address this issue of expertise, some audit firms now specialize in particular industries. Such auditor industry specialization has been shown to contribute to audit quality, highlighting the importance of auditor expertise (Solomon et al. 1999; Balsam et al. 2003). If SOX's insistence on the completeness of financial data is correct, then an income statement audit that focuses on overstatement of revenues at the expense of understatement will fail to provide relevant information for investors. Similarly, if the audit focus is on the accuracy of the earnings number at the expense of the revenue and expense classifications, then the statements will fail to provide investors with relevant information. The general objective of this research is to assess the value-relevance of the income statement components relative to the summary measures presented in the financial statements. Specifically, we assess the moderating effect of sales growth and cost control on the value-relevance of earnings and book value. This paper presents an analysis of the relationship between financial statement summary measures and the market price of 763 firms listed on the Australian Stock Exchange for the period 1997 to 2001. Using the Ohlson (1995) valuation model, we find a significant relationship between share price, earnings, and book value. For firms with above-median sales growth, we find that sales growth is value-relevant in its own right, suggesting the need for an empirical adjustment to Ohlson's model for growth. There are also significant interactions between sales growth and earnings, and between sales growth and book value. These results indicate that the sales revenue number contains information that is significant to investors. Firms with cost control and no sales growth have a significant cost control-earnings interaction, suggesting that not only is the summary earnings number significant to investors but so too is the expense ratio. It follows that in providing a materially accurate and complete portrayal of the company's financial results, the audit profession must focus on both overstatement and understatement of revenue. Similarly, an audit approach that focuses on the summary earnings number at the expense of revenue and expense classification may fail to communicate the true value of a firm to the market.
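For readers who want to see the shape of such a test, below is a minimal sketch of an Ohlson-style value-relevance regression extended with a sales-growth dummy and its interactions with earnings and book value. The variable names and the synthetic data are illustrative assumptions; the paper's actual estimation uses 763 ASX firms over 1997 to 2001.

```python
# Hedged sketch of an Ohlson-style value-relevance regression with
# sales-growth interactions, in the spirit of the study described above:
#   P = b0 + b1*EPS + b2*BVPS + b3*G + b4*(G*EPS) + b5*(G*BVPS) + e
# where G is an above-median sales-growth dummy. Data below are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
eps = rng.normal(1.0, 0.4, n)                        # earnings per share
bvps = rng.normal(5.0, 1.5, n)                       # book value per share
growth = (rng.uniform(size=n) > 0.5).astype(float)   # above-median growth dummy

# Synthetic prices with a growth-earnings interaction deliberately built in.
price = (2 + 4 * eps + 0.8 * bvps + 1.5 * growth + 2 * growth * eps
         + rng.normal(0, 1, n))

X = sm.add_constant(np.column_stack([eps, bvps, growth,
                                     growth * eps, growth * bvps]))
fit = sm.OLS(price, X).fit()
print(fit.summary(xname=["const", "EPS", "BVPS", "GROWTH",
                         "GROWTH*EPS", "GROWTH*BVPS"]))
```

A significant coefficient on GROWTH*EPS in such a regression would be the analogue of the paper's finding that sales growth moderates the value-relevance of earnings.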

 

E-Local Government Strategies and Small Business

Dr. Stuart M. Locke, University of Waikato, New Zealand

 

ABSTRACT

The New Zealand Government like governments in many countries recognises the importance of small business in the economic and social structure of the country.  It has implemented a number of policies, in recent years, to assist small, medium enterprises (SMEs).  The extent to which these initiatives are successful, in terms of generating the outcomes purported as the rationale for their implementation typically does not receive detailed scrutiny.  This paper reports upon an investigation into one element of government programmes directed toward the promotion of greater broadband internet coverage and the encouragement of the adoption of internet technologies have been promoted.  In particular the E-Government single access portal for central government and a similar e-local government strategy have been promulgated.  An empirical investigation of the progress made by the territorial local government authorities in implementing the e-local government strategy and the impact upon SMEs is presented.  It is observed that at the policy formulation stage the nexus between policy and SME outcome is not made explicit and second that the monitoring of policy is lacking which has potentially negative implications for SMEs.  I is suggested that the level of public administrative accountability as it relates to the monitoring of this policy is inadequate and to the extent that this observation is generalisable SMEs may not be reaping the gains that could be achieved. In March 2001, central Government launched an e-government strategy, aiming to create a public sector, including local government, which will meet the needs of New Zealanders in the information age.  At the local government level, under the umbrella of the local government association, a range of objectives in terms of the breadth of services and timing of e-delivery development are proposed in the e-local government strategy document.  The majority of the objectives have tangible targets and time periods associated with them.  These make suitable reference points for evaluating progress made toward the implementation of the policy. The importance of high level information communication technology penetration into the business and household sectors of New Zealand has been stressed in successive government reports, culminating in a digital strategy (MED 2004a).  “This Strategy provides an ambitious plan for the development and implementation of policies aimed at achieving the ideal of all New Zealanders benefiting from the power of ICT to harness information for social and economic gain” (p1).  Components of the strategic initiative include legislative reform, e-government implementation, e-learning programmes for developing capabilities, and several others.  The telecommunication networks covering landline, mobile and satellite systems are owned and operated by the private sector.  Government is, as part of its “digital strategy”, is investing in a project of Provincial Broadband Extension known as PROBE.  PROBE, which is funded through the Ministry of Education, will ensure that “all schools and their surrounding communities have access to broadband by the end of 2004” (MED 2004a, p95). SMEs in the areas surrounding rural communities will have access at local charge rates to high speed internet. Government believes that there are e-regional development potentials from improved ICT adoption and has a range of initiatives related to develop e-regions NZTE.  
E-region’s focus is on building relationships between the public and private sectors, based around regional needs, to make best use of broadband technology.  NZTE (1) seeks to work with regions and local authorities to help ensure they benefit from related synergies should government invest in an advanced network initiative. (MED, 2004b.) Howell and Marriott (2001) observe that this growth has been significant in NZ, with more than 50% of households and 95% of businesses connected by 2001.  Broadband services are available to around 80% of residential addresses, but fewer than 3% subscribe (OECD, 2001a).  In 2001 New Zealand users ranked among the most intensive users of the Internet, in terms of number of hours of use per month (OECD, 2001). Yet, as Howell (2002) observes, regarding the uptake, that regulatory and supply-side considerations seem unable to answer the question of why uptake has been so slow. The analysis reported in this paper reviews the progress of local government in moving toward the attainment of the more specific goals which are seen as most impacting upon small to medium size enterprises (SMEs).  Government is continuing to implement new policies to assist and enhance the small business sector.  A Task Force group has commented upon Governments progress in reducing compliance cost, improving regional development, promoting exporting etc.  Accordingly, it is opportune to review the e-local government strategy, considering the extent to which it may be of assistance to the SME sector.  First, the themes and goals are discussed in turn.  Second, the data for the study is discussed.  Third, the initial observation results, which are listed as appendices, are reported.  Fourthly, the policy relevance of the findings is commented upon in the concluding section.

 

The Effects of Humor and Goal Setting on Individual Brainstorming Performance

David W. Roach, Ph.D., Arkansas Tech University, AR

L. Kim Troboy, Ph.D., Arkansas Tech University, AR

Loretta F. Cochran, Ph.D., Arkansas Tech University, AR

 

ABSTRACT

The efficacy of goal setting is widely accepted by researchers, managers, and the "man-on-the-street." Given this agreement, the simple maxim to "set goals" seems obvious. However, individual, task, and context characteristics affect the characteristics of goals that lead to high performance. The primary purpose of this study is to examine the effects of goal characteristics and a specific context variable, humor, on an individual brainstorming task. With respect to goal characteristics, we examine the effect of goal specificity (vague goals, specific attainable goals, and specific stretch goals) on individual brainstorming performance. With respect to humor, we examine the effect of the presence or absence of humor, and the interaction of humor and goal characteristics, on individual brainstorming performance. We found that performance on a brainstorming task was highest when goals were both specific and challenging (stretching). While humor did not affect performance with specific goals, humor did improve performance with vague goals and radically improved performance with stretch goals. The results suggest that humor may be an effective managerial lever for certain tasks and contexts. This paper reports on a study that examines the effects of goal characteristics and a specific context variable (humor) on an individual brainstorming task. The literature abounds with research on this topic, so we cite just a few specific studies on the relationships among goal setting (specificity and difficulty), performance, and humor. Next, we present the procedures, methods, and results of our study. Finally, we discuss the implications and limitations of this research and present ideas for future research in this area. The impact of goal setting on performance is well established in organizational behavior and management research (Ambrose and Kulik, 1999; Locke, 2004; Latham, 2004). Performance is higher for specific, difficult goals than for easy goals, "do your best" goals, or no goals (Locke, Shaw, Saari, & Latham, 1981). Reviewing the extant literature, Locke et al. (1981) found that 99 out of 110 studies empirically demonstrated the effect of goal setting on task performance. Specific, clear goals establish and communicate expected performance levels. When people know what is expected, they can focus their efforts on the target (Latham, 2004). Moreover, knowing performance expectations reduces anxiety concerning the performance appraisal process (Latham, 2004). Goal difficulty moderates the relationship between goal setting and performance (Wright, 1990; Ambrose and Kulik, 1999; Campbell and Furrer, 1995). People are motivated to exert more effort over time when presented with difficult goals (Latham, 2004). In other words, people are more motivated to accomplish difficult rather than easy goals. Much of the goal-setting research uses goals that are moderately difficult, usually 10%-30% higher than existing performance levels. Achievable goals, challenging but within the reach of an employee, have been shown to improve individual self-efficacy and also team performance (Hanlan, 2004). These performance levels are consistent with the notion that many people (at least North Americans) are motivated to achieve moderately high goals (McClelland, 1961). By contrast, even higher performance levels may be possible when individuals are "stretched" by extremely challenging goals (Hargrove, 1995).
With stretch goals, the individual or group does not yet have the necessary knowledge or skill resources to meet the goal but chooses to attempt it anyway in an effort to acquire those resources. The purpose is not only to achieve more than they would with a lesser goal but also to improve the effectiveness of the organization and foster the growth and professional development of individuals (Kerr and Landauer, 2004). Sherman (1995) suggests that attempting a stretch goal is also intended to stimulate creativity and motivate people to search for unusual solutions, outside the previous tenets and accepted practices that would not lead to achieving the 'impossible' goal. Hanlan (2004, p. 191) puts it this way: "To avoid these old patterns of thinking, it is critically important to set goals that are not achievable by conventional means." Logically, stretch goals would therefore be pedagogically useful and appropriate with students in a classroom setting, challenging individuals to move beyond a previously identified performance level. Six-sigma management, a method used in many businesses, embodies the idea of stretch goals. In essence, six-sigma suggests that the occurrence of defective products or services is unacceptable. This stands in stark contrast to past management models that simply tried to estimate and "cover" the costs of poor quality (product replacement, repair costs, etc.). Goal setting relates positively to performance only up to a point, and very difficult goals may have negative impacts on performance. Wright et al. (1995) found a U-shaped curve in that relationship, such that extremely hard goals resulted in lower performance in a college student scheduling task. Staw and Boettger (1990) found that a very specific, focused goal led to lower performance than a more general goal: in their study, students who were instructed to improve grammar in a passage were less likely to correct errors in content than those who were given a general goal. Vance and Colella (1990) found that increasing the level of assigned goals led to a decrease in commitment to these goals on an anagram task. When goals are too difficult, individuals may not have either the internal or the external resources to accomplish the goal, and commitment and performance levels are likely to decrease (Latham, 2004). When the incentive to achieve is high and extreme stretch goals are put in place, individuals may even be more motivated to cheat. Schweitzer et al. (2004) found unethical behavior was more likely when participants had almost reached their goals. Latham (2004) suggested that if goals are too difficult, individuals may resort to unethical tactics or undesirable tradeoffs or shortcuts. Locke (1994) argued that motivation may be so strong that individuals may focus on short-term goals to the detriment of long-term goals, or even cheat to accomplish the goal.
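The design described above is a 3 x 2 factorial: goal type (vague, attainable, stretch) crossed with humor (absent, present). A minimal sketch of how such an interaction could be tested with a two-way ANOVA follows; the cell sizes, base rates, and the built-in interaction are synthetic placeholders, not the study's data or results.

```python
# Hedged sketch: 3x2 factorial ANOVA (goal type x humor) on brainstorming
# output, mirroring the design described above. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
rows = []
for goal in ["vague", "attainable", "stretch"]:
    for humor in ["absent", "present"]:
        base = {"vague": 10, "attainable": 14, "stretch": 12}[goal]
        # Illustrative interaction: humor helps most under stretch goals.
        bump = 4 if (humor == "present" and goal == "stretch") else 0
        for ideas in rng.poisson(base + bump, size=30):  # 30 subjects per cell
            rows.append({"goal": goal, "humor": humor, "ideas": ideas})

df = pd.DataFrame(rows)
model = ols("ideas ~ C(goal) * C(humor)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and the interaction term
```

In this layout, a significant C(goal):C(humor) row would correspond to the kind of goal-by-humor interaction the study reports.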

 

A Comparison of the Solicited and Independent Financial Strength Ratings of Insurance Companies

Dr. Martin Feinberg, University of Texas-Pan American, Edinburg, TX

Dr. Roger Shelor, College of Business, Ohio University, Athens, OH

Dr. Mark Cross, Miami University, Oxford, OH

Axel Grossmann, University of Texas-Pan American, Edinburg, TX

 

ABSTRACT

This study provides a comparison of the life/health and property/casualty insurance company ratings of a solicited ratings agency, A.M. Best, with those of an independent ratings agency, Weiss Ratings Inc., for the time period 1998-2001. Financial strength ratings assess a company's overall claims-paying ability. The results provide further evidence that A.M. Best ratings are higher than Weiss ratings. Although previous studies have indicated this result, they did not fully account for any lack of correspondence between ratings and possible sample selection bias, as this study does. The finding of no difference in rating changes with respect to timing is inconsistent with previous research. The results add evidence to the argument that consumers should be concerned about the closeness and unique nature of the relationship between the solicited rating agency and the insurance company being rated. The results remain consistent across both life/health and property/casualty insurers. Insurer financial strength ratings provide the rating agency's assessment of overall financial strength and the insurer's ability to meet policyholder obligations. Consumers, insurance agents and brokers, corporate risk managers, regulators and investors use financial strength ratings to assess insurers' insolvency risk. Individual consumers utilize the financial strength ratings to determine which companies are preferable, and insurers often utilize those ratings in their advertising. Insurance agents and brokers typically are reluctant to recommend coverage with insurers that are either unrated or poorly rated. In addition, many corporate insurance buyers require a good rating. Ratings also help regulators in assessing the financial strength of insurers. In addition, strong financial ratings give insurers better access to capital markets and help them to lower their firm's cost of capital. An insurer's financial strength rating is an important part of the selection process, but not the only factor to be considered. The informed insurance buyer should make adjustments for differences in ratings by various agencies. In addition, such buyers tend to examine more than just the rating. However, the individual consumer and less informed buyers might not have the time or expertise to closely compare potential insurers and must rely heavily on ratings. Insurer ratings are different from corporate bond ratings and serve different purposes. A bond rating applies to a particular debt issue, while an insurer rating applies to the entire company and its ability to meet all policyholder claims. Insurer ratings are optional for solicited rating agencies; there is no regulatory requirement for insurers to obtain a rating. The primary users of insurance ratings are consumers and independent agents, whereas corporate bond ratings are used primarily by investors. During the time period of this study there were five major rating agencies that provided financial strength ratings of insurance companies: A.M. Best, Weiss Ratings Inc., Standard and Poor's, Moody's Investors Service, and Duff and Phelps Credit Rating Company. Only A.M. Best and Weiss specialize in the financial strength ratings of insurance companies; Weiss also rates banks along with some other entities. It is important to point out that four of the five agencies, A.M. Best Company, Standard and Poor's, Moody's Investors Service, and Duff and Phelps, require the insurance companies to pay a fee in order to be rated.
Of the five agencies, only Weiss Ratings accepts no fees from the insurance firms. Prior studies have compared A.M. Best to other rating agencies and found that A.M. Best life/health insurance company ratings were higher (1994 GAO Report). The 1994 GAO Report examined the ratings of life/health insurers for various rating agencies, including A.M. Best and Weiss. The GAO study divided the rating scales into five categories; Weiss agreed with the GAO's assignment of its ratings to the various categories, while A.M. Best did not. Using a bar-graph pairwise comparison, the GAO Report concluded that A.M. Best assigned higher ratings than Weiss. In addition, Weiss reported the financial vulnerability of insurers that became financially impaired much sooner than A.M. Best did. Other studies of rating agencies found that ratings can differ between agencies due to different factor weights, cutoff points and sample selection bias (Pottier and Sommer 1999). This study compares the property/casualty and life/health insurer ratings of A.M. Best with those of Weiss. A.M. Best is the largest, best-known solicited rating agency specializing in insurer financial strength ratings; Weiss is the largest, best-known independent rating agency specializing in insurer financial strength ratings. The purpose of this study is to determine whether a significant difference exists between the insurer financial strength ratings of a solicited agency (A.M. Best) and those of an unsolicited agency (Weiss). In addition, rating upgrades and downgrades, along with their timing, are examined. To our knowledge no previous studies have examined the same set of companies, with both solicited and unsolicited ratings, broken down into categories approved by both A.M. Best and Weiss. We examine life/health and property/casualty insurers along with the number and timing of rating upgrades and downgrades. The objections raised by A.M. Best to its rating assignments in the GAO study make this a relevant issue.
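One way to make such a comparison concrete is to map each agency's letter grades onto a common ordinal scale and run a paired test on companies rated by both. The sketch below does this with a Wilcoxon signed-rank test; the five-bucket mappings and the sample ratings are simplified illustrative assumptions, not the GAO's or the authors' actual scales.

```python
# Hedged sketch: comparing solicited (A.M. Best) and independent (Weiss)
# financial strength ratings on a common 1-5 ordinal scale, then testing
# the paired differences. Mappings and ratings are illustrative only.
from scipy.stats import wilcoxon

BEST_SCALE = {"A++": 5, "A+": 5, "A": 4, "A-": 4, "B++": 3, "B+": 3,
              "B": 2, "B-": 2, "C++": 1, "C+": 1}
WEISS_SCALE = {"A+": 5, "A": 5, "A-": 5, "B+": 4, "B": 4, "B-": 4,
               "C+": 3, "C": 3, "C-": 3, "D+": 2, "D": 2, "E": 1}

# Hypothetical insurers rated by both agencies: (A.M. Best, Weiss).
pairs = [("A+", "B"), ("A", "C+"), ("A-", "C+"), ("B++", "D+"),
         ("A++", "B+"), ("B+", "D"), ("A-", "C"), ("A++", "C+")]

best = [BEST_SCALE[b] for b, _ in pairs]
weiss = [WEISS_SCALE[w] for _, w in pairs]
stat, p = wilcoxon(best, weiss)
print(f"mean Best bucket:  {sum(best) / len(best):.2f}")
print(f"mean Weiss bucket: {sum(weiss) / len(weiss):.2f}")
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.4f}")
```

A systematically positive paired difference on the common scale would correspond to the finding that the solicited agency's ratings run higher.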

 

The U.S., Japan in the Global Semiconductor Industry

Dr. Farahmand Rezvani, Montclair State University, NJ

Dr. Ahmat Baytas, Montclair State University, NJ

Dr. Serpil Leveen, Montclair State University, NJ

 

INTRODUCTION

As we enter the 21st century, the global electronics market is an approximately $1 trillion industry and is expected to double during the early years of this century. Technologically-related industries currently account for more than a ninth of U.S. domestic product, as compared to just a twentieth of U.S. domestic product only ten years ago (Simons 1995). Even though the invention of the transistor and the integrated circuit (IC), as well as the equipment to manufacture them, were almost totally a product of U.S. innovation (Spencer 1993), during the 1980's the U.S. ceded its global domination of the field to the Japanese. However, by the 1990's the U.S. market share had managed to rebound significantly, and this favorable trend appears to be continuing. This paper will explore the factors that resulted in the loss of world domination by the U.S. semiconductor industry and will also analyze the factors that contributed to its revival and the gradual American regaining of semiconductor leadership. The birth of the modern semiconductor industry was marked by the invention of the transistor at Bell Laboratories in 1947. The transistor consisted of tiny silicon crystals which were classified as "semiconductor" because an electric current could pass through them in one direction but not the other. Soon after, in 1958, the integrated circuit was invented at Texas Instruments; it represented a significant breakthrough because all of the functions that previously required distinct devices were now integrated into an under-layer of the semiconductor itself. The so-called miniaturization of circuitry today has reached a point where a complete road map of all of Manhattan can be placed on a chip the size of the head of a pin (Standard & Poor's 1995). These developments were accompanied by the emergence of the U.S. semiconductor industry in California's Silicon Valley in the late 1950's, where Shockley Semiconductor Laboratories and Fairchild Semiconductor laid the groundwork for today's high-tech industry. Clearly, without their early efforts, very few complex devices would be possible today. Many of the next generation of high-tech companies emerged from groups of original employees of Fairchild Semiconductor; indeed, in 1987, a genealogical chart produced by the Semiconductor Equipment Manufacturing Industry Association showed over a hundred companies with linkages back to Fairchild Semiconductor (Warshofsky 1989). It should be noted, however, that in a large number of American high-tech industries the relatively rapid initial rise to dominance by the U.S. was the result of a huge expansion in governmental scientific and engineering investment commencing during the Second World War and maintained during the ensuing three decades. This was especially the case in the semiconductor industry, in which the bulk of the initial demand for the latest and most advanced products came from the military and space programs. (1) For example, as Angels (1994) maintains, U.S. high-tech enterprises gained substantial commercial benefit, directly and indirectly, from defense-related research and development, at least through the 1970s. Although many economic historians stress the fact that the large US defense budget during the cold war era provided the semiconductor industry with a crucial subsidy to gain competitive advantage, it is rarely noted that it also played a large role in creating a sufficient pool of inexpensive trained labor from universities and other federally funded research institutions.
However, at the time there was virtually no government demand for semiconductor products in Europe and Japan. Instead, the emerging semiconductor industries in those countries could serve only the limited demand of private industrial and consumer users, and consequently they lagged behind the U.S. industry in the development of advanced semiconductor technology (Wilson, Ashton, Egan 1980). It was during the late 1970s that US semiconductor firms felt the impact of Japanese competition for the first time, largely because of their failure to sufficiently augment their production capacity to meet rapidly growing market demand. During the economic downturn following the oil shock of 1974-1975, there was an industry-wide decline in semiconductor sales. As a result, U.S. firms radically curtailed their investment in new plant and equipment. When the economy began its recovery in 1977-1978, with the subsequent increase in demand for semiconductor products, the ensuing shortages and delivery delays played a major role in the switching of American customers to Japanese semiconductor firms.

 

The Relationship of Personal Characteristics and Job Satisfaction: A Study of Nigerian Managers in the Oil Industry

Dr. John O. Okpara, Briarcliffe College, New York

 

ABSTRACT

The purpose of this study was to examine the effect of personal characteristics on the job satisfaction of Nigerian managers employed in the oil industry. Stratified sampling techniques were used to select the managers for this research. A total of 550 questionnaires were distributed, and 364 were returned, representing a 66.18% response rate. The key finding of this study is that job satisfaction is strongly associated with the personal characteristics of the managers surveyed. Results also show that older managers were overall more satisfied than their younger counterparts. Experience and education affect satisfaction with present job, pay, promotions, supervision, and coworkers. The findings provide management and human resources professionals with key information that can assist them in recruiting, rewarding, promoting, and retaining their workers. This paper offers realistic suggestions to the management of oil companies for how to enhance the job satisfaction of their most valuable workers, thus improving their efficiency and effectiveness. It also offers tools for establishing a comparable pay policy, creating equal opportunity for promotion, and providing a favorable work environment for all workers. Job satisfaction has been a major research area for scholars, practitioners, and organizational specialists, and it is one of the most frequently researched areas of workplace attitude. The consequences of job dissatisfaction include high turnover, lateness, absenteeism, poor performance, and low productivity. According to Al-Ajmi (2001), excessive turnover, absenteeism, and low productivity result in a waste of human power and unnecessary losses in productivity and profit. Studies conducted in the West have shown that many individual variables influence job satisfaction (Ang et al., 1993; Hulin & Smith, 1964; Lee & Wilbur, 1985). The majority of job satisfaction studies have been undertaken in the West; unfortunately, very little research has been done on this issue in Nigeria, and none at all on this specific topic. Nigeria is a rapidly developing country, and there is a need to understand the attitudes of workers, specifically managers in the oil sector. Determining the job satisfaction of these managers and developing strategies to enhance it could empower Nigerian managers to assume an active role in managing the oil industry effectively and efficiently; it could also help them remain satisfied and committed to their jobs. A few issues were examined more closely. First, the constructs and relationships that apply in the West were examined in order to determine whether they hold true in a non-Western context. Second, from a theoretical perspective, an increase in understanding was desirable, as was filling the void in the literature regarding the causes and consequences of job satisfaction in non-Western contexts, which may inspire further investigation in this area. Finally, from a practical point of view, there was a need to provide personnel managers in Nigeria with the information to make better decisions in terms of staffing, training, promotion, and retention of managers. Thus, this study deals with issues that are potentially useful for scholars and managers alike. Nigeria is the most populous country in Africa. It borders Benin in the west, Chad and Cameroon in the east, Niger in the north, and the Gulf of Guinea in the south. Nigeria accounts for approximately one-fourth of West Africa's population.
Less than 25% of Nigerians are urban dwellers, yet at least twenty-four cities have populations of more than 100,000. There are approximately 250 ethnic groups, giving the country a rich diversity. The dominant ethnic group is the Hausa-Fulani, the overwhelming majority of whom are Muslim. The Yoruba people are predominant in the southwest; more than half of the Yorubas are Christian and about one-fourth are Muslim, with the remainder following mostly traditional beliefs. The predominantly Christian Igbo are the largest ethnic group in the southeast. The language of communication is English, meaning that persons of different language backgrounds most commonly communicate in English. Two or more Nigerian languages are generally spoken in different areas of the country; however, Hausa, Yoruba, and Igbo are the most widely used Nigerian languages (CIA World Factbook entry on Nigeria, 2005). The petroleum-rich Nigerian economy, long hobbled by political instability, corruption, and poor macroeconomic management, is undergoing significant economic reform under the new civilian administration. Nigeria's former military rulers failed to diversify the economy away from over-dependence on the capital-intensive oil sector, which provides 20% of GDP, 95% of foreign exchange earnings, and about 65% of budgetary revenues. The largely subsistence agricultural sector has not kept up with rapid population growth, and Nigeria, once a large net exporter of food, must now import it. In 2000, Nigeria received a debt-restructuring deal with the Paris Club and a $1 billion loan from the IMF, both contingent on economic reforms. Increased foreign investment combined with high world oil prices should push growth to over 5% in 2000-01 (The World Bank Group, 2000).

 

A Fuzzy Logic Approach to Explaining U.S. Investment Behavior

Dr. Tufan Tiglioglu, Alvernia College, Reading, PA

 

ABSTRACT

This paper uses a non-linear fuzzy logic procedure to empirically investigate the links between the real interest rate and aggregate investment in the United States from 1959 to 2000. In an interesting paper, "A fuzzy design of the willingness to invest in Sweden" [1998], Tomas Lindström utilized a fuzzy logic approach to explain willingness to invest in Sweden during the period 1950-1990. I examine whether or not his results for Sweden can be replicated in the United States, focusing on both the real interest rate and its variability. The paper provides a brief overview of fuzzy set theory and logic, then discusses the Lindström model and results. It concludes with the results of this approach using interest rate, real output, and investment data from the United States. Fuzzy logic has been widely used by scientists, mathematicians and engineers, among others, as a means of designing decision and control systems where "rules of thumb" are easier to conceptualize and implement than precisely delineated decision-making criteria. This practice may result from the inherent complexity of the decision problem at hand, which makes analytical modeling difficult. In this vein, a highly complex system gives rise to considerable (non-stochastic) uncertainty, since the complexity itself makes it too difficult or costly to specify exact relationships among critical variables. Confronted with the necessity of making a decision, decision makers in these circumstances may opt to simplify the process into a series of rules of thumb. Economic decision makers are often faced with a high level of complexity, and thus uncertainty, relevant to their decision-making problem. Moreover, variables such as price or output can be thought of as low or high, without precisely defined lines of demarcation. Perhaps the concept of reservation price could be fruitfully treated in a fuzzy model. Fuzzy logic is well suited to modeling human processes of decision making in the context of complexity and/or lexical uncertainty. Applications of fuzzy logic to economic decision making would thus appear to be worth investigating. In "A fuzzy design of the willingness to invest in Sweden" [1998], Tomas Lindström utilized a fuzzy approach to explain willingness to invest in Sweden, focusing on both the real interest rate and its variability, during the period 1950-1990. In this paper, I investigate whether or not his results for Sweden can be replicated in the United States, using the fuzzy model that Lindström applied to Sweden. The paper first provides a brief overview of fuzzy set theory and logic, and then discusses the Lindström model and results. I conclude with the results of this approach using interest rate, real output, and investment data from the United States during the period 1959-2000. I use this period to exclude the effects of the September 11, 2001 terrorist attacks on investment decisions. Many mathematical disciplines deal with the description of uncertainty, such as probability theory, information theory, and fuzzy set theory. These theories can be classified by the type of uncertainty they treat. Statements using subjective categories play a major role in human decision-making processes. Even though these statements do not have quantitative content, the theory of fuzzy logic provides appropriate descriptions for these types of uncertainty. Fuzzy logic has been developed to model human decision and evaluation processes in algorithmic form.
Unlike fantasy and creativity, fuzzy logic can derive a solution for a given case out of rules that have been defined for similar cases. Fuzzy logic is a true extension of conventional Boolean logic (a multivalued or continuous logic) that allows for intermediate values between the Boolean values true and false. Therefore, fuzzy logic enables characterizations of statements by degrees or grades of truth. The degree to which a variable matches the linguistic concept of the term of a linguistic variable is called its degree of membership. The degree of membership can be represented by a continuous function called a membership function. The use of fuzzy sets defined by membership functions in logical expressions is called "fuzzy logic." A fuzzy set is essentially a generalization of a crisp set, which has clearly defined boundaries. The boundaries of crisp sets divide the elements into two disjoint groups: members and non-members. Unlike crisp sets, fuzzy sets blur these boundaries through the use of membership functions. The membership function for the fuzzy set A, defined over the universe X, is denoted μ_A(x), with domain consisting of the universe X and range [0,1]. Then, let the fuzzy set A be defined as the set of ordered pairs A = {(x, μ_A(x))}. Here μ_A(x) is interpreted as the grade of membership of element x in set A, taking any value over [0,1]. Since complete membership is represented by μ_A(x) = 1.0 and complete nonmembership by μ_A(x) = 0.0, the greater μ_A(x), the greater the truth of the statement that element x belongs to set A. Fuzzy sets and membership functions approximate some kind of intuition of the linguistic notion and provide an indication or tendency of corresponding linguistic terms. Thus, grades or degrees of membership are discussed here, rather than probabilities of membership as represented by conventional probability theory.
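To make the membership-function idea concrete, the sketch below defines trapezoidal membership functions for the fuzzy sets "low real interest rate" and "high real interest rate." The breakpoints are illustrative assumptions, not Lindström's calibration.

```python
# Hedged sketch: trapezoidal membership functions mu_A(x) in [0, 1] for the
# fuzzy sets "low real interest rate" and "high real interest rate".
# Breakpoints are illustrative assumptions, not Lindström's calibration.

def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
    """Membership rising over a..b, full over b..c, falling over c..d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

def mu_low_rate(r: float) -> float:
    # Fully "low" up to 1%; not "low" at all above 3%.
    return trapezoid(r, -100.0, -99.0, 1.0, 3.0)

def mu_high_rate(r: float) -> float:
    # Not "high" below 3%; fully "high" from 6% up.
    return trapezoid(r, 3.0, 6.0, 100.0, 101.0)

for r in [0.5, 2.0, 4.0, 7.0]:  # real interest rates, in percent
    print(f"r = {r:4.1f}%   mu_low = {mu_low_rate(r):.2f}   "
          f"mu_high = {mu_high_rate(r):.2f}")
```

A rule of thumb such as "if the real interest rate is low and its variability is low, then willingness to invest is high" can then be evaluated by combining such membership grades, for example by taking the minimum of the two grades for the fuzzy "and".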

 

Union Leaders' Value Systems: The Lack of Change Over Time and Scope

Dr. David Meyer, Central Connecticut State University, CT

Dr. William E. Tracey, Jr., Central Connecticut State University, CT

Dr. G. S. Rajan, Concordia University, Montreal, PQ

Vincent Piperni, Concordia University, Montreal, PQ

 

ABSTRACT

Two studies of union leaders' value systems were conducted two years apart: one of local union leaders, the other of national union leaders. England's Personal Value Questionnaire (PVQ) (1967) was modified to focus on the union as the organization being studied, and a measure of the priority of each value was also obtained. Analysis revealed no significant differences across time or scope of leadership. Union leaders are very pragmatic and much more socially concerned than managers. England, Agarwal, and Trerise (1971), comparing union leaders' value systems with managers' value systems, concluded that union leaders were moralistically oriented, whereas managers were pragmatically oriented. They suggested that union leaders occupying higher-level positions would be more pragmatic than their counterparts at lower levels. Since then, a number of articles have focused on managers' value systems. Whitely and England (1977) and England (1978) compared managers' value systems across cultures and countries. Lusk and Oliver (1974) measured the change in managers' values from 1966 to 1972 in order to test England's supposition that values are stable over time; their study supported that contention. The selection and development of union leaders differ from those of managers and possibly result in changing or evolving values. Herman (1998) points out that "union members usually distrust leaders who have not come up through the ranks" (p. 91). Union leaders were described by Holley, Jennings and Wolters (2001) as trying to "achieve something I personally valued" (p. 119) and as believing in the goals of the union. One of the questions that cries out for study is whether union leaders show the same stability of values over time that managers have shown. The environment that most affects union leaders was discussed by Miles and Ritchie (1968), who found that leadership values were more strongly affected by the union leaders' jobs than by the theoretical ideals of democracy and participation within the union. The importance of the job in forming a person's values was also discussed by England (1973, p. 2): "the requirements and constraints that the job of managing places upon the managers." We expect that as the requirements and constraints of the job change, the values of the person holding that job will change also. This "job orientation" of a person's values supports England's contention that the further a union leader is from bargaining unit work, the more pragmatic he or she will be. This brings us to the research questions. If the union leader is more sensitive to the economic environment than the manager, we would expect his or her values concerning economics to change as the economic environment changes. This, coupled with the election of the union leader, will show up as a change in the union leader's values. The first question is then: Is a change in economic conditions reflected by a change in union leaders' value systems? The second question is: Are union leaders at higher levels of the organization more pragmatic than their counterparts at lower levels? In order to better address union values, England's Personal Value Questionnaire (PVQ) (1968) was modified to focus on the union as the organization being studied. The PVQ was shortened by deleting the right, pleasant, or successful distinctions; instead, we substituted a priority measure for use within each value grouping. Two separate surveys were conducted with this PVQ, both in Canada.
The first, in the summer of 1977, a time of poor general economic conditions, covered all independent local union organizations. The second, in the summer of 1979, a time of better general economic conditions, covered all independent national union organizations. Questionnaires were sent to each union's chief officer. The response rate for 1977 was 59% (48 of 81); the response rate for 1979 was 40.7% (46 of 113). The difference between the two samples can be attributed to the use of a follow-up letter in the 1977 survey, whereas there was no follow-up letter in 1979. The only difference between the two samples' geographical distributions was in the province of Quebec, which had a response rate of 67% in 1977 and 26.5% in 1979. The problems caused by this difference should be minimal. A previous paper by Rajan and Grigoleitis (1978) examined independent local unions. Their conclusions were that these unions were conservative, less militant, and content with their status and their relationship with the company. No regional differences were reported. The results of the analysis are presented in Tables 1 and 2. Table 1 lists the mean importance attributed to each concept. Table 2 lists the relative priority of each concept within each concept grouping. Since there is a paucity of significant differences, the major focus of this discussion will concern the differences in rank order between groups. We will also look at the rankings themselves in order to gain insight into what union leaders' values are, and what meaning this might have for collective bargaining. Analysis will proceed first by each value grouping; then findings concerning the entire study will be discussed. Union Goals. The major differences are the rise in importance of Social Welfare and Political Power, and the drop in importance of Union Growth, between the two groups of union leaders. However, the only change in priority is the rise of Political Power. The priority of Union Growth is insignificant for both groups. This shows that although the concept is important, no effort is being expended in this area. These differences can be explained by the respective scope of each union group.

 

Collaborative Systemic Training (CST) Model in Los Angeles for Adult Learners

Dr. Deborah LeBlanc, National University, Los Angeles, CA

 

ABSTRACT

Enrollment management is vital to the survival of institutions of higher education. While enrollment management is not a direct part of the scope, duties, and responsibilities of faculty, faculty can play a critical and significant indirect role in the process of enrolling adult learners into colleges and universities. This study demonstrates a collaborative approach in Los Angeles that produced three major achievements during the 2003-2005 academic years: (1) SOBM-LA better served adult learners; (2) SOBM-LA provided greater collectivity between faculty and the student services unit; and (3) SOBM-LA enhanced faculty opportunities in the provision of quality community service using a collaborative method. The goal of the study, to provide greater internal (inreach) and external (outreach) opportunities for faculty and staff to better serve adult learners, was attained. Lastly, this CST study has shown that adult learners require activities that are meaningful and relevant. The topics that faculty presented were as follows: Sports Management, Career Management, and Time Management. Self-reports from those who attended the sessions revealed: (1) empowerment through the collaboration of faculty; and (2) positive group interactive presentations and discussions. National University is distinguishing itself "through its leadership in the field of adult learning through continued growth in improved effectiveness of operations, student support and academic quality" (NU, 2010, Strategic Direction One). New approaches to student enrollment management are essential to continued academic program growth, development, and vitality in meeting the needs of adult learners. This study was developed to provide a descriptive analysis of a team-building approach, utilized through collaboration, designed to increase student enrollment and enhance services for adult learners within the School of Business and Information Management at National University in Los Angeles during the 2003-2005 academic years. Findings and recommendations from this study can be useful in the following three areas: (1) to better serve adult learners; (2) to provide greater collectivity between faculty and the student services unit; and (3) to enhance faculty opportunities in the provision of quality community service using a collaborative method. The overview of chapter one includes the following sections: background of the study; statement of the problem; purpose; research questions; assumptions; delimitations; definitions; and summary. National University was founded in 1971 in San Diego, California. National University is a private institution accredited by the following: the Western Association of Schools and Colleges (WASC); the International Assembly for Collegiate Business Education (IACBE); and the Commission on Collegiate Nursing Education (CCNE); it is also approved by the California Commission on Teacher Credentialing (CCTC). National University is headquartered in La Jolla, California and offers 50-plus undergraduate programs in an array of disciplines in the fields of education, arts, sciences, mathematics, nursing, human services, criminal justice, public administration, computers, engineering, technology, and business at 30 locations throughout the State of California. "National University is dedicated to making lifelong adult learning opportunities accessible, challenging and relevant to a diverse population of adult learners" (NU Mission).
National University is the second-largest nonprofit private institution of higher education in the State of California; it has 30 campuses statewide and has been defining the "future of adult learning in California" for thirty-plus years. National University has 17,000 (full-time equivalent) students and a strong alumni base of more than 100,000 graduates. National University is known for its unique "one-course-per-month" format, which recognizes the demanding schedules of adult learners. Classes are held in the evenings to accommodate the unique needs of adult learners. This study was developed to provide a descriptive analysis of a team-building approach, utilized through collaboration, designed to increase student enrollment and to enhance services for adult learners within the School of Business and Information Management at National University in Los Angeles during the 2003-2005 academic years. The CST Model designed for this study was created through a collaborative faculty meeting in August 2003, where the business professors in Los Angeles brainstormed approaches to meeting a directive by then-School Dean Dr. Shahram M. Azordegan, who challenged the faculty to provide a community service and increase enrollment in Los Angeles during the 2003/2004 academic year. The challenge is a lack of connectivity between the faculty and staff: faculty are not engaged in enrollment management activities, and staff are not involved in academic affairs. Faculty need to better understand the processes and procedures for student enrollment into degree programs, and staff need to better understand the degree goals and offerings. There is a need for both inreach and outreach efforts between faculty and staff to better serve adult learners. New approaches to student enrollment management are essential to continued growth and vitality in meeting the needs of adult learners. There is a need for both internal systemic training and external community outreach efforts between faculty and staff to enhance the overall quality of service to adult learners. The purpose of the study was to utilize a collaborative approach in Los Angeles designed to fulfill three major accomplishments: (1) to better serve adult learners; (2) to provide greater collectivity between faculty and the student services unit; and (3) to enhance faculty opportunities in the provision of quality community service using a collaborative method.

 

On Financing with Convertible Debt: What Drives the Proceeds from New Convertible Debt Issues?

Dr. Camelia S. Rotaru, University of Texas – Pan American

 

ABSTRACT

Despite extensive research, it is not clear why companies issue convertible securities and what drives the variation of proceeds on the convertible market. In this paper I use a sample of 509 convertible securities issued between 1980 and 2003 to show that companies do not issue convertible securities to mitigate adverse selection costs. Rather, the variation of convertible proceeds suggests that managers time the convertible issue for periods when investors are optimistic. This is consistent with the findings of Loughran et al. (1994) and Lowry (2003), which show that IPOs are issued during periods of high investor optimism. The dollar volume of outstanding convertible debt securities has grown tremendously over the past several years. (1) By 2001, the size of the US convertible market had reached $200 billion. However, not all companies issue convertible securities, and despite extensive research, we still do not know why only some companies choose to finance through convertible securities. Some authors suggested that companies issue convertible securities in order to reduce bondholder-stockholder agency costs (Green, 1984; Brennan and Kraus, 1987), or to hedge against the impact of uncertain risk (e.g., Brennan and Schwartz, 1987), while others argued that convertible issuers are trying to reduce adverse selection costs and financial distress (Stein, 1992), or to take advantage of time-varying selection costs (Choe et al., 1993; Bayless and Chaplinsky, 1996; Lewis et al., 2003). This paper's contribution to the existing body of literature is that it shows that the convertible market is driven by investor sentiment, rather than by adverse selection. The impact of investor sentiment on the IPO market has been extensively analyzed, but previous research on convertible securities ignores investor sentiment as a potential factor driving the convertible market. For the IPO market, Lee et al. (1991) conclude that changes in investor sentiment significantly affect IPO volume over time. For SEOs, Jindra (2001) suggests that firms are significantly more likely to have seasoned equity offerings when they are overvalued. In previous work, Stiglitz and Weiss (1981) and Lewis et al. (2001) suggest that companies issue convertible debt because high adverse selection costs leave them rationed out of the debt and/or equity markets, while Lewis et al. (2003) argue that convertible debt offerings should be clustered in periods when debt-related problems are more severe, since this is when they would be most appropriate. Yet the argument that companies issue convertible securities to mitigate high adverse selection costs has received limited support in the literature. In this paper, I use the fluctuations in the proceeds raised through convertible securities to show that managers time convertible issues for periods when investors are optimistic. Despite the tremendous growth of the convertible market in the post-1990 period and the early 2000s, and despite the large body of literature on convertible securities, the variation in new convertible debt volume has received no attention and our understanding of these fluctuations is very limited. For example, on February 1, 2001, the Wall Street Journal stated: "In the aftermath of last year's stock-market slide, convertibles remained one of the healthiest products for Wall Street firms.
That is because these hybrid securities offer the safety of a bond, with features like annual interest payments, with the upside potential of a stock, since the investor is able to swap his securities for common stock at a predetermined premium to current market prices, providing exposure to any rise in stock prices." (2) Moody's, however, argues that companies that issue convertible securities are financially shakier than those that have shunned this market. (3) To add to the confusion surrounding convertible securities, it is interesting to note that such securities generally do not provide the same protections commonly afforded to bondholders. Convertible securities are generally subordinated to all other debt securities, as well as other obligations of the holding company, which often increase over time. Furthermore, convertible indentures often contain virtually none of the protective covenants offered to traditional bondholders, especially in the below investment-grade categories. Given the design shortcomings of convertible securities, it is therefore interesting to investigate what has caused the tremendous growth of the convertible market over the last decade. In this paper I test whether the variation in convertible market volume is caused by convertible issuers being companies rationed out of the capital markets because of adverse selection problems, or whether market sentiment rather than adverse selection is what drives the convertible market. Table 1 shows the average coupon and average conversion premium offered on both investment-grade and below investment-grade issues in the pre-1990 period versus the post-1990 period. This table indicates that the convertible market allows high-yield companies (i.e., companies rated below investment-grade and non-rated companies) to raise capital at lower coupon rates than investment-grade firms. However, it is interesting to note that during the pre-1990 period the coupon rate and conversion premium offered on both investment-grade and speculative-grade issues are approximately the same, while in the post-1990 period the average coupon rate decreased for both types of firms (reflecting lower market rates), but the conversion premium for below investment-grade issues appears to have increased. (4) The fact that convertible issuers pay a lower coupon rate in the post-1990 period, while the conversion premium is higher in the post-1990 period than in the pre-1990 period, indicates that the convertible market in general, and the conversion premium in particular, may be a mere reflection of investor optimism.
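To make the conversion-premium comparisons concrete, here is a minimal sketch of the standard arithmetic, assuming the textbook definition (the mark-up of the conversion price over the current stock price); the bond terms and prices below are hypothetical and are not drawn from the paper's sample.

    # Hypothetical illustration of a conversion premium; not data from the paper.

    def conversion_price(face_value, conversion_ratio):
        """Effective price per share implied by the bond's conversion ratio."""
        return face_value / conversion_ratio

    def conversion_premium(conv_price, stock_price):
        """Premium (as a fraction) of the conversion price over the stock price."""
        return conv_price / stock_price - 1

    # A $1,000 bond convertible into 20 shares implies a $50 conversion price;
    # with the stock trading at $40, the conversion premium is 25%.
    cp = conversion_price(1_000, 20)      # 50.0
    print(conversion_premium(cp, 40))     # 0.25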

 

Do the Stars Foretell the Future?: The Performance of Morning Star Ratings

Philip S. Russel, Philadelphia University, PA

 

ABSTRACT

The mutual funds industry has emerged as a major player in the financial system, with net assets of over $6 trillion, serving nearly 100 million investors. The latest institutions to capitalize on the popularity of mutual funds are the mutual fund rating agencies. Naïve investors are increasingly relying on "star ratings" provided by mutual fund rating agencies to guide their selection of mutual funds. However, do mutual fund ratings provide any information of value to investors? We investigate this question by evaluating the performance of the premier mutual fund rating agency, Morningstar. Mutual funds have become a popular avenue for investors, and the net assets of mutual funds have grown exponentially from a mere $17 billion in 1960 to over $8 trillion in 2005. The number of mutual funds has grown to more than ten thousand, exceeding the number of stocks listed on the organized exchanges and making the selection of mutual funds an onerous task for the average investor. In response to investor demand for a simple strategy to screen the numerous funds, several independent agencies have started offering ratings on mutual funds, Morningstar being the most prominent among them. Morningstar rates mutual funds on a scale of 1 to 5 stars, with 5 stars being the best. Other organizations (such as Lipper and Value Line) also provide similar rating services. While the rating methodology varies from organization to organization, the ultimate purpose is the same: to simplify investors' decision-making process by providing a composite measure of mutual fund performance. Mutual fund ratings, though based on seemingly complex analysis, are not necessarily credible in forecasting future performance. Indeed, they might be misguiding investors, as five stars do not necessarily guarantee superior future performance. While Morningstar does not claim to forecast performance, anecdotal evidence suggests that naive investors are increasingly relying on mutual fund ratings to make their investment decisions. Also, advertisements reveal that mutual funds are aggressively promoting their star ratings to attract investors. While much has been written in the academic literature about the performance of mutual funds, not much research has been conducted on the performance of mutual fund rating agencies. With the proliferation of rating agencies and growing investor interest, it is important and timely to investigate the performance of rating agencies. This study is designed to provide empirical evidence on the performance of mutual fund rating services by analyzing the performance of the premier mutual fund rating agency, Morningstar. The results of this study provide important insights with significant implications for investors, financial advisers, mutual fund managers, and the rating agencies. Morningstar has been rating mutual funds since 1985. The rating system classifies mutual funds into four groups (domestic equity, international equity, taxable bond, and municipal bond) and recognizes the best performing funds in each group. All funds are evaluated every month based on risk and return measures and given a score that is plotted on a bell curve. The scores are then used to assign stars within each investment group: the top 10% receive five stars, the next 22.5% receive four stars, the middle 35% receive three stars, the next 22.5% receive two stars, and the bottom 10% get one star. The stars are thus a composite measure of risk-adjusted performance based on historical data.
The ratings are calculated for 3, 5, and 10 years, and funds are also assigned an "overall" rating. The overall rating is computed based on weights of .2, .3, and .5 for the 3-, 5-, and 10-year ratings, respectively, for funds with more than 10 years of data, and weights of .4 and .6 for the 3- and 5-year ratings for funds with more than 5 but less than 10 years of data. For funds with less than 5 years of data, the overall rating is based exclusively on the 3-year rating. Blume (1998), Sharpe (1998), and Morey (2002) provide analyses of the properties of the Morningstar rating and discuss some of the limitations of the Morningstar rating methodology. Blume points out that by combining load and no-load funds in the same group, the rating methodology is biased in favor of the no-load funds. For example, he reports that out of the 164 funds receiving an overall rating of 5 stars, 123, or 75%, were no-load funds. Furthermore, when the funds are re-classified into load and no-load fund groups, only 74 of the no-load funds receive 5 stars, while the number of load funds receiving 5 stars increases from 41 to 90. A similar pattern is seen for the 4-star rating category as well. Blume also notes that the domestic equity fund group includes a wide range of funds (diversified domestic equity funds, sector funds, and miscellaneous funds). Since sector funds tend to underperform under the Morningstar rating methodology, other funds in the group are given a boost. For example, more than 80 percent of the sector funds received three or fewer stars. Blume argues that rating such a wide group of funds may not be useful for investors wishing to analyze funds within a specific category (such as no-load, domestic equity funds). Sharpe (1998) criticizes the Morningstar risk-adjusted rating system and shows that any attempt to identify funds based on ranking within a peer group is likely to lead to sub-optimal portfolio decisions. Recognizing that classifying mutual funds into four groups is extremely broad, Morningstar changed its rating system in July 2002 by classifying mutual funds into 48 investment groups. This leads to better peer comparison, as the top 10% of funds within each group receive 5 stars.
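A minimal sketch of the allocation and weighting rules just described, in Python. The percentile cutoffs (10/22.5/35/22.5/10) and the .2/.3/.5 and .4/.6 weights come from the text; the fund scores and the tie-handling are hypothetical simplifications.

    # Sketch of the star-assignment rules described above; fund data hypothetical.

    def assign_stars(funds):
        """funds: dict of fund name -> risk-adjusted score.
        Top 10% get 5 stars, next 22.5% get 4, middle 35% get 3,
        next 22.5% get 2, bottom 10% get 1 (cumulative cutoffs below)."""
        ordered = sorted(funds, key=funds.get, reverse=True)
        n = len(ordered)
        cutoffs = [(0.10, 5), (0.325, 4), (0.675, 3), (0.90, 2), (1.01, 1)]
        return {name: next(s for c, s in cutoffs if rank / n < c)
                for rank, name in enumerate(ordered)}

    def overall_rating(r3, r5=None, r10=None):
        """Overall rating per the weights in the text: .2/.3/.5 for the 3-, 5-,
        and 10-year ratings; .4/.6 for the 3- and 5-year; else 3-year only."""
        if r10 is not None:
            return 0.2 * r3 + 0.3 * r5 + 0.5 * r10
        if r5 is not None:
            return 0.4 * r3 + 0.6 * r5
        return r3

    print(overall_rating(3, 4, 5))   # 4.3 for a fund with a full 10-year record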

 

Mobility of Technology Positioning with Change in Patent Portfolio

Dr. Shann-Bin Chang, Ling-Tung University, Taiwan

Dr. Kuei-Kuei Lai, National Yunlin University of Science and Tech., Taiwan

Shu-Min Chang, National Yunlin University of Science and Tech. & Nan Kai Institute of Tech., Taiwan

 

ABSTRACT

Patents are an important indicator of R&D performance. A researcher can determine the technological competence of an enterprise by examining the patents it holds and so determine its technological position. Over time and with a changing environment, a firm may change strategies and improve its technological position. The purpose of this study is to discuss the impact of changes in the patent portfolio on the mobility of technology positioning. This study examined 37 firms that are representative of the business method patents of US Class 705. These firms all experienced the three stages of the Internet life cycle. Five technology groups were formed based on cluster analysis. This study also examined possible group movements among the 37 companies during the three stages. This study combined these five technology groups into three technology orientations: postage metering technology, information and Internet technology, and business model development technology. Furthermore, this study discusses the trend of technology group development and makes suggestions regarding how to develop "co-petition" strategies between or within technology groups. Patents are important indicators of a firm's R&D performance. Some patent-analysis research focuses on particular industries, such as the machine tool and electronics industries (Ernst, 1997; 1998). Other research utilizes patent data to evaluate a firm's capacity for technological development and innovation, so that companies can plan their technology strategies accordingly (Mogee, 1991; Archibugi & Pianta, 1996). Not only can a corporation's technological competence be analyzed through its patents, but patents also bear on the mobility of its position and its strategic group in high-tech industries, e-commerce, and business methods (Stuart, 1998; Lai, Chang and Wu, 2003). Indeed, a company's objectives may be adjusted because of a change in environment, which also influences its technology strategy. That is why Stuart and Podolny (1996) used data from 1982, 1987, and 1992 to observe position changes in the semiconductor industry. This paper scrutinizes the changes in the patent portfolios of 37 firms during three Internet life cycle stages: the Pre-Internet stage (before 1993), the Internet Growth stage (1994-1998), and the Internet Mature stage (after 1998). These firms are representative of the Business Method patents of US Class 705. In addition, this paper applies a statistical methodology to distinguish several technological groups and discusses group shifting at different points in time. It is crucial to understand a company's change in technology position in relation to its technology strategy. The remainder of this paper is laid out as follows. Section 2 reviews the present condition of business method patents and their relationship to the development of the Internet. Section 3 presents the patent analysis, which is composed of patent searching, the statistical methodology applied for grouping, and the validity of the cluster analysis. In section 4, longitudinal analysis is used to examine the changed and unchanged positions during the three stages. Finally, conclusions are drawn in section 5. Since Internet technology began sweeping the world, every company has come to view it as a new stage on which to compete in the 21st century. Business methods based on networking technologies have become the weapons of this battle for success.
Following the announcement of the 'Examination Guidelines for Computer-Related Inventions' by the US Patent and Trademark Office (USPTO, 1996), the USPTO published a business method patent White Paper (2000) named 'Automated Financial or Management Data Processing Methods'. Additionally, several legal precedents of the U.S. Court of Appeals for the Federal Circuit (CAFC), such as State Street Bank & Trust Co. v. Signature Financial Group, Inc. and Amazon.com v. Barnes & Noble, clearly illustrated that business methods can be patented. These developments have made competition for patenting business methods very intense. To support the management of business method patents, and even before the announcement of the business method patent White Paper, the U.S. patent office's 2760 work group had already modified the definition of U.S. patent Class 705 in March 2000 under the title 'Data processing: Financial, business practice, management, or cost/price determination'. The number of US Class 705 patents in the USPTO database increased rapidly from 1996 to 2000: 240, 335, 704, 891, and 1,010 patents in each successive year. Therefore, this study chooses the business method patent as its research subject. Most studies of business method or e-commerce patents approach them in terms of legislation. For instance, Grossman & Oliver (2000) proposed how to challenge business method patents. Furthermore, Lyon & Vanderlaan (2001) criticized the patentability of business methods. Moreover, Lai et al. (2003) applied statistical analysis to distinguish strategic groups based on the sub-classes and volumes of business method patents, and discussed the differences among groups. However, the study of Lai et al. (2003) did not consider the environmental and temporal shifts that may influence technological position. Accordingly, more attention is paid to longitudinal analysis in this study. The key factor in longitudinal analysis is the choice of time sections.
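As an illustration of the grouping step named above, the following is a minimal sketch of hierarchical cluster analysis on patent-portfolio data. The firm rows, subclass counts, Ward linkage, and two-cluster cut are all hypothetical assumptions for illustration; the paper's own statistical procedure and data are not reproduced here.

    # Hypothetical sketch: cluster firms by the mix of their Class 705 subclasses.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Rows: firms; columns: patent counts in selected (hypothetical) subclasses.
    portfolio = np.array([
        [12, 0, 3, 1],    # firm A (hypothetical)
        [10, 1, 4, 0],    # firm B
        [0, 8, 1, 9],     # firm C
        [1, 7, 0, 11],    # firm D
    ])

    # Normalize rows so grouping reflects portfolio mix rather than size.
    shares = portfolio / portfolio.sum(axis=1, keepdims=True)

    Z = linkage(shares, method="ward")             # hierarchical clustering
    groups = fcluster(Z, t=2, criterion="maxclust")
    print(groups)                                  # e.g., [1 1 2 2]

Repeating the same grouping on each time section's portfolios, and comparing a firm's group labels across sections, is one way to make the "group shifting" described above observable.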

 

Managing Corporate Legitimacy: Nonmarket Strategies of Chinese Firms

Dr. Zhilong Tian, Huazhong University of Science & Technology, Wuhan, PRC

Haitao Gao, Huazhong University of Science & Technology, Wuhan, PRC

 

ABSTRACT

In recent years, Chinese firms have met many nonmarket obstacles in the global market. This fact indicates a deficiency in the legitimacy management of Chinese firms. However, in spite of a hostile institutional environment, Chinese private firms have changed their political status successfully and expanded the space in which they operate. The article identifies nine legitimacy-building strategies that Chinese private firms have employed in China's transitional economy, compares them with the legitimacy strategies of western firms, and discusses the implications for Chinese firms going abroad. On September 17, 2004, leather shoes of Wenzhou shoe firms were burned by local extremists in Elche, Spain, causing a loss of about $984,000. It was reported that many similar events had happened in the internationalization process of Chinese firms in the past several years. The incident drew public attention. It is widely believed that the Elche incident happened because a few extremists tried to demonstrate their hatred through a violent event, hoping the government would pay more attention to them, as the conventional shoe industry in Spain was facing great pressure from fierce international competition. This is true, but we can find another explanation of the conflict in the news reports: the shoe businessmen from Wenzhou did not conform to the prevailing norms, evading duties, selling fake commodities, exploiting workers, not conforming to local schedules, and so on, which led to the disgust and resentment of the local people. Moreover, they did not communicate with the local people. Thus, the incident happened. The incident raises an unnoticed but very fundamental topic: corporate legitimacy. The fact that Chinese firms have met many nonmarket obstacles in the global market indicates the deficiency of legitimacy management among Chinese firms. But things are different for private firms within China. In spite of a hostile institutional environment, Chinese private firms have changed their political status successfully and expanded the space in which they operate. Rather than solely establishing their products and reputation in the marketplace as western firms do, private businesses in China are mainly pursuing a legitimate status, particularly a legitimate political status. In a capitalist country with a market economy, this is not a major issue worth discussing: the determination of a firm's degree of legitimacy is not a problem in capitalist countries because private business is both legal and in harmony with the prevailing ideologies of these countries, and therefore it has the status of full legitimacy. The article identifies nine legitimacy-building strategies that Chinese private firms have employed in China's transitional economy, based on interviews with the top managers of Chinese private firms, and compares them with the strategies of western firms. We then discuss the implications for Chinese firms going abroad. The term legitimacy is one of the key concepts in political science and is usually used in theoretical analysis of the validity of power associated with a ruler, a government, a regime, or an authority. The term can also be used in the study of corporate legitimacy, a small branch of management concerned with the appropriateness and ethical standards of business activity. Legitimacy management has its intellectual roots in both sociology and management, and thus draws on institutional theory, resource dependency theory, and impression management.
The term legitimacy most commonly refers to the right to exist and perform an activity in a certain way. Lacking legitimacy, the ability of an organization to pursue its goals and accumulate resources can be substantially reduced. Whilst in the past a firm's profits were viewed as an all-inclusive measure of legitimacy, there seems to be a movement away from this view. Matthews (1993) indicates that organizational legitimacy does not arise from merely making a profit and abiding by legal requirements. Instead, reference to the prevailing norms and values of society is fundamental in ensuring that an organization is bestowed legitimacy. Scholars have found this general definition of legitimacy too broad, and have developed notions of organizational legitimacy. An authoritative definition was given by Suchman (1995): "Legitimacy is a generalized perception or assumption that the actions of an entity are desirable, proper, or appropriate within some socially constructed system of norms, values, beliefs, and definitions" (p. 574). Thus, legitimacy is not an attribute of an enterprise; it is bestowed or offered by its constituents (Perrow, 1970). Legitimacy indicates that the role of a firm in the institutions of society is desirable; it helps to attract resources and the continuous support of stakeholders. The notions of legitimacy and social contracts are inexorably intertwined. The social contract is the basis for determining the legitimacy of any interaction. Relating specifically to business, Davis (1983) argues that through a social contract "society has entrusted to business large amounts of society's resources to accomplish its mission, and business is expected to manage these resources as a wise trustee for society" (p. 95). Deegan and Rankin (1996) state that a failure to comply with societal expectations may lead to a revocation of the contract. The firm then risks sanctions imposed upon it by society. Within the existing literature, there are three broad types of legitimacy, which might be termed pragmatic legitimacy, moral legitimacy, and cognitive legitimacy (Suchman, 1995). Pragmatic legitimacy rests on the self-interested calculations of an organization's most immediate audiences.

 

Effort Analyzed by the Balanced Scorecard Model

Jui-Chi Wang, Hsing-Wu College, Taiwan

 

ABSTRACT

The Balanced Scorecard (BSC) model requires corporations to evaluate their organizational performance from four different perspectives—financial, customers, internal businesses, and learning and growth. Its utility lies in the prioritization of key strategic objectives that can be allocated to these four perspectives and the identification of associated measures that can be used to evaluate organizational progress in meeting the objectives (Kaplan & Norton, 1992, 1993). Through subsequent modifications and improvements, researchers and business specialists have found that the BSC can be used as an effective strategic management tool. More specifically, by determining the existence of strategic linkages between the strategic objectives and measures of the four perspectives, managers can take into account both the organizational objectives and the business processes in their creation of a BSC. Therefore, the BSC can not only be used as an evaluation of the organization's performance, but also to manage business processes within the organization (Cobbold & Lawrie, 2002). A case study analysis of the business re-engineering efforts of two high-tech companies—Compaq and Acer—was conducted. The business re-engineering efforts of the corporations, which led to the alignment of their work processes with organizational goals, were analyzed within the context of the four perspectives of the BSC. It was evident that the BSC could be used as a strategic management tool. The presentation of the business re-engineering efforts within this model offered a clear overview of the performance of the two corporations and showed how they overcame their problems by forging connections between their organizational goals and their business processes. Many corporations have begun to focus their attention on integrated strategic management tools that link performance measurements to organizational management due to increasing competition and globalization (Hannula, Kulmala, & Suomala, 1999). Originally developed by Robert Kaplan and David Norton of the Harvard Business School in 1992, the Balanced Scorecard (BSC) is designed to enable organizations to formulate strategic goals and associated measures (Andersen, Lawrie, & Shulver, 2000; Rigby, 2001). Using this comprehensive strategic management tool that encompasses both financial and non-financial measures, managers can obtain information about how their organizations have fared in integrating their vision and strategies with the organizational performance based on specific metrics (Kaplan & Norton, 1992, 1993; Missroon, 1999). At the most basic level, the development of the scorecard begins with the establishment of the organization's "vision." Based on their understanding of the organization's structure, company managers then determine specific strategies that should be formulated and implemented to realize that vision. The next level of planning involves the identification of organizational activities that are derived from these strategies. In the final stage, metrics that can be used to accurately measure the performance of the organization in the specific areas are determined (Missroon, 1999). In this study, the application of the BSC model to guide the re-engineering of business processes will be demonstrated. The research study will present a comparative analysis of these corporations' process reengineering efforts within the context of the BSC model.
This analysis will thus indicate whether the Balanced Scorecard model is an effective strategic management tool that can integrate high-tech companies' efforts to link their pursuit of strategic goals and their daily operations. As corporations, especially those in the high-tech sector, confront the challenges of increased competition and a highly unpredictable business environment, they are seeking new ways to evaluate performance and better meet their targeted goals. For many decades, traditional performance management techniques have been used to measure the organization's performance, for example, key financial variables such as revenue generated, sales volume, and gross and net profit. However, researchers, analysts, and senior management have long since recognized the inadequacies of these tools, especially with regard to the long-term strategic management of their organizations (Missroon, 1999). Among the various new management tools that have been devised, the BSC has been found to be a highly effective tool for helping corporations in their decision-making processes, for several reasons. The BSC not only allows managers to assess past performance, but also enables them to address specific problems to enhance the firm's future performance (Missroon, 1999). Furthermore, because it incorporates non-financial and financial measures in one report, the BSC offers detailed information that cannot be represented by financial measures alone (Kaplan & Norton, 1992, 1993). In light of its potential effectiveness, the BSC model continues to be revised and updated in order to enhance its applicability to different corporations and operating environments. Due to the vagueness of the initial concept, the application of the BSC model has been subject to criticism by some early adopters. Furthermore, the difficulties of selecting appropriate strategic goals and associated measures for the evaluation of the organization's performance still present considerable difficulties for users (Cobbold & Lawrie, 2002). The purpose of this study is to utilize the BSC to analyze the business process re-engineering efforts of high-tech companies—Compaq, its strategic partner Hewlett-Packard, and Acer Computer—to determine whether it is an effective strategic management tool. This study deliberately focuses on the business re-engineering efforts because in these initiatives, corporations seek to determine how they can modify their business processes and other aspects to increase their success in fulfilling organizational objectives.
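To make the scorecard structure concrete, here is a minimal sketch of a BSC as a data structure: objectives allocated to the four perspectives, each paired with a measure and a target. Every objective, measure, and target shown is a hypothetical placeholder, not taken from the Compaq or Acer cases.

    # Hypothetical scorecard: four BSC perspectives, each with
    # (objective, measure, target) entries where higher values are better.
    scorecard = {
        "financial":           [("Grow revenue", "YoY revenue growth", 0.10)],
        "customer":            [("Improve satisfaction", "Net promoter score", 45)],
        "internal_business":   [("Improve delivery", "On-time delivery rate", 0.95)],
        "learning_and_growth": [("Build skills", "Training hours per employee", 40)],
    }

    def off_target(scorecard, actuals):
        """List (perspective, objective) pairs whose actual value misses the target."""
        return [(p, obj)
                for p, items in scorecard.items()
                for obj, measure, target in items
                if actuals.get(measure, target) < target]

    actuals = {"YoY revenue growth": 0.12, "Net promoter score": 38,
               "On-time delivery rate": 0.97, "Training hours per employee": 25}
    print(off_target(scorecard, actuals))
    # [('customer', 'Improve satisfaction'), ('learning_and_growth', 'Build skills')]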

 

An Empirical Study of Using Derivatives on Multinational Corporation Strategies in Taiwan

Yi-Wen Chen, Hsing Wu College, Taiwan, R.O.C.

 

ABSTRACT

This study is very much a preliminary one, and its conclusions are arrived at through open-ended interviews conducted in conjunction with an analysis of the literature on the derivatives market. At the same time, it sets up a solid basis for (a) laying out the parameters and boundaries of the given field of study and (b) stimulating further studies. It also seems to agree with most of the major studies in the literature with reference to the importance of the derivatives market around the world and its effect on firms of a particular size (Bryan, 1993; Steinherr, 1999; Fornari & Jeanneau, 2004). The study makes fairly clear what must be done in Taiwan with respect to improving the chances of its firms to compete globally: improve the financial derivatives market through further deregulation and a less hands-on attitude on the part of the government. In this way, Taiwan's economy can become more integrated with that of the rest of the world (Chelley-Steeley, 2003). While past performance is no indication of future trends, it is a well-known fact that traditional financial services have undergone massive changes in the last 30 years, thanks in part to rapidly developing technologies, demographic shifts, economic globalization, the opening of dormant and emerging markets, and increased competition among financial institutions. These forces have led to dramatic "change in how financial firms make money; in how and by whom they are regulated; in where they raise capital; in which markets they serve; and in what role they play in society" (Bryan, 1993, p. 59). There is little doubt that derivatives, defined as "financial contracts whose values depend on—and are derived from—the value of an underlying asset, reference rate, or index" (Bullen and Portersfield, 1994, p. 18), have become extremely important in the world financial markets. In fact, some argue that they have become indispensable. It is generally argued that markets, countries, regions, and individual firms looking to capitalize on globalization and the freer flow of financial instruments can ill afford to ignore the derivatives market if they want to take full advantage of all the opportunities to maximize their profit and expansion potential. As with any financial instrument, even one designed to spread risk more evenly, there are potential downsides, downsides that can quickly compound if care is not taken. Risks fall into two categories: those experienced by individual firms and those affecting the financial system as a whole. Among the individual risks are credit, default, legal, market, liquidity, and/or management risk. Systemic risk has to do with increased competition, greater linkages across the board, less disclosure of financial information through the use of so-called off-balance-sheet transactions (a la Enron and WorldCom et al.; see Barreveld (2002); Mulford and Comiskey (2002); Holtzman et al. (2003); Ketz (2003)), and a more rapid response to market problems and destabilization. Despite the known risks, any firm that wishes to compete at the international level cannot do so without becoming involved in the financial derivatives markets. This is particularly true in emerging markets such as those found in East Asia in general and Taiwan in particular. These economies are in many cases growing more rapidly than mature markets. With the linking up of the global capital market and the increased securitization of funds, the market in derivatives and the linkages among the various financial instruments can only increase.
Individual firms in these markets need to take advantage of these instruments or face being left behind. Countries and regions need to find ways to enter these markets in ways that minimize the risks and maximize the potential benefits. The aim of the research is to examine the relationship between the strategies of multinational corporations and the financial derivatives markets in Taiwan. The study aims to examine three specific types of financial behavior among Taiwanese enterprises. In contrast to most prior studies, which have analyzed the average behavior of enterprises as a whole, this one will not only divide Taiwanese enterprises into three separate categories—multinational firms operating with branches in Taiwan, local firms operating as multinationals with branches outside Taiwan, and purely domestic firms—but will also analyze and compare the determinants and impacts of adopting different types of internationalization strategies, to see whether an optimized strategy can be worked out or a model created from such a strategy. The purpose of the study is to provide potential explanations for the high utilization ratio of derivatives in multinational corporations. There is no doubt that derivatives are among the most often used types of financial instruments today. Determining why that is so, and the fit between derivatives and businesses, may help identify the types of financial derivatives most suitable for different types of Taiwanese businesses. The research question examined in this paper is: Given that becoming involved in the international financial derivatives markets is now and will continue to be a key to the sustained growth of both Taiwanese firms and the island's economy, what is the best way for Taiwanese firms to go about maximizing their involvement while minimizing their risks?
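To illustrate the definition quoted above, that a derivative's value is derived from an underlying asset, here is a minimal sketch of two textbook payoff functions; the strikes and spot prices are hypothetical and serve only to show how the payoff depends on the underlying.

    # Hypothetical payoff functions for two basic derivatives.

    def forward_payoff(spot_at_maturity, delivery_price):
        """Long forward: gains when the underlying ends above the agreed price."""
        return spot_at_maturity - delivery_price

    def call_payoff(spot_at_maturity, strike):
        """Long call option: the right, but not the obligation, to buy at the strike."""
        return max(spot_at_maturity - strike, 0.0)

    for spot in (80.0, 100.0, 120.0):
        print(spot, forward_payoff(spot, 100.0), call_payoff(spot, 100.0))
    # The option caps the downside at zero; the forward does not --
    # one reason derivatives can both spread and concentrate risk.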

 

Market Entry Patterns: The Case of European Asset Management Firms in the U.S.

Dr. Jafor Chowdhury, University of Scranton, Scranton, PA

 

ABSTRACT

The process theory of internationalization posits that the foreign expansion process of firms follows an incremental and sequential path. However, the entry patterns of a fairly large sample of European asset managers establishing a presence in the U.S. market in the last two decades show that the entrants took a predominantly acquisitive approach, skipping the intermediate steps in the expansion process. The objective of this study is mainly two-fold: first, to describe the firms' entry patterns with regard to their choice of market entry vehicles; second, to explain the entry patterns in terms of the internal factors prompting the firms to deviate from the process theory's predicted path. The implications of the findings of this study for both theory and research are explored. The process theory of internationalization posits that the foreign expansion process of firms generally follows an incremental and sequential path (Johanson & Wiedersheim-Paul, 1975; Johanson & Vahlne, 1977). However, the entry behaviors of a sample of 54 European asset management firms engaging in 152 entry incidents in the U.S. market during the period 1984-2004 show that the entrants took a predominantly acquisitive approach. To a degree, the entrants employed all three available entry modes—build (internal expansion), buy (acquisition), and partner (strategic alliance). However, buying was by far the most frequently utilized entry mode, accounting for nearly three-fourths of all entry incidents. In terms of deal value and assets involved, the acquisition incidents represent significantly larger transactions than those involving either building or partnering. In addition, partnering was used mostly as an adjunct to buying for tapping into certain location-specific resources and capabilities that the entrants needed for bolstering their European and global competitive positions. The preponderance of acquisitions among the incidents implies that the entrants skipped some intermediate steps in their expansion process to accelerate the speed of their market entry. Overall, the observed entry patterns are not consistent with the predictions of process theory. This study makes no attempt to test process theory empirically. Instead, by assuming that the theory explains the "pure" or "most basic" case, a research question is posed that has largely been neglected in the extant literature: What factors enable entrants to deviate from the process theory's predicted path? This paper focuses solely on the internal (i.e., firm-specific) factors prompting the European firms to pursue a largely acquisitive approach in accessing the U.S. market. The asset management industry is a vast field in terms of the number of firms competing in the market, the size of the investor assets it handles, and the range of critical value-added services it offers to investors. By some estimates, the U.S. asset management industry now intermediates nearly $26 trillion of investor money. The U.S. mutual fund segment (the second-largest branch of the asset management industry after the retirement market) controls more assets than the U.S. banking and insurance industries combined. The growth rate of the industry is much higher than that of either banking or insurance, which are at a more mature stage in their life cycles.
As a result of falling international trade and investment barriers; technological advances in computers, data processing, and communications; and financial services sector deregulation and reforms at national, regional, and international levels, the asset management industry is becoming increasingly global over time. The leading U.S.-based asset managers have over the years expanded to Europe and other major markets of the world. Likewise, a substantial number of European firms entered the U.S. market in the past two decades. As a group, these entrants represent a cross-section of the firms competing in the European financial services sector and asset management field. In terms of national origin, the entrants originate from a dozen or so European countries. Some U.S. acquisitions by European managers are massive transactions in terms of the assets-under-management (AUM) changing hands. The four largest acquisition transactions transferred nearly $1 trillion of AUM from U.S. to European hands (Bogle, 1999). Considering the number of European firms entering the U.S. market over the years and the number of entry incidents they undertook, the U.S. expansion of European asset management firms provides an interesting setting for examining the international market entry patterns of firms. A vast amount of research on the internationalization process of firms has already been conducted, although the bulk of it focuses on the extraction and manufacturing sectors. Scholars examining the international expansion process of service firms have conducted a number of single-industry studies: banks (Khoury, 1979; Gray and Gray, 1981; Miller & Parkhe, 1998), advertising agencies (Weinstein, 1977; Terpstra & Yu, 1988), hotels (Dunning & McQueen, 1981; Dunning & Kundu, 1995; Contractor & Kundu, 1998), construction firms (Enderwick, 1989), private equity firms (Dixit & Jayaraman, 2001), international news agencies (Boyd-Barrett, 1989), accounting firms (Daniels, Thrift, & Leyshon, 1989), equipment leasing firms (Agarwal & Ramaswami, 1992; Brouthers, Brouthers, & Werner, 1999), technical consultancies (Sharma & Johanson, 1987), venture capital firms (Guler & Guillen, 2004), and law firms (Cheng, Cheng-Min, & Wen-Shiumg, 1998). In addition, some scholars have conducted empirical studies based on samples consisting of firms drawn from multiple service industries (Erramilli, 1990, 1991; Li & Guisinger, 1992).

 

The Discussion of Media Selection and Accessible Equity in Distance Education

Dr. Jack Fei Yang, Hsing-Kuo University, Taiwan

 

ABSTRACT

Is the role of media in distance education important? Is the medium the message? The impact of media on instructional outcomes continues to be debated. The cost of new distance-system development and training is a major challenge for institutions in developing countries that want to be competitive in the global society. Adopting the newest distance media does not always result in a proportional increase in student learning outcomes and learning achievements. A choice of high-quality media for a few people may miss the challenge of serving the greatest number with an effective delivery approach within economic reach. Media influence learning by introducing different levels of learning objectives, learning activities, and learning outcomes. However, beyond media considerations, factors such as instructional methods, learning styles, and teaching strategies need to be a high priority for policy makers. The less economically fortunate people of developing countries need the concern of the high-technology world to assure that large populations are not excluded from the distance learning society. Because the development of distance education across the world is influenced by economic, political, technological, and societal issues, it is important for distance program designers to be aware that technology applications in distance education may provide quality education for mass populations, radically increasing equal access to opportunity. Clear educational purposes and careful decision-making and program design are key factors in developing a good distance education program to serve an increasing mass population. When determining instructional media in distance education, elements such as access, cost, and level of interaction are key factors that need to be considered. Jones Shoemaker (1998) pointed out that in order to evaluate the sources and impacts of change in continuing education, political, economic, social, physical, and technological issues need to be considered. When economic and political conditions change in developing countries, demand for higher education and more educational opportunities will result in the expansion of higher education. As Harry and Perraton (1999) indicated, "Distance education at the end of the twentieth century reflects international economic, political and related ideological change and is shaped by technology opportunity" (p. 2). There are many factors that influence media selection strategy: learning objectives, subject content, teaching methods, learner expectations, time, facilities, and the societal expectations that drive the education market (Romiszowski, 1988; Strauss & Raymond, 1999). In developing countries, it is more critical and important to adopt appropriate distance media and methods within available resources because of limited economic support. However, cheap or simple distance media should not automatically be assigned to poor nations, and expensive, sophisticated distance media do not necessarily produce the best educational quality. Debates about directions for distance education development need to be at the forefront of discussions about building a global society that offers potential for inclusion by both developing and developed countries. Appropriate distance-media adoption is the key to determining the role of distance education in a diverse society. Different cultural sagas have a significant effect on the expectations of students and faculty about the best ways to teach and to learn (Bennett, 1995).
In the East, teaching methods are teacher-centered; theoretically, media may not influence learning there. If the teacher always lectures in a teacher-centered approach, then media have little opportunity to be used or to influence learning outcomes. For media to be used to bring the visual senses into full play, the teacher needs to stop talking some of the time. Use of problem solving in video has little utility to a teacher committed to speech and passive learning as the preferred teaching approaches. In the West, teaching methods are frequently learner-centered, with high interaction expected between teachers and students. Frequent use of video to stimulate engagement in discussion of problem solving is employed by some American professors. Media will influence the levels and efficiency of interactions and activities. Western educators are encouraged to use media to enhance learning, since the adage (slightly modified) is, "Pictures and media are worth a thousand words." However, both teacher-centered and student-centered teaching strategies and styles can produce quality learning outcomes. In the East, students are more passive and teaching is more teacher-centered; in the West, students are more active and teaching is more learner-centered. Each instructional method is suited to its educational culture and learner expectations. It is not appropriate to judge either Eastern or Western learning styles without considering learner characteristics, learning styles, learning environments, and educational traditions. However, with emerging learner-controlled email, Internet resources, and instructional technologies, the educational trend is changing toward student-centered teaching. Peterson (1997) stated, "Higher education continues to change from a teacher-centered focus to a learner-centered focus" (p. 327). The concept of the student-centered teaching method is becoming so important because the level of interaction differs between teacher-centered and student-centered teaching methods.

 

The Value Added Tax applied in the Member States of European Union: The Case of Spain

Dr. Maria Luisa Fernandez de Soto Blass, San Pablo-CEU University, Madrid, Spain

 

ABSTRACT

The Value-Added Tax (VAT) was introduced in the European Economic Community in 1970 by the First and Second VAT Directives and was intended to replace the production and consumption taxes which had hitherto been applied by the Member States and which hampered trade. The following text summarises a consolidation of existing Directives in this field. The present paper introduces figures and formulas not previously presented in tax texts, analyses the concept of the Value-Added Tax, gives a brief account of the history of the VAT, studies the elements of this tax such as the beneficiary, the taxable person, territoriality, the basis of assessment, and exemptions, and explains the basic mechanism of VAT (net VAT, output tax, and input tax), deductions, the taxable base, the VAT rates, the place of taxable transactions, the chargeable event and chargeability of the tax, special schemes, the VAT invoice, and collections, with examples from the European Union and the case of Spain. This paper is the result of three research projects that I am carrying out at the Institute for Fiscal Studies, Ministry of Economy and Finance, Spain, and the University of San Pablo-CEU, Madrid, Spain, from 2003 to 2006, and at the University of Leeds, United Kingdom, from 1 July to 1 September of 2004, 2005, and 2006, work that will continue at the same times and places. European action in the area of indirect taxation has its legal basis in articles 90 and 93 of the Treaty establishing the European Community (EC Treaty). It is subject to the unanimity rule and has always been governed by the subsidiarity principle: its aim is to harmonise national systems of indirect taxation, not to standardise them. In other words, its aim is to ensure that national systems are not only mutually compatible, but also comply with the objectives of the EC Treaty (European Commission, 2006). The Value-Added Tax (VAT) was introduced in the European Economic Community in 1970 by the First and Second VAT Directives and was intended to replace the production and consumption taxes which had hitherto been applied by the Member States and which hampered trade. In 1977, the Sixth VAT Directive 77/388/EEC harmonised this tax. It introduced a common basis of assessment for VAT and represented a body of law laying down Community definitions of important concepts. It also paved the way for subsequent measures working towards a goal set as early as the First VAT Directive: the abolition of tax frontiers. Further amendments to the Sixth VAT Directive in 1991 and 1992, Directives 91/680/EEC and 92/111/EEC, concerned the abolition of tax frontiers and were intended to adapt VAT to the requirements of the new single market. They established a transitional VAT system that would later be replaced by a definitive system for taxing trade between the Member States based on the principle of taxation in the Member State of origin of the goods or services supplied. Spanish action in the area of VAT has its legal basis in Law No 37 of 28 December 1992 on Value-Added Tax and Royal Decree No 1624 of 29 December 1992 approving the regulation on Value-Added Tax. The Value-Added Tax, or VAT, in the European Union is a general, broadly based consumption tax assessed on the value added to goods and services. It applies more or less to all goods and services that are bought and sold for use or consumption in the European Community. Thus, goods which are sold for export or services which are sold to customers abroad are normally not subject to VAT.
Conversely, imports are taxed to keep the system fair for EU producers, so that they can compete on equal terms on the European market with suppliers situated outside the Union (European Commission, 2006). VAT is applicable to the supply of goods or services effected for consideration within the territory of the country by a taxable person acting as such, and to the importation of goods. Value added tax is a general tax that applies, in principle, to all commercial activities involving the production and distribution of goods and the provision of services. It is a consumption tax because it is borne ultimately by the final consumer; it is not a charge on businesses. It is charged as a percentage of price, which means that the actual tax burden is visible at each stage in the production and distribution chain. It is collected fractionally, via a system of partial payments whereby taxable persons (i.e., VAT-registered businesses) deduct from the VAT they have collected the amount of tax they have paid to other taxable persons on purchases for their business activities. This mechanism ensures that the tax is neutral regardless of how many transactions are involved. It is paid to the revenue authorities by the seller of the goods, who is the "taxable person", but it is actually paid by the buyer to the seller as part of the price. It is thus an indirect tax. The beneficiaries in Spain are the central government and certain autonomous communities that have a share in the revenues obtained by the tax (in the Basque Country and Navarre the tax is collected in pursuance of the central government legislation, except using different tax declaration forms, and part of the revenue accrues to the said autonomous communities) (European Commission, 2002). The taxable person is any person (individual, firm, company, etc.) who independently carries out in any place one of the following economic activities, whatever the purpose or results: the activities of producers, traders and persons supplying services, including mining and agricultural activities and activities of the professions. Member States may also treat as a taxable person anyone who carries out one of these activities on an occasional basis, and in particular one of the following: the supply before first occupation of buildings or parts of buildings and the land on which they stand; the supply of building land (Fernández de Soto Blass, M.L., 2006). 
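For readers unfamiliar with the fractional-payment mechanism described above, the following minimal Python sketch illustrates why the VAT remitted along a supply chain equals the tax on the final consumer price. The rate and the three-stage chain are illustrative assumptions, not figures from the paper:

```python
# Illustrative sketch of the fractional VAT mechanism: each trader remits
# output tax minus input tax, so the sum remitted equals the tax borne by
# the final consumer. The 16% rate is hypothetical.
VAT_RATE = 0.16

def vat_chain(values_added):
    """Each stage adds value, charges output tax, deducts its input tax."""
    price_net = 0.0
    input_tax = 0.0
    total_remitted = 0.0
    for added in values_added:
        price_net += added
        output_tax = price_net * VAT_RATE        # tax charged on this sale
        total_remitted += output_tax - input_tax  # net VAT due at this stage
        input_tax = output_tax                    # next stage's deduction
    return total_remitted, price_net * VAT_RATE

remitted, final_burden = vat_chain([100, 50, 30])  # producer, wholesaler, retailer
assert abs(remitted - final_burden) < 1e-9  # neutrality across the chain
```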

 

Causes and Consequences of High Turnover by Sales Professionals

Dr. Phani Tej Adidam, University of Nebraska at Omaha, Omaha, NE

 

ABSTRACT

Retention of sales professionals is becoming one of the most challenging issues facing current sales managers, especially since the cost effects of high turnover on the corporate bottom line are devastating. In this scenario, this paper investigates the costs of and reasons behind high sales professional turnover, and offers some suggestions on how to increase sales professional retention, thereby lowering the turnover rate. Emphasis must clearly be placed on recruiting the right kind of sales professionals by offering realistic job previews, providing appropriate training and developmental opportunities, engaging salespeople by developing high trust and commitment levels, establishing reasonable and equitable sales quota setting procedures, designing appropriate compensation structures, and individualizing the motivational incentives for each sales professional.  One of the most important issues facing businesses is finding and keeping good sales professionals. After all, sales professionals are the most valuable organizational resource, and good sales professionals should be thought of as investments needing frequent rewards. Sales human resource professionals find themselves trying almost anything to retain their best salespeople, especially when such sales professionals are being ensnared by their competitors in a tight labor market. Retaining top salespeople may indeed be hard. It requires being alert to organizational problems and difficulties which may drive salespeople out the door (Brashear, Manolis, and Brooks, 2005). It also means being sensitive to their hopes and dreams, needs and desires, and managing the sales force in a manner that lets them achieve their own goals (Schwepker, 1999).  Savata (2003) opines that "losing staff is always a part of doing business." He also says that turnover higher than 20% is unnecessary and wasteful, and indicates that employees' personal reasons for leaving are beyond a firm's control, but a firm often can do something about the work-related issues that cause staff to move on.  The US Department of Labor (www.bls.com) provides numbers on the total employee turnover rate, and Nobscot Corporation (www.nobscot.com), the pioneer in exit interview management software, offers the average voluntary employee turnover rate in the US.  Some turnover in a firm is even desirable, since new salespeople bring new ideas, approaches, abilities, and attitudes and keep the organization from becoming stagnant (Holmes and Schmitz, 1996). However, high turnover sends a very clear signal that something is wrong somewhere in an organization. There are many methods to help one do a better job of finding and retaining good sales professionals, but they all cost money. However, sales professional turnover also costs money.  Every single industry faces these issues. Research shows that the turnover rate tends to increase as the size of the sales force increases, because the larger organization may strike a salesperson as impersonal (Sager and Menon, 1994). This paper will discuss employee turnover costs and causes, and will attempt to reveal the most commonly suggested ways of dealing with high turnover overall, and specifically in the sales profession.  Pinkovitz (1997) identifies the three most common costs: separation costs, replacement costs, and training costs. Separation costs include costs incurred for exit interviews, administrative functions related to termination, separation/severance pay, etc. 
Replacement costs include the cost of attracting applicants, entrance interviews, testing, travel/moving expenses, pre-employment administrative expenses, medical exams, and acquisition and dissemination of information. Training costs include both formal and informal training costs. Also, there is a performance gap between those who leave and their replacements. Estimates of turnover costs range from 25 percent to almost 200 percent of annual compensation (Klewer and Shaffer, 1995; Pinkovitz, 1997).  On the other hand, there are costs of high turnover that are more difficult to estimate. They include customer service disruption, emotional costs, loss of morale, loss of experience, and burnout and absenteeism among remaining employees. These may be even harder for a company to deal with than the ones that are openly displayed. This may be especially true in the sales industry. The amount of revenue lost when just one established salesperson is lost through termination or transfer is astounding. New sales are lost because the territory is open until a new salesperson can be hired and trained (Munasinghe and O’Flaherty, 2005). The new salesperson who fills the territory goes through a ramp-up period, often lasting a year or more, during which he or she is much less effective than an average sales professional in the company (Cotton and Tuttle, 1986).  For each sale lost while a territory is open or the replacement salesperson is less than fully productive, future add-on sales and maintenance revenues are lost. The cost to hire and train a replacement is an additional expense, especially for companies selling expensive products or services. These lost sales directly affect the bottom line. Often sales managers or other salespeople pick up the responsibility for any active prospects until the void is filled. However, future sales will suffer in either case because fewer new prospects are developed during that period. In addition, people who are "filling in" neglect some of their own duties to cover the vacant territory; they are most likely to experience work overload, increased stress, anxiety, etc. These, in turn, lead to an even higher rate of turnover. Singh, Verbeke, and Rhoads (1996) focus on the direct effects of role stressors and job characteristics on salespersons' behavioral and psychological job outcomes. Using data from salespeople across a range of small and large firms, the authors find that work overload and role ambiguity have dysfunctional effects that, in turn, decrease performance and increase turnover. This is a cycle that managers want to avoid.
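To make the cost categories concrete, here is a back-of-envelope Python sketch; every figure is hypothetical and serves only to show how the three Pinkovitz (1997) categories, plus lost sales, add up relative to annual compensation:

```python
# Illustrative turnover-cost arithmetic (invented numbers, not data from
# the paper), summing the three Pinkovitz (1997) categories plus an
# estimate of lost sales during the vacancy and ramp-up period.
def turnover_cost(separation, replacement, training, lost_sales):
    return separation + replacement + training + lost_sales

annual_comp = 80_000  # hypothetical annual compensation
cost = turnover_cost(separation=5_000, replacement=15_000,
                     training=20_000, lost_sales=60_000)
print(f"{cost / annual_comp:.0%} of annual compensation")
# -> 125%, inside the 25%-200% range reported by Klewer and Shaffer (1995).
```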

 

Two-Stage Residual Income Model for Evaluating the Intrinsic Value of Stock-Listed Firms: An Empirical Analysis of the Electronic Information Industry of the Taiwan Fifty Index

Tao Huang, Ming Hsin University of Science and Technology, Taiwan

Shih-Chien Chen, Takming College, Taiwan

Dr. Jovan Chia-Jung Hsu, Kun Shan University of Technology, Taiwan

 

ABSTRACT

Along with the diversification of Taiwan's securities market in recent years, as well as the disorder in that market, corporate valuation has become a critical issue for numerous public investors. In this paper, we carry out our study by means of the Residual Income Model proposed by Ohlson, and try to understand the intrinsic value of the stock market through an analysis of the electronic information industry of the Taiwan Fifty Index, thereby providing references for investors. The conclusions of this study are as follows: Ohlson's Residual Income Model is a good reference for forecasting medium-term and long-term industrial rates of return, while the book-to-market price ratio (B/P) has relatively better predictive power in both the short term and the long term. The earnings-to-price ratio (E/P) is a good index in the short term. The sales-to-market price ratio (S/P) is not a suitable reference for short-term rates of return in the electronic information industry. Since its inception in 1962, the Taiwan securities market has had a history of 43 years. Excellent performances have been repeatedly achieved on the Taiwan stock market since the Taiwan Stock Exchange Weighted Stock Price Index rose above 1,000 points in 1986. Following the fast-growing period (1987-1990) and the collapse period (1990-1991), the domestic market has gradually matured in structure (1991-). Throughout this process, several fluctuations have taken place because the order of this market is not adequately healthy and complete, implying that stock prices can vary greatly from real corporate values. Observation of past transactions on the Taiwan stock market shows an intense climate of short-term trading: when making investments, the public generally lacks concepts of investment value, causing the stock market to degenerate into a venue for speculation and gambling. In view of this reality, such a climate can be eradicated, fostering both the healthier development of the securities market and more solid security for public investors, only once a stock valuation model particularly applicable to the Taiwan stock market is established through research to serve as a reference for companies, government and citizens. In November 2002, the Taiwan Stock Exchange collaborated with FTSE International Ltd to create the Taiwan Fifty Index, whose constituent stocks are mostly the leading stocks of various industries. With their market capitalization accounting for 70% of the total market value of listed companies, and with the sampled electronic stocks occupying a sizable proportion of approximately 65%, these stocks can reflect the intrinsic value of Taiwan stocks more sensitively. Accordingly, the electronic information industry stocks sampled from the Taiwan Fifty Index are taken as the research objects in this study, with a view to accomplishing a more accurate valuation of corporate values through the application of the residual income model, thereby facilitating the acquisition of excess profits when investing in the stock market. Based on the aforesaid research background and motives, the objectives of this study are as follows: 1. 
Taking the constituent stocks of the electronic information industry of the Taiwan Fifty Index as our research objects, to analyze the intrinsic value of stocks by way of the Residual Income Model (RIM), and thereby to evaluate the real value of a nation's stock market. 2. Seeking to derive generic modes from the above model to assess whether its explanatory or predictive power varies with the addition of other financial variables. Ohlson (1995) proposed the discounted residual income method, which (1) redefines stock dividends through the assumption of the clean surplus relation and (2) differentiates earnings into normal and excess components, thereby developing a valuation model that integrates earnings, book value and dividends. Regarding the analysis of intrinsic value and the relevant literature on financial ratios, Lee et al. (1999) showed empirically that, in terms of time-series analysis, fundamental value is not only more stable than market price but also takes a shorter time to revert to the mean. The V value determined by Frankel and Lee (1998) through the application of analysts' ROE forecasts has a high correlation with current stock prices and presents powerful explanatory ability, with R2 of approximately 66.67%. In the long run, the ratio of intrinsic value to market price apparently prevails over that of book value to market price. Lee et al. (1999) assumed that, as a co-integrated relation exists between them, the stock market price and real value would tend to converge over time.
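For illustration, here is a minimal Python sketch of a two-stage residual income valuation in the spirit of Ohlson (1995). The functional form (explicit ROE forecasts followed by a flat perpetuity) and the clean-surplus, full-retention assumption are simplifications of ours, not the authors' exact specification:

```python
# Two-stage residual income sketch: intrinsic value = current book value
# + PV of explicit-period residual incomes + PV of a terminal perpetuity.
def rim_value(book0, roe_forecasts, r):
    value, book = book0, book0
    for t, roe in enumerate(roe_forecasts, start=1):
        ri = (roe - r) * book            # residual (abnormal) income, RI_t
        value += ri / (1 + r) ** t       # discount RI_t to the present
        book *= 1 + roe                  # clean surplus, no dividends assumed
    # Stage two: hold the final residual income flat as a perpetuity.
    horizon = len(roe_forecasts)
    ri_terminal = (roe_forecasts[-1] - r) * book
    value += ri_terminal / r / (1 + r) ** horizon
    return value

# Hypothetical inputs: book value 10, three years of ROE forecasts, 8% cost
# of equity capital.
print(rim_value(book0=10.0, roe_forecasts=[0.15, 0.14, 0.12], r=0.08))
```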

 

The Determinants of Working Capital Management

Dr. Jeng-Ren Chiou, National Cheng Kung University, Taiwan

Li Cheng, National Cheng Kung University, Taiwan

Han-Wen Wu, China Development Financial Holding Corporation, Taiwan

 

ABSTRACT

This paper investigates the determinants of working capital management. We use the net liquid balance and working capital requirements as measures of a company’s working capital management. Results indicate that the debt ratio and operating cash flow affect the company’s working capital management, yet we lack consistent evidence for the influence of the business cycle, industry effect, growth of the company, performance of the company and firm size on working capital management. Corporate finance can be mainly categorized into three domains: capital budgeting, capital structure and working capital management. The raising and management of long-term capital belong to the domains of capital budgeting and capital structure. The source and use of long-term capital are traditionally aspects of much concern in finance, while management of the working capital that sustains the operation of an enterprise draws relatively little attention. Working capital, including current assets and current liabilities, is the source and use of short-term capital. In addition to company characteristics, working capital is also related to the financial environment, especially the fluctuation of business indicators. Since the poor performance of the global economy in the late 1990s, financial institutions have in general adopted a tighter credit policy to lower their deposit/loan ratios. Thus enterprises have had to manage their working capital more prudently to adapt to the changing financial environment. Kargar and Blumenthal (1994) demonstrated that many enterprises go bankrupt despite healthy operations and profits owing to mismanagement of working capital, so it is a topic that deserves increased investigation.  The existing literature on the management of working capital is limited in scope, and most prior studies use variables such as the current ratio, quick ratio, and net working capital to evaluate enterprises’ management of short-term working capital. This study uses the net liquid balance (hereafter referred to as NLB) (1) and working capital requirements (hereafter referred to as WCR) (2), both used by Shulman and Cox (1985), as proxies for working capital management. We investigate determinants of the management of working capital, including the business indicator, industry effect, debt ratio, growth opportunities, operating cash flow, firm performance and firm size. We use 35 quarters of data from the first quarter of 1996 to the third quarter of 2004. The study reveals that the debt ratio and operating cash flow affect the management of working capital, whether NLB or WCR is used as the proxy. The research conclusions are consistent with our predictions, yet we have no consistent empirical results on the relation of working capital management to the business indicator, industry effect, company growth, firm performance and firm size.  Factors affecting the management of working capital can be roughly divided into interior and exterior types. Some companies can only adjust operating strategy according to the business indicator, an exterior factor indicating general economic performance. Prior research on business indicators and financial ratios revealed that the business indicator exerted an influence on the financial ratios of a company (Horrigan, 1965; Luo, 1984; Liu, 1985; Zhou, 1995; Su, 2001). Yet different financial ratios carry different meanings in terms of finance. 
This study investigates the relation of the business indicator to the management of short-term capital from the perspective of a firm’s working capital management, which traditionally is rated by the current ratio, quick ratio, and net working capital. Shulman and Cox (1985) indicated that the first two are used to evaluate a firm’s capability to pay debts from the perspective of liquidity, with no consideration of the going concern of the company, while net working capital, integrating operational and financial strategy, is not a suitable indicator of liquidity. Thus, in predicting the financial crisis of a company, Shulman and Cox (1985) decompose net working capital into WCR and NLB to evaluate the management of working capital and the capability of raising and allocating capital, respectively. Their study found that NLB is better than traditional indicators in terms of predicting financial crises and the liquidity of a company. Hawawini, Viallet, and Vora (1986) hold that evaluations based on NLB and WCR are better than any based on traditional indicators. This paper also uses WCR and NLB as proposed by Shulman and Cox (1985) as indicators of working capital management.  The business cycle refers to fluctuations of general economic performance in the long-term development of an economy. It is not easy for a firm to raise money during a period of economic recession, when cash supply is relatively tight. To retain capital for daily operations, NLB must be kept at a higher level, and the business indicator is expected to be negatively related to NLB. WCR serves to gauge the management of working capital, and is also influenced by the business indicator. In an economic recession, the expansion of a company may not be as smooth as expected, with possibly longer periods for collecting accounts receivable or expanded inventories due to a decline in sales. Thus, a relatively high net volume of working capital requirements may occur. It is thus evident that the higher demand on WCR is due to poor performance of the overall economy.
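Since equations (1) and (2) are not reproduced in this abstract, the following Python sketch states the Shulman and Cox (1985) decomposition in its commonly cited form; treat the component lists as our assumption rather than the paper's exact variable definitions:

```python
# Sketch of the Shulman and Cox (1985) decomposition of net working
# capital into operating (WCR) and financial (NLB) parts:
# net working capital = WCR + NLB.
def wcr(receivables, inventory, payables, accruals):
    """Working capital requirements: spontaneous operating accounts."""
    return (receivables + inventory) - (payables + accruals)

def nlb(cash, marketable_securities, short_term_debt):
    """Net liquid balance: discretionary liquid financial accounts."""
    return (cash + marketable_securities) - short_term_debt

# Hypothetical balance-sheet figures; the sum equals net working capital.
print(wcr(120, 80, 70, 10) + nlb(40, 15, 35))
```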

 

Exploring Customer Satisfaction, Trust and Destination Loyalty in Tourism

Heng-Hsiang Huang, Ching Kuo Institute of Management & Health, Taiwan

Chou-Kang Chiu, Ching Kuo Institute of Management & Health, Taiwan

 

ABSTRACT

As the concept of relationship marketing has motivated the management of travel agents to seek fresh and creative ways of establishing long-term relationships with their tourist customers, it is important to explore tourists’ destination loyalty given a competitive market in tourism around the globe. This study proposes a model of tourists’ satisfaction, trust and destination loyalty in tourism. In the proposed model, perceived cultural differences, perceived safety and convenient transportation indirectly influence destination loyalty through the mediation of relationship quality, comprising satisfaction and trust. Finally, the discussion of and limitations on the proposed model are also provided.  Tourists’ decisions in choosing destinations and spots have been one of the significant issues usually discussed by researchers (Ajzen and Driver, 1991; Chen, 1998; Fesenmaier, 1988; Um and Crompton, 1990). Such decisions have also been linked with the topics of decision rules, decision-making processes, and choice factors (Chen and Gursoy, 2001). Despite the substantial contributions of previous research on decisions to choose tourist destinations (Crompton, 1992; Crompton and Ankomah, 1993; Fesenmaier, 1990; Woodside and Carr, 1988), research pertaining to the linkages between decisions to choose a tourist destination and tourists’ destination loyalty from a perspective of relationship marketing is rather limited, and deserves the close attention this study attempts to provide.  The concept of relationship marketing has prompted the management of travel agents to seek fresh and creative ways of establishing relationships of mutual benefit with their customers. Specifically, customer loyalty has become critical to many service industries, including tourism. Travel agents are busy searching for customers by offering highly competitive services in order to achieve customer loyalty towards a specific destination, but such loyalty relies on achieving relationship quality with that destination in order for tourists to willingly visit the same destination in the future. Previous research has discussed the importance of relationship marketing in some service industries and its impact on firm profitability and customer retention (e.g., Crosby, Evans and Cowles, 1990; Tam and Wong, 2001), but the modern approach to relationship quality and loyalty in the service sector borrows heavily from the marketing theory and science that has been in use for decades in general industry (Lin and Ding, 2005, 2006). This study proposes a model and explores critical determinants of destination loyalty from a different perspective on tourism. This research differs from previous works in a principal area: the applicability of relationship quality to strengthening customer loyalty has been extensively studied for many tangible goods, whereas highly intangible products related to leisure and tours have attracted little attention. Therefore, this work explores relationship quality and destination loyalty from the angle of intangible tourist destinations and proposes useful inferences for management in tourism.   A conceptual model, displayed in Figure 1, is proposed to generate insights for management in tourism. In the model, perceived cultural differences (experiences), perceived safety and convenient transportation influence destination loyalty indirectly through the mediation of relationship quality, comprising satisfaction and trust (towards the destinations). 
From the theory of relationship marketing, relationship quality is considered an overall assessment of the strength of a relationship (Garbarino and Johnson, 1999). Although there is still discussion of which dimensions make up relationship quality with a tourist destination, relationship quality can be viewed as a construct comprising at least two components (Lin and Ding, 2005, 2006): (1) trust in a tourist destination (Swan, Trawick and Silva, 1985) and (2) satisfaction with a tourist destination (Crosby and Stephens, 1987). Consequently, this study assumes that relationship quality is accompanied by satisfaction and trust (Lin and Ding, 2005, 2006), as described below. Satisfaction with a tourist destination is not only regarded as an important outcome of the relationship between tourists and their desired destination (Smith and Barclay, 1997), but also as an emotional state that occurs in response to an assessment of tourist experiences in the destination (Westbrook, 1981). In other words, satisfaction can be defined as a tourist’s affective state resulting from an overall appraisal of his or her psychological preference for and pleasure towards the tourist destination. To sum up, increased satisfaction with a tourist destination brings with it improved relationship quality.  The development of trust is considered an important result of investing in a dyadic and affective relationship between tourists and their destination (Wulf, Odekerken-Schröder and Iacobucci, 2001). Trust is defined as a willingness to rely on the tourist destination in which one has confidence, or the belief that the tourist activities in the destination are reliable (Schurr and Ozanne, 1985). Increased trust in a tourist destination is often cited as a critical ingredient for determining relationship success and consequently brings about improved relationship quality with the destination (Lin and Ding, 2005, 2006). 

 

The Effects of Individual and Joint Gift Giving on Receipt Emotions

Dr. Shuling Liao, Yuan Ze University, Taiwan

Yu-Huang Huang, Yuan Ze University, Taiwan

 

ABSTRACT

This paper employs the concepts of hedonic framing and mental accounting to interpret the effects of individual and joint gift giving on receivers’ emotional responses. The moderating effects of situational factors, including the distance of social relationships and the type of gift, are also investigated. The results show that receivers respond more positively to joint gift giving. Gifts from close members produce better affect. When gifts come from intimate relationships, receivers respond no differently to nonmonetary gifts from an individual or a group, but respond negatively toward someone who sends a monetary gift alone. By contrast, individual gift giving from distant relationships generates the worst emotions for a nonmonetary gift. The findings provide insights into the appropriate behavior and practice of gift giving.  Gift giving is a common behavior in daily life and is highly promoted by business marketing activities. People are motivated to give gifts for the purposes of social exchange, economic exchange, and love sharing (Belk & Coon, 1993). People also love to receive gifts when the gifts are appropriate to the perceived interpersonal connection (Neisser, 1973; Shurmer, 1971). However, reciprocal gift exchange involves more than the exchange of tangible goods; the implicit psychological factors and influences behind this interaction are even more intricate and thought-provoking. For this reason, past research on gift-giving behavior has integrated perspectives from different domains in the social sciences. Among them, the gift-giving theory developed by Sherry (1983) has received wide attention for its conceptual completeness. Sherry (1983) originally incorporated concepts from anthropology, sociology and psychology to sketch out gift-giving behavior. A number of subsequent studies on gift giving stem from Sherry’s work and bring abundant and varied explorations to the stream of gift-giving research.  Nevertheless, past research mostly focuses on the reasons for gift giving (e.g., Belk, 1976, 1979; Caplow, 1982; Cheal, 1988; Brunel, Ruth & Otnes, 1999), the giver’s motivation (e.g., Murray, 1964; Belk, 1979; Caplow, 1982; Solomon, 1990; Goodwin, Smith & Spiggle, 1990; Wolfinbarger & Yale, 1993; McGrath, 1995; Wolfinbarger, 1990), or the timing of gift giving (e.g., Mauss, 1967; Lowes et al., 1968; Belk, 1993; Sherry, 1983). These studies, on the one hand, investigate gift giving from the giver’s perspective only; the responses of receivers to gift receipt remain unknown. Some giver-centered studies, on the other hand, shift the research direction from the giver’s motivation to the giver’s emotions during gift giving, such as positive affect in love (e.g., Cheal, 1988; Belk & Coon, 1993) and happiness and joy (e.g., Belk, 1996), or negative emotions of sadness, worry and anxiety (e.g., DeMoss & Mick, 1990; Belk & Coon, 1991). This shifting attention sheds light on the intriguing emotional aspects of both gift givers and receivers, and thus underlines the major interest of the present study in discovering how gift receivers respond affectively to various types of gift giving. This paper applies the theories of hedonic framing and mental accounting (Thaler, 1985, 1999) to compare the effects of individual and joint gift giving on receivers’ emotional responses during gift-giving behavior. 
As Sherry (1983) indicated, gift-giving behavior is a multi-faceted process that involves social connection, economic concerns, and self-perception between givers and receivers. Therefore, the moderating effects of the distance of social relationships and of gift type, monetary or nonmonetary, are also discussed. This study expects to enhance knowledge of gift-giving strategy and fill the research gap in gift receipt. In particular, the findings of this research will advance gift-giving practices and the marketing promotions of the related industries in the following ways. First, gift givers are advised that giving gifts individually or as a group will elicit different emotions in receivers. Second, the effects of individual or group gift giving are contingent on the type of relationship perceived by receivers and the value form of the gift. Finally, for marketing practitioners, the results of this study offer some useful guidelines for gift promotions, which can provide better advice to consumers on avoiding the pitfalls of inappropriate gift giving while maximizing its benefits for relationship building and emotion sharing. In daily life, consumers often fail to behave in accordance with the normative descriptions of economic theory. Some researchers have extended economic theory and attempted to explain this phenomenon (see Becker, 1965; Lancaster, 1971 for relevant discussion), but those economic models omit virtually all marketing variables except price and product characteristics, and cannot be applied widely in marketing practice. In 1979, Kahneman and Tversky proposed the framing concept, with many marketing variables, and developed prospect theory with a value function that can further elaborate the psychological effects. Yet the main goal of prospect theory is to describe or predict behavior, not to characterize optimal behavior. Thaler (1980) later extended Kahneman and Tversky’s (1979) notions of framing and prospect theory, and proposed the concept of hedonic framing, pointing out that people tend to employ several principles of either separating or integrating the gains and losses of two or more incidents to make themselves feel relatively happy. Hedonic framing indicates that people make different decision choices based on the different mental accounts established in their minds. Thaler’s (1980) mental accounting theory explains why individuals’ choices do not follow, or even violate, classical economic rules. The current study applies the concept of mental accounting to move further toward a choice-based theory of gift giving.
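As a minimal numerical illustration of the hedonic-framing machinery invoked above, the Python sketch below uses a stylized prospect-theory value function; the parameter values are conventional estimates from the literature, not the paper's. Under a concave value function for gains, two gifts valued separately yield more total pleasure than one combined gift of the same amount (the "segregate gains" principle). The paper applies these principles to receivers of individual versus joint gifts in a more nuanced way:

```python
# Stylized prospect-theory value function (assumed parameters): concave
# over gains, steeper and convex over losses.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

g1, g2 = 60.0, 40.0
segregated = value(g1) + value(g2)   # two gains experienced separately
integrated = value(g1 + g2)          # one combined gain of the same total
print(segregated > integrated)       # True: segregating gains feels better
```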

 

A Study of Implementing Six-Sigma Quality Management System in Government Agencies for Raising Service Quality

Dr. Li-Hsing Ho, Chung Hua University, Taiwan

Chen-Chia Chuang, Chung Hua University, Taiwan

 

ABSTRACT

In an age of thin profit margins, corporations are diligently looking for ways to differentiate themselves from competitors, to beat the competition, to expand market share, to create quality differences, and even to achieve zero quality defects. Regardless of the industry type, continuous improvement of quality is an irreplaceable part of the entire range of production activities. Although there are many ways to solve problems in product quality, the six sigma quality management system can effectively solve the core issues in production quality. This quality management system is highly integrated, contains detailed problem-solving procedures, and has been tested by multinational corporations like Motorola and General Electric. It is also a system that emphasizes fundamental education, changes in organizational culture, and the quantification of effective productivity through the discussion of core problems. Taiwan’s government agencies have realized the importance of the six sigma quality management system, and by implementing it, the agencies are able to increase the quality of the services they provide. With increased and improved government service quality, the general public will have greater confidence in the government. The main tasks of the health administrative agencies are providing vaccinations, controlling drug usage, examining and testing food quality, providing smoking hazard prevention and public health education, managing medical administrative tasks, etc. Therefore, the quality of the services provided by the health bureaus has an immediate impact on the safety of public health. How to effectively implement the six sigma quality management system in order to increase and improve health service quality and to promote a healthy general public should thus be a key topic that the health administrative agencies actively and aggressively discuss. The six sigma quality management system provides a brand new vision, concept, and methodology for corporate management strategy. There are many cases where corporations have successfully implemented the six sigma system, and the results are astonishing. The six sigma quality management system is divided into five procedures: define, measure, analyze, improve, and control. When executing a corporate improvement project, the trained personnel will identify correct and valuable projects, define the improvement goal, utilize teamwork, collect data relating to the concerned issues, analyze the major improvement factors, establish improvement processes, and, finally, control improvement results. To achieve its maximum benefits, significant amounts of information are needed when using six sigma quality management to conduct quality improvement activities. Private enterprises and government agencies throughout the world must design improvement strategies in order to face the challenges of the twenty-first century and to increase their nations’ overall competitiveness. The government health administrative agencies in Taiwan should be no exception. The quality of health administrative operations is influential in determining the safety of public health. Therefore, how to effectively implement the six sigma quality management system for quality improvement should be a key topic for the government’s health administrative agencies. 
The top priority for these health administrative agencies, then, will be to create a new image for themselves by providing exceptional service quality to the general public. The improvement of service quality, however, requires immediate action from the health administrative agencies in order to ensure better protection of the health of the general public.  The first step in using the six sigma quality system as the measurement standard is to identify what the customers’ true expectations are. In other words, the system requires users to identify the factors which are critical to quality (CTQ). Then, based on the customers’ key demands, a measurement of the process flow is performed. If the service level satisfies 68% of customer demand, then the user has achieved the “two standard deviation” level, or two sigma level. If 93% of customer demand is satisfied, then the “three sigma” service quality level has been achieved. Using standard deviations to measure process flow performance therefore provides a more consistent method for evaluating and comparing the differences among process flows.  Currently, the average service quality level for most corporations is between three and four standard deviations. Even for financial corporations at the four sigma service quality level, out of the more than two hundred and fifty thousand credit card billing statements processed each month, approximately one thousand five hundred and fifty customer complaints will be received. These complaints might potentially lead to loss of business from these customers. Research has shown that if the customer retention rate increases by 5%, the increase in corporate profits might exceed 25%. Therefore, the six sigma quality management system aims to reduce the possibility of error. Corporations should set their performance goals at reducing operational errors to merely 3.4 errors out of every one million operations.  In order to achieve the near-perfect goal of six sigma quality, corporations must undertake a series of improvement measures toward their goals.
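The sigma-level arithmetic quoted above can be reproduced with the conventional 1.5-sigma shift, an industry convention we assume here since the paper does not state its computation. A short Python sketch:

```python
from statistics import NormalDist

# Defects per million opportunities at a given sigma level, using the
# conventional 1.5-sigma long-term shift: yield = P(Z < sigma - 1.5).
def dpmo(sigma_level, shift=1.5):
    yield_fraction = NormalDist().cdf(sigma_level - shift)
    return (1 - yield_fraction) * 1_000_000

print(round(dpmo(2)))     # ~308,538 defective -> roughly the 68-69% yield cited
print(round(dpmo(3)))     # ~66,807 defective -> roughly the 93% yield cited
print(round(dpmo(4)))     # ~6,210 DPMO; on 250,000 statements, ~1,550 errors
print(round(dpmo(6), 1))  # ~3.4 per million, the six sigma target
```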

 

Influence of Audio Effects on Consumption Emotion and Temporal Perception

Dr. Chien-Huang Lin, National Central University, Taiwan

Shih-Chia Wu, National Central University, Taiwan

 

ABSTRACT

Consumer expectations of retail stores exceed selling functions. Audio arrangements and the atmosphere of the shopping environment have become key influences on consumer satisfaction. Previous studies used actual retail stores as investigation bases. This study adopted a computer graphic design tool to avoid the previously uncontrolled variables in actual stores. A virtual reality shopping environment was built to identify the effects of audio, in all its aspects, on consumers’ shopping behavior. The findings of this study demonstrate that consumers’ consumption emotion and temporal perception in a shopping environment are significantly influenced by low music volume and by music type. Audio effects exert a considerable influence on consumption emotion and temporal perception.  Past research on the retail environment focused on actual retail stores. Variables including climate, shopper traffic, the attitudes of salespersons, and special events are difficult for researchers to control. Moreover, the validity of findings is easily impeded by imprecise variables. Recently, with rapid advances in technology, virtual reality has created more opportunities for consumers in shopping and has also become an effective research tool. This study attempted to achieve variable control by using computer graphics design software, named “Space Magician V2.0,” to create a virtual electronic appliance store, similar to a real bricks-and-mortar store, as an experimental and test venue. In this study, objects in the virtual electronic appliance store can be adjusted and moved based on individual needs in terms of color, location, and size. Subjects can also manipulate music broadcasting, for example volume and broadcast timing. Emotion has been shown to influence consumption perceptions, assessment and behavior, and has also been identified as a key mediator in assessing service quality (Taylor, 1994). Thus, by precisely controlling the aforementioned variables, this study attempts to identify the impact of customers’ familiarity with the music, music type, radio broadcasting programs, and in-store volume effects on consumption emotions and temporal perception. Seidman (1981) explored the use of music in movies and educational media, and found that music significantly influences cognition and attention; the findings have already been adopted in the movie and TV production industries. Manfred (1982), a scholar specializing in both music and neuro-physiology, indicated that music structure stimulates the nerves in the brain and thus provokes emotional responses. Previous literature also revealed that the relationship between individual music preference and the complexity of music demonstrates a U shape, and that the complexity of music gradually increases to suit human preferences. In a study of the effect of music on the emotions, Wang (1992) identified that emotion is connected to the speed of rhythm, with an allegro tempo associated with a livelier and happier effect. Field research in department stores showed that music with a faster tempo tended to produce more positive emotions.  “Time” is a factor that consumers must examine in evaluating product or service quality. Jacoby et al. (1976) found a growing number of studies exploring the relationship between time and consumer behavior. Furthermore, Hornik (1992) indicated that the related research focuses on three aspects: time allocation and behavior, time perception, and the time-budget approach to time allocation among family, work, and leisure activities. 
Time perception is defined as subjective consumer judgment of time, and is generally measured by asking subjects to report their feelings about time after experiencing an event or activity. Some studies have found that subjects generally make time judgments subjectively, with the underlying reasons attributed to personal motives (Thomas & Weaver, 1975; Fraisse, 1984; Hornik, 1992). Therefore, strong subjectivism and situational variance generally influence subjects’ perceptions of time.  Zakay (1989) adopted the resource-allocation model perspective to elaborate the psychological state of consumers. This model uses a cognitive timer to determine individual perceptions of time. The cognitive timer is activated when people begin to discern the passage of time. By using audio or visual stimuli to distract people’s attention from the waiting duration, the sense of the passage of time is decreased. Moreover, during the passage of time, the counting units of cognitive timers vary owing to the influence of emotions. The attentional model points out that, following the reduction of non-time-related information, more attentional resources are allocated to the cognitive timer; therefore, more time-related information accumulates in the timer, causing the subject to have a longer subjective time perception. Namely, more audio-visual cues lead the subject to pay more attention to processing that information, while paying less attention to time cues and the passage of time, leading to shorter subjective time perceptions. Along with the attentional model, researchers have concluded that: (1) a negative correlation exists between task difficulty and subjective time perception; (2) longer subjective time perception is produced without a filler mechanism than with one (Frankenhauser, 1959; Priestly, 1968). The majority of previous research results have supported this model, which is also adopted in this study to evaluate how consumers are influenced by the filler mechanisms of music familiarity and broadcasting in a retail store.

 

Multi-Criteria Analysis of Offset Execution Strategies in Defense Trade: A Case in Taiwan

Dr. Chyan Yang, National Chiao Tung University, Taiwan

Tsung-cheng Wang, National Chiao Tung University, Taiwan

 

ABSTRACT

In international trade, offset practices have received increased attention over the past twenty years.  In the coming ten years, the Taiwanese government may expend roughly US$16 billion on purchasing military equipment through the Foreign Military Sales (FMS) program, and can achieve US$8 billion in offset credits.  Consequently, this paper discusses Taiwan’s optimal offset execution policy and proposes a framework for drawing on offset credits in the future.  In order to help decision makers determine the optimal offset strategy, the TOPSIS method, incorporating the AHP method, is applied to determine the best compromise offset execution strategy.  The potential applications and strengths of Multi-Criteria Decision-Making (MCDM) in assessing offset strategies are highlighted.  Offset is an alternative marketing strategy recently introduced in the international marketplace.  An offset, or its military counterpart, is a commitment associated with a sale whereby the seller provides the buyer with an offsetting agreement to purchase other products.  The basic philosophy of an offset agreement, or countertrade, is to structure the commitment so that the seller will fulfill a contract that rewards the buyer.  This reward may have the potential for economic, social or technological growth, or increased sales of other domestic goods in exchange for the buyer’s purchase.  This contract increases the competitive value of the seller’s product.  In theory, this agreement allows the buyer to purchase additional units, since the sale is more economically, socially, or politically attractive with the offset agreement, making a product more affordable or competitively attractive.  This philosophy allows arrangements to create a multiplier effect.  Many methods satisfy offset requirements, including co-production, direct offset, indirect offset, technology transfer, and others; however, only one or a few practical methods may be adopted or implemented in a single government procurement program. In the coming ten years, the Taiwanese government may expend roughly US$16 billion on purchasing Patriot-III missiles, P-3 long-range anti-submarine planes, and diesel-engine submarines from the United States through the Foreign Military Sales (FMS) program, and can achieve US$8 billion in offset credits, in the largest procurement in Taiwanese history (MND, 2005).  Therefore, the main purpose of this paper is to discuss Taiwan’s optimal offset strategies and propose a framework for drawing on offset credits in the future. The evaluation of offset strategies should be considered from four aspects: policy, ability, economy, and environment.  A multi-criteria evaluation process is thus used in this paper.  The Analytic Hierarchy Process (AHP) method (Saaty, 1980) is applied to determine the weights of the evaluation criteria/aspects by experts from different decision-making groups comprising the Ministry of National Defense (MND), the Ministry of Economic Affairs (MOEA), and the Aerospace Industrial Development Corporation (AIDC).  To overcome irreconcilable conflicts among criteria in selecting the best offset strategy, a multi-criteria model incorporating the AHP method and the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) method (Chen and Hwang, 1992) can help decision makers determine the optimal compromise offset strategy.  The result demonstrates the effectiveness of this method and illustrates directions for going further in offset planning. 
The rest of this paper is organized as follows: the next section describes the offset strategies evaluated in this research study; the following two sections cover the procedure for generating and evaluating the criteria/strategies, and then the evaluation methodology, results and discussion; the last section presents conclusions and recommendations.  Offset agreements provide direct benefits, including reduced currency requirements, increased economic activity, increased sales volume of both the product for the seller and corresponding other products for the buyer, and improved technology development throughout the country.  In addition to the direct economic benefit to the buyer, an offset also provides indirect economic benefit to the country.  For instance, an offset provides a vehicle to increase a country’s technology base, allowing the country to develop a competitive position in a well-established international market.  Offset commitments can be satisfied using a wide range of alternatives, including manufactured parts and services, transportation, training, education, tourism and others.  Many countertrade agreements utilize other creative marketing or finance approaches (Palia, 1990). In 2005, the Ministry of Economic Affairs (MOEA) published a compilation of offset results showing that Taiwan had already accumulated US$5.366 billion in offset credits from 1986 to 2004.  In total, sixty-nine offset agreements were signed by the MOEA with eight countries: the United States of America, the French Republic, Germany, the United Kingdom, Japan, South Korea, Singapore, and the Netherlands.  The forty-five major defense procurement cases account for 75% of Taiwan’s total offset credits, and the twenty-one non-defense procurement cases account for 25% (MOEA, 2005).   We believe that if the Taiwanese government wants to obtain new technology and enhance its industrial capability with offsets from purchasing the above three military equipment items, a decision-making process involving the evaluation of offset execution strategies is necessary before the execution of the various weapon system purchasing programs. 
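To make the evaluation machinery concrete, here is a minimal TOPSIS sketch in Python; the decision matrix, weights and benefit flags are invented for illustration (in the paper, the weights come from AHP pairwise comparisons by the MND, MOEA and AIDC experts):

```python
import numpy as np

# Minimal TOPSIS: rank alternatives by relative closeness to the ideal.
def topsis(matrix, weights, benefit):
    norm = matrix / np.linalg.norm(matrix, axis=0)  # vector-normalize columns
    v = norm * weights                              # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # positive ideal
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))   # negative ideal
    d_pos = np.linalg.norm(v - ideal, axis=1)       # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)        # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                  # closeness coefficient C*

# Hypothetical scores for three offset strategies on three criteria.
scores = topsis(np.array([[7, 9, 6], [8, 7, 8], [9, 6, 7]], dtype=float),
                weights=np.array([0.5, 0.3, 0.2]),   # e.g., from AHP
                benefit=np.array([True, True, True]))
print(scores.argsort()[::-1])  # alternative indices ranked best to worst
```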

 

Convergence of Learning Experiences for First Year Tertiary Commerce Students – Are Personal Response Systems the Meeting Point?

Brian Murphy, University of Wollongong

Dr. Ciorstan Smark, University of Wollongong

 

ABSTRACT

This paper reflects on the need for interactivity in first year lectures. This need is suggested to arise from first year students’ diminishing tolerance for impassivity and also from the increasing accessibility of Personal Response Systems (PRS) in terms of cost, user-friendliness and students’ level of technological savvy. The ways in which PRS can enhance interactivity, and the importance of increasing interactivity for first year students’ learning outcomes, are discussed in terms of factors supporting good learning and enhancing the overall learning experience. A fundamental shift in the outlook of commerce students coming into universities today from the outlook of first year students ten years ago has been argued by many authors (for example, Tapscott, 1998; Friedlander, 2004; Davis, 2005). This shift in outlook is related to the fact that the bulk of first year students coming into Australian university courses in 2006 are both familiar with technology and (in a related development) reluctant to suffer impassive learning environments silently. This shift in outlook has been accompanied (at least in the field of commerce) by generally increasing student numbers (Freeman and Blayney, 2005) and a realization that the large lecture format of instruction is less draining of resources than smaller forums such as tutorials and seminars. The result is that, at a time when our students demand more interactivity, Australian universities are anxious to provide a teaching environment (large lectures) which has traditionally allowed little interactivity (Draper and Brown, 2004, 81). This paper argues that a judicious use of Personal Response System (PRS or “clicker”) technology can help to promote the intellectual engagement of first year students in lectures. PRS can engage the “Net-Generation” or “Millennial” student through interactivity. The importance of interactivity to people accustomed to the two-way conversation of the internet (as opposed to the one-way broadcasting of knowledge in the traditional lecture format) is mentioned by several authors (Biggs, 2003; Tapscott, 1998; Mazur, 1997; Hake, 1998).  Tapscott (1998, 22) refers to those born between 1977 and 1997 as the Net Generation (or N-Geners) and argues that their exposure to the internet in their formative years has led to this group being the antithesis of the couch-potato generation that preceded them. They are used to interactive, participatory, investigative enquiry. They have a very limited tolerance for knowledge transmission systems which require them to be passive observers (such as traditional lectures at university). ‘The students like active learning, not passively listening to a teacher drone on. They absorb a variety of information from different multimedia. They want visual stimulation - pictures, movies, animation - and not reams of paper.’ (Doherty, 2005, 3).  ‘There are huge differences between Millennials and those preceding them and parents and schools are having to play catch-up. Fast. Microsoft recently released a study, Boomers, Gen-Xers, Millennials: Understanding the New Students, which describes the new generation in detail. Millennials are spending less time watching TV, more time doing homework via the internet…The digital generation also “do” information, rather than memorise it, due to the impossibility of keeping up...Australian research concurs with the Microsoft report. 
The number of Australians using the internet at home has steadily increased since 1998, rising from 13 per cent of adults to 43 per cent in 2002 according to the Australian Bureau of Statistics. Access to the internet and use of computers is highest in younger age groups. In 2002, 86 per cent of households with children under 15 had access to a mobile phone.’ (Friedlander, 2004, 9). Ruthven (2003, 24) offers another interesting observation on the Net Generation (whom he categorises as born between 1981 and 2001) and the New Millennials (here categorised as born after 2002): they are “we” focussed instead of having the “me” focus of Baby Boomers and Generation X’ers. That is, as a group, they are group focussed and interactive. The first year students coming into our lecture halls have a different skill set, a different mind-set and different expectations from the students of a decade ago. As educators, we ignore this change at our peril. Davis (2005, 20) points out that Millennials (characterised by Davis as those born after 1982) have a very impressive ability to ‘take new technology such as peer-to-peer programs on the internet and use it to run conversations over vast networks of contacts’. As educators, we have the responsibility to grasp the optimism and skills of this new generation of first year students and harness them, rather than grumbling over “the good old days” when a lecture was still an old-fashioned lecture. This typology would place “Net-Geners” and “Millennials” as people born on or after 1 January 1982, as opposed to Tapscott (1998), who places their birth year at 1977 and onwards. What both typologies agree on is that the bulk of the new first year students facing academics in lecture halls in 2006 are technology savvy, familiar with active participation in learning, and have very little tolerance for passive educational experiences.

 

The Influence of Cultural Factors on International Human Resource Issues and International Joint Venture Performance

Dr. Lung-Tan Lu, Fo Guang University, Taiwan, ROC

 

ABSTRACT

The number of international joint ventures (IJVs) has rapidly increased during the past decade, providing multinational enterprises with the opportunity to stay competitive and manage complex international business activities. This paper proposes a conceptual model linking culture theory, international human resource (IHR) issues and IJV performance. An IJV mixes the IHR activities of parent firms from different nations, and therefore poses more complex management challenges than other entry modes in foreign markets. Many IJV failures, according to previous research, point to the importance of IHR activities. Theoretical perspectives are used to generate four propositions concerning cultural factors and IHR issues in IJV performance. Several suggestions involving culture, management styles, role stress, and conflict resolution strategies are made for future research. The issue of cultural factors for multinational firms investing in foreign countries has increasingly attracted academic attention in the field of international business (Buckley 2002). Cultural background has been suggested to be influential in the entry mode decision for internationalizing firms (the choice, for example, between wholly-owned and joint venture activity), and cultural distance between firms of different nationalities has been argued to influence co-operative strategy and the success or otherwise of international joint ventures and other co-operative modes. A cooperative strategy (of which an international joint venture, IJV, is one example) offers many advantages to a company, since the host country partner can supply familiarity with the host country’s culture and market. An IJV mixes the management styles of at least two parent companies. It therefore poses more complex management challenges than domestic managerial activity, or than the transposition of domestic routines into a foreign wholly-owned subsidiary (WOS). An IJV manager faces a difficult juggling act, trying to cope with the different management styles of the two parents, and trying to meet the possibly conflicting criteria for success that these parents impose. Differences of management style can be discerned in many areas, such as supervision style, decision-making, communication patterns, paternalistic orientation, and control mechanisms. Academic interest in comparative management grew strongly in the 1980s, partly driven by the challenge to established ideas that Japan’s striking economic success seemed to offer. Japanese firms seemed not only to be different, but also to be capable of transferring at least some of these differences to their foreign subsidiaries. Some of the most complex problems of cultural difference are faced in the context of IJVs. An IJV has at least two parent firms, and these are frequently from dissimilar national cultures. Differences between their management styles can damage IJV performance (Schuler 2001, Iles and Yolles 2002). The goal of this study is to explore the impact of cultural factors and IHR issues on IJV performance. This paper is structured as follows: in Section 2, we consider, at a theoretical level, the influence of cultural factors and international human resource issues (i.e. management style, role stress, and conflict resolution strategies) on IJV performance, and build up a conceptual model. The final section presents our discussion and conclusion. Culture is a complex concept with numerous definitions. 
However, Hofstede’s (1980) definition “the collective programming of the mind, which distinguishes the members of one human group from another” probably is the most cited one since 1980s. His IBM survey is a large-scale investigation across more than 50 national branches of IBM. He found four dimensions by a combination of factor analysis and theoretical reasoning (i.e. Power Distance, Uncertainty Avoidance, Individualism, and Masculinity). His findings of four dimensions across countries have given rise to the concept of ‘national culture’. Hofstede argues that nation can be the cultural boundary since the different systems between countries such as legal and educational systems are the collective programming, which differentiate people of one nation from another (Hofstede 1983). A survey of Chinese values was conducted by a team of 24 researchers called The Chinese Culture Connection (1987) in 22 countries. Three of the factors found in this study from the Chinese Value Survey (CVS) correlated with three of Hofstede's four dimensions. One of the factors was unrelated to any of Hofstede's but correlated with the named Confucian work dynamism. It was introduced as the fifth dimension in Hofstede's study (1991). He divided it into two poles: one is labeled "long-term orientation" which is more oriented towards the future (especially perseverance and thrift); another is labeled "short-term orientation" which is more oriented towards the past than the present. Hofstede argues that long-term orientation may be the key factor of economic growth in Asia (1991, p.167). Cultural similarity refers to one party who observe a similar degree of behavioral patterns to another party (Lin and Germain 1998). The concept of cultural similarity and nationality are correlated and distinct (Buckley and Casson 1988). IJV foreign partners from different countries could perceive local partners different ways of managing and operating so that they can learn and corporate with each other. In this study, we use the three concepts to measure (i.e. Hofstede’s cultural dimensions, nationality, and cultural similarity). Until recently, the issues of international human resource management arising in IJVs have received rather little attention within MNE managers and academia (Child and Faulkner 1998). Failure of cooperation between can result in the poor adjustment of managers to working on IJV management.

 

Measuring Goodwill: Rationales for a Possible Convergence between the Excess Profits Estimate and the Residual Value Approach

Dr. Marco Taliento, University of Foggia, Italy

 

ABSTRACT

This study examines the two principal approaches to goodwill valuation which are widely accepted in the academic literature and in the best accounting practice: (i) the discounted excess earnings technique and (ii) the residuum estimation procedure. The specific attempt of this study is to verify whether – and under which technical conditions and limits – it is possible to achieve a quantitative convergence between the results coming from the two methods. In particular, since the former (‘direct’) method takes the firm’s net income power as its explanatory variable, any observed economic/financial convergence must be regarded as unlikely, or merely casual, if the other (‘indirect’) estimation approach is incoherently based on some alternative value driver (e.g. cash flow, unless the direct goodwill measurement itself involves some kind of ‘abnormal’ cash stream or similar performance indicator). Therefore, attention is paid to the determination of the above-normal earnings capacity (excess earnings) and, on the other hand, to the allocation of the value of the firm, assessed as a whole, first to every identifiable asset and liability of the business enterprise and then, as a residual value, to goodwill. Against this backdrop, numerical illustrations and methodological remarks on time periods and discount rates are provided. Goodwill is a crucial business concept (1) that is not easy either to qualify/quantify or to account for correctly (2) (3). In general terms, it represents a significant intangible asset which reflects the transferable and sustainable competitive advantage of firms. Financial statements usually exhibit the value of goodwill only after a business combination occurs (e.g. mergers, acquisitions, takeovers, demergers, etc.); nevertheless, it is not uncommon today to see firms that estimate their own earning power, future economic performance or capacity to create business wealth over time. In fact, nowadays a growing number of firms utilize suitable metrics focused on the ‘goodwill’ concept/measure – or upon some equivalent notion like market value added (4) – in order to correctly adopt (and then control) effective and efficient management decisions (M&A projects, future investments, capital budgeting, corporate restructuring, appropriate financing, etc.), take valid corporate/business strategies, and support or improve modern value-oriented disclosure schemes and new information sets for equity investors and other stakeholders (5). Within this context, it is worthwhile drawing attention to a specific issue: the question of choosing among the various methods of goodwill estimation. It is common knowledge that, with regard to the longstanding academic controversy between the direct procedure for determining goodwill, founded upon the anticipated ‘above normal’ earnings, and the indirect ‘differential’ or ‘residuum’ model, which is instead based on the excess of the price of a (hypothetical or real) acquisition over the fair (market) value of the entity’s net assets, the prevailing opinion seems to lean towards the latter approach, whereas the former – de facto – is regarded as a mere tool for validating the other one.  
Convinced that an a priori refusal of the substantial correlation binding the two methods together should be avoided, we provide some reasonable working hypotheses in an attempt to verify whether, and when, it seems proper to judge – at least from a quantitative point of view – the above-mentioned procedures as equivalent. More specifically, the objective of this paper is to stress the technical possibility of bringing – under particular conditions – the direct estimation of goodwill back to the indirect approach (and vice versa), thus implying the same result. The working hypotheses that seem most likely to imply the quantitative convergence of the result of the direct procedure toward the result of the indirect method – which is recognized, from a theoretical perspective, as the more robust of the two – are the following: since the calculation of the abnormal earnings capacity of firms is driven especially by the economic variable ‘income’ (i.e. the earnings stream), the calculation of the value of owners’ capital (the minuend) in the indirect procedure should likewise be coherently driven by the income-stream forecast (i.e. through a discounted future earnings model or a similar method); the time frame for the goodwill estimate, in both approaches, is infinity, so that the aforesaid valuations will depend – in line with frequent practice – on the capitalization of a suitable income flow (usually net income, considered either before or after the appropriate return on capital outflow); and the discounting/capitalization rate of the forecast excess earnings is set equal to the congruous rate of return (namely, the opportunity cost of equity capital) employed for discounting/capitalizing future income streams. In the absence of reasons – for the purposes of these notes – to remove the first hypothesis (i.e. considering income flow as the pivotal value driver) (6), the effects of the possible removal of the two remaining assumptions about the ‘time’ and ‘rate’ of estimation will finally be clarified. Following our premise about the significance of valuing goodwill through (a) the above-normal earning capacity and, alternatively, (b) the excess of the price of a business combination [W] over the ‘fair valued’ net assets of firms [K’], it is possible to show that:  
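The formula the abstract introduces at this point is not reproduced in this excerpt. As a hedged reconstruction under the three working hypotheses just stated (income as the pivotal value driver, a perpetual horizon, and a single rate i serving both as the congruous rate of return and as the capitalization rate), with R denoting the expected net income stream and W and K' as defined above, the convergence can be sketched as:

```latex
% Indirect ("residuum") approach: goodwill as the excess of the value of
% owners' capital W (income capitalized in perpetuity) over fair-valued net assets K'.
W = \frac{R}{i}, \qquad GW_{\mathrm{indirect}} = W - K'

% Direct ("excess profits") approach: perpetual capitalization of the
% above-normal earnings R - iK' at the same rate i.
GW_{\mathrm{direct}} = \frac{R - iK'}{i} = \frac{R}{i} - K' = W - K' = GW_{\mathrm{indirect}}
```

Capitalizing the excess earnings at a rate different from i, or over a finite horizon, breaks this identity, which is precisely why the paper treats the ‘time’ and ‘rate’ assumptions separately.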

 

Research Discussion of Independent Mechanism in an Industrial Area Developed by the Government

Dr. Li-Hsing Ho, Chung Hua University, Institute of the Management of Science and Tech., Taiwan

Chao-Lung Hsieh, Chung Hua University, Institute of the Management of Science and Tech., Taiwan

 

Abstract

Our country has regulated investment since the 49th year of the Republic of China (1960), joining the strength of the government and the Chinese people in creating investment and production space together. Over the past forty years, our country has developed more than 13,000 hectares of industrial areas in total, making a great contribution to the development of Taiwan. Conditions for land sales in the industrial areas have been poor in recent years, primarily as a result of economic recession, and funding sources for the management of industrial-area development have been reduced. The government has invested in order to fulfill manufacturers' demand for industrial development; investments are made to promote the relevant coaching and services and to continuously decrease the costs of administration and management. However, the development and management of the industrial areas is in a difficult phase. Efforts to bring new life to government organization and management of the industrial areas are already under way, and an idea of independent operation in the industrial areas has been generated. This would enable the industrial areas to keep pace with current trends in service efficiency, to strengthen diverse customer service functions, and to decrease the financial shortfall in development management. From the historical background outlined above, it is clear that the government needs a reform of its organization to improve the operational and financial issues related to management of the industrial areas. A so-called "responsibility center system," as well as "public services outsourced to the private sector," have been proposed among other strategies; however, these strategies seem unable to completely solve all the existing problems. In recent years, the financial costs of development management within the industrial areas have expanded, one reason being the high cost of labor: the management maintenance fees and wastewater processing fees collected cannot cover these expenses. The National Asset Commission points out that there is a problem in the funding of development and operation in the industrial areas. This is described in the Commission's "general list of recognized parts for the improvement of revision results and the non-operational special fund deposit of the central government," which states: "there are 60 sites within the industrial jurisdiction of the development funds for industrial areas, 47 sites have service centers and 36 have sewage treatment plants…whose functions are limited to maintaining the environment inside the industrial areas; the functions are insufficient, besides the poor fund finances..." The suggestions for sustainability and improvement are: "the sewage treatment plants and service centers must be revised and relieved of their missions, allowing the factory owners to set up their own management commissions, gradually revising and improving the outsourcing management methods, and employing suitable staff." 
When the industrial management institutions reported to the National Asset Commission on "poor sales of the land in the industrial areas and the elimination of the service centers," they pointed out that, according to the Council of Organizational Reform, Executive Yuan, the "institution operation revision principle" has established assessment parameters along four directions: outsourcing to the private sector, localization, outsourcing of operations, and conversion into legal entities. After analysis, the suggestion is to turn the institutions into legal entities. The Council of Organizational Reform, Executive Yuan, points out that in the past the operating model in the industrial areas has always been "according to the regulations implemented by the government." In the future, it should be revised and directed toward "industrial area operation autonomy management." That is to say, the management model should imitate the spirit of the "apartment building management regulations": the various parties inside the industrial areas (government and factories) should establish their own governing rules or regulations and, through a management commission or through outsourced management under industrial autonomy authorization, operate on a mainly self-financed, self-staffed basis, absorbing their own profits and losses. This project is based on this philosophy. The government, according to the "rewarded investment regulation" and the "enhancement of industrial upgrade regulations," has directed development in the industrial areas. After more than four decades of such management, our economy has prospered, and its performance has been called an "economic miracle." In recent years, however, the problems with financing industrial-area operations and the promotion of the government reform plan have introduced new thinking with regard to the operating philosophy and operating model. To enable the management mechanism to meet new trends, to maintain the existing service efficiency and to reinforce diverse customer services, this research discusses the improvement of management efficiency in the industrial areas, accomplished through the professional skills of the industrial-area management mechanisms, quality control in business administration, and so on. Based on the above, this research includes the following four objectives: to discuss the opportunities for autonomous management in the industrial areas, drawing on the experience accumulated by the industrial-area management mechanisms over more than 42 years of serving the factories and managing public assets, and on their revenue and economic value added; and to discuss and evaluate possible economic activities inside the industrial areas, to coordinate knowledge and economic trends, to innovate product commercialization and the sale of management knowledge, and to raise the management competitiveness of the industrial areas.

 

Financial Management in the Nonprofit Sector: A Mission-Based Approach to Ratio Analysis in Membership Organizations

Dr. Anne Abraham, University of Wollongong, Australia

 

ABSTRACT

Nonprofit organisations (NPOs) are melting pots combining mission, members and money. Given that the mission of a nonprofit organisation is the reason for its existence, it is appropriate to focus on financial resources in their association with mission and with the individuals who are served by that mission (Parker 2003; Wooten et al. 2003; Colby and Rubin 2005). Measurement of financial performance by ratio analysis helps identify organizational strengths and weaknesses by detecting financial anomalies and focusing attention on issues of organizational importance (Glynn et al. 2003). Questions have been raised that relate the performance of NPOs to their financial resources, their mission and their membership. Addressing these questions is the key to analysis and measurement of financial and operational control (Turk et al. 1995) and provides an appropriate analysis of past performance which will help an organisation chart its future direction. This paper analyses financial performance by concentrating on ratio analysis in order to identify anomalies and focus attention on matters of significant concern to NPOs. It discusses the centrality of mission in the use of financial ratio analysis and extends previous financial performance models to develop one that can be applied to individual NPOs, thus ensuring that financial performance analysis is not carried out in isolation from any consideration of an organization’s mission. The paper concludes by identifying the limitations of such an analysis and makes suggestions for further application of the model. Nonprofit organisations (NPOs) are melting pots combining mission, members and money. Mission is the central thrust of an NPO, the very reason for its existence (Drucker 1989; Oster 1994). But there is no mission without members. This centrality of mission and members means that although NPOs are ‘involved in the provision of health services, education, personal social services, and cultural services of various kinds’ (Salamon, Hems and Chinnock 2000, 5), they can be expected to differ from organisations in the other two sectors (businesses and government agencies) which provide similar services (Weisbrod 1989). An NPO has also been defined as an organisation that has ‘predominantly nonbusiness characteristics that heavily influence the operations of the organization’ (FASB 1980). As a result, an NPO may neglect to use accounting information to facilitate organizational control. The key to this oversight is often related to the culture of the organisation, and the very fact that it is not profit-oriented. Many NPOs started with a ‘cause’ and lacked an early professional management orientation. This trend is still evident, with Froelich et al. (2000) reporting that about a third of their sample of 363 large and medium-sized NPOs in the USA do not employ staff with accounting qualifications. An annual operating budget may be the extent of the financial planning, with this budget developed in isolation and not as part of a long-term strategic plan (Callen, Klein and Tinkelman 2003). Rather than planning in advance, NPOs often tend to react to changing circumstances and events. Thus, their systems have developed as responses, not as initiatives. Hence, the nature of financial management has been reactive rather than proactive.  
Arnaboldi and Lapsley (2004) reported that even when an NPO adopted a financial management technique (in their case, activity-based costing), it did so to present itself as being “up-to-date and modern to its external controlling environment” (Helmig, Jegers and Lapsley 2004, 105), rather than to actually implement the technique to improve its financial management system. The rise of accounting at various times in the history of many NPOs appears consistent with times of crises (Abraham 1999). The organisation may hold special meetings, special conferences, special appeals and employ facilitators to attempt to improve the situation. This is consistent with the conclusion that ‘accounting arises in partially rationalized (or partially bureaucratized) settings’ (Meyer 1994: 129). Meyer further argues: ‘When [an] organization is relatively complete, controlling its own definition of reality, accounting becomes less necessary, and sometimes intrusive. … We thus expect to find accountants in greater numbers where [an] organization is not self-sufficient’ (Meyer 1994: 129-130). Thus, it appears that an accounting system can operate in an organisation and yet not function as a control mechanism or as a mechanism to provide accountability. In an organisation that values informal relationships, voluntary participation and ‘niceness’, the idea of explicit accountability may be somewhat alien. While the need for accountability is acknowledged, the reality is often a different matter (Lee 2004; Molyneaux 2004). It may be necessary to change the organization’s culture so that accountability is incorporated as a positive value (Little 2004; Poole et al. 2001). It may be important to introduce more professional financial management (Gallagher and Radcliffe 2002; Parker 2003). Consequently, the use of accounting as a control mechanism is not merely a technical system, but a socio-technical system ‘because it involves change in the social or cultural system which interacts with the accounting technology’ (Flamholtz 1983: 166). The next section of this paper analyses financial performance by considering ratio analysis in order to identify anomalies and focus attention on matters of significant concern to NPOs: mission, members and money. The third section discusses the use of financial ratio analysis, while the fourth examines and extends Turk et al.’s financial performance model. The final section identifies the limitations of such analysis and provides suggestions for further research.
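As a minimal illustration of the kind of mission-linked ratio analysis the paper advocates, the sketch below computes a few ratios that tie money to mission and members. The specific ratios of the extended Turk et al. model are not reproduced here; the ratio choices, figures, and names are hypothetical.

```python
# Hypothetical NPO financial data (all figures illustrative only).
financials = {
    "total_revenue": 500_000,
    "membership_revenue": 200_000,
    "program_expenses": 320_000,      # spending directly on the mission
    "admin_expenses": 80_000,
    "fundraising_expenses": 40_000,
    "current_assets": 150_000,
    "current_liabilities": 60_000,
    "members": 1_200,
}

def npo_ratios(f):
    """Compute a few commonly cited nonprofit ratios linking money to
    mission and members; the selection is illustrative, not the paper's model."""
    total_expenses = (f["program_expenses"] + f["admin_expenses"]
                      + f["fundraising_expenses"])
    return {
        # Share of spending that goes directly to the mission.
        "program_expense_ratio": f["program_expenses"] / total_expenses,
        # Short-term solvency: can the NPO pay its bills?
        "current_ratio": f["current_assets"] / f["current_liabilities"],
        # Dependence on members as a funding source.
        "membership_reliance": f["membership_revenue"] / f["total_revenue"],
        # Mission spending per member served.
        "program_spend_per_member": f["program_expenses"] / f["members"],
    }

for name, value in npo_ratios(financials).items():
    print(f"{name}: {value:.2f}")
```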

 

Chinese Currency Forecasts and Capital Budgeting

Dr. Ilan Alon, Rollins College, Winter Park, Florida

Dr. Ralph Drtina, Rollins College, Winter Park, Florida

 

ABSTRACT

Forecasting the Renminbi (also called RMB or Yuan) is crucial for any type of capital budgeting or long-term investment assessment in China. This article makes a dual contribution by first showing how to use exchange rates in a capital budget model and, second, by forecasting the 10-year RMB-US dollar exchange rate with the help of scenario analysis and using this forecast in a capital budget model. We believe that the RMB will appreciate against the dollar in the decade to come, making the net present value of long-term invested capital higher in US dollar (USD) terms. China’s urban middle class, estimated at 50 million, is growing rapidly and has disposable income to buy consumer products favored by developed economies worldwide (Browne, 2006). Foreign direct investment in China during 2005 reached $60.3 billion, and this number is expected to grow in the future. For many companies, investing is not an option but a necessity, because the opportunity cost of not investing can be quite high. China is now the leading host of foreign direct investment in the world. Investing in China, however, does require the company to take risks, including political, economic, country, and project-related risks. Changes in the exchange rate, in particular, are relevant to almost any type of foreign direct investment and, with the liberalization of the Chinese Yuan, currency conversion is now a growing concern in China. This paper makes a dual contribution by first presenting a model for including exchange-rate changes in the capital budgeting process and, second, developing a forecast for the Chinese Yuan. Given the importance of China to today’s world investment and the changes that are occurring in the currency, the article discusses a salient global management issue. For the capital budgeting process, foreign companies investing in China must make an estimate of the long-term expectation for conversion of the Chinese Yuan. Such an estimate is needed in order to calculate the profitability, required return, and present value in the home currency. Currency conversion is particularly critical in developing capital budgets, since calculations depend on the accuracy of forecasting the amount and the timing of project cash flows. In this article, we offer a practical means for evaluating the potential for long-term revaluation of the currency when making capital investments in China. The discussion is predicated on a firm making a ten-year commitment to its investment overseas. We present several possible – yet divergent – scenarios for annual revaluation of the RMB during this ten-year investment period. We then show how different conversion rates affect the outcome of a capital budget denominated in US dollars. We conclude by offering a model that allows managers to incorporate assessment of currency risk into their estimates of project cash flows. The article is structured as follows: first, we present the capital budgeting process; next, we explain the RMB currency situation, including difficulties in forecasting, its history, and the current situation; finally, we develop scenarios for the Chinese currency and apply them to a model we develop. At the end, the reader should better understand how currency exchange expectations can be implemented in the capital budget, and appreciate the Chinese currency exchange scenarios that may ensue in the decade to come. Corporations create shareholder value by generating future cash flows.  
The capital budget offers a mechanism for collecting information on future estimates of project cash flows and discounting those amounts back to the present. A project’s net present value (NPV), the sum of its discounted cash flows, offers a measure of the project’s contribution to corporate value. Three critical estimates are needed to determine a project’s net present value: the amount of cash, the timing of cash, and the discount rate. An error in any of these three variables can cause serious misstatements in calculating the project’s worth. The discussion in this paper is focused on the amount and timing of cash flow, since they are directly affected by the RMB exchange rate. A firm considering an investment in China will likely begin its analysis by estimating currency flows denominated in RMB. It would then convert RMB each year to US dollars, which would be discounted to determine NPV; these discounted cash flows determine the project’s acceptability. MNC firms investing in China face two issues in calculating project cash flows. They must predict the amount and timing of cash flows, typically in local currency. Then they must estimate the exchange rate for each year of the capital budgeting model. What ultimately determines acceptability of the project to a foreign investor is the NPV in its own domestic currency. A project that reported a positive NPV in RMB may become negative after this conversion. It is also possible that a project with a negative NPV in RMB may become positive in a firm’s domestic currency. Such a change in direction of NPV can arise from the original timing of RMB currency inflows and from the rates used for year-to-year currency conversion. Longer-term projects will tend to have greater uncertainties in cash flows and in conversion rates. 
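A minimal sketch of the two-step calculation described above: forecast RMB cash flows, convert each year at a scenario exchange-rate path, then discount in USD. All figures and the appreciation path are hypothetical illustrations, not the authors' forecasts.

```python
# NPV of a China project in USD under a scenario RMB/USD exchange-rate path.
# All numbers are hypothetical, chosen only to illustrate the mechanics.

rmb_cash_flows = [-80_000_000] + [15_000_000] * 10   # year-0 outlay, years 1-10 inflows (RMB)

# Scenario: RMB appreciates ~3% per year from a spot rate of 8.0 RMB per USD,
# so fewer RMB are needed per dollar each year.
spot = 8.0
fx_path = [spot * (1 - 0.03) ** t for t in range(len(rmb_cash_flows))]  # RMB per USD

usd_discount_rate = 0.12  # required USD return (hypothetical)

def npv_usd(cash_flows_rmb, fx, r):
    """Convert each year's RMB cash flow at that year's RMB/USD rate,
    then discount the USD amounts back to the present."""
    return sum(cf / rate / (1 + r) ** t
               for t, (cf, rate) in enumerate(zip(cash_flows_rmb, fx)))

print(f"NPV in USD: {npv_usd(rmb_cash_flows, fx_path, usd_discount_rate):,.0f}")
```

Re-running the function with a flat or depreciating path shows how a project that is acceptable in RMB terms can change sign in USD terms, which is the point the passage makes.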

 

A Study on the Information Transparency of the Involvements by Venture Capital—Case from Taiwan IT Industry

Dr. Dwan-Fang Sheu, Takming College, Taiwan

Hui-Shan Lin, Deloitte, Taiwan

 

ABSTRACT

A series of financial scandals involving Tung Lung Hardware, Central Bills, Goldsun and Taichung Commercial Bank occurred in Taiwan owing to unsound corporate governance, especially weak information transparency. This study explores the role of venture capital in the information transparency of IPO companies in the IT industry from 2001 to 2003. Regression analysis is used to explore how the information transparency of the invested companies depends on whether venture capital is involved, the shareholding rate of venture capital, the number of venture capital investors, and whether venture capitalists are appointed as directors and supervisors. The study proceeds year by year based on the “Criteria Governing Information to be Published in Annual Reports of Public Companies” as amended by the Securities and Futures Bureau in 2001, 2002 and 2003. Empirical results indicate that information disclosure is significantly positively correlated with venture capital involvement, venture capital shareholding and the number of venture capital investors in all samples of 2001, 2002 and 2003. Only in the 2001 and 2002 samples is the appointment of venture capitalists as directors and supervisors significantly positively correlated with information disclosure. Because of their involvement, venture capital companies have the ability to ask the invested companies to disclose more relevant information, strengthen information transparency, and minimize the possibility of the concealment of information by insiders. The financial crisis caused by the US Enron bankruptcy at the end of 2001 shook the confidence of the US stock market. The amounts involved in a series of subsequent frauds at large companies such as WorldCom and Merck grew larger and larger, and investors became aware of the discrepancy between the reliability of financial statements and the values of companies. As a result, not only did the financial crises of individual companies come to light, but investor confidence also crashed. In Taiwan, a series of scandals involving Tung Lung Hardware, Central Bills, Goldsun, and Taichung Commercial Bank in 1998 indicated that the alert function of financial examination did not work, with the result that the responsible persons of enterprises used their affiliates and group financial institutions to manipulate capital, obtain excessive loans, and oversell the assets of their companies. These fraudulent practices not only caused financial crises for the companies concerned, but also brought worries to the financial market. In particular, after the Enron case, investors suddenly became aware of the importance and value of a company's information transparency, in which a strict and fair accounting system should play an important role. When the financial information a company discloses to the public cannot completely satisfy investors' desire for “awareness,” how to increase the transparency of a company through the “value report” method is an issue in implementing corporate governance. Venture capital (VC) firms are promoters of the high-tech industry in Taiwan, mainly because of their aggressive and strategic “post-investment management” functions. In addition, the managements of venture capital firms focus on promoting corporate governance principles such as regulatory compliance, information disclosure and transparency, in order to protect their rights and further increase the value of the invested companies. Barry et al. 
(1990), Megginson and Weiss (1991), and Lin (1996) suggested that venture capital companies have the functions of supervision and verification. The supervision function holds that aggressive involvement and participation in the decision-making of the invested company will be recognized by market participants, resulting in better performance of the invested company and thus eliminating the information asymmetry between issuer and investors. The verification function means that these outside economic entities, because of two features – their understanding of information about the quality or prospects of a company, and their need to maintain their reputation for perpetual operation and development – have the ability and incentive to faithfully disclose and reflect the true value of a company; therefore, the interested parties of a company can acquire private information about its quality or prospects through their economic actions, just as if such information were assured by them. This study intends to explore the effect of venture capital involvement on the information transparency of the invested company. IPO (Initial Public Offering) companies in the IT industry are selected as the objects of this study. Megginson and Weiss (1991) suggest that a company invested in by venture capital will better attract a reputable underwriter, and that the time and cost of listing will be decreased. The existence of venture capital can eliminate the information asymmetry between the listed company and financial specialists, and between the listed company and investors. Admati and Pfleiderer (1994) indicate that, lacking such insider investors as venture capital, a company will not disclose all private information, such as the formation of new securities, the underwriting price, and the selection of the underwriter, resulting in agency problems of over- or under-investment. This study argues that the involvement of venture capital in a company will enhance the soundness of the corporate governance mechanism and the level of information transparency. Therefore, the following hypotheses are developed: Information transparency of a company with venture capital involvement is better than that of a company without venture capital involvement. The higher the shares held by venture capital, the better the information transparency of the invested company. The greater the number of venture capital investors, the better the information transparency of the invested company.
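A hedged sketch of a regression design consistent with these hypotheses and the variables the abstract names (a disclosure score regressed on a VC-involvement dummy, the VC shareholding rate, the number of VC investors, and a director/supervisor dummy); the data, the scoring, and all variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # hypothetical sample of IPO firms

vc_involved = rng.integers(0, 2, n)                 # 1 if any VC investor
vc_share = vc_involved * rng.uniform(0, 0.3, n)     # VC shareholding rate
vc_count = vc_involved * rng.integers(1, 6, n)      # number of VC investors
vc_board = vc_involved * rng.integers(0, 2, n)      # 1 if VC seated as director/supervisor

# Simulated disclosure score (e.g. items disclosed under the annual-report criteria).
disclosure = (10 + 3 * vc_involved + 8 * vc_share + 0.5 * vc_count
              + 1.5 * vc_board + rng.normal(0, 2, n))

# OLS of the disclosure score on the four VC variables.
X = sm.add_constant(np.column_stack([vc_involved, vc_share, vc_count, vc_board]))
fit = sm.OLS(disclosure, X).fit()
print(fit.summary(xname=["const", "vc_involved", "vc_share", "vc_count", "vc_board"]))
```

In the paper's design this regression would be run separately for each sample year (2001, 2002, 2003), which is how the year-specific director/supervisor result could emerge.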

 

The Effect of Corporate Identity Changes on Firm Value: An Empirical Investigation

Dr. Ceyhan Kilic, New York Institute of Technology, New York, NY

Dr. Turkan Dursun, New York Institute of Technology, New York, NY

 

ABSTRACT

The objective of this study is to examine the value-creation effects of company name change announcements. Additionally, the wealth-creating effects of the company type (consumer versus industrial goods companies) and the type of name change (partial versus complete name changes) are investigated. An event-study methodology utilizing a multivariate regression model is employed. The final sample included 44 name change announcements made by U.S. companies. The empirical results of this study indicate that name changes add to firm value. Furthermore, it was found that name changes made by industrial goods companies with a monolithic identity reduce shareholders’ wealth significantly. However, name changes made by consumer goods companies with a branded identity do not affect investors’ perception of firm value. In terms of the type of name change, partial name changes generate positive and significant stock returns. Managerial implications and future research suggestions are also provided. A corporate name change is a major strategic decision. The last two decades have witnessed a continued increase in name-changing activity among U.S. corporations. Every year hundreds of companies confront the challenge of changing their names and carrying out the additional activities associated with corporate name changes. Name changes have often resulted from friendly or hostile takeovers, corporate acquisitions, spin-offs, restructuring, mergers, and new strategic direction (Morris and Reyes 1991; Morley 1998). According to recent statistics from the Schecter Group Name Change Index, the number of name changes by publicly traded companies went up, reaching 197 in 1992, 28.9% higher than in 1991; financial services led in name changes (Marketing News 1993, p.1; Slater 1989). The Wall Street Journal reports that, in 1995, a total of 1,153 companies changed their names, a figure up 4% from that of 1994. More than half of these companies changed their names because they were involved in some kind of merger, restructuring, or acquisition activity. The name and logo are the two basic elements of corporate identity. Corporate identity is “a visual statement of who and what a company is” (Gregory and Wiechmann 1991, p.61). The effect of a name change, or identity change, can be dramatic for the company’s shareholders in both positive and negative ways (Ferris 1988). The potential effect of identity change via name change on firm value therefore needs to be investigated. A stream of research on company name changes has focused on examining the impact of a company name change on the company’s stock prices (e.g., Ferris 1988; Horsky and Swyngedouw 1987; Madura and Tucker 1990; Morris and Reyes 1991). These studies have utilized an event-study methodology to investigate the possible association between various aspects of a change in a company’s name and its common stock’s performance in financial markets. Horsky and Swyngedouw (1987) examined the effect of a new name on the firm’s profit performance, and what type of firm is more likely to benefit from a name change, for a sample of 58 companies that made name changes. Ferris (1988) views a corporate name change as a signal sent by the firm’s management to its current and potential owners/shareholders about future firm profitability. Corporate name change announcements between 1983 and 1985 were used to examine the impact of a name change on the firm’s performance. 
Madura and Tucker (1990) attempted to find out how the stock market reacted to savings institutions that removed the “savings and loan” designation from their names. Their sample included only 12 savings institutions that engaged in mainly “cosmetic” name changes from 1987 through 1989. Morris and Reyes (1991) explored whether there is a significant difference between the excess stock return rates of companies whose new names exhibit the five functional characteristics (distinctive, flexible, memorable, relevant, and positive) of a well-chosen name suggested by the relevant literature. A random sample of 28 firms that undertook “pure” name changes during the period 1979-1985 was selected. The general objective of this study is to investigate the announcement effect of corporate identity changes through company name changes on firm value, using more current data for a relatively longer time period. More specifically, this study attempts to identify whether there is any difference in the reaction of the financial market in terms of the type of company (industrial versus consumer goods companies) and certain name characteristics (partial versus complete name changes). Neither the management nor the marketing literature has produced substantial conceptual and empirical research on the effects of corporate name changes. In this area of research, it is important to conduct more replication studies to establish the consistency, or inconsistency, of the empirical results of similar studies over time. So far, this need has not been fulfilled by the current literature; in this study, we partially aim to fulfill this need as well. Ferris (1988) argues that one crucial aspect of agency theory is the information asymmetry between a firm’s owners and managers. Agency theory proposes that the principal contracts with an agent to conduct the necessary transactions of the business in an integrated fashion. The transactions are regarded as the foundation for the profitable conduct of the firm, and they must be based on decisions that are rationally made to accomplish the objectives of the principal (Winfrey and Austin 1996). Ferris (1988) characterizes a firm as one “where the shareholders are the owners and the managers are agents hired to serve the owners, but are motivated by their self-interest” (p.41). The typical agency model deals with a principal and an agent in clearly superior-subordinate roles, but the traditional roles played by the partners may be reversed; as a result, the agency relationship becomes reciprocal. According to Winfrey and Austin (1996), the reciprocal nature of this relationship may cause information asymmetry and monitoring problems. In a firm, the owners (shareholders) cannot control or observe all the transactions of a manager directly, and each party acts in its own self-interest. The agents have an interest in eliminating or reducing the conflict between the principal and agents, given the asymmetry of information between the two parties (Ferris 1988). According to the signaling hypothesis, economic information exclusively possessed by management will be conveyed to the shareholders via various signals. Such signals might reduce the problem related to the information asymmetry. Corporate name changes can be regarded as such a signal to current shareholders and potential investors in the financial market. A company name change can communicate a variety of messages or information to the financial community. 
A corporate name change may simultaneously carry both negative and positive messages to the members of the financial market; each member perceives and evaluates this information differently, and responds to it accordingly.
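A minimal sketch of the market-model event-study logic that underlies tests of this kind: estimate normal returns over a pre-event window, then cumulate abnormal returns around the announcement. The returns are simulated and the window lengths are hypothetical, not the study's actual design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily returns: a market index and one name-changing firm.
T = 281                       # 250-day estimation window + 31-day event window
market = rng.normal(0.0004, 0.01, T)
firm = 0.0002 + 1.1 * market + rng.normal(0, 0.012, T)

est = slice(0, 250)           # estimation window
event = slice(250, 281)       # days -15..+15 around the announcement (day 265)

# Market model: fit alpha and beta on the estimation window.
beta, alpha = np.polyfit(market[est], firm[est], 1)

# Abnormal returns in the event window, and their cumulation (CAR).
ar = firm[event] - (alpha + beta * market[event])
car = ar.cumsum()
print(f"alpha={alpha:.5f}, beta={beta:.3f}, CAR over event window={car[-1]:.4f}")
```

Averaging CARs across the 44 announcements and testing whether the mean differs from zero, overall and by subgroup (industrial vs. consumer, partial vs. complete), is the standard way such hypotheses are examined.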

 

Estimating Costs of Joint Products: A Case of Production in Variable Proportions

Dr. Ying Chien, University of Scranton, Scranton, PA

Dr. Roxanne Johnson, University of Scranton, Scranton, PA

 

ABSTRACT

The purpose of this paper is to offer two different joint cost allocation models to add to the traditional cost allocation methods with which we are most familiar. Cost accounting, of which this concept is an essential component, is becoming more and more important as the need for the control of costs, no matter how marginal such controls may be, is recognized within the business community. The importance of cost accounting is evident in that cost information is essential for production planning, budgeting, product pricing, and, inevitably, inventory valuation for financial reporting purposes. As in all matters significant to the preparation and dissemination of financial statements, one must recognize the use of estimates, arbitrary allocations, and alternative methods of constructing the final information that constitutes the building blocks of these statements. This is nowhere more evident than in the various methodologies used to attribute joint costs to product lines for purposes as varied as decision-making, planning, and inventory valuation. In this paper, two new methods, using a multiplicative total joint cost model and an additive total joint cost model respectively, are proposed to estimate production costs for joint products produced in variable proportions over a period of time. The multiplicative total joint cost model and additive total joint cost model, their characteristics, and the procedures for estimating production costs for individual joint products are described. Numerical examples are presented to illustrate the application of the models. Cost information is vital for production planning, cost control, product pricing, inventory valuation, and, ultimately, financial reporting. When a group of different products is produced simultaneously with a single production process or a series of production processes, joint costs occur up to the point where the joint products are separated. Joint costs as a component of cost accounting are becoming more and more important as companies in a variety of industries join preexisting firms manufacturing joint-cost-based products such as oil, beef, chemicals, etc. Previously, the purpose of joint cost allocation has been to attribute joint costs to major product lines for the purposes of meeting financial reporting requirements. The purpose of this paper is to add two new, unique approaches to the preexisting techniques for joint cost allocation currently in use; these approaches are at the same time both sophisticated and straightforward to apply. Traditional cost allocation methods, such as the current sales method, the physical units method, and the relative sales value method, serve mainly the purposes of inventory costing and product pricing [Barton and Spiceland, 1987; Biddle and Steinberg, 1984; Horngren et al., 2006, pp. 565-573]. All assume a linear database, and all restrict cost allocation techniques based on “how it’s always been done.” Thus, most of these traditional methods generally attempt to assign costs to the individual joint products in relation to their relative revenue-generating power, or simply to get it done. The calculations of cost allocations are based on a single observation of cost data at a given point in time. In the rapidly changing world we live in, this assumption cannot be maintained as an accurate basis for decision-making, planning and cost control, even though it will obviously still meet the requirements of financial reporting. 
The purpose of this paper is therefore to propose new, more appropriate methods of estimating production costs for joint products produced in variable proportions over a period of time. There are many situations where a firm has the ability to vary, at least to some extent, the proportions in which joint products are produced. For example, a refinery manager may regulate the quantities of petroleum products such as gasoline and fuel oil produced from a given amount of crude oil. As a result, a joint cost comprising crude oil cost and manufacturing cost must be allocated among the consequent petroleum products in order to derive such cost information as individual average costs and marginal costs for production planning and pricing decisions. The very use of the word allocation alludes to the inevitable but heretofore acceptable inaccuracies of the resulting calculations. The value of this attribution for decision-making, production planning and cost control is therefore dubious, as the requirements of financial accounting do not necessarily fully meet the needs of cost accounting. However, when considering the use of any of the joint cost allocation methods under the cost accounting umbrella, each should be evaluated with cost-benefit analysis in mind when deciding how many resources to dedicate to a particular technique: the more complex the technique becomes, the more carefully its cost must be weighed against the additional accuracy it is perceived to gain. When a firm is able to vary the proportions of production for the joint products over time, it is possible to measure production costs for each of the joint products by examining the mathematical relationship between joint costs and the production of the joint products. 
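A hedged sketch of how such cost functions could be fitted from time-series observations of quantities and total joint cost. The additive form C = a + b1*q1 + b2*q2 and the log-linearized multiplicative form C = a * q1^b1 * q2^b2 are assumed functional forms for illustration and are not necessarily the paper's exact specifications; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monthly observations: quantities of two joint products and total joint cost.
q1 = rng.uniform(50, 150, 24)
q2 = rng.uniform(30, 120, 24)
cost = 1_000 + 12 * q1 + 20 * q2 + rng.normal(0, 200, 24)   # simulated data

# Additive model: C = a + b1*q1 + b2*q2, fitted by ordinary least squares.
# The slope coefficients are interpretable as per-unit marginal costs.
X_add = np.column_stack([np.ones_like(q1), q1, q2])
a, b1, b2 = np.linalg.lstsq(X_add, cost, rcond=None)[0]
print(f"additive: a={a:.1f}, b1={b1:.2f}, b2={b2:.2f}")

# Multiplicative model: C = a * q1^b1 * q2^b2, linearized by taking logs,
# so the fitted exponents are cost elasticities with respect to each product.
X_mul = np.column_stack([np.ones_like(q1), np.log(q1), np.log(q2)])
la, e1, e2 = np.linalg.lstsq(X_mul, np.log(cost), rcond=None)[0]
print(f"multiplicative: a={np.exp(la):.1f}, elasticities b1={e1:.2f}, b2={e2:.2f}")
```

Either fitted function then yields product-specific average and marginal costs analytically, which is exactly the information the passage says single-observation allocation methods cannot provide.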

 

Board Control and Employee Stock Bonus Plans: An Empirical Study on TSEC-Listed Electronic Companies in Taiwan

Chiaju Kuo, MingDao University, Taiwan

Dr. Chung-Jen Fu, National Yunlin University of Science & Technology, Taiwan

Yung-Yu Lai, The Overseas Chinese Institute of Technology, Taiwan

 

ABSTRACT

This study examines the correlation between board control and employee stock bonuses of TSEC-listed electronic companies in Taiwan, from both corporate governance and regulatory perspectives. In addition, the soundness of regulations regarding board control is examined. This empirical research differs from previous studies in that not only are the different roles played by board directors and supervisors identified more clearly, but the data with regard to board control are also more accurate. The evidence supports our argument that, owing to their different characteristics, it is inappropriate to combine directors’ ownership with supervisors’ ownership, or to combine the number of directors and supervisors into a single explanatory variable, as adopted by previous studies. The main contribution is that we examine the influence of directors and supervisors separately, taking into account the two-tier structure of the corporate governance system in use in Taiwan. In Taiwan, in order to assist companies listed on the Taiwan Stock Exchange Corporation (TSEC) and the GreTai Securities Market (GTSM) (collectively referred to as “TSEC/GTSM-listed companies”) to establish a sound corporate governance system, and to promote the integrity of the securities market, the TSEC and GTSM jointly issued the “Corporate Governance Best-Practice Principles for TSEC/GTSM Listed Companies” on October 4, 2002, to be followed by TSEC/GTSM-listed companies. Many executives credit employee stock bonus plans with recruiting innovative employees and helping Taiwan’s high-tech companies become globally competitive. While a number of studies have examined the relationships between employee stock bonus plans and variables such as firm performance, corporate value (e.g., Sue 2004; Wu 2004), and compensation contracts (e.g., Chang 2004; Li 2003; You 2003) in Taiwan, relatively few studies have investigated the correlation between board characteristics and employee stock bonus plans. As such, this study focuses on the correlation between board control and employee stock bonus plans from corporate governance and regulatory perspectives. Under the taxation rules in Taiwan, the total amount of cash compensation is taxed according to the personal income tax rate, yet bonus shares are taxed at par value even when market prices are higher than par. A high level of employee bonus grants will benefit employees at the expense of stockholders’ wealth, considering that the distribution of employee stock bonuses results in dilution of the firm’s EPS. Furthermore, in Taiwan the qualification requirements for employees entitled to receive a dividend bonus, including employees of subsidiaries of the company meeting certain specific requirements, may be specified in the articles of incorporation. The arguments about the manner of distribution and the amounts of dividend bonus paid concern the transparency of the decision-making process, the independence of the related decision-makers, and the rationality of the amount distributed. Because plans for surplus earnings distribution are proposed by boards of directors, this study develops and then tests theoretical hypotheses relating the level of employee stock bonus plans to the governance effects of both directors and supervisors. 
With the Introduction as the first section, this paper is further organized into five sections: Section 2 develops the hypotheses; Section 3 describes the sample selection and empirical design; Section 4 presents the empirical results, mainly surrounding the association between the percentage of employee stock bonus granted and the board and ownership structure variables; and Section 5 contains sensitivity tests to determine the robustness of the results to alternative specifications. A summary and conclusion are then provided in Section 6. Agency theory has been one of the most important theoretical paradigms in finance and accounting during the past two decades. It explicitly deals with conflicts of interest, incentive problems, and mechanisms for controlling agency costs. Compensation contracting is one of the mechanisms used to induce employees to act in the best interests of the firm. Possible factors that may influence the percentage of employee stock bonus grants can be divided into five categories: 1. firm performance; 2. industry specification; 3. firm size; 4. risk; and 5. board control. Since the research is focused on high-tech firms in Taiwan, we do not further control for industry specification, whereas we control for firm performance, firm size, and risk by treating them as control variables in the regression model. Taking into account that the board of directors makes the proposal for earnings distributions, which shareholders then ratify in the shareholders’ meeting in Taiwan, board control may be the most important factor influencing the percentage of employee stock bonus granted. In this research, board control is divided into eight essential elements based on the research of Ittner et al. (1997):
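Leaving the enumeration of those elements aside, a hedged sketch of the regression design described above: the percentage of employee stock bonus regressed on separate director and supervisor ownership measures (per the paper's argument against combining them), with performance, size, and risk as controls. All data and variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200  # hypothetical sample of TSEC-listed electronic firms

director_own = rng.uniform(0, 0.4, n)     # directors' shareholding, kept separate
supervisor_own = rng.uniform(0, 0.1, n)   # from supervisors' shareholding
roa = rng.normal(0.05, 0.03, n)           # control: firm performance
size = rng.normal(15, 1.5, n)             # control: log total assets
risk = rng.uniform(0.1, 0.6, n)           # control: e.g. return volatility

# Simulated dependent variable: percentage of employee stock bonus granted.
bonus_pct = (0.05 - 0.04 * director_own - 0.02 * supervisor_own
             + 0.3 * roa + 0.002 * size + 0.01 * risk + rng.normal(0, 0.01, n))

X = sm.add_constant(np.column_stack([director_own, supervisor_own, roa, size, risk]))
fit = sm.OLS(bonus_pct, X).fit()
print(fit.summary(xname=["const", "director_own", "supervisor_own",
                         "roa", "size", "risk"]))
```

Keeping director and supervisor variables as separate regressors is what lets their coefficients, and hence their governance effects, be tested individually.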

 

Measuring the Performance of ERP Systems from the Balanced Scorecard Perspectives

Mei-Yeh Fang, Chihlee Institute of Technology, Taipei, Taiwan, R.O.C.

Dr. Fengyi Lin, Chihlee Institute of Technology, Taipei, Taiwan, R.O.C.

 

ABSTRACT

Enterprise resource planning (ERP) systems are commercial software systems that can be defined as customizable, standard application software which integrates business solutions for the core processes and the main administrative functions of an enterprise. Traditionally, ERP performance measures have focused on financial indicators, which tend to reflect past performance; this study therefore proposes a Balanced Scorecard (BSC) approach, a framework that provides a comprehensive set of key perspectives with which to evaluate overall ERP system performance. Moreover, we empirically investigate Taiwanese public companies that have implemented ERP systems to explore whether different corporate ERP objectives affect post-ERP performance and whether the approach can translate a company’s vision and strategy through all levels of the organization. Adopting the Balanced Scorecard increases the completeness and quality of ERP implementation reports and raises awareness of the relevant factors. Based on the research findings, we provide a regression model to measure the performance of ERP systems and find that the financial perspective is closely related to the non-financial perspectives. Enterprise resource planning (ERP) systems are commercial software systems that can be defined as customizable, standard application software which integrates business solutions for the core processes (e.g. production planning and control, warehouse management) and the main administrative functions (e.g. accounting, human resource management) of an enterprise (Rosemann and Wiese, 1999; Skok and Legge, 2002). Companies that implement ERP systems have the opportunity to redesign their business practices using templates embedded in the software (DeLone and McLean, 2003; Chesley, 1999; Huang et al., 2004). Many companies implement ERP packages as a means to strategic objectives such as reengineering existing processes, performing supply chain management, preparing for e-commerce, integrating ERP with other business information systems, reducing inventory costs, replacing legacy systems, meeting the requirements of multinational competitiveness, enhancing the enterprise image, and evolving toward e-business (Minahan, 1998; Mirani and Lederer, 1998; Pliskin and Zarotski, 2000; Davenport, 2000). Because of ERP’s broad functionality, a company can typically replace much of its legacy systems with ERP applications, providing better support for these new business structures and strategies. However, this advanced IT should be implemented not simply for more and faster data processing, but as part of a management philosophy that addresses the measurement of the organization, allows feedback, and facilitates communication between all management levels. In order to structure the management of ERP software, the related tasks can be divided into the process of implementing ERP software and the operational use of ERP software. For the evaluation of both tasks the Balanced Scorecard – a framework that structures the relevant key performance indicators for performance management (Kaplan and Norton 1992; Kaplan and Norton 1993) – can be applied (Walton 1999; Reo 1999; van der Zee 1999; Rosemann and Wiese 1999; Brynjolfsson and Hitt 2000). The Balanced Scorecard enables the translation of a company’s vision and strategy into a coherent set of performance measures that can be automated and linked at all levels of the organization. 
Organizations have come to realize the importance of strategic feedback and performance measurement/management applications that enable them to more effectively drive and manage their business operations (Edwards, 2001). Besides traditional financial measures, the Balanced Scorecard accounts for a wider range of ERP effects (Maloni and Benton, 1997), as it consists of four perspectives: financial, internal process, customer, and innovation and learning. Thus, it also includes non-financial and less tangible aspects such as implementation and response time or the degree of ERP-supported business functions. This study selected ERP performance measures from the related literature (DeLone and McLean, 2003; Mirani and Lederer, 1998; Mabert et al., 2000). We propose a Balanced Scorecard approach to measure implemented ERP system performance from the four abovementioned perspectives. The primary objectives of this research are (1) to examine ERP system performance using the Balanced Scorecard approach; (2) to explore whether different corporate objectives for ERP implementation affect post-ERP performance, so as to provide insight into ERP implementation; and (3) to study the relationship between financial and non-financial performance measures of ERP systems. The rest of the present study is organized in the following manner. Section 2 reviews the relevant literature on ERP systems and discusses how the Balanced Scorecard approach can be used to evaluate the implementation of ERP software. The research methodology and the analysis of the performance measures of ERP implementation with the Balanced Scorecard are given in Section 3. Section 4 presents the findings of our study. ERP systems can push an organization towards generic processes even when customized processes may be a source of competitive advantage (Davenport, 1998).
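A minimal sketch of the kind of regression objective (3) describes: relate a financial-perspective score to the three non-financial BSC perspective scores. The perspective scores are hypothetical survey-style constructs, and the relationship is simulated, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 90  # hypothetical sample of ERP-adopting public companies

# Non-financial BSC perspective scores (e.g. averaged 1-7 survey items).
internal_process = rng.uniform(1, 7, n)
customer = rng.uniform(1, 7, n)
learning = rng.uniform(1, 7, n)

# Financial-perspective score, simulated as driven by the other three.
financial = (0.5 + 0.4 * internal_process + 0.3 * customer
             + 0.2 * learning + rng.normal(0, 0.5, n))

# Regress the financial perspective on the three non-financial perspectives.
X = np.column_stack([np.ones(n), internal_process, customer, learning])
coef, res, *_ = np.linalg.lstsq(X, financial, rcond=None)
r2 = 1 - res[0] / ((financial - financial.mean()) ** 2).sum()
print(f"coefficients (const, internal, customer, learning): {np.round(coef, 3)}")
print(f"R^2 = {r2:.3f}")
```

A high R² in such a regression would be one way to express the finding that the financial perspective is closely related to the non-financial perspectives.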

 

Endless Surpluses: Japan’s Successful International Trade Policy

Dr. James W. Gabberty, Pace University, NY

Dr. Robert G. Vambery, Pace University, NY

 

ABSTRACT

In 1991, Akio Morita, chairman of Sony, and Shintaro Ishihara, member of the Japanese Diet, published a book titled “A Japan That Can Say No”. This work called for the nation of Japan to take a much more self-assertive and aggressive attitude toward the rest of the world (especially the U.S.) in its diplomatic and business relations. The caustic tone of the book, with chapter titles such as “America Itself is Unfair”, “American Barbaric Act!”, and “Let’s Become a Japan that Can Say No”, caused much consternation in the West. Nonetheless, before and after the publication of this book, the U.S.-Japan trade deficit was (and still is) enormous [Morita, Shintaro]. Consequently, Americans who express concern about large and persistent trade deficits are not engaged in Japan bashing, but rather may be strong supporters of free trade who are analyzing the effects of large-scale adverse economic phenomena that may need remedy [Lincoln]. In the 1950s and 1960s, the U.S. was the world's leading export powerhouse. The Marshall Plan helped provide the capital needed to rebuild Europe and Japan, and fueled a tremendous demand for U.S. exports. During this period, the U.S. ran a substantial trade surplus of about one percent of gross domestic product. The U.S. also benefited initially from strong export demand in a wide range of industries, from low-tech textiles and apparel to sophisticated aircraft and machine tools. Since the 1970s the U.S. has moved from a trade surplus to a deficit position, as Europe and Japan began to compete effectively with the U.S. in a range of industries. Now, China has come online as a major trade partner of the U.S. China, however, has become not only a surplus trading partner with the U.S., but has been so successful at penetrating the U.S. market that it has eclipsed Japan in its trade surplus position. The prospect of these two countries selling their exports into the U.S. market unabated is frightening, as noted economists continue to warn that this trade deficit position is simply untenable. The longer the deficit problem is ignored, the harder it becomes to deal effectively with curtailing the deficit’s growth. For many, the trade imbalance with Japan seems to have always been a feature of Japan-U.S. trade relations. Indeed, it is almost a historical reflex for U.S. consumers to cite the higher-quality features of Japanese products as the root cause of the trade imbalance. This, it is assumed, is the real culprit, buttressed by the lack of similar-quality products produced by U.S. manufacturers and made available to Japanese customers. Although partly false, this perception nonetheless helps to perpetuate the trade deficit with Japan. A historical glance backward to the early days of the trade imbalance helps put the current trade deficit into perspective. The trade relationship evolved as follows: in the 1970s, the time at which Japanese dominance in certain industries became evident, the quality of certain U.S. products in industries such as automobiles began to slip. The domestic automobile producers at that time enjoyed a market share of approximately 90%, and there was no meaningful foreign competition to spur improvements to their products. This caused many U.S. producers to fall into a false sense of market supremacy, and the self-absorbed attitude caused production quality to slip.  
Japan’s new automotive entrants at the same time lacked major quality or technological sophistication, but they had fewer defects and were cheap to produce - a chilling hallmark of the Chinese imports into the U.S. witnessed today.  U.S. automobile producers all but ignored repeated calls by the marketplace to address the worsening frequency of quality defects in their own products. Meanwhile, American consumers, unwilling to live with pervasive quality problems, shifted their purchasing patterns away from nationalist tendencies to “buy American” and began to purchase the less expensive, less complex vehicles produced in Japan that were beginning to appear on U.S. showroom floors. The gradual winning of market share by the Japanese in the automobile sector continued at a not-so-gradual pace. In the span of a few short years, the popularity of the Japanese automobile radically increased, and the reputation for Japanese quality carried over to other products in the minds of U.S. consumers. Awareness of the alternative products emanating from Japan began to prompt these consumers to purchase other Japanese imports during the 1980s, when the video game, personal computer, video recorder, and handheld electronics industries began to flourish. All things Japanese available for import were snapped up by U.S. consumers as U.S. manufacturers watched in awe at this flurry of economic activity and the massive increase in (mostly import) trade [Porter]. By the time U.S. automobile producers became very concerned about the slippage of their market share, it was too late: too much time had passed, and the Japanese automobile product range had increased dramatically.  The devastation visited on the domestic automobile industry some twenty years ago has taken nearly a quarter of a century to repair, was extremely costly (in terms of lost income and jobs) for the U.S. automobile industry, and the repair has been only partially successful in countering the prevailing mindset of U.S. consumers about Japanese product supremacy. Although U.S. automobile manufacturers now produce products that consistently rank near or on par with their Japanese counterparts, the perception of Japanese quality supremacy remains. Moreover, attempts by other U.S. manufacturing firms to compete against Japanese products as late entrants into the consumer electronics industry have proved futile, as witnessed by the attempt of U.S. television manufacturers to move into the burgeoning flat panel display sector. Though this account is not a comprehensive history of the reasons why trade with Japan brought about a globally accepted acknowledgment of the inequality of many of its manufactures, Japan’s trade surplus with not only the U.S. but also the world has other roots. It is necessary to look back a little further in time than the 1970s and 1980s. It was during the 1950s that similar events helped destroy former U.S. supremacy in certain industries, and some further discussion is helpful in broadening the understanding of how U.S. manufacturing supremacy began its downturn.

 

Economic Convergence in the Old and the New Economies of the OECD

Dr. Bala Batavia, DePaul University

Dr. P. Nandakumar, Indian Institute of Management, Kozhikode, and Sodertorn University of Stockholm

Dr. Cheick Wague, Indian Institute of Management, Kozhikode, and Sodertorn University of Stockholm

 

INTRODUCTION

The optimistic belief that incomes per capita will converge in the course of time, voiced by a number of economists and soothsayers, has been belied by developments of the last few decades.  Instead of catching up with the affluent West, the less developed countries of the southern hemisphere have fallen still further behind in terms of income per resident.  In this paper, we address the same issue for the OECD group of countries, and also analyze the impact on the convergence (income catch-up) process of certain fresh factors which have emerged or become relevant recently. In particular, a distinction is made between the convergence process in traditional sectors of the economy and in the ‘new economy’, i.e., the sectors which use significant inputs of information technology.  The notion that relatively backward countries, with a comparatively low income per capita, will grow faster than the richer nations, thus effectively closing the income gap, seems to have been prevalent for several decades. Such a process of income catch-up had clearly occurred in the aftermath of the war years, when barriers to trade and capital flows fell rapidly, heralding a golden age for international commerce which lasted well into the 1960s. In fact, this may have been the most expansive era for trade since the classical age that was spearheaded by Great Britain.  The income catch-up hypothesis basically postulates that countries with lower per capita incomes will grow faster than the leader with the highest income per capita in a group of trading nations. The rate of growth will be related to the income gap relative to the leading nation. This hypothesis has been considered even in the analysis of the productivity slowdown in the OECD countries since the early 1970s (Lindbeck, 1983). Generally speaking, the hypothesis has been considered relevant only in explaining the catch-up process within the group of industrialized nations, which are said to belong to a ‘convergence club’ in the words of Baumol (1986). Baumol et al. (1994) have also postulated that the convergence process among the OECD countries may have run its full course by now, after an extended process of strong convergence in the post-war decades, and this echoes the study by Lindbeck (1983).  Testing of the catch-up hypothesis has not been limited to the variable income per capita. The convergence processes with respect to labor productivity levels as well as total factor productivity have been the subject of scrutiny in recent years, and are important in their own right as indicators of international competitiveness. Normally, convergence in income per capita would imply catch-up also in productivity terms, but there need not be a one-to-one correspondence.  It may be noted (OECD, 2002) that total factor productivity (TFP) has been growing at different rates in the OECD countries and that the convergence process in TFP has been quite strong even during periods of relatively weak income convergence. Also, the process may differ between the aggregate economy and different sectors.  The importance of such a distinction can be seen in Pilat et al. (2002), who show that labour productivity growth in IT-producing and IT-using sectors has been greater than in other sectors for virtually every country in the OECD area (with some nineteen countries included in their sample). But this also means that the degree of catch-up can vary between sectors, depending on their intensity of IT input usage.
This point is further developed in the next section, with supporting data.  It seems worthwhile to emphasize, while dwelling at length on the role played by IT inputs in pushing up productivity growth, that other factors have also played key roles in the growth process in OECD countries in the past decades. Thus, to get a complete picture or model of economic growth, one may have to adopt a growth accounting approach, which may have to be extended in an appropriate manner to include factors other than just the traditional inputs.  In this paper, the catch-up processes of both income per capita and labor productivity are modeled. As noted in the previous section, there will not be a one-to-one correspondence between these processes. To see this, we may write the expression for output growth as

1)  y = a(i + g) + (1 - a)(k + h)

In 1), output growth y is decomposed into a weighted average of the growth rates of labor, i, and of capital, k, with a representing the wage share. g (= y - i) and h (= y - k) are the rates of productivity growth of labor and capital, respectively. From 1) it can be seen that while labor productivity growth increases the rate of growth of output, this effect can be reduced by a fall in capital accumulation or in the rate of growth of the productivity of capital. The productivity growth of labor and capital is also affected by technological change. Disembodied technical progress will also serve to bring about differential developments of income per capita growth and labor productivity growth.
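To make identity 1) concrete, here is a minimal numerical sketch; the wage share and growth rates are hypothetical figures chosen for illustration, not estimates from the paper:

```python
# Growth-accounting identity 1): output growth y equals the wage-share-weighted
# combination of labor growth i and capital growth k plus the corresponding
# productivity terms g = y - i and h = y - k. All figures below are assumed.

a = 0.65   # wage (labor) share, assumed
y = 0.030  # output growth, assumed
i = 0.010  # labor input growth, assumed
k = 0.040  # capital input growth, assumed

g = y - i  # labor productivity growth
h = y - k  # capital productivity growth (negative here: capital deepening)

reconstructed = a * (i + g) + (1 - a) * (k + h)  # right-hand side of 1)
assert abs(reconstructed - y) < 1e-12
print(f"g = {g:.3f}, h = {h:.3f}, y reconstructed = {reconstructed:.3f}")
```

The sketch also illustrates the point made in the text: holding y fixed, faster capital accumulation mechanically lowers h, so labor productivity growth can coexist with falling capital productivity.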

 

An Evaluation of Investment Performance and Financial Standing for Life Insurers in Taiwan

Dr. Shu-Hua Hsiao, Leader University, Taiwan

Dr. Shu-Hui Su, Fortune Institute of Technology, Taiwan

 

ABSTRACT

Life insurers in Taiwan should set a goal of higher efficiency in investment performance and profitability because the whole market structure changed after the insurance market opened in 1987. Facing more intense competition, insurers may become insolvent when investment performance is inefficient. Hence, to achieve a financial solvency objective and a competitive advantage, life insurers should maintain their relative investment efficiency and performance. The main purpose of this study is to determine capital investment efficiency based on the results of Data Envelopment Analysis (DEA) and the Malmquist Productivity Index (MPI). Hypotheses were tested to determine whether there is a statistically significant difference among the original domestic life insurers, new entrant domestic life insurers, and foreign branches of life insurers. Results showed no significant difference among those three groups for the MPI. Nan Shan and Hontai are found to have efficient investment performance in terms of overall efficiency and scale efficiency. In addition to Nan Shan and Hontai, Cathay, American, and Manulife are efficient in terms of pure technical efficiency.  As the insurance market structure has changed, a more competitive environment will impact financial profitability. It is important to study the profitability and investment performance of life insurers, because companies may become insolvent when failure leads to declining profit, and even to serious interest spread loss. Evaluating the efficiency of investment performance and guiding companies toward financial improvement are therefore important. In Taiwan, the main source of a life insurer’s profit, financial receipts, depends on investment performance. Obviously, premiums received only cover commissions and business expenses, although this amount is about eighty percent of total income (Yen, Sheu, & Cheng, 2001). Thus, whether investment performance is efficient or not is a key factor in the whole performance of business management. To achieve these objectives and competitive advantages, a life insurer should maintain its relative financial efficiency and performance. DEA has been used frequently to measure performance for banks (Asmild, Paradi, Aggarwall, & Schaffnit, 2004; Krishnasamy, Ridzwa, & Perumal, 2004), insurers (e.g., Hewlitt, 1998), hospitals (e.g., Hu & Huang, 2004), and investment (e.g., Chen & Zhu, 2004). Prior studies mainly focus on measuring business performance using DEA. However, fewer papers have used DEA to evaluate the investment performance of life insurers. Lin (2002) applied DEA to measure efficiency scores and to examine how life insurers in Taiwan have fared under the new market structure after deregulation. Results showed no change in overall efficiency, pure technical efficiency, or scale efficiency after deregulation. The findings also suggested that, for incumbents, innovation is the most important factor leading to productivity improvement. Furthermore, Brockett, Cooper, Golden, Rousseau, & Wang (2004) applied DEA to examine the effect of solvency on efficiency for insurance companies. Output variables of that study involved solvency, claims-paying ability, and return on investment. Barr, Siems, & Thomas (1994) used DEA to predict bank failure. Hu & Huang (2004) used both the Mann-Whitney test and Tobit (censored) regression to find the effects of environmental variables on efficiency scores.
Apart from DEA, the MPI can further measure productivity changes. The main studies focusing on investment issues are Chen & Zhu (2004), Sathye (2002), Ramanathan (2004), and Asmild, Paradi, Aggarwall, & Schaffnit (2004). Ramanathan (2004) applied the MPI to examine further investment improvement in terms of technical efficiency change. Asmild, Paradi, Aggarwall, & Schaffnit (2004) assessed the productivity changes of banks using the MPI and concluded that “the shift of the best practice frontier over time are typically due to changes in technology.” Sathye (2002) analyzed the productivity change of Australian banks from 1995 to 1999 and found that technical efficiency and the Total Factor Productivity (TFP) index declined by 3.1% and 3.5%, respectively.  Measuring the relative efficiency and investment performance of life insurers in Taiwan by DEA and the MPI is the main purpose of this study. The DEA and MPI models were developed by Charnes, Cooper, and Rhodes (1978) and by Fare, Grosskopf, Lindgren, and Ross (1989), respectively. The MPI provides information on technical efficiency change, technological change, pure technical efficiency change, scale efficiency change, and total factor productivity (TFP) change, with which life insurers can revise their input and output factors. In addition, the investment performance of life insurers was compared among the original domestic, new entrant, and foreign companies. Finally, the results can inform strategies for raising insurers’ competitive ability. The participants of this study, based on the annual report of life insurers in Taiwan, were classified into the following groups: eight original domestic companies, nine new entrant domestic companies, and nine foreign life insurers. The Kuo Hua Life Insurance Company was eliminated because of missing or incomplete data in its financial annual report. The annual report of life insurers is published by the Republic of China in conjunction with the Life Insurance Association of the Republic of China. This database contains records obtained from insurers’ statutory annual statements.
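As a concrete illustration of the DEA step, the sketch below solves the input-oriented CCR model for each decision-making unit (DMU) with scipy's linear-programming solver. The five-insurer input/output matrix is made up for the example and does not reproduce the paper's variables:

```python
# Input-oriented CCR (constant returns to scale) DEA sketch.
# X: inputs per DMU (rows), Y: outputs per DMU; all values are hypothetical.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20., 300.], [30., 200.], [40., 100.], [20., 200.], [10., 400.]])
Y = np.array([[1000.], [1000.], [1000.], [1000.], [1000.]])
n, m = X.shape        # number of DMUs, number of inputs
s = Y.shape[1]        # number of outputs

def ccr_efficiency(j0):
    # Decision variables z = [theta, lambda_1 .. lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.c_[-X[j0].reshape(m, 1), X.T]
    # Output constraints: -sum_j lambda_j * y_rj <= -y_r,j0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

for j in range(n):
    print(f"DMU {j}: technical efficiency theta = {ccr_efficiency(j):.3f}")
```

A score of 1 marks a DMU on the efficient frontier; scores below 1 measure the proportional input contraction the frontier suggests.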

 

Dolorous Songs and Blessings of the Curses

Dr. Kazi Saidul Islam, University of Wollongong, Australia

Dr. Kathie Cooper, University of Wollongong, Australia

Dr. Jane Andrew, University of Wollongong, Australia

 

ABSTRACT

The latest trend in accounting arises from the spate of pathetic exoduses of sparkling stars from the corporate sky around the globe. The direct and domino effects of the corporate collapses are dreadful.  The neurotic curses arising out of these gripping collapses remind us that there is another side to the coin. The severity of the accounting scandals and the commonality in the nature of the collapses have brought in a number of blessings by triggering global consciousness and consensus to root out the diagnosed disease, setting celestial attributes in the governance process, bringing harmony as well as transparency to the disclosure regime, and building a strong knowledge base through continuous education to be provided by higher educational institutions and professional bodies.  Regulatory changes, the emergence of corporate governance codes, mandatory compliance with accounting standards for greater transparency, and thus the emergence of a new accounting order would not have been possible so rapidly without such severity in the corporate ruins.  Songs reflect the mind. Birds and people sing to express their jovial feelings. Then again, the song of the cuckoo in spring recalls the sorrow of losing her mate. Sometimes people’s songs cause tears. Songs on drums cannot be as pathetic as those on a violin or piano. So songs can be dolorous or delightful. Blissful or brutal events determine the sweetness of songs.  Our concern is with the songs of the corporate world. Corporate bodies are artificial entities governed and surrounded by many people. Management, regulators, and stakeholders are the birds who live on the branches and leaves of the corporate entity to care for their interests and eat its apples. When a company runs well, a sweet wind touches everybody living on the tree.  The management smiles complacently at its effective efforts, the regulators at good control, and the stakeholders at having their shares of the company’s assets and profit. Their songs are then played on drums, followed by dances or champagne. To the contrary, when a company runs badly and ultimately collapses for a range of reasons, the high-sounding drums are replaced by buzzwords, and the shocking songs are played on violins or pianos. The story of songs, dolorous or delightful, in the corporate world can be traced throughout history. Our paper embarks on a story based on the scandal games in the collapse tournament of the new millennium. There is a plethora of studies that have inquired into the causes of accounting scandals, the impact of corporate collapses on society, and remedies for these. But those studies do not properly address the fact that every cloud has a silver lining. The present paper aims to evaluate the two sides of the coin, with special emphasis on the blessings of the curses arising out of scandals and collapses. Specific points to be addressed are: the curses attributed to accounting scandals and corporate collapses; the dolorous songs, i.e., the negative impact of the curses; and the blessings, i.e., the positive impact of the curses. Because of the chronological emergence of the events, these points can be shown with the help of a diagram. Literally, the term “curse” connotes nuisance, blight, annoyance, or irritation. Perspective determines the meaning and magnitude of a curse. Starvation, health hazards, and deprivation are curses in the least developed countries. Corruption, ethical failure, terrorism, military aggression, and deaths are the curses of the day to mankind.
This paper deals with accounting scandals and corporate collapses caused by corporate corruption and ethical failure. Scandals are catastrophes nobody wants to endure. Scandals refer to human behaviors that create anarchy and lend irregularities to a social system. These are events that happened in the past and caused harm to self-image and to others. There are many faces of scandal: political scandals (the Watergate scandal), organized crime (the Mafia and Yakuza), money laundering, sexual harassment, racism, embarrassing emails, outrageous extravagance, and accounting, financial, or corporate irregularities.  Accounting or corporate scandals are not new. Accounting is as old as civilization, and related scandals and collapses are as old as accounting. Shakespeare’s ‘Merchant of Venice’ (written in 1596-1598) depicts the greed and scandalous business environment of his time. Johnston of Nabarro Nathanson identified 400 years of financial scandals (http://www.nabarro.com).  There are at least two centuries of corporate panics and collapses in Australia (Sykes, 1998). The world witnessed many scandals and collapses in the 1980s and 1990s. The corporate collapses of the new millennium give testimony to the curses brought about by spectacular accounting scandals caused by the incompetence or greed of directors, auditors, and CEOs, who adopted brilliant, creative, and illegal means of creating money.  While the vicious circle of poverty appears to be the prime curse on the fate of the people of underdeveloped countries, and terrorism and atomic plants appear to be the prime threats to mankind, the appalling accounting scandals and spectacular corporate collapses appear to be the dreadful curses dealing disastrous blows to the economies of the first-world countries.

 

Process and Quality Model for the Production Planning and Control Systems

Dr. Halim Kazan, Gebze Institute of Technology, Turkey

Dr. Ahmet Ergulen, Nigde University, Turkey

Dr. Haluk Tanrıverdi, Sakarya University, Turkey

 

ABSTRACT

Over the last decades, many industrial sectors have been experiencing profound changes involving both the business environment and the internal organisation. This process has been so deep and radical as to suggest that a new operations management paradigm has emerged. In this new competitive and turbulent environment, effective production planning and control (PP&C) systems have become extremely important in driving improvement efforts.  We consider production planning models from a different perspective, assuming that both production and quality are decision variables. Within this class of models, we consider various degrees of process control on the part of the producer, including quality, process technology, and the control system, to determine how the system is designed, implemented, run, and improved, and how the quality of its outputs is measured.  Our intent is to provide an overview of applicable process and quality models; we present briefly how quality is identified, designed, implemented, run, improved, and measured in terms of the appropriate quantity, the appropriate time, and the appropriate level of quality. The purpose of PP&C is to ensure that manufacturing runs effectively and efficiently and produces products as required by customers. In this article we focus on a process and quality model for production planning and control systems. We have organized the article into two major sections. In the first section we present a framework for the process technology and system. In the second section we discuss the control system and quality models for production planning.  Today’s changing industry dynamics have influenced the design, operation, and objectives of production planning and control systems since CAD, CAM, and CIM systems came into industrial use. These systems have affected production planning and control by increasing the emphasis on integrated information technology and process flows, flexibility of product customization to meet customer needs, improved quality of products and services, reduced costs, planned and managed movement, reduced cycle times, and improved customer service levels (Bardi, Coyle, and Langley, 1996). Typical decisions include work force level, production planning and control, assignment of overtime, and sequencing of production runs. Process models are widely applicable for providing decision support in this context. We do not cover detailed scheduling or sequencing models (e.g., Graves, 1981), nor do we address production planning for continuous processes (e.g., Shapiro, 1993).
Nor do we include continuous-time models such as those developed by Hackman and Leachman (1989). The remainder of the article covers the process and quality model (PAQM), production planning and control and competitive advantage, effective PP&C, and the steps in setting up an effective PP&C system (Bardi, Coyle, and Langley, 1996).  Production planning and control technology combines the physical and information flows to manage the production system. As with any complex entity, PP&C has several distinct elements. In figure 1 we superimpose these elements on the physical flow of a production system. We position these elements at different places along the physical flow route; interaction between the elements is not shown. The PP&C function integrates material flow using the information system, and integration is achieved through a common database. Interaction with the external environment is accomplished by forecasting and purchasing. Forecasting customer demand starts the production planning and control activity. Purchasing connects the production system with input provided by external suppliers. Extending production planning and control to suppliers and customers is known as supply chain management.  Some elements are associated with the production floor itself. Long-range capacity planning guarantees that future capacity will be adequate to meet future demand, and it may include equipment, people, and even material.
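To ground the planning decisions described above, here is a deliberately small aggregate production-planning linear program; the demand profile, capacity ceiling, and cost coefficients are hypothetical stand-ins, not data from the article:

```python
# Aggregate production planning: choose production p_t and end inventory v_t
# per period to meet demand at minimum production-plus-holding cost.
import numpy as np
from scipy.optimize import linprog

demand = np.array([100., 150., 120., 180.])  # assumed demand forecast
cap, c_prod, c_hold = 160., 5.0, 1.0         # assumed capacity and unit costs
T = len(demand)

# Variables z = [p_1..p_T, v_1..v_T]; minimize total cost.
c = np.r_[np.full(T, c_prod), np.full(T, c_hold)]

# Inventory balance: p_t + v_{t-1} - v_t = d_t, with v_0 = 0.
A_eq = np.zeros((T, 2 * T)); b_eq = demand.copy()
for t in range(T):
    A_eq[t, t] = 1.0              # p_t
    A_eq[t, T + t] = -1.0         # -v_t
    if t > 0:
        A_eq[t, T + t - 1] = 1.0  # +v_{t-1}

res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, cap)] * T + [(0, None)] * T, method="highs")
print("production plan:", res.x[:T].round(1))
print("end inventories:", res.x[T:].round(1))
```

Because period-4 demand exceeds the capacity ceiling, the optimal plan pre-builds inventory in period 3, which is exactly the trade-off that long-range capacity planning is meant to expose.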

 

Analysis of Financial Performance by Strategic Groups of Digital Learning Industry in Taiwan

Wen-Long Chang, Shih Chien University, Taiwan

Kevin Wen-Ching Chang, Abacus Distribution System Ltd., Taiwan

Jasmine Yi-Hsuan Hsin, Taiwan Federation of Industry, Taiwan

 

ABSTRACT

The research focuses on digital learning providers in Taiwan. The providers are categorized into different strategic groups depending on the strategic dimensions they pursue. Factor analysis is applied to determine the measurement index for the financial performance of these providers, and to further examine the divergence in their financial performance. As a result of the research, digital learning providers in Taiwan can be divided into four strategic groups: the ‘leading group with integration of marketing and sales abilities’, the ‘leading group with external relation management, and research and development abilities’, the ‘leading group with human capital and financial management abilities’, and the ‘leading group with niche market management and product innovation abilities’. Among all four, the leading group with integration of marketing and sales abilities shows the best profit-earning ability.  The digital learning (also known as e-learning) industry has been expanding with rapid acceleration in recent years. As people rely on the internet to read, to shop, to talk, and to learn, many countries have embraced barrier-free digital learning as one of their competitive essentials. Through digital learning, knowledge can be obtained more easily, faster, and more cheaply. It is believed that digital learning will ultimately provide us with a life-long learning experience of great efficiency and quality.  Since digital learning was first initiated in Taiwan in 1998, many studies have been conducted on its management models (Barron, 2002; Close, Humphreys and Ruttenbur, 2000), key success factors (Rosenberg, 2000), and system regulations (Anido and Llamas, 2001). Today, the major task for digital learning providers is to design the best competitive strategies, including studying their financial performance.  Past research on the digital learning industry has not included topics regarding financial performance because there were not enough data from the small number of digital learning providers, and many of the providers did not want to share their financial data in their early days. Now, these providers are in a period of steady growth, with more new providers joining the market. This year, there are 135 digital learning providers registered with the Industrial Development Bureau, Ministry of Economic Affairs, and some of them are already listed on Taiwan’s stock markets. It is much easier to gain access to their financial performance and other business performance information now.  As the growth of the internet and fiber communication industries has slowed since 2000, digital learning providers have been through four years of self-adjustment, and today competition has intensified. Therefore, it is the perfect time to study the strategic groups of digital learning providers in Taiwan, their formation, financial performance, resource allocation, and best strategy. Based on the industry background mentioned above, this paper aims to achieve the following objectives: 1. Discover the characteristics of and differences between strategic groups through analysis of their strategic dimensions.  2. Suggest future investment trends through comparison of the financial performance of different strategic groups. A strategic group refers to business providers with the same or similar strategies (Harrigan, 1985; Hitt and Hoskisson, 2001; McGee and Thomas, 1986; Peteraf and Shanley, 1997; Thomas and Venkatraman, 1988; Peng, Tan and Tong, 2004).
The structures of strategic groups change as time goes by and further lead to expansion and growth in the industry; therefore, different strategic groups show different financial performance depending on the competitive strategies they adopt (Asker, 1995; Cool, 1985; Fiegenbaum and Thomas, 1990; Hunt, 1972; Newman, 1973, 1978; Schendel and Patton, 1978). Understanding the formation of strategic groups helps business providers act upon the most suitable strategy and better allocate their limited resources (Asker, 1995; Cool and Schendel, 1987; Galbraith and Schendel, 1983).  The value of strategic groups comes from strategy choices, which are regarded as the effective allocation of strategic dimensions or strategic variables.  Strategy is the combination of strategic variables (Hunt, 1972). Strategic dimensions are a way to describe different business providers. With the description and measurement of strategic dimensions, the characteristics of business providers and their resource allocation can be identified. Moreover, the differences between business providers can be classified to study the competition model within the industry (Porter, 1980). Because the choice of strategic dimensions can directly influence the study of strategic groups, it has to be made with consideration of industry characteristics and possible growth rates in order to measure the true existence of strategic groups (Houthoofd and Heene, 2002).  There have been two analytical models of strategic dimensions in the past. The industrial organization model is based on the perspective of industrial economics. Porter (1980) is one of the exponents of this model; he believes that the environment, including industry and market, is the major strategic consideration for any business provider. The other is the resource-based model. Its exponents believe that long-term advantage cannot be achieved if there is any environmental restriction; favorable profit performance can be achieved if business providers choose resources as their major strategic basis. Grant (1991) and Barney (1991) are the two exponents of this model.  With some mergers and some co-opetition, digital learning providers in Taiwan have started to think about integrating their dominant resources and abilities to achieve long-term competitive advantages (Chang W. L., 2006). Given a stable external environment, this research paper argues that the resource-based model is most suitable for analyzing the strategic choices made by strategic groups within the digital learning industry. The model will examine the present development of the digital learning industry in Taiwan to reach a thorough understanding.  Based on the developmental trend mentioned above, the study proposed the first hypothesis:
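For reference, the factor-analysis step named in the abstract can be sketched as follows; the provider-by-ratio matrix is simulated, and the number of ratios and factors are illustrative assumptions rather than the paper's design:

```python
# Factor analysis of hypothetical financial ratios for digital learning providers.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
ratios = rng.normal(size=(40, 6))   # 40 providers x 6 financial ratios, simulated

Z = StandardScaler().fit_transform(ratios)          # standardize each ratio
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
scores = fa.fit_transform(Z)                        # factor scores per provider
loadings = fa.components_.T                         # ratio loadings per factor

print("loadings:\n", loadings.round(2))
print("first provider's factor scores:", scores[0].round(2))
```

In a study like this one, the rotated loadings would name the financial performance factors, and the factor scores would feed the comparison across strategic groups.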

 

Duality of Alliance Performance

Dr. Noushi Rahman, Pace University, New York, NY

 

ABSTRACT

While alliance research has proliferated and branched out into several areas in the past decade, alliance performance remains a misunderstood and little-studied area.  A review of alliance performance suggests that it comprises two elements: goal accomplishment and relational harmony.  Both are necessary to ensure alliance performance.  This paper reviews four theoretical streams in organization research that are relevant to alliance performance.  Evidently, extant research has attended to alliance relationship management much more than it has attended to alliance goal accomplishment.  This review highlights the need to extend existing theoretical streams in certain directions to further explain alliance performance. The literature on strategic alliances has flourished tremendously over the past decade.  Strategic alliances are enduring, yet temporary, interfirm exchanges that member firms join to jointly accomplish their respective goals.  In his review of the state of the alliance literature, Gulati (1998) wrote about five avenues into which the alliance literature has spread: formation, governance, evolution, performance, and performance consequences.  Of these five paths, research on alliance performance has received the least attention: “the performance of alliances remains one of the most interesting and also one of the most vexing questions” (Gulati, 1998: 309). Strategic management research is generally geared toward better performance of the firm.  While conceptualizing and measuring firm performance is quite straightforward, the involvement of more than one firm and the permeable boundary of the alliance entity (with the exception of joint ventures) make conceptualizing and measuring alliance performance a messy and daunting task.  Performance of an alliance is conceptualized as the extent to which member-specific goals are accomplished by the alliance.  However, alliance members may find it difficult to work with each other for a lack of trust and the threat of opportunism.  Consequently, an alliance may fail to perform despite its ability to accomplish alliance-specific goals.  Given the importance of maintaining a good working relationship between partner firms, many studies have focused on relational issues arising within alliances.  Ironically, as will become evident toward the end of the paper, the current state of strategic management research seldom focuses on the goal-accomplishing or task-oriented aspect of alliance performance. The purpose of this article is to review how major theoretical streams in organization management research explain alliance performance and how these theories can be extended to further our understanding of alliance performance.  The paper is divided into four parts.  First, I delineate the nature of alliance performance.  Second, I review major theoretical streams in organization management as they pertain to alliance performance.  Third, I discuss the research implications of this paper.  Finally, I describe how alliance managers can benefit from the theoretical conclusions drawn here. Alliances are unique in that they are the only form of economic organization that requires maintaining a relationship in addition to concentrating on performance issues.  Independent firms and firms engaged in spot transactions do not have to maintain relationships.  This peculiarity of alliances has drawn tremendous research attention to the topic.
Therefore, it is not surprising that lately the majority of research seems to be focusing on relational angles of alliances, such as trust (Gulati, 1995; Perry, Sengupta and Krapfel, 2004), relational risk (Delerue, 2004; Nooteboom, Berger and Noorderhaven, 1997), opportunism (Parkhe, 1993; Provan and Skinner, 1989; Brown, Dev and Lee, 2000), commitment (Gundlach, Achrol and Mentzer, 1995; Perry et al., 2004), reciprocity (Kashlak, Chandran and Di Benedetto, 1998; Wu and Cavusgil, 2003), relational capital (Heide, 1994; Kale, Singh and Perlmutter, 2000), and relational quality (Arino, de la Torre and Ring, 2001).  While the relational issues are critical to alliance effectiveness, another critical element of alliance performance is goal accomplishment.  Existing theoretical streams explain alliance performance in terms of either relationship maintenance or goal accomplishment.  Of course, conceptualizing alliance performance is different from measuring alliance performance, which can take various paths as well.  To avoid the mess of explaining relational and goal-based conceptualization of alliance performance, scholars have adopted alliance satisfaction as a measure of alliance performance (Habib and Barnett, 1989; Killing, 1983; Lui and Ngo, 2005).  Alliance satisfaction is, however, reflective of more than just alliance performance.  In the words of Hatfield, Pearce, Sleeth and Pitts (1998: 368): “Because the respondents were those individuals in the partner firm who were closest to the joint venture operation, the positive relationship between partner satisfaction and JV survival may reflect a bias for maintaining one’s sphere of influence and power.” Hatfield et al. (1998) argue in favor of goal accomplishment as the preferred measure of alliance performance. 

 

Does Cooperative Learning Enhance the Residual Effects of Student Interpersonal Relationship Skills?: A Case Study at a Taiwan Technical College

Kai-Wen Cheng, National Kaohsiung Hospitality College, Taiwan, R.O.C.

 

ABSTRACT

The relative effectiveness of cooperative learning instruction and traditional lecture-discussion instruction was compared for Taiwan technical college students to determine the residual effects on interpersonal relationship skills in accounting courses. A pretest-posttest control-group experimental design involving two classes was used. The experimental group students (n=53) received the cooperative learning instruction, and the control group students (n=45) received the traditional lecture-discussion instruction. The “Interpersonal Relationship Skills Test (IRST)” was used as the research instrument. A multivariate analysis of covariance (MANCOVA) suggested that students taught using the cooperative learning instruction scored significantly higher than did students in the traditional lecture-discussion group. The research results showed that cooperative learning indeed enhanced the residual effects of student interpersonal relationship skills and that cooperative learning could serve as an appropriate and worthwhile reference that schoolteachers could apply in their teaching. Cooperative learning instruction plays an important role in contemporary teaching. Many teachers and researchers have used cooperative learning to enhance learning effectiveness and interaction in classrooms during the last few decades. Cooperative learning incorporates five basic elements: positive interdependence, face-to-face interaction, individual and group accountability, collaborative skills, and group processing (Johnson & Johnson, 1999). Positive interdependence is structured once group members understand that they are linked together for the same goal. Face-to-face interaction means that group members need to collaborate in fulfilling the assigned tasks; they need to encourage each other’s efforts. Individual and group accountability means that the whole group must be held accountable for achieving its goal, and each group member must be held accountable for making his or her own contribution to the group and the goal. Collaborative skills mean that teachers should incorporate various social, decision-making, and communication skills into their instruction. Group processing means that group members are allowed to discuss together which group decisions are helpful. As a result, if teachers adopt cooperative learning appropriately, their students’ collaborative skills and interpersonal relationship skills are likely to improve. In the competitive arena of modern society, two vital attributes required by businesses in their efforts to outperform their competitors are pervasive team spirit and cohesive force, both of which require employees to have excellent interpersonal relationship skills to facilitate communication.  Interpersonal relationships play such an important role primarily because, in modern society, most jobs rely on the cooperative efforts of groups; few jobs now can be accomplished by individuals alone (Olson & Zanna, 1993).  However, in the context of Taiwan’s educational institutions, where traditional independent learning is the rule, it is difficult for students to cultivate the excellent interpersonal relationship skills they will need.  Given such an imbalance between supply and demand among Taiwan’s academic units and enterprises, a radical change in teaching instruction is the best way to solve the problem.
Therefore, it is worthwhile to explore the relative efficacy of cooperative learning and traditional teaching instruction in terms of the residual effects on student interpersonal relationship skills in typical classroom settings. The purpose of this study was to document and investigate such a comparison. Cooperative learning means having students learn by cooperating within a small group; collaborative and social skills are listed among the learning targets, and evaluations are made based on the performance of the group.  Hence, in cooperative learning with a small group, students acquire collaborative skills and develop the notion of cooperative learning (Vaughan, 2002).  The study of cooperative learning has flourished since the 1970s, and based on the theory of cooperative learning, different scholars have created different teaching methods.  Among them, the most often adopted method is the Student Team Achievement Division (STAD), developed by Slavin in 1978.  The content, standard, and method of evaluation it employs are similar to those in traditional teaching, so STAD is the easiest change to implement.  In addition, its range of application is the broadest and its effect is outstanding.  Consequently, this research adopted the cooperative learning teaching method (STAD) in the experimental design. Despite constant support for implementing cooperative learning in schools, thought-provoking research results on the comparative efficacy of cooperative learning versus traditional instruction are present in the relevant literature. Most research shows that students’ learning effectiveness and interaction favor cooperative learning over the instruction found in lecture-discussion classrooms (Lazarowita, Baird, & Bowlden, 1996; McManus & Gettinger, 1996; Ciccotello, D’Amico & Grant, 1997; Gillies & Ashman, 1998; Gillies, 1999; Mueller & Fleming, 2001; Gillies, 2002; Vaughan, 2002). However, there is very little research available on the long-term effects of cooperative learning (Gillies, 1999; Gillies, 2002). In particular, only one study reported on the residual effects of cooperative learning, and that study was conducted in Australia (Gillies, 2002). So the purpose of this study was not only to investigate the comparison between cooperative learning and traditional teaching instruction, but also to investigate their effects on student interpersonal relationship skills at the end of the semester following the initial experimental semester.
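For readers who want to see the mechanics of the MANCOVA step, the sketch below runs a multivariate test with pretest scores entered as covariates, using statsmodels' MANOVA class on simulated data that mirrors the study's group sizes; the variable names and effect size are assumptions for illustration only:

```python
# MANCOVA-style test: posttest subscales ~ group, controlling for pretests.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 98  # 53 experimental + 45 control, matching the reported design
df = pd.DataFrame({
    "group": ["coop"] * 53 + ["lecture"] * 45,
    "pre1": rng.normal(50, 10, n),
    "pre2": rng.normal(50, 10, n),
})
effect = np.where(df["group"] == "coop", 5.0, 0.0)  # simulated treatment effect
df["post1"] = df["pre1"] + effect + rng.normal(0, 5, n)
df["post2"] = df["pre2"] + effect + rng.normal(0, 5, n)

mv = MANOVA.from_formula("post1 + post2 ~ group + pre1 + pre2", data=df)
print(mv.mv_test())  # Wilks' lambda etc. for group, net of the pretest covariates
```

The group effect that survives after partialling out the pretests is the kind of quantity a MANCOVA like the study's is designed to isolate.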

 

Measuring Efficiency and Productivity Change in Taiwan Hospitals: A Nonparametric Frontier Approach

Ching-Kuo Wei, Oriental Institute of Technology, Taiwan

 

ABSTRACT

This research investigated the productivity of hospitals (550 decision-making units in total) in Taiwan and its changes during 2000-2004, applying Data Envelopment Analysis (DEA), as well as the Malmquist Productivity Index (MPI) to evaluate annual productivity changes. The results showed that the scale of medical centers was overly large in terms of returns to scale, so there should be room for downscaling.  As evident from the MPI analysis, from 2003 to 2004 the productivity of all levels of hospitals grew significantly, due to improved technical efficiency. Furthermore, this research also found that after the first year of implementing the National Health Insurance Global Budget System, the productivity of all hospitals showed deterioration. In recent years, the management of hospitals in Taiwan has suffered major impacts from changes in the macro environment, of which the change in the National Health Insurance payment scheme was the most influential.  In the past, with fee-for-service payouts, a hospital could increase its service quantity to increase its income, but after implementation of the Global Budget System in July 2002, which intended to control the rise of medical fees through budgets, the operational efficiency of hospitals was greatly impacted. The managements of many hospitals faced the crisis of losses or bankruptcy.  Thus, the efficiency of hospital management has become a problem worth exploring. Based on ownership, Taiwan hospitals can be divided into three categories: public hospitals, proprietary hospitals, and private hospitals. Public hospitals are not profit oriented. Proprietary hospitals are a kind of private hospital, but they are not profit oriented either. Private hospitals, on the other hand, are mainly profit oriented. In terms of accreditation level, hospitals can be categorized into three main types: medical centers, regional hospitals, and local hospitals. Medical centers are large-scale hospitals mainly responsible for education, research, training, and highly complicated medical treatments. Regional hospitals are medium-sized hospitals responsible for education, training, and complicated medical treatments. Local hospitals are small-scale hospitals mainly for training and ordinary medical treatments.  Many studies have applied DEA models to study hospital efficiency (e.g., Sherman, 1984; Ferrier and Valdmanis, 1996; Chang, 1998; Puig-Junoy, 2000), showing that DEA is an excellent analytical tool for evaluating a hospital’s operational efficiency. However, most studies focused on cross-sectional data analysis and seldom discussed the impact on hospital efficiency before and after implementing a major policy. In general, DEA studies consider performance analysis at a given point in time; however, extensions to the standard DEA procedures, such as the MPI approach, have been reported to provide performance analysis in a time-series setting (Charnes et al., 1994). This paper employs both DEA and MPI models to analyze hospitals’ efficiency and productivity change, and compares the discrepancies before and after implementation of the Global Budget System. DEA is a non-parametric linear programming model for frontier analysis of multiple inputs and outputs of decision-making units (DMUs, e.g., hospitals), developed by Charnes et al. (CCR model) (Charnes et al., 1978) and extended by Banker et al.
(BCC model) (Banker et al., 1984). A detailed introduction to DEA theory is provided by Cooper et al. (2000).  The CCR model assumes constant returns to scale (CRS), while the BCC model allows for variable returns to scale (VRS).  The input-oriented linear program of the CRS model is: min θ − ε(Σ s_i⁻ + Σ s_r⁺), subject to Σ_j λ_j x_ij + s_i⁻ = θ x_i0 (i = 1, …, m), Σ_j λ_j y_rj − s_r⁺ = y_r0 (r = 1, …, s), and λ_j, s_i⁻, s_r⁺ ≥ 0.  Through the CRS model, a DMU’s technical efficiency (θ) can be calculated, where the λ_j are the weights, s_i⁻ and s_r⁺ are the slack and surplus variables, respectively, ε is a non-Archimedean constant, x is the input (there are m inputs), and y is the output (there are s outputs). Banker et al. (1984) proposed the VRS model to calculate pure technical efficiency, separating it from technical efficiency and scale efficiency.  Banker (1984) proposed the most productive scale size (MPSS) to examine the production scale of inefficient units. Banker & Thrall (1992) proved with a theorem that when the sum of the weights (Σλ) of a certain DMUo’s reference set equals 1, that is, when Σλ = 1, the input of one unit of production factor can produce one unit of output and returns to scale are constant. When Σλ < 1, the DMU is in a situation of increasing returns to scale, meaning the input of one extra unit of production factor can produce more than one unit of output; therefore, in order to promote the organization’s operational efficiency, the facility scale should be expanded with more input so as to gain more output. On the opposite side, if Σλ > 1, it indicates a situation of decreasing returns to scale, meaning the input of one unit of production factor will produce less than one unit of output; therefore, input should be cut down and the facility scale adjusted to reach the most productive scale size.  According to Fare, Grosskopf and Lovell (1994), the input-oriented Malmquist productivity change index between periods t and t+1 can be written as M = E × T, with E = D_{t+1}(x_{t+1}, y_{t+1}) / D_t(x_t, y_t) and T = [ (D_t(x_{t+1}, y_{t+1}) / D_{t+1}(x_{t+1}, y_{t+1})) × (D_t(x_t, y_t) / D_{t+1}(x_t, y_t)) ]^(1/2), where E is efficiency change, T is technology change, and D_t denotes the distance function relative to the period-t frontier. If the Malmquist Productivity Index and its components are greater than 1, equal to 1, or less than 1, they indicate progress, no change, or regress, respectively.
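Given the four distance-function values for one hospital (obtained from DEA runs against the period-t and period-t+1 frontiers), the decomposition above reduces to a few lines; the figures here are hypothetical:

```python
# Malmquist decomposition following Fare, Grosskopf and Lovell (1994).
import math

d_t_t   = 0.82  # D_t(x_t, y_t), assumed
d_t1_t1 = 0.90  # D_{t+1}(x_{t+1}, y_{t+1}), assumed
d_t_t1  = 0.95  # D_t(x_{t+1}, y_{t+1}), assumed
d_t1_t  = 0.78  # D_{t+1}(x_t, y_t), assumed

E = d_t1_t1 / d_t_t                                   # efficiency change
T = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))  # technology change
M = E * T                                             # Malmquist index
print(f"E = {E:.3f}, T = {T:.3f}, M = {M:.3f} (>1 progress, <1 regress)")
```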
 

Capital Structure: Asian Firms Vs. Multinational Firms in Asia

Dr. Janikan Supanvanij, St. Cloud State University, MN

 

ABSTRACT

The study analyzes the financing decisions of Asian firms and of multinational firms investing in Asian countries during 1991-1996.  The results show that some factors are correlated with firm leverage similarly in both groups. Overall, leverage increases with tangibility and size in Asian firms. For MNCs in Asia, the explanatory variables are related to the short-term financing decision, not the long-term decision.  Empirical work in the area of capital structure is largely based on firms in the US and G-7 countries.  Without testing the robustness of these findings elsewhere, it is difficult to determine whether these empirical regularities can support the theory.  Very few studies extend the test to Asian countries because of data limitations.  This paper is the first study to compare the financing decisions of Asian firms to those of MNCs investing in Asia.  It analyzes whether capital structure in Asian firms is related to factors similar to those appearing to influence the capital structure of US firms, and whether the financing choice is similar to that of MNCs investing in the area. The determinants of capital structure choice are investigated by analyzing the financing decisions of firms across industries in Japan and other Asian countries, including Hong Kong, Singapore, Korea, Thailand, Malaysia, Taiwan, and the Philippines, during 1991-1996.  This section presents a brief discussion of the attributes that different theories of capital structure suggest may affect the firm’s debt-equity choice.  Harris and Raviv (1991) find evidence that leverage increases with fixed assets, nondebt tax shields, investment opportunities, and firm size, and decreases with volatility, advertising expenditure, the probability of bankruptcy, profitability, and uniqueness of the product.  Booth et al. (2001) examine the capital structure determinants in ten developing countries during 1980-1990 and provide evidence that the determinants are similar to those in developed countries. In this study, I focus on five factors: tangibility, investment opportunities, firm size, profitability, and volatility.  The reasons are that: 1) these factors have shown up most consistently as being correlated with leverage in previous studies; and 2) the data severely limit the ability to develop proxies for the other factors.  Rajan and Zingales (1995) also note that the magnitude of nondebt tax shields other than depreciation is not available, and advertising expenditure and R&D expenditure are rarely reported separately.  Myers and Majluf (1984) suggest that firms may find it advantageous to sell secured debt.  Since there may be costs associated with issuing securities about which the firm’s managers have better information than outside shareholders, issuing debt secured by property with known values can avoid these costs.  Hence, firms with assets that can be used as collateral may be expected to issue more debt to take advantage of this opportunity.  Harris and Raviv (1991) find that leverage increases with fixed assets. Tangible assets are easy to collateralize and thus reduce moral hazard and the agency costs of debt (Wald, 1999).  If a large fraction of a firm’s assets is tangible, then assets should serve as collateral, diminishing the risk of the lender suffering the agency costs of debt.  Assets should also retain more value in liquidation.
Therefore, the greater the proportion of tangible assets on the balance sheet, the more willing lenders should be to supply loans, and leverage should be higher (Rajan and Zingales, 1995).  Tangibility is measured by the ratio of fixed assets to total assets.  Highly levered firms are more likely to pass up profitable investment opportunities (Myers, 1977). When firms have more growth assets, the market value and firm risk are more easily changed to benefit the shareholders. Firms that expect high future growth or have valuable growth opportunities should issue equity when they raise external funds (Jung, Kim, and Stulz, 1996; Rajan and Zingales, 1995).  Smith and Watts (1992) and Lang, Ofek, and Stulz (1996) also provide supportive evidence and report a negative relationship between leverage and firm growth.  Thus, expected future growth should be negatively related to long-term debt levels, because the cost associated with the agency relation is likely to be higher for firms in growing industries, which have more flexibility in their choice of future investments (Titman and Wessels, 1988).  The market-to-book value of assets is a proxy for growth opportunities and is expected to be negatively related to leverage (Myers, 1977; Gaver and Gaver, 1993; Rajan and Zingales, 1995). Firm size should have a positive impact on the supply of debt (Harris and Raviv, 1991). Larger firms tend to be more diversified and fail less often.  Titman and Wessels (1988) cite evidence from Warner (1977) and Ang, Chua, and McConnell (1982) suggesting that direct bankruptcy costs appear to constitute a larger portion of a firm’s value as that value decreases. Therefore, size may be an inverse proxy for the probability of bankruptcy.  As suggested by Titman and Wessels (1988) and Rajan and Zingales (1995), I use the natural logarithm of net sales as the indicator of size.  Myers and Majluf (1984) predict a negative relationship between leverage and profitability because firms prefer financing with internal funds over debt. Pecking order theory suggests that firms prefer raising capital first from retained earnings, second from debt, and third from issuing new equity.  Rajan and Zingales (1995) report a negative relationship between profitability and leverage.
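A cross-sectional version of the leverage regression implied by these five factors can be sketched as follows; the firm data are simulated, and the proxy definitions follow those named above (fixed assets over total assets, market-to-book, the natural log of net sales, and so on) rather than the paper's actual sample:

```python
# Leverage on the five capital-structure determinants, OLS on simulated firms.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "tangibility": rng.uniform(0.1, 0.9, n),     # fixed assets / total assets
    "mtb": rng.lognormal(0.2, 0.4, n),           # market-to-book, growth proxy
    "log_sales": rng.normal(12, 2, n),           # ln(net sales), size proxy
    "profitability": rng.normal(0.08, 0.05, n),  # e.g., operating income / assets
    "volatility": rng.uniform(0.01, 0.20, n),    # earnings volatility proxy
})
# Simulated relationship with the signs the theories above predict.
df["leverage"] = (0.3 * df["tangibility"] - 0.05 * df["mtb"]
                  + 0.02 * df["log_sales"] - 0.8 * df["profitability"]
                  + rng.normal(0, 0.05, n))

model = smf.ols("leverage ~ tangibility + mtb + log_sales"
                " + profitability + volatility", data=df).fit()
print(model.summary().tables[1])
```

In the paper's setting, a specification like this would presumably be estimated separately for Asian firms and for MNCs, and for short-term versus long-term leverage.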

 

Influence of Instructors in Enhancing Problem Solving Skills of Administrative and Technical Staff Candidates

Dr. Nesrin Özdemir, University of Bahcesehir, Turkey

Dr. Ozge Hacifazlioglu, University of Bahcesehir, Turkey

Mert Sanver, University of Bahcesehir / (Stanford Master Candidate), Turkey

 

ABSTRACT

Communication problems appear to be among the main problems encountered at educational institutions. This is mostly observed at institutions where future middle-management employees are trained. Vocational schools in this respect are of fundamental importance in training the necessary technical and administrative staff for companies. The atmosphere of the class, the democratic attitude and leadership qualities of the instructor, and a program enhancing the creativity of the students form the basis of a successful model. The purpose of this study is to determine the perceptions of students about the problem solving and communication skills practiced by their instructors in a classroom atmosphere. A questionnaire was devised as a tool for data collection. 422 students chosen from the Vocational School of Education constitute the sample of the study. SPSS (Statistical Package for the Social Sciences) was used in the analysis of the data. Recommendations were made regarding the communication and problem solving process.  Problem solving skills help the individual to accommodate effectively to the environment in which he or she lives. It can be said that all generations have felt the need to learn problem-solving techniques in order to adapt to their environment. Some problems have certain right answers and precise solutions; it is possible to reach the result by carrying out certain strategies in such problems. However, solutions to some problems are not as straightforward. They do not have one right answer. Interdisciplinary knowledge and creativity should be used to solve them. The ultimate goal of educational programs is teaching students to deal with problems in their major subjects in school and with problems they will face in life (Gagne, 1985). Problems are challenges for which we need to spend effort in order to reach a goal, and we also need to define sub-goals in the process (Chi and Glaser, 1985). Problem solving is an activity in which both knowledge and appropriate mental strategies are utilized. The most important aspect of problem solving is determining the tools needed for the purpose. Some problems are one-dimensional; they generally have one right answer and certain strategies that allow finding the right answer, and they are specific to one knowledge area. However, there also exist multidimensional problems, which require multidimensional thinking and do not have a certain path for deriving the solution. In light of these considerations, our study investigates the strategies used by instructors in technical schools and measures the effectiveness of these strategies from the students’ perspective. A problem should be presented in a way that complies with the mental schemas of students, and then the other steps follow. Once a problem is correctly understood, the solution efforts will be more satisfying. Mayer (1987) stated that the foremost difficulty for students in the problem solving process is understanding the verbal description of the problem. Students often cannot separate useless information from the problem itself. Newell and Simon (1972) grouped problem solving strategies under four headings. Extract the useful information. Rearrange and illustrate the problem through schematizing or creating a mental picture of the problem. Making a tool-purpose analysis is the first step in problem solving; in other words, it is determining the purpose of the problem and expressing the possible ways to reach the solution.
The following question should be answered in order to solve the problem: "What is the difference between where I am and where I want to be, and what can I do to reduce this difference?" In the analysis, determining the given constraints and the expected outcomes makes it possible to decide what needs to be done. Practice with problems requiring creative thinking is needed to learn how to understand a problem. Extracting the useful information: the ordinary problems of daily life are not as clear and organized as the problems in textbooks, and at this stage separating relevant from irrelevant information greatly reduces problem-solving effort. Rearranging the problem increases its understandability, and the solution becomes easier to reach; illustrations such as pictures and diagrams are helpful in this context. As stated above, tool-purpose analysis, extracting the critical information, and rearranging the problem are the basic preliminary steps of problem solving. Students experienced in problem solving apply these steps far more easily than students with weaker experience. In the first stage, where the tool-purpose analysis is performed, thinking aloud, motivating students, promoting cooperation, and concentrating on the process rather than the result are proven methods of encouraging creative solution ideas. In the solution stage, the ideas planned in previous steps are carried out and the results are obtained; in other words, the problem is solved. Here it is essential that the instructor and students think aloud, which also helps others gain problem-solving skills, and students should be encouraged to articulate the process of problem solving.
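The study's data analysis was carried out in SPSS on the 422 questionnaires. Purely as an illustration of one routine step in such an analysis, and not the authors' actual procedure, the following Python sketch computes Cronbach's alpha, the standard internal-consistency check for Likert-scale instruments; the eight-item scale and the simulated responses are hypothetical stand-ins for the survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) Likert matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 8-item, 5-point Likert data standing in for the
# 422-student survey; correlated items give a respectable alpha.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(422, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(422, 8)), 1, 5)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.3f}")
print("Item means:", responses.mean(axis=0).round(2))
```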

 

Relationships Among Public Relations, Core Competencies, and Outsourcing Decisions in Technology Companies

Dr. Chieh-Wen Sheng, Chihlee Institute of Technology, Taiwan

Ming-Chia Chen, Da-Yeh University, Taiwan

 

ABSTRACT

Though the importance of public relations (PR) is rising, PR activities remain distinct from traditional management functions. It becomes a strategic decision to determine whether it is necessary (or possible) to direct PR activities internally or to outsource them. The objective of this research is to examine PR activities in Taiwan's technology industries and to determine, through an informal survey and content analysis, how this decision relates to core competencies. We provide theoretical analysis and develop four models quantifying relationships among core competency, the PR functions needed, outsourcing channels, and outsourcing success. In addition, we discuss policies concerning outsourcing decisions and evaluate key decision-making criteria before and after outsourcing. With rapid technological change, the global economy is quickly becoming knowledge-based. Furthermore, regional commercial and financial activities are increasingly intertwined, and the context of economic exchange can no longer be characterized as a closed system. As members of this ever-expanding open system, enterprises must be increasingly aware of their environment (including particular production and value chains, the natural environment, and the surrounding community). In light of this changing reality, a company's success hinges not only on economic profits; it must also demonstrate social responsibility [Hagen, Hassan & Amin, 1998; Sheng & Hsu, 2000]. Most moderate- to large-sized enterprises now depend on public relations (PR) departments to manage their corporate position and to produce a desirable image in the eyes of key community entities (including consumers, governmental bodies, and competitors). In this sense, the importance of PR activities is increasing [Hsu & Sheng, 2001]. Though the importance of PR is growing, PR activities remain distinct from traditional management functions, and it becomes a strategic issue for firms to determine whether it is necessary (or possible) to direct PR activities internally or to outsource them. According to some research [Mascarenhas, Baveja & Jamil, 1998], enterprises face a short-term incentive to manipulate external relationships in order to develop their core competencies but need to maintain a socially responsible image for long-term survival. In this context, enterprises that consider outsourcing activities outside their core competencies must strike a challenging balance. Sheng [1999] points out that many specialized PR companies can handle "necessary trivial things," including managing customer and public sentiment, for these firms. This observation helps to explain why PR recently became the fastest-developing business in America [David, 2000]. It is illustrative to focus on technology industries. Even though specialized technology PR companies can handle "necessary trivial things" for their customers, this is not the main reason that technology companies outsource these activities. For example, according to Lee [1995], PR activities are difficult to direct smoothly, the main difficulty being striking the proper balance between traditional and non-traditional professional knowledge and customer opinion concerning company behavior. The implication is that technology companies may prefer to outsource PR activities to concentrate on core competencies while at the same time enjoying a high-quality professional image.
Sheng [1999] also argues that the core competency characteristics of PR companies are determined by whether they agree with their customers' preferences and characteristics, including attitudes toward professionalism, creativity, and innovation. Considering this and related viewpoints [Mascarenhas, Baveja & Jamil, 1998; Lee, 1995; Sheng, 1999], we find that technology firms outsource PR activities mostly on the basis of whether the candidate PR company's core competencies complement or substitute for those of the technology firm. This finding motivates a quantitative description of the relationships among technology enterprises and PR companies based on the harmony of their core competencies, the types of PR activities, and the outcomes of cooperation. Our research sample targeted technology industries in Taiwan. We collected data through semi-structured interviews and validated them against secondary data. We performed theory-driven content analysis and refined the framework to construct a new model based on the empirical data. Our research focuses on core competencies as we analyze PR outsourcing strategies in technology industries. In this section, we review relevant literature concerning PR strategies, core competencies, and outsourcing results. First, we discuss the literature concerning PR activities. Sheng [1999] argues that organizations directing their own PR activities mainly seek to manage public opinion through press releases. For these purposes, organizations determine their PR strategies based on how they understand public opinion and the press-release mechanism. Understanding public opinion is divided into two steps: characterizing current public sentiment and identifying (positive or passive) channels for its management. Press releases are classified into two types: messages with content and form, and releases that serve as a reminder of presence and continued mass-media access. Messages with content and form, in turn, are separated into four categories.
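The method described above, semi-structured interviews followed by theory-driven content analysis, ultimately reduces to tallying coded statements against predefined categories. The following is a minimal sketch only, with hypothetical category names and data rather than the authors' coding scheme:

```python
from collections import Counter

# Hypothetical coded interview segments: (firm, category) pairs assigned
# during theory-driven content analysis. Categories are illustrative,
# not the authors' actual coding scheme.
coded_segments = [
    ("firm_1", "press_release_content"),
    ("firm_1", "public_opinion_management"),
    ("firm_2", "press_release_presence"),
    ("firm_2", "press_release_content"),
    ("firm_3", "public_opinion_management"),
]

category_counts = Counter(category for _, category in coded_segments)
for category, count in category_counts.most_common():
    print(f"{category}: {count}")
```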

 

Compliance with Disclosure Requirements by Four SAARC Countries—Bangladesh, India, Pakistan and Sri Lanka

Dr. Majidul Islam, Concordia University, Montreal, Canada

 

ABSTRACT

The purpose of this study is to empirically investigate compliance with disclosure requirements by some South Asian Association for Regional Cooperation (SAARC) countries and to explore the possibility of standardizing accounting practice in the SAARC region. The reports of ten companies from the manufacturing sector of each of four SAARC countries, Bangladesh, India, Pakistan and Sri Lanka (BIPS), were collected. The reports were examined against 124 information-item requirements of the standards, company acts and stock-exchange listing rules commonly observed by the BIPS companies in their respective countries. Compliance with the obligatory information items was measured using a relative index for the four countries. The result shows that Sri Lanka complied, on average, with the largest share of the requirements of the standards, acts and rules, followed by Bangladesh, Pakistan and India. The South Asian countries are gradually making contributions to world trade, and because these countries are dependent on aid and investments from beyond their borders, their accounting development processes and accounting systems need to satisfy investors and donors and, at the same time, create an environment for useful reporting for user groups within the countries. The purpose of this paper is to investigate and assess empirically the degree of compliance with disclosure requirements by the listed Bangladeshi, Indian, Pakistani and Sri Lankan (BIPS) companies. As these countries belong to emerging capital markets (ECM) (Standard and Poor's, 2001), it is particularly relevant for them to comply with the financial reporting requirements of the standards. Accounting information plays an important role in emerging economies, especially when the countries are dependent upon foreign investment. Financial statements reflect users' information aspirations; however, many players influence the quality of financial reporting and bring strengths and weaknesses to the accounting and reporting process (Gavin, 2003). Ahmed and Nicholls (1994) argued that while there are many incentives for disclosure in emerging economies, there are also considerable reasons for not complying with mandatory disclosure requirements. In their strategic policy formulation, large and multinational companies are focusing on the global economy. Globalization of trade and economies is changing economic growth and the world trading system. Globalization, in turn, emphasizes the necessity of standards that harmonize accounting practices, which would reduce diversity and improve the comparability of financial reports prepared by companies from different countries. BIPS, being dependent on foreign investment and foreign assistance, should be trying to respond to the demands of cross-border as well as domestic users of the information. By analyzing the financial statements of the BIPS sample companies, this paper focuses on some salient features that induce companies to comply, or not to comply, with disclosure requirements. The paper also identifies key issues for accounting practice and standards development in BIPS by analyzing economic, social and cultural backgrounds. The paper is organized as follows: the following sections review the reporting environment of BIPS and accounting standardization and its implications for developing and SAARC countries, followed by the research design and methodology, results and analysis, discussion, conclusion and limitations of the study.
Accounting principles allow the preparers of financial reports to increase the utility of information to external users and allow users to have confidence in the accounting information. But accounting rules often differ from country to country, and even from company to company within the same country. This creates variations in financial reports that are based on the same economic transactions, thereby reducing their credibility and deterring international business investment and cross-border flows of capital. At the initial stages of professional development in developing countries like BIPS, harmonization might be a viable way to establish credibility in financial reporting. It may be achieved through standard practices, because standards limit the freedom of management to choose among alternative accounting methods favourable to management. The reasons for harmonizing practice are to enhance overall market efficiency and to reduce the cost of capital for companies. Provision of different figures in different environments is confusing for investors and for the public (The European Commission 2001); this is all the more true for the BIPS environments. The implementation of recognized standards gives credibility to accounting reports and is extremely important for developing countries. It is imperative for BIPS to have a body of accounting principles that governs the measurement of transactions and the disclosure and presentation of financial information. Currently, however, compliance with the IAS in BIPS is optional, and the financial statements of different companies are not comparable. The level of development of local standards and of adoption and implementation of the IAS is not consistent, but it is growing: of 41 IAS, Bangladesh has adopted 30; India, 18; Pakistan, 32; and Sri Lanka, 31. Adherence to international standards as well as local standards is in the best interests of both the users and the preparers of financial statements. External users will have more confidence in reports that are easy to analyze and understand, while internal users will have information that helps them make better investment and managerial decisions. Well-devised accounting systems and controls inspire investor confidence, stimulating the flow of capital, and then ensure that this capital is used efficiently. The Financial Accounting Standards Board (FASB) states that "a reasonably complete set of unbiased accounting standards that require relevant, reliable information that is decision useful for outside investors, creditors and others who make similar decisions would constitute a high-quality set of accounting standards" (FASB 1998). Bangladesh, India, Pakistan and Sri Lanka were once British colonies, and all four achieved independence at around the same time (1947-1948).
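The relative compliance index described above is not given an explicit formula in this excerpt; a common construction in this literature is the unweighted (Cooke-style) index, the ratio of items disclosed to items applicable to the company. Assuming that construction, the following is a minimal Python sketch with illustrative data (the study itself scored 124 obligatory items for ten companies per country):

```python
from statistics import mean

def relative_index(checklist):
    """Share of applicable items disclosed: 1 = disclosed, 0 = not
    disclosed, None = item not applicable to the company."""
    applicable = [v for v in checklist if v is not None]
    return sum(applicable) / len(applicable)

# Illustrative checklists only; the study scored 124 obligatory items
# for ten manufacturing companies in each country.
companies = {
    "company_A": [1, 1, 0, None, 1, 1],    # 4 of 5 applicable items
    "company_B": [1, 0, 0, 1, None, None], # 2 of 4 applicable items
}

scores = {name: relative_index(c) for name, c in companies.items()}
country_average = mean(scores.values())  # country-level figure, as reported
print(scores, round(country_average, 3))
```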

 

How Firms Integrate Nonmarket Strategy with Market Strategy: Evidence from Mainland China

Dr. Yuanqiong He, Huazhong University of Science & Technology, China

 

ABSTRACT

Integrating nonmarket strategy with market strategy is a new trend in the field of strategic management. However, the existing literature says little about how market and nonmarket strategies are integrated, especially in an emerging-economy setting such as mainland China. Therefore, based on 438 usable questionnaires and in-depth interviews with 10 top managers from mainland China, this research examines the integration of nonmarket and market strategies among firms with various forms of ownership. The study represents a promising step toward this new trend in strategic management and offers Chinese firms implications for dealing with stakeholders in today's Chinese business environment. Although Baron (1995a, 1995b) advocated integrating nonmarket strategy with the economic "market" strategies of the firm, market strategy and nonmarket strategy have often been treated as separate subjects in the academic literature. Previous empirical research has mainly sampled the nonmarket strategies of American firms, so evidence from an emerging-economy setting such as China is lacking. In fact, being in a transitional period from a command economy to a market economy, Chinese firms adopt nonmarket strategies ubiquitously and integrate them with market strategies in their business operations (Tian et al., 2003). Because China's institutional background differs from that of developed countries such as the United States, the nonmarket strategies of Chinese firms, and their integration with market strategies, have their own characteristics. This paper is therefore an effort to fill this gap through an empirical examination based on evidence from mainland China. The paper is structured as follows. The following section describes the nonmarket environment in China's transitional period and proposes the hypotheses. Then, the research methodology, along with collection procedures and measurement of the constructs, is introduced. The results of the empirical study are discussed in section four. Finally, we conclude by noting the managerial implications of the study's findings and provide directions for future research. The nonmarket environment in China differs from those in advanced Western countries in many ways (Nee, 1992; Hitt et al., 2002). The most salient difference lies in its authoritarian political system. Throughout the economic reform process that began in 1978, the Chinese government has remained the dominant policy designer and implementer. Government bureaux at all levels are powerful special-interest groups and key stakeholders in business firms' nonmarket environment (He & Tian, 2005), and they deserve close attention from managers and scholars alike. Apart from making direct investments in state-owned assets, the government also controls firms' investments through numerous approval mechanisms. The "Administrative Approval Law" (implemented on July 1, 2004) has significantly reduced the scope of government project-checking duties. Now only four responsibilities are explicitly stated: (1) approval, (2) check and admission, (3) registration and (4) certification and qualification, but the process may still look cumbersome to outside observers. During the process of marketization, the business environment has become more and more regulated by laws and rules, which has challenged the traditional ways of establishing relationships with Chinese government officials.
For example, 30,690 laws and regulations (including laws issued by the National People's Congress, administrative regulations issued by the State Council, regulations issued by local governments, and industrial regulations issued by national departments) were issued in China in 1999. In 2005, 141,173 were issued, an increase of about 360% over 1999. Beyond the points mentioned above, the Chinese business environment is under the heavy influence of Confucianism, which has endured as the basic social and political value system for over 1,000 years (Hwang, 1987; Yum, 1988). Favor, trust and reciprocity are common features of relationships in China (Tong & Yong, 1998; Wong & Chan, 1999). Porter's generic strategies focus on market components consisting of customers, competitors and suppliers, while the nonmarket strategy addressed by Baron highlights nonmarket components. Market strategy and nonmarket strategy differ in several respects, including environmental focus and the strategy-making process (see Table 1). Although distinct, the two are structurally similar and can be coordinated with each other, which is the basis of their integration. The approaches to integrating market and nonmarket strategy can be classified into buffering and bridging, following Meznar and Nigh's (1995) argument about the roles of the boundary-spanning function. A "bridging integrated strategy" incorporates nonmarket issues (such as environmental protection and public policy) into the process of making market strategy. A "buffering integrated strategy" involves trying to keep the environment from interfering with internal operations while trying to influence the external environment. For example, in China a firm may actively influence its environment through such means as lobbying, membership in the CPPCC (National Committee of the Chinese People's Political Consultative Conference) or the NPC (National People's Congress), and providing government officials with industrial reports (Tian et al., 2003). Blumentritt (2003) concluded that firm characteristics, including technology, size and economic spillovers, can influence the choice between buffering and bridging. Hansen and Mitchell (2001) likewise used ownership as a predictor of nonmarket strategy, because ownership can serve as a proxy for firm characteristics.
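The growth figure cited above can be checked directly:

```python
# Check of the regulation-growth figure cited in the text.
issued_1999 = 30_690
issued_2005 = 141_173
growth = (issued_2005 - issued_1999) / issued_1999
print(f"Increase over 1999: {growth:.0%}")  # prints: Increase over 1999: 360%
```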

 

Relationship between Organizational Socialization and Organization Identification of Professionals: Moderating Effects of Personal Work Experience and Growth Need Strength

Xiang Yi, Western Illinois University, IL

Jin Feng Uen, National Sun Yat-Sen University, Taiwan

 

ABSTRACT

“Organizational identification” (OID) has significant implications for managing professionals in fast-changing organizations. This study focuses on the relationship between socialization tactics and the OID of professionals, with work experience and personal “growth need strength” (GNS) included as moderators. Three main results were found: (1) absent moderating effects, the serial tactic has a significantly positive effect on OID; (2) allowing for the moderating effect of work experience, collective and fixed tactics promote OID among professionals with prior formal work experience; and (3) formal and sequential tactics have positive impacts on OID regardless of professionals' GNS, although high-GNS professionals generally show higher OID than low-GNS ones. Rapid advances in science and technology have changed the world. The knowledge and skills of employees are key sources of productivity in knowledge-based economies. For this reason, how to attract and retain high-quality professionals will remain a critical issue for researchers and practitioners alike. Employees with knowledge and skills are essential to an organization's success. Professionals are distinguished from traditional employees in that their work requires highly complicated general and firm-specific knowledge and skills, whereas traditional employees tend to perform clearly demarcated jobs or jobs needing a high degree of supervision (e.g., Xu, 1996). Professionals can solve difficult, non-routine problems for their organizations and therefore can, and often do, have more bargaining power when negotiating with the employer. This puts more pressure on a firm to find ways both to promote professionals' productivity and to bind them to the organization. Another feature of professionals is that they must continually learn new knowledge and skills and keep up with the latest developments in their specialized areas if they are to maintain their own employability (Xu, 1996). Good professional employees therefore usually value the opportunities organizations provide for acquiring new knowledge and expertise related to their career development. Researchers have suggested numerous ways to retain professional employees and reduce turnover by promoting loyalty and commitment. For example, King and Sethi (1998) and Baroudi (1990) found that reducing stressors such as role ambiguity and role conflict in the role-adjustment process could help reduce employees' intention to leave the organization. Arnett and Obert (1995) found that motivating constructive employee behavior could foster organizational loyalty. Others (e.g., Gillian, 1994) found that emphasizing teamwork, focusing on morale and stress management, and expanding career development could decrease turnover. In essence, keeping professional employees means finding ways to promote their organizational commitment and professional fulfillment. Research demonstrates that employees are more committed to organizations when they feel they are treated well (Jandeska & Kraimer, 2005). One method of achieving this commitment is to establish organizational identification through appropriate socialization tactics as a professional enters an organization (Ashford & Saks, 1996). Our study explores how firms in high-technology and knowledge-based industries may use organizational socialization theory to influence the formation of organizational identification among newly hired professionals.
Additionally, we discuss how organizational socialization tactics interact with the work experience and growth need strength of new professionals to influence their organizational identification. The results indicate that some socialization tactics are especially effective in building new professionals' organizational identification, and that personal work experience and growth need strength are meaningful moderators of the relationships between socialization tactics and organizational identification. Organizational socialization is the process by which a newcomer to an organization learns his or her roles and adapts to the new environment (Van Maanen & Schein, 1979). The original definition of organizational socialization was for a newcomer to “know the rules,” but the concept has since evolved into the process of helping a person understand his or her roles and the organization's values, philosophy and social network (Louis, 1980). This process of becoming acclimated to an organization is crucial to an employee's future success. Research has shown that the content of information absorbed by new employees correlates positively with job satisfaction and organizational commitment (Cooper-Thomas & Anderson, 2002). Van Maanen and Schein (1979) proposed that organizations can use six dimensions of socialization tactics, each defined by a continuum of institutionalized versus individualized socialization, as illustrated in Figure 1 (Jones, 1986). The first socialization tactic is collective (vs. individual) socialization: putting new employees together so that they undergo a similar experience and receive the same information. The second tactic is formal (vs. informal) socialization: rather than mixing a newcomer with current employees, programs or activities are created exclusively for new employees during a specific period of time. The third tactic is sequential (vs. random) socialization, which requires a newcomer to go through a specified sequence of stages leading to adjustment to the new job roles. The fourth tactic, fixed (vs. variable) socialization, refers to a set schedule for assimilation into the organization. The fifth tactic is serial (vs. disjunctive) socialization, a process in which an experienced role model, usually a senior colleague, helps the new employee; in disjunctive socialization, by contrast, no consistent help is provided by others. Finally, investiture (vs. divestiture) socialization affirms the personal characteristics, ideas and experiences of the new employee, as opposed to denying and breaking down those characteristics and building an entirely new experience of, and attitude toward, the new organization.
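Moderating effects of the kind reported here are typically tested with moderated (interaction-term) regression. The sketch below is a generic illustration of that technique, not the authors' model: the variable names and simulated data are hypothetical, and the formula interface expands "serial * gns" into both main effects plus the interaction term whose coefficient carries the moderation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-ins for the survey variables: 'serial' is a
# socialization-tactic score, 'gns' is growth need strength, and 'oid' is
# organizational identification. None of these names or data come from
# the paper itself.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "serial": rng.normal(0.0, 1.0, n),
    "gns": rng.normal(0.0, 1.0, n),
})
# Simulate an outcome with a built-in interaction (moderation) effect.
df["oid"] = (0.4 * df["serial"] + 0.3 * df["gns"]
             + 0.2 * df["serial"] * df["gns"] + rng.normal(0.0, 1.0, n))

# 'serial * gns' expands to both main effects plus the serial:gns
# interaction term, whose coefficient carries the moderation effect.
model = smf.ols("oid ~ serial * gns", data=df).fit()
print(model.summary().tables[1])
```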

 

A Typology of Brand Extensions: Positioning Cobranding As a Sub-Case of Brand Extensions

Dr. Costas Hadjicharalambous, Long Island University, NY

 

ABSTRACT

This article treats cobranding as a sub-case of brand extensions and presents a typology of brand extensions. The underlying research is briefly outlined to clarify the meaning and intent of the typology's terms. Brand extensions are classified according to (1) the number of brands involved in the extension and (2) the purpose of the extension. The rationale for, and the benefits of, treating cobranding as a brand extension are presented. The paper concludes by discussing the importance of the typology, which can serve as an organizer of thought on the subject and a stimulus for future research. A relatively new phenomenon that has caught the attention of academic researchers is cobranding: the use of two or more brands to name a new product. According to some estimates, recent cobranding and other cooperative brand activities have enjoyed 40% annual growth (Spethmann and Benezra 1994). The basic premise behind cobranding strategies is that the constituent brands help each other achieve their objectives. Marketers have recognized that, at least in some cases, using two or more brand names in the process of introducing new products offers a competitive advantage. For example, ConAgra and Kellogg joined efforts to market Healthy Choice adult cereals. In another cobranding effort, ConAgra agreed to allow Nabisco to use the Healthy Choice brand in a new line of low-fat, low-cholesterol and low-sodium snacks. The purpose of this double appeal is to capitalize on the reputation of the partner brands in an attempt to achieve immediate recognition and a positive evaluation from potential buyers. From a signaling perspective (Wernerfelt 1988; Rao, Qu and Ruekert 1999), the presence of a second brand on a product reinforces the perception of high product quality, leading to higher product evaluations and greater market share. Associating one brand with another, however, involves risks that need to be addressed, since cobranding may affect the partner brands negatively. One need only consider the problems experienced by Dell and Gateway when it was reported that the design of Intel Pentium processors was defective (Fisher 1994). Coke and Pepsi experienced similar problems when reports linked the artificial sweetener aspartame (NutraSweet) to cancer. James (2005) found that combining two brands may cause brand meaning to transfer in ways that were never intended. The potential benefits and risks associated with cobranding strategies must therefore be explored and carefully examined. Yet despite the increasing use of cobranding and its managerial implications, little research has addressed cobranding strategies, examined the factors that determine the success of such a strategy, or assessed the impact of each partner brand on the other. For the most part, empirical research on cobranding has focused on the impact of using branded ingredients or components in another brand's product (e.g., Levin et al., 1996; Simonin and Ruth 1998). As recent developments suggest (Samu, Krishnan, and Smith 1999), cobranding goes beyond the use of branded ingredients or components. The contribution of an established brand to the new product may be based on ingredients or components, but a potentially greater contribution may come from image, expertise, status, companion products, customer franchise, or any other customer-perceived benefit (Tauber 1988).
Currently, there is no conceptual framework of cobranding that captures the totality of Tauber's (1988) conceptualization. While the lack of a conceptual framework for studying cobranding gives individual researchers the freedom to study cobranding phenomena from different perspectives, the position taken in this paper is that it also hinders cobranding research. A conceptual framework provides guidelines for studying cobranding phenomena and offers researchers the opportunity to highlight similarities and differences among the different types of cobranding strategies. The present paper advances research by proposing a framework for studying cobranding. To foster research in the area, it presents a typology of brand extensions that includes cobranding; specifically, the paper treats cobranding as a case of brand extensions. This view allows researchers to draw on previously developed frameworks (theories and methodology) and apply them to the study of cobranding. First, relevant literature is reviewed. Next, a typology of brand extensions is proposed. The paper concludes with theoretical and managerial implications. Ingredient or component branding is the use of a branded ingredient or component in a product introduced by another brand (Norris 1992, 1993). A widely cited example of component branding is the promotion of personal computers through the “Intel Inside” campaign. According to Aaker (1996), the “Intel Inside” campaign generated more than ninety thousand pages of ads in a period of eighteen months, totaling more than 10 billion exposures. Norris (1992) proposed that the ingredient branding strategy results in more efficient promotions, easier access to distribution, higher-quality products, and higher profit margins; however, Norris did not empirically test these propositions. More recently, Levin et al. (1996), in a taste-test study, found that adding a well-known branded ingredient improves product evaluations of both unknown and well-known host brands more than adding an unknown branded ingredient does.

 

The Dynamics of Corporate Takeovers Based on Managerial Overconfidence

Dr. Xinping Xia, Huazhong University of Science and Technology, Wuhan, PRC

Dr. Hongbo Pan, Huazhong University of Science and Technology, Wuhan, PRC

 

ABSTRACT

Using a game-theoretic real-options framework, this paper presents a dynamic model of takeovers based on the stock-market valuations of the merging firms. The model incorporates managerial overconfidence about merger synergies as well as competition, and it determines the terms and timing of takeovers by solving option-exercise games between bidding and target firms within the same industry. The model explains merger waves, abnormal returns to the stockholders of the participating firms around the time of the takeover announcement, and the impact of competition on the timing, terms and abnormal returns. The model's implications for shareholder abnormal returns, and for the impact of competition on those returns, are consistent with the available evidence. The model also generates new predictions relating shareholder abnormal returns to industry characteristics of the participating firms and to the level of managerial overconfidence. Mergers and acquisitions have been the subject of considerable research in financial economics. Yet, despite the substantial development of this literature, existing merger theories have had difficulty reconciling the stylized facts about mergers with the payment of cash (1). Two of the most important stylized facts about mergers are these: first, the combined returns to stockholders are usually positive (see the recent survey by Andrade, Mitchell and Stafford (2001)); and second, acquirer returns are, on average, not positive (Andrade, Mitchell and Stafford, 2001; Fuller, Netter and Stegemoller, 2002). These two stylized facts are difficult to reconcile theoretically. The main aim of this paper is to provide a theoretical explanation for these two stylized facts and to examine the impact of competition on the abnormal returns of participating firms around the takeover announcement. The basic elements of our theory are as follows. First, a takeover deal is an efficient response to an industry shock, which usually results in positive merger synergies. A sizable literature finds that mergers concentrate in industries in which a regime shift of a technological or regulatory nature can be identified, making mergers an efficient response (e.g., Mitchell and Mulherin (1996), Andrade, Mitchell, and Stafford (2001), and Andrade and Stafford (2004)). Second, outside investors do not know about the forthcoming merger activity until the takeover is announced. Managers usually have private information relative to outside investors, so investors usually do not learn of the probable positive merger synergies until the announcement of the takeover. Third, the managers of participating firms are overconfident about the merger synergies (2). Upon the announcement of a takeover, investors receive a great deal of information from managers, and that information combines genuine private information, arising from asymmetric information, with inaccurate information arising from managerial overconfidence. Rational investors therefore discount the inaccurate component and infer that managers are overconfident about the takeover, which will probably produce a negative stock response for bidding shareholders. Our theory helps explain merger waves, the abnormal returns to the stockholders of the participating firms around the takeover announcement, and the impact of competition on the timing, terms and abnormal returns. The model generates implications that are consistent with the available empirical evidence and yields a number of new predictions.
Notably, the model predicts that (1) mergers happen periodically and are positively related to product-market demand; (2) abnormal returns to bidding shareholders can be negative if the level of managerial overconfidence about merger synergies is high; (3) the combined returns to stockholders are usually positive; (4) abnormal returns to target shareholders are positively related to the size of the bidding firms; (5) abnormal returns to bidding shareholders are inversely related to the size of the bidding firms for bidders with a low level of managerial overconfidence or competition; (6) both bidders' and targets' abnormal returns are usually positively correlated with the growth rate of the stochastic output price in the specific industry and with the variance of that growth rate; and finally, (7) competition slows the acquisition process and usually leads to lower acquiring-firm abnormal returns and higher returns for targets. The analysis in the present paper relates to several articles in the literature. The hubris hypothesis predicts that acquisition announcements should have a zero combined abnormal return and a negative acquiring-firm abnormal return (Roll, 1986). The neoclassical theory sees mergers as an efficiency-improving response to various industry shocks and implies that acquisitions bring positive combined abnormal returns and acquiring-firm returns (Mitchell and Mulherin, 1996; Jovanovic and Rousseau, 2002). Compared with the hubris hypothesis and the neoclassical theory, our model not only reconciles the two stylized facts above but also allows us to relate stockholder returns to the level of managerial overconfidence and to competition. Shleifer and Vishny (2003) consider a zero-sum game (in the long run) with exogenous timing. In their framework, outside investors do not incorporate the potential surplus associated with the takeover into the stock-market valuations of participating firms; the announcement of a takeover therefore generates abnormal announcement returns. The present paper extends the model of Shleifer and Vishny (2003) in several important dimensions.
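The abstract does not reproduce the model itself, but the timing logic of a real-options takeover model rests on an optimal exercise threshold. As a minimal sketch, assuming the textbook single-agent setting of Dixit and Pindyck rather than the authors' option-exercise game between bidder and target, the following computes the value at which paying a fixed acquisition cost for an asset following geometric Brownian motion becomes optimal; all parameter values are illustrative.

```python
from math import sqrt

def exercise_threshold(r: float, delta: float, sigma: float, k: float) -> float:
    """Value V* at which exercising an option to pay sunk cost k for an
    asset following geometric Brownian motion becomes optimal
    (textbook single-agent result; see Dixit and Pindyck)."""
    a = 0.5 - (r - delta) / sigma ** 2
    # beta is the positive root of 0.5*sigma^2*b*(b-1) + (r-delta)*b - r = 0
    beta = a + sqrt(a ** 2 + 2.0 * r / sigma ** 2)
    return beta / (beta - 1.0) * k

# Illustrative parameters (not from the paper): 5% discount rate,
# 3% payout yield, 25% volatility, acquisition cost of 100.
print(round(exercise_threshold(r=0.05, delta=0.03, sigma=0.25, k=100.0), 1))
# -> about 318.5: the bidder optimally waits until the asset's value
#    exceeds roughly three times the acquisition cost.
```

Waiting is valuable because the option to acquire is irreversible once exercised; competition of the kind the paper models erodes that option value, which is one mechanism behind its prediction that competition changes the timing and terms of takeovers.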

 
