The Journal of American Academy of Business, Cambridge
Vol. 8 * Num. 2 * March 2006
The Library of Congress, Washington, DC * ISSN: 1540 – 7780
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double blind peer review process.
The primary goal of the journal is to provide opportunities for academicians and professionals from business-related fields around the globe to publish their papers in one source. The Journal of American Academy of Business, Cambridge brings together academicians and professionals from all business-related fields and allied disciplines to interact with members inside and outside their own particular disciplines. The journal provides opportunities both for researchers to publish their papers and for readers to view the work of others. The Journal of American Academy of Business, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal will meet the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide our authors with venues recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread before submission; services such as www.editavenue.com may be used for professional proofreading and editing.
The Journal of American Academy of Business, Cambridge is published twice a year, in March and September. E-mail: email@example.com; Journal: JAABC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright 2000-2017. All Rights Reserved
The Myth of Inter-Period Allocation of Deferred Taxes: Industry-Based Analyses
Dr. Ron Colley, University of West Georgia, Carrollton, GA
Dr. Joseph Rue, Florida Gulf Coast University, Fort Myers, FL
Dr. Ara Volkan, Florida Gulf Coast University, Fort Myers, FL
The behavior of deferred tax balances in the mining and real estate industries is analyzed, and the accounting theory and procedures required by the FASB are examined in the context of the unit problem. The unit problem involves the selection of the appropriate perspective (either individual or aggregate) for applying measurement and recognition conventions to phenomena of interest. From an individual event perspective, the FASB's conclusions in Statement No. 109, Accounting for Income Taxes (S109), regarding liability recognition are inconsistent with the definition of liabilities found in Statement of Financial Accounting Concepts No. 6. In addition, S109 uses the individual and aggregate perspectives simultaneously as the basis of its decisions, and this use of inconsistent perspectives undermines the FASB's position. The study argues that the income tax accounting issue should be viewed from an aggregate perspective and concludes that the flow-through method of accounting for income taxes should be adopted. The impact on the debt-to-equity (DTE) ratio of eliminating deferred taxes and adjusting the liability and stockholders' equity balances is computed for the mining and real estate industries in the COMPUSTAT database (1995-2004). In addition, the ratio of the net deferred tax balance to total assets is computed; in the mining industry, for those companies persisting over the 10-year period, this ratio increases. Statistical results show that the decreases in the DTE ratio are significant in each industry and each year.

The Financial Accounting Standards Board (FASB) issued Statement 109 (S109) to bring closure to accounting and reporting controversies concerning deferred taxes (FASB, 1992). S109 required companies to use the comprehensive inter-period tax allocation method for measuring and reporting temporary (timing) differences between the financial statements and the tax returns.
Companies were required to use the asset/liability approach and the current tax rate to accumulate the deferred tax assets and liabilities that resulted when the financial accounting and tax bases of their assets and liabilities differed. For income statement reporting, the tax liability for the period was adjusted by the periodic changes in the deferred tax asset and liability balances to arrive at the tax provision (tax expense). S109 further required that an allowance account be established if it was more likely than not that the deferred tax assets would not be realized. Finally, there were complex rules concerning loss carry-forwards, tax planning strategies and their use in determining the balance in the allowance account, reporting asset and liability balances, tax rate and status changes, business combinations, and footnote disclosures. The FASB issued S109, which replaced S96 (FASB, 1987), after much compromise. S96 was to take effect after December 15, 1988; however, due to the complexities of implementing S96, the FASB delayed its effective date three times (e.g., FASB, 1991). Since its inception, the FASB has struggled with the controversy of changing the reporting requirements for deferred taxes promulgated in Accounting Principles Board (APB) Opinion No. 11 (AICPA, 1967), and the continuing controversies regarding the final pronouncement, along with the delays in implementing it, attest to the controversial nature of the issue. This paper examines the theory underlying the current accounting and reporting standards for deferred taxes within the context of the unit problem and argues for an alternative view. For the mining and real estate industries, the behavior of deferred tax liabilities is observed over the ten-year period 1995-2004.
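The asset/liability mechanics described above can be illustrated with a small numerical sketch. The tax rate, dollar figures, and function names below are hypothetical illustrations for exposition; they are not drawn from S109 or from the paper's data:

```python
# Illustrative sketch (hypothetical figures) of the S109 asset/liability
# approach: a deferred balance is measured as the temporary difference
# between book and tax bases times the current tax rate, and tax expense
# is the current tax payable adjusted by the change in that balance.

TAX_RATE = 0.35  # assumed current statutory rate (hypothetical)

def deferred_balance(book_basis, tax_basis, rate=TAX_RATE):
    """Deferred tax on the temporary difference between bases.
    Positive -> deferred tax liability; negative -> deferred tax asset."""
    return (book_basis - tax_basis) * rate

def tax_expense(current_tax_payable, deferred_begin, deferred_end):
    """Tax provision = current payable + increase in the net deferred
    tax liability over the period."""
    return current_tax_payable + (deferred_end - deferred_begin)

# An asset depreciated faster for tax than for books widens the
# temporary difference, growing the deferred tax liability:
dtl_begin = deferred_balance(book_basis=900, tax_basis=700)  # 70.0
dtl_end   = deferred_balance(book_basis=800, tax_basis=500)  # 105.0
print(tax_expense(current_tax_payable=140,
                  deferred_begin=dtl_begin,
                  deferred_end=dtl_end))                     # 175.0
```

Under this mechanic the provision exceeds the current payable whenever the net deferred liability grows, which is precisely the accumulation pattern the paper examines.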
To measure the financial consequences of using the flow-through alternative, the impact on the debt-to-equity (DTE) ratio of eliminating deferred taxes and adjusting the liability and stockholders' equity balances is computed. The industry-based financial impact is examined using the COMPUSTAT database for two five-year periods: the boom period of 1995-1999 and the bust period of 2000-2004. In general, the results affirm previous observations about the overall economy (Rue and Volkan, 1985) and indicate that deferred tax balances are stable or increasing and are not reversing, even during periods of economic boom, albeit growing at a much slower rate. Since the release of S96 and S109, several concerns have been raised about accounting for deferred taxes: (1) the inconsistent treatment of deferred tax assets and liabilities (Wolk, Martin, and Nichols, 1989; Parks, 1988); (2) the FASB's failure to allow discounting of the deferred tax liability (Rayburn, 1987); (3) the method's complexity and potential lack of usefulness (Burton and Sack, 1989; Gregory, Petree, and Vitray, 1992; Colley, Rue, and Volkan, 2004); (4) the FASB's failure to deal with temporary differences that are permanently deferred (Jeter and Chancy, 1988); and (5) the lack of relevance of deferred tax amounts under the full recognition approach in predicting stock returns (Lev and Nissim, 2004), the market value of firms and the discounted value of individual and asset-level reversals of deferred tax balances (Guenther and Sansing, 2004), and the future profitability of firms in the U.K., where the partial recognition method was recently replaced with an approach similar to that of S109 (Gordon and Joos, 2004). These concerns have not been addressed by the FASB, and these controversies will not subside until the FASB completely reconsiders its position and adequately addresses the unit problem (Devine, 1985).
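The flow-through adjustment measured here can be sketched as follows. The figures and function name are invented for illustration; the paper's actual computations use COMPUSTAT balances:

```python
# Hypothetical sketch of the paper's flow-through adjustment: the net
# deferred tax balance is removed from liabilities, folded into
# stockholders' equity, and the debt-to-equity (DTE) ratio is recomputed.

def dte_before_after(total_liabilities, stockholders_equity, net_deferred_tax):
    """Return (reported DTE, adjusted DTE) after reclassifying the net
    deferred tax balance from liabilities to stockholders' equity."""
    reported = total_liabilities / stockholders_equity
    adjusted = ((total_liabilities - net_deferred_tax) /
                (stockholders_equity + net_deferred_tax))
    return reported, adjusted

# A firm with $600 in liabilities ($100 of it net deferred taxes),
# $400 in equity, and $1,000 in total assets:
reported, adjusted = dte_before_after(600.0, 400.0, 100.0)
print(reported)        # 1.5
print(adjusted)        # 1.0 -- the DTE ratio falls, as the paper finds
print(100.0 / 1000.0)  # 0.1 -- net deferred tax balance to total assets
```

Because the reclassification shrinks the numerator and grows the denominator simultaneously, any positive net deferred tax balance necessarily lowers the DTE ratio, which is why the observed decreases are significant in every industry-year.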
The unit problem involves the selection of the appropriate perspective for applying measurement and recognition conventions to the phenomenon of interest, ranging from accounting for individual events or transactions to accounting for aggregate events or transactions. The accounting process involves the identification, grouping and measurement of what are believed to be relatively homogeneous events. If events are not homogeneous, a problem can arise in selecting attributes of the group portrayed by the accounting process. The positions one takes regarding the income tax accounting issue are directly related to one's view of the unit.
Is South Carolina’s Integrated System of Personal Property Tax and Motor Vehicle Registration and Licensing a Burden on Interstate Commerce Such that it Violates the Dormant Commerce Clause of the U.S. Constitution?
Dr. Brad R. Johnson, J.D, Francis Marion University, Florence, S.C.
By means of a case study approach, this article argues that, as to South Carolina (S.C.) nonresidents, state and local administration and enforcement of S.C.'s decentralized and integrated system of (A) personal property taxation under Chapter 37 of Title 12 of the S.C. Code of Laws and (B) automobile registration and licensing under Chapter 3 of Title 56 of the S.C. Code of Laws burdens interstate commerce in terms of (1) the incoming business-related services of S.C. nonresidents and (2) the non-business personal travel of S.C. nonresidents. Specifically, this article argues that in the administration and enforcement of S.C. Code Ann. §§ 56-3-150(B) & -160 and S.C. Code Ann. §§ 12-37-2610 & -2630 against S.C. nonresidents, the policies, practices and procedures of S.C. counties violate the Dormant Commerce Clause of the U.S. Constitution, which may thereby subject a county and its employees to actual and/or punitive damages, particularly pursuant to the provisions of Title 42 of the United States Code, Section 1983. Within the context of this article's case study, such liability is found in the deprivation of a S.C. nonresident's fundamental (constitutional) personal right (a) to travel out-of-state (i.e., in S.C.) and (b) to earn a living by exporting his services to S.C. Specifically, under the facts of the case study, this article shows that the interaction among certain provisions dealing with (A) the personal property taxation of automobiles (S.C. Code Ann. §§ 12-37-2610 and 12-37-2630) and (B) automobile registration and licensing (S.C. Code Ann. §§ 56-3-150(B) and 56-3-160) effectuates a substantial burden on interstate commerce; such provisions are unconstitutional in that their benefits cannot be viewed as outweighing the burden they place on interstate commerce.
Further, the county and its employees may be liable for actual and/or punitive damages if qualified immunity is unavailable, because a nonresident's fundamental (constitutional) personal right to travel out-of-state and earn a living is well settled. It follows that the purpose of this article is to enhance the awareness of state and local public officials concerning their liability in situations where they act under color of state law in a manner that burdens interstate commerce and correspondingly deprives a S.C. nonresident of "fundamental rights" secured and protected by the U.S. Constitution. The primary objective of this article is to identify the constitutional implications under the Dormant Commerce Clause of a county's enforcement of S.C. Code Ann. §§ 56-3-150(B) and 56-3-160 by means of established policies, practices and procedures, so that such enforcement, as an unconstitutional grant of authority by the state to the county, may be ceased. In a case study approach, this article accomplishes its purpose and objective in a stepwise fashion as follows. First, in Part II, the legal framework associated with the Dormant Commerce Clause of the U.S. Constitution is established. Second, in Part III, the legal framework associated with S.C.'s decentralized and integrated system of (A) personal property taxation under Chapter 37 of Title 12 of the S.C. Code of Laws and (B) automobile registration and licensing under Chapter 3 of Title 56 of the S.C. Code of Laws is established. Third, in Part IV, the facts of the case study are identified. Fourth, in Part V, the constitutional law established in Part II (Dormant Commerce Clause) is applied to the facts of the case study to show that S.C. Code Ann. §§ 56-3-150(B) and 56-3-160 are unconstitutional grants of authority by the state to the county in specific violation of the Dormant Commerce Clause of the U.S. Constitution.
Fifth, in Part VI, for purposes of identifying further research, this article will address other constitutional implications of the interaction among provisions dealing with (A) the personal property taxation of automobiles (S.C. Code Ann. §§ 12-37-2610 and 12-37-2630) and (B) automobile registration and licensing (S.C. Code Ann. §§ 56-3-150(B) and 56-3-160). Article I, §8 of the U.S. Constitution (i.e., the Commerce Clause) has two distinct functions. One function is to authorize congressional action regarding interstate commerce. The other is to limit state and local regulation of interstate commerce. This latter function is reflective of the so-called dormant, or "negative," commerce clause. In summary, the dormant commerce clause is a common law doctrine that stands for the proposition that a state or local law is unconstitutional if it places an undue burden on interstate commerce. Within this context, there is no constitutional provision that expressly declares that states may not burden interstate commerce. Instead, the Supreme Court has inferred this mandate from the grant of power to Congress in Article I, §8 to regulate commerce among the states. Accordingly, any state or local law can be challenged on the constitutional grounds that such law excessively burdens commerce among the states. In other words, even if Congress has not enacted legislation within a particular area of commerce (i.e., even if its commerce power lies dormant), a state or local law can nevertheless be constitutionally challenged as unduly burdening interstate commerce. The modern approach to the Supreme Court's application of the Dormant Commerce Clause lies in balancing (i) the benefits of a particular local law against (ii) the burdens that such law imposes on interstate commerce. [Southern Pacific Co. v. Arizona, 1945] Accordingly, the primary issue in a dormant commerce clause case is whether the benefits of a particular local law outweigh its burdens on interstate commerce. However, the manner in which a court balances (i.e., the balancing scale) is not the same in all dormant commerce clause cases. Instead, the scale (i.e., the standard for measurement) varies depending on whether the local law (i) discriminates against nonresidents, in favor of residents, or (ii) treats residents and nonresidents similarly. A state "may not, under the guise of regulation, discriminate against interstate commerce [e.g., incoming commerce] . . . Underlying the stated rule has been the thought [Inner Political Check Doctrine] . . . that when the regulation is of such a character that its burden falls principally upon those without the state, legislative action is not likely to be subjected to those political restraints which are normally exerted on legislation where it affects adversely some interests within the state." [South Carolina State Highway Department v. Barnwell Brothers, 1938] Accordingly, a critical factor in any dormant commerce clause analysis is whether the local law discriminates against nonresidents or treats residents and nonresidents similarly. Further, whether the local law benefits local businesses or residents is of no import; the Dormant Commerce Clause is violated in either case. [Brown-Forman Distillers Corp. v. New York State Liquor Authority, 1986]
A Descriptive Overview of Islamic Taxation
Dr. Ali Reza Jalili, Stetson School of Business and Economics, Mercer University, Atlanta, GA
Currently, there are fifty-seven Muslim countries in the world covering one-fifth of the world's landmass, including some vital strategic areas, and a few other emerging Muslim countries are on the horizon. Additionally, Islam is prominent in several other countries, and Muslims constitute a sizeable minority in yet other societies. With a population of about 1.3 billion, Muslims account for one-fifth of the world's inhabitants. Given the prevalent high birthrate in Islamic societies, in twenty years one-third of the world's population is expected to be Muslim. Muslim countries today control more than seventy percent of the world's energy and account for 40% of the global exports of raw materials. Their economic and geopolitical relevance, both as suppliers of energy and raw materials and as vast, rich markets for various goods and services, is increasing rapidly. The Western interest in the Muslim world, thus, is not a coincidence. Since the Middle Ages, Muslims and Islamic countries have occupied an important position for Western governments and societies. The course of history and economic development in the twentieth century has substantially intensified this prominence. The history, politics, economics, sociology, behavioral patterns, and other social aspects of Islam and Muslim communities have been the subject of continuous interest and investigation in the West. Parallel to this interest, during the twentieth century there have been indigenous attempts to revisit and reinterpret Islamic thought and revive Islamic practices in Muslim societies. In the recent past, this movement has acquired appreciable momentum, as shown through political as well as ideological movements. Siddiqi (1981) lists 700 articles on Islamic economics alone, covering the period up to 1975. Few doctrines in the history of humanity have had as strong a hold on their adherents as does Islam. For devout Muslims, faith governs all aspects of their lives.
The Muslim holy book (Qur'aan) and Muhammad's traditions and actual practices (Sunnah) make up the Islamic Law (Sharia), which contains and covers all that is needed for a believer to be blessed and delivered, both in this world and in the Hereafter. As such, a policy maker in a Muslim society must be aware of these subtleties and consider them fully in devising and implementing socio-economic policies. The question is no less important for all other parties interested in dealing with Muslim communities on political, economic, and international affairs. Today, all Muslim countries are among the developing nations. The decision makers in these societies, along with international agencies and foreign governments, are faced with a crucial and special challenge: namely, how to take into consideration the underlying Islamic belief system in formulating, forging, and implementing appropriate international and developmental policies. Failure to adequately consider these aspects will render the governing bodies incapable of meeting the challenges and could very well backfire by inducing tardiness in accomplishing the desired tasks, undermining the policies and plans, or even provoking outright sabotage of the entire scheme. Episodes in Afghanistan, Algeria, Bahrain, Egypt, Iran, Iraq, Indonesia, Kuwait, Lebanon, Libya, Malaysia, Pakistan, Saudi Arabia, Sudan, Turkey, Qatar, U.A.E., and Yemen are a few of the more recent memorable instances from a very long roster. Taxes and tax systems are among the most potent policy tools at any policy maker's disposal. In many situations, tax policies can foster or hinder a plan in its entirety. Under Islamic jurisprudence, specific taxes and tax systems are prescribed. Accordingly, disregarding, degrading, or contradicting these prescriptions could prove catastrophic to developmental policies and policy-making institutions in Muslim societies.
Thus, familiarity with and understanding of the Islamic taxes, their underlying philosophy, their impacts, and the overall tax system are imperative both for decision makers in the Muslim world and for all outsiders who deal with these communities. To enhance the possibility of success, prior to devising or implementing any plan or policy, many questions must be answered. For instance, what are the Islamic taxes? What philosophy do they subscribe to? What aims and policies do they pursue? What outcomes do they seek? What are their impacts? What are their institutional requirements? Can they be integrated into modern tax policy and systems? Are they subject to change and revision? The current study is an attempt to address some of these and similar questions. The task will be carried out in two installments. Part one, the present paper, is a descriptive explanation of Islamic taxes. Accordingly, it lays the foundations and presents the origin, the structure, and the basis of Islamic taxes. The second installment of this work will discuss the methodology and philosophy of Islamic taxation and will engage in their critical analysis and assessment. Islamic thought encompasses several schools and interpretations; the Shiite school and the four Sunni schools (Malekite, Shafeite, Hanafite, and Hanbalite) are the most popular and predominant. In this work, although some of the major and material differences are mentioned, as a rule these differences and details are not discussed, and an exhaustive search of the literature or comparative study is not performed. Instead of concentrating on details, the aim is to capture and present the essence and spirit of Islamic taxation: to investigate, explain, and sketch the outline of this tax system and lay the foundation for understanding and evaluating its philosophy, methodology, institutions, and objectives.
To accomplish this task, only the main source (Qur'aan), along with a sample of secondary sources, is examined and cited in this work. The primary reason for this choice is that most of the secondary sources are either redundant or irrelevant. Many of these works are redundant, and as such, analyzing and citing a sample of works on each topic will convey the essence of the issue and provide adequate references. Some of the works are irrelevant because the authors are either truly confused or evasive: they either mix several issues and discuss them without a clear understanding of the categories they are discussing, or, instead of engaging in the debate about the real issue at hand, present their own unsubstantiated interpretations with very little connection to the original source. Therefore, to avoid unnecessary controversy, a detailed discussion and comparison of different readings of the original sources is deferred to a later work. The present paper, however, will provide all concerned parties with a point of entry to comprehend this aspect of Muslim societies and Islamic doctrine. This cognizance, in turn, should facilitate and expedite the processes of planning, analyzing, decision-making, implementing, and evaluating international and developmental tactics and strategies in or about the Islamic world. To comprehend the Islamic tax system, Islamic views on several relevant economic categories should be noted. Appreciation of these notions will help researchers grasp the Islamic tax system and its place within the overall Islamic doctrine. Chief among these concepts are ownership, concentration of wealth, justice, and appropriate consumption.
Exploring a Taxonomy of Global Leadership Competencies and Meta-competencies
Dr. Stewart L. Tubbs, Eastern Michigan University, MI
Dr. Eric Schulz, Eastern Michigan University, MI
There is a substantial body of research evidence regarding the importance of leadership development to organizational success (Charan, Drotter and Noel, 2001; Fulmer and Goldsmith, 2001; McCall and Hollenbeck, 2002; McCauley, Moxley and Van Velsor, 1998; Vicere and Fulmer, 1997; Whetton and Cameron, 2005). There is no more important task in leadership development than identifying the competencies and meta-competencies that comprise leadership. To date, however, there has been no agreement on just which Global Leadership Competencies should be taught and learned. In this paper, leadership is defined as "influencing others to accomplish organizational goals" (Tubbs, 2005). Based on the model presented in this paper, the rationale is advanced that some aspects of leadership are more or less fixed at a young age while others can be developed even well into adult life (i.e., the Global Leadership Competencies). This paper describes the model and identifies fifty Global Leadership Competencies in the form of a taxonomy of Global Leadership Competencies and Meta-competencies. Most importantly, leadership development efforts must be targeted at the outermost circle in the model.

Approximately $50 billion a year is spent on leadership development (Raelin, 2004). Yet two of the questions most frequently asked of leadership scholars are (1) what competencies and meta-competencies comprise leadership, and (2) can leadership, in fact, be taught and learned? This paper attempts to answer both questions. Some aspects of leadership are more likely to be learnable and others less so. For the purposes of this paper, leadership is defined as "influencing others to accomplish organizational goals" (Tubbs, 2005). Leadership is often discussed in terms of competencies (Boyatsis, 1982; Bueno and Tubbs, 2004; Chin, Gu and Tubbs, 2001; Goleman, Boyatsis and McKee, 2002; Whetton and Cameron, 2005).
Competency is a term that describes the characteristics that lead to success on a job or at a task (Boyatsis, 1982). Competencies can be described by the acronym KSA: knowledge, skills and abilities. The model in Appendix A shows that leadership competencies can be represented by three concentric circles, which describe three distinct aspects of leadership. The innermost circle includes an individual's core personality. The second circle includes an individual's values. The outermost circle represents an individual's leadership behaviors and skills (i.e., meta-competencies). The authors contend (1) that the attributes in the innermost circle are more or less fixed at a young age and are unlikely to be changed as a result of leadership development efforts; (2) that a person's values are somewhat more malleable than personality characteristics, yet more stable and perhaps more resistant to change than behaviors; and (3) that the behaviors represented in the outermost circle are the most likely to be changed through leadership development efforts. Each of these circles is discussed below. Personality represents the accumulation of enduring physical and mental attributes that provide an individual with his or her identity. These attributes result from the interaction of heredity and environmental factors. Determinants of personality can be grouped into four broad categories: hereditary, cultural, familial and social interactions. Each of these perspectives suggests that an individual's personality is a relatively enduring characteristic formed early in life. Genetic specialists argue that components of an individual's personality are in large part hereditary (Holden, 1988). Personality is also affected by an individual's culture, because culture directs what an individual will learn and formats the context in which behavior is interpreted (Hofstede, 1984).
While culture dictates and restricts what can be taught, a person's family plays a key role in the formation of that person's personality. The overall social context created by parents is vital to personality development (Levinson, 1978). Besides family influences, social interactions in the environment affect personality by dictating what is acceptable and customary in the social group. A leader's self-concept represents the centerpiece of that leader's conscious existence. Self-concept refers to a leader's perception of himself or herself as a physical, social and moral person. A leader's self-concept is shaped by self-esteem, self-efficacy and cognitive thought processes (Brief and Aldag, 1981). Self-esteem is shaped by an assessment of one's overall self-worth. Self-efficacy is represented by one's faith in one's ability to perform a particular activity. Finally, cognition concerns one's knowledge, opinions or beliefs (Sullivan, 1989). Personality researchers have identified three enduring characteristics of individuals across time. These characteristics can be categorized by the prevalence of dominant personality dimensions, the attribution of events impacting the individual, and the preferred manner of resolving unmet needs. Personality research has increasingly identified five dominant personality dimensions, termed simply the Big Five: extroversion, agreeableness, conscientiousness, emotional stability, and openness to experience (Barrick and Mount, 1991). The following are characteristics of a person scoring high on each of the Big Five personality dimensions: Extraversion: outgoing, talkative, sociable, assertive; Agreeableness: trusting, good-natured, cooperative, soft-hearted; Conscientiousness: dependable, responsible, achievement-oriented, persistent; Emotional Stability: relaxed, secure, and unworried; Openness to Experience: intellectual, imaginative, curious, broad-minded.
Research from the Big Five personality literature indicates that these personality dimensions are stable features of an individual's character from early childhood. Further, the Big Five personality dimensions appear to be consistent across cultures. Cross-cultural personality research has found stable Big Five personality dimensions among individuals in such divergent nations as Russia, Canada, China, Poland, Germany, South Korea, and Finland (Blaylock and Rees, 1984). Repeatedly, conscientiousness has been found to be the Big Five personality dimension most related to job performance, including leadership and managerial behavior (Rice and Lindecamp, 1989). Successful entrepreneurs have been linked with the conscientiousness dimension of the Big Five. A high degree of openness to experience, extraversion and conscientiousness among entrepreneurs has been described as a proactive personality. Those termed proactive persons demonstrate a commitment to purpose and persistence at a task. Further, entrepreneurs with a proactive personality, as measured by the Big Five personality dimensions, were found to be action-oriented, less restricted by situational constraints and geared to alter conditions in their environment. The Big Five personality research indicates that individuals scoring high in these dimensions are naturally predisposed to behave in this manner (Ramsoomair, 1994).
The Web of Deception in Money Laundering and Transnational Crime: "A Double-Edged Sword of Power and Illusion"
Dr. Kathie Cooper, University of Wollongong, Australia
Dr. Hemant Deo, University of Wollongong, Australia
Money laundering has been a focus for both regulators and accountants in the 20th century. The methods adopted by the perpetrators of money laundering have become more sophisticated, providing a web of deception or illusion so that the regulatory authorities are confused and the paper trail goes undetected. The Foucauldian framework (Foucault, 1977; Foucault, 1984) employed in this paper addresses that deficiency by making explicit the power and knowledge disciplinary relationships inherent in the dynamics of any financial institution and the need to underpin the concept of money laundering through such a complex relationship. Money laundering has been around for as long as law breakers have needed to convert their ill-gotten gains to legitimate currency, although the term did not gain official recognition until the Watergate scandal in 1972. Perhaps it is folklore, but popular opinion (see, for example, Wells, 2003; Richards, 1999) is that the term originated with the efforts of organised crime to "wash" dirty money through the acquisition of a cash-intensive business, specifically a laundromat, so that the dirty money could be assimilated with the legitimate proceeds of business. Arguably, the need to legitimise illicit funds gained momentum after Al Capone was convicted of tax evasion in 1931. The days when money laundering was the province of organised crime, if it ever was, are well and truly over. Scandals such as BCCI and Enron demonstrate all too clearly that money laundering has become a pastime of the rich, famous and outwardly upright pillars of the community, including executives of prominent companies and banks, accountants and lawyers. As Mitchell et al. (1998) have noted, "money laundering is increasingly undertaken by organised groups, corporations and elite occupations".
Money laundering has been facilitated by technological innovations and contemporary business practices, including shell, shelf and nominee corporations, bank confidentiality and secrecy policies, guarantee and buy-back arrangements and back-to-back financing arrangements, as well as bribery, corruption, witness intimidation and insider information. The paper applies a theoretical Foucauldian framework to the issue of money laundering to better understand the power and knowledge interplays within such a complex process. The paper is divided into a number of sections: firstly, the money laundering process; followed by the Foucauldian theoretical framework; then some of the instruments used as deception tools in the money laundering process; then a section on how to combat this complex issue; and finally some future directions in this area. There are many definitions, but simply put, money laundering is the process of concealing the proceeds of crime, such as money or other property, so they can be used for legitimate business, further criminal activity or merely personal enjoyment.
The Australian Law Reform Commission Report 87, Proceeds of Crime, cites a number of definitions of money laundering, including that of the 1988 United Nations Convention against Illicit Traffic in Narcotic Drugs and Psychotropic Substances: “the conversion or transfer of property, knowing that such property is derived from any indictable offence or offences, for the purpose of concealing or disguising the illicit origin of the property or of assisting any person, who is involved in the commission of such an offence or offences to evade the legal consequences of his or her actions or the concealment or disguise of the true nature, source, location, disposition, movement, rights with respect to, or ownership of property, knowing that such property is derived from an indictable offence or offences or from an act of participation in such an offence or offences” (paragraph 7.4). As Report 87 points out, this definition is the basis of most Australian legislation and regulations dealing with money laundering. Most commonly, but not exclusively, the origin of money that needed to be laundered was drug trafficking; but as the world has become globalised and information technology increasingly sophisticated, money laundering has become linked to activities including people, arms and cigarette smuggling, insider trading, bribery and corruption, embezzlement, prostitution, computer and other forms of wire fraud, and tax evasion. The origins of the term ‘money laundering’ are blurred, but the link with Al Capone and other mafia-type crime lords makes a good story. Essentially, crime organisations purchased cash-intensive entities such as laundromats and mingled illicit cash from narcotics running, prostitution and bootlegging with legitimate business income. The process is perhaps linked to Al Capone because he was convicted and jailed for failing to declare his illegal income for tax purposes.
A recent case in Australia saw a former thief able to claim as a tax deduction the value of stolen money that was in turn stolen from him. The Court’s logic was that if you are compelled to declare and pay tax on income from illegal sources, you can also claim the loss of illegal income as a tax deduction. The laws on the anonymity of sources of income for tax purposes have changed, but the moral of the Capone case and the recent Australian case appears to be that it is better to pay tax on your ill-gotten gains than to risk going to jail for tax evasion. Of course, if you can launder the proceeds of criminal activity to one of the world’s many tax havens, crime may well pay very nicely. Money laundering has also been facilitated by the emergence of global markets and information technology, making money a commodity that is geographically mobile through global stock markets, money and futures markets and interest rates. The use of a Foucauldian theoretical framework focusing on the social dimensions that Foucault identifies as significant, for example discipline, power and knowledge, is appropriate to an analysis of money laundering. The aspects of power, knowledge and surveillance provide an insight into the complicated nature of the money laundering problem faced in the world today. Viewed from a Foucauldian perspective, money laundering invites observations and reflections on the dynamics and power structures inherent within a Financial Institution (FI), its social setting and other institutional factors. Michel Foucault’s writings deal with issues of social behaviour in areas such as psychology, criminology, mental illness and medicine. Within a case study setting, he interpreted these human sciences through a lens whose focus was the conjoined nature of power and knowledge (Foucault, 1977). Intertwined with this focus, Foucault provided valuable insights into the historical development of social order.
Through a process of “archaeology” he unearthed a sequence of historical events (Foucault, 1977), and focused in more detail on these by studying dramatic changes within that sequence, which he termed “genealogy” (Foucault, 1972, 1984). He further subdivided this genealogical process into “discursive formations” (Foucault, 1972, 1984), which provided the concepts of “disciplinary surveillance” and “disciplinary gaze”. These were given tangible representation by his “panopticon eye”, the instrument of ultimate surveillance.
Teaching the Job Stress Audit to Business School Students: Causes, Measurement, Reduction
Dr. Gene Milbourn, Jr., University of Baltimore, Maryland
This paper provides an outline for structuring a consulting project for business school students on the topic of employee stress measurement and reduction. It suggests a step-by-step program to lower high stress levels in an organization. Specifically, the paper assists students in (1) selecting an appropriate stress measurement instrument; (2) identifying the organizational causes of the two major types of job stress, job ambiguity and job conflict; and (3) using the Rizzo, House, and Lirtzman stress model to formulate a practical approach to reducing high levels of job stress. The work of Harvard's Herbert Benson is introduced as a remedy for the Type A personality. While not intended as a literature review, some research is reviewed as appropriate for pedagogical purposes. We all know people employed by small and large organizations who have the habit of pitting themselves against the clock. These people are usually ambitious, competitive, aggressive, and highly success-oriented. They are likely to be individuals who try to control as many situations as possible and may succeed in all endeavors, even in a casual game of cards. Such people believe they can control almost everything and make anything happen that they wish. When problems or threats to their control emerge, they respond with even greater intensity and often become more aggressive, frustrated, and stressed when their efforts to prevail fail. Even when they succeed, they wish they had accomplished it faster or more effectively. While some personality factors, like those above, are linked with stress, some characteristics of organizations and jobs are as much to blame or more so. Work overload and work underload are, for many people, sources of endless frustration. Non-participation in company affairs, as well as not feeling secure, are other causes of stress in organizations (see Robbins, 2005 for a review; also Parker and DeCotiis, 1983).
While there are many psychoanalytic explanations for high stress levels, this article deals mainly with the causes of stress that are controllable by managers. These causes fall into two broad categories and are more pronounced in small organizations, since these are typically managed by "non-professional" managers. The first is inadequate formalization practices, or, as they are sometimes called, "principles of organization." The second root cause of high stress levels is the absence of supportive leadership practices (House, 1970; Cavannaugh et al., 2000; Cummings, 1990). When both of these practices are perceived by workers as inadequate or unsatisfactory, the workers will be troubled by ambiguity surrounding their responsibilities and troubled again by conflict in trying to carry out those responsibilities. Furthermore, since managers are expected to manage themselves as well as others, this article also discusses two ways of remedying the "Type A" stress-prone personality described at the outset of this paper. Job stress takes basically two forms: job ambiguity and job conflict. Job ambiguity refers to the lack of clarity surrounding a person's job authority, responsibility, task demands, and work methods. If a job is ambiguous, the worker has unclear work goals, procedures, and responsibilities and may be uncertain about his or her authority. The person suffering from job ambiguity simply does not know what is expected in terms of job performance. Students may experience job ambiguity when they do not know how to study for a test or what to study. Job conflict refers to the degree of incompatibility of expectations felt by a person on the job. In common-sense terms, the person is caught in a decision quandary. A worker experiences job conflict when the worker must choose to do one thing over another and feels uneasy.
For example, some staff employees may report both to the plant manager and to a staff manager with functional authority, each of whom wants the employee to follow specific orders. They may ask the employee to do two different things within the same time frame. The worker is then in a dilemma about whose orders to follow. Another conflict arises when people are asked to perform duties for which they were not hired. Yet another example of conflict occurs when an individual is ordered to perform duties which, to him or her, are unethical. If the duties are actually performed, the person may suffer from guilt and depression at having sacrificed strongly held principles. On the other hand, if the employee elects to disobey the supervisor, he or she may be subjected to some form of reprimand. Clear lines of authority, clearly defined jobs, and participative goal setting will ensure that workers understand their own goals as well as those of the department and of the company. When there are clearly defined company, departmental, and personal goals, people will know what is expected of them in terms of duties, authority, and responsibility. Such human problems as aimlessness and anxiety will not occur when there is effective goal setting within a clear chain of command. The organizational principle prescribing that each worker should have only one supervisor prevents a situation where a worker must try to fulfill the expectations of two supervisors. The organizational principles of "responsibility equaling authority" and having "authority delegated as far down the line as possible" ensure that people feel important and significant to the firm while guaranteeing that employees are able to carry out their responsibilities without interference, such as that involving overlapping authority.
Many types of frustration and conflicts result from being delegated authority for the success of an activity and then having it undermined by the very person who delegated the authority in the first place. Employees must believe that delegated authority is authentic and genuine if full commitment is expected from the subordinate.
A Legal Perspective on Outsourcing and Offshoring
Dr. Sam Ramanujan, Central Missouri State University, MO
Sandhya Jane, Central Missouri State University, MO
This article identifies the legal issues and controls involved in contracting a project and their impact on conducting global business. It describes the wide variety of legal risks and their implications for different types of contracts. Managers may use this paper as a framework for evaluating an IT outsourcing/offshoring decision. Outsourcing and offshoring are complex business strategies meant to enhance a company’s profitability by improving operating efficiency and allowing management to focus on core business activities (Slaughter and Soon, 1996). Successful outsourcing results when the objectives and driving force are clearly spelled out (Goo, Kishore and Rao, 2000). Companies need to comprehend major issues such as compliance risks and the potential legal and financial problems that can arise from a lack of data security, privacy, intellectual property rights and executive accountability. In this paper we study and analyze the legal issues in outsourcing, such as information security, privacy, intellectual property, copyright, patent, trade secret and other regulatory compliance, and their implications for business. To study these legal issues, we have categorized outsourcing and offshoring into four types based on the nature of the contract: 1. Outsourcing: a company contracting part or the whole of a project to a vendor based in the same country. For example, Company A (a manufacturing company) contracts its information systems project to Company B (a software development company) in the USA. 2. Offshoring: setting up a company’s existing business function or division in a foreign country. For example, Company B decides to set up its own software development division in a foreign country to take advantage of a competitive market. 3. Outsource-offshoring: the outsourcing vendor goes offshore, contracting part or the whole of the project to a third-party vendor situated in another country. 4.
Offshore-outsourcing: a company contracts part or the whole of a project to a vendor based in another country. Outsourcing is not a new phenomenon. During the 1960s and 1970s, outsourcing made its way into finance and operational support through time-sharing and processing services, owing to the unaffordability of computers and the scarcity of skilled IT personnel. When vertical integration became prominent in the 1980s, the outsourcing of programming applications lost steam; IT was considered a valued in-house function. Organizations generally operated their information systems environment on a custom basis, developing IT infrastructure unique to each organization. In the 1990s, as the market matured, most companies were routinely outsourcing their information systems functions as well as other functions like finance and taxation, business process units, call centers and other important functions which had been considered taboo in the initial stages. Today, many corporate giants like P&G, IBM, and Intel are looking for total solution providers such as Application Service Providers (ASP), Business Process Outsourcing (BPO) and e-business hosting, and their outsourcing deals are remarkably innovative. These deals are extremely complex in nature and rest on the service provider’s capability to develop, implement and maintain large projects by mobilizing resources in terms of software, hardware and human resources (Gurbaxani, 1996). Regardless of its popularity, no research has determined the exact recipe for effective outsourcing performance. Prior research has revealed contradictory results about the performance of outsourcing in terms of efficiency, service quality and overall business satisfaction. The performance measures in this respect were vague, as they were assessed on a per-project basis.
The study conducted by Jae-Nam et al. further reveals that in some cases the decisions were a trade-off between many contingent factors (Foley, 2003; Jae-Nam et al., 2003). Based on prior research and legal precedent, we have created a framework to help managers make outsourcing or offshoring decisions. As offshoring or outsourcing to affiliated or non-affiliated entities in foreign countries evolves from low-value, low-exposure projects to increasingly complex projects involving core competencies and intangible assets, it also introduces problems such as differences in culture, language, and technical infrastructure, but more importantly legal and compliance issues. What happens if a direct or indirect threat to these assets occurs? How can these assets be protected, and how can a company prosper from creating a collaborative relationship with its service provider? What are the core issues, and their legal implications, that a company needs to understand before it considers any of these options for contracting a project? For a better understanding, we have classified these issues into tangible and intangible issues. Tangible issues are issues like pricing, quality, schedule and other monetary matters, issues that can be measured in terms of quality and quantity. These issues have a direct effect on the entire project. In this section we highlight the tangible issues that will impact the outsourcing decision. Pricing and other business benefits: The project price is at the core of any contract; a carefully thought-through contract will address the financial terms and deal with some of the uncertainties. This is as relevant to "fixed price" contracts as to those that anticipate price fluctuation. Liquidated damages, the impact of delays, the effect of inflation, manpower and material shortages, insolvency and other issues must be addressed. The use of a sensible change control procedure can help anticipate and avert major financial fluctuations and problems.
Performance and quality: IT projects are process driven and require close monitoring and measurement to create a fruitful and long-standing relationship between client and vendor. CMM, ISO and Six Sigma are not merely optional in a contract. To achieve effective co-ordination and understanding between the offshore and onsite teams, quality training must be made mandatory. Problems may occur if a contract is inflexible when changes arise for unforeseeable reasons. It is also important that the client and the IT service provider anticipate these problems in advance, to avoid a mismatch between the initial goals of the project and its expected results (Hayes, 2003).
Strategic Offshoring from a Decomposed COO’s Perspective: A Cross-Regional Study of Four Product Categories
Kien-Quoc Van Pham, Pacific Lutheran University, Tacoma, WA
Decomposing country-of-origin (COO) effects for four dissimilar product exemplars originating from 18 countries (6 from each Triad area) elicited statistically significant differences in consumer preferences for specific country(ies) of origin for hybrid products. These COO dimensional preference findings, from a survey of 170 non-traditional students and management seminar attendees representing 30 countries, reaffirm that management also needs to consider and monitor COO consumer preferences and country stereotyping effects (CSE) beyond the cost benefits normally associated with strategic offshoring and global outsourcing/supply chain management practices. Country of assembly (COA)/country of manufacture (COM) emerged as the most important COO dimension overall. Global benchmark countries are identified for each product class operational dimension, and optimal pairings of regional trade areas in terms of COO dimensional preferences are suggested for global consumer market segmentation. While the potential effects of country of origin (COO), or “Product Country Image” (as a single cue or as multiple cues), on consumer product selection, imputed quality perception and purchase intention have been extensively documented, the proliferation of hybrid products with multiple country affiliations (country of assembly, COA, or country of manufacture, COM; country of design, COD; country of brand, COB; and country of parts/components, COP/COC) and the upward trends in corporate global offshoring and outsourcing warrant sustained investigation of consumer COO country-specific product dimensional preferences. Recent publications have focused on these multi-dimensional aspects of COO (Chao, 1993, 2001; Tse & Lee, 1993; Li et al, 2000; Insch & McBride, 1998, 2004; Chao et al, 2005), especially on branding (COB), with national and bi-national studies for a single product or a limited array of products.
However, few have addressed this phenomenon across multiple countries or from a regional level (Papadopoulos & Heslop, 2002; Balestrini et al, 2003; Cervino et al, 2005). This exploratory study provides a regional trade areas perspective of consumer product valuation in terms of Roth and Romeo’s (1992) COO operational product dimensions (prestige, design, innovation, and workmanship), and the identification of specific country-product-dimension preferences for four product categories that would allow for: 1) an optimal hybrid product dimensional mix, and 2) marketing “economies of scope” via either standardization across regional trade areas or the necessary customization (differentiation) of product offerings for specific regional trade areas. Beginning with Schooler’s (1965) seminal study, evidence to date supports the existence of COO effects as a product informational extrinsic cue (Liefeld, 1993), either as a single cue, uni-dimensional construct (Bilkey & Ness, 1982; Han, 1989; Ozsomer & Cavusgil, 1991) according to the Halo model or as multiple cues (Han & Terpstra, 1988; Cordell, 1992) with the Attribute Model. Consumers express preferences for products from some countries over those of other countries (Gaedeke, 1973; Bannister & Saunders, 1978; Erickson et al, 1984; Eroglu & Machleit, 1989; Cordell, 1992; Amine & Shin, 2002). Cordell (1992) made the argument that when deciding on overseas production locations, management must take into account not only resources and cost benefits, but also the effect that country of origin may have on consumer evaluations. Examining COO perceptions of fourteen countries and eight products, using perceived quality and choice measures, Cordell found that consumer preferences are more product-specific for industrialized countries relative to less developed countries. In addition, hypotheses that performance risk and brand moderate COO effects were upheld under most conditions. 
Roth and Romeo (1992), while evaluating COO effects in terms of the fit between countries and product categories, proposed a framework which allows the importance of four product dimensions (prestige, design, innovation and workmanship) to be matched with COO perceived image along the same dimensions. Matches can be either favorable or unfavorable, and management can use product-country match information to assess consumers’ purchase intentions and to assist in managing a product’s COO. For all respondents, the correlation between country image and willingness to buy is positive and highly significant, and differences in country familiarity do not appear to affect consumers’ use of image dimension(s) when assessing their willingness to buy. While numerous studies have documented the negative stereotype associated with products made in developing countries (Gaedeke, 1973; Lillis & Narayana, 1974; Banister & Saunders, 1978; Mohamad et al, 2000), a critical question arises as to how consumers may evaluate products with mixed COO dimensions, and more critically, how consumers may evaluate a product indicated as being designed in a more economically advanced country enjoying a strong worldwide reputation, since consumers may rely on this information and impute quality. Chao (1993) addressed this hybrid product issue: he detected no significant country of design (COD) by country of assembly (COA) interaction effect, suggesting that poor perceptions of product quality associated with a particular country of assembly location cannot be compensated for by having the product designed in a country with a positive design stereotype. He concluded that there is no advantage in using a country with superior perceived design capability to boost product quality perception if the product is assembled in an already poorly perceived country.
Alternative Options for Business Decisions Using Nearly Optimal Programming
Dr. Alan Olinsky, Bryant University, Smithfield, RI
Dr. John Quinn, Bryant University, Smithfield, RI
Linear Programming is a quantitative method for finding the optimal solution when there are limitations on resources. This technique is used extensively in a variety of areas, including the field of business decision making. One shortcoming of mathematical modeling in general, and linear programming in particular, is that these models can only represent approximations of the actual constraints in the system. Therefore, the optimal solution might not be the best course of action for a company, since there might be unquantifiable restrictions that could not be represented in the model. This paper illustrates a technique called Nearly Optimal Programming that generates multiple solutions very close to the optimal objective value. This allows the decision maker to choose from a variety of solutions, or even to combine different solutions to produce another solution with the desired characteristics. The example, for a multinational company, is taken from the literature. Linear Programming (LP) is one of the most popular methods for quantitative analysis and is used extensively in business studies. For example, “LP has been used to solve optimization problems in industries as diverse as banking, education, forestry, petroleum, and trucking. In a survey of Fortune 500 firms, 85% of the respondents said they had used linear programming.” (Winston 2004, pg. 49). The principal goal of an LP model is to find the best decision for a given problem based on the availability of limited resources, as modeled with a linear objective function and a set of linear constraint inequalities. In this standard model, the objective function is to be maximized and there are n decision variables (xj) and objective coefficients (cj), and m functional constraints and resource availability levels (bi) with m×n technology coefficients (aij). Finally, there are nonnegativity constraints for the decision variables.
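The standard maximization model described above can be written out as:

```latex
\begin{align*}
\max \quad & z = \sum_{j=1}^{n} c_j x_j \\
\text{s.t.} \quad & \sum_{j=1}^{n} a_{ij} x_j \le b_i, \qquad i = 1, \dots, m \\
& x_j \ge 0, \qquad j = 1, \dots, n
\end{align*}
```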
However, the optimal solution (the values of the decision variables at the best objective value z*) might not always be the most attractive one, since the LP model is, at best, only an approximation of the actual business conditions. Some of the restrictions could be difficult, if not impossible, to model. Therefore, important constraints for the given business situation might be left out of the mathematical model. As stated in the classic Operations Research text by Hillier and Lieberman (2004, pg. 15): A common theme in OR is the search for an optimal, or best, solution. Indeed, many procedures have been developed … for finding such solution for certain kinds of problems. However, it needs to be recognized that these solutions are optimal only with respect to the model being used. Since the model necessarily is an idealized rather than an exact representation of the real problem, there cannot be any utopian guarantee that the optimal solution for the model will prove to be the best possible solution that could have been implemented for the real problem. There are just too many imponderables and uncertainties associated with real problems. Similarly, Makowski et al. (2000, pg. 66) state: As noted by Brill (1979), a linear programming model may not take into account all the objectives and all the constraints that are important for the stakeholder. Many issues cannot be quantified satisfactorily and the calculated optimal solution x* is not necessarily the best solution in the real world. Better solutions may be found in the set of nearly optimal solutions … The solutions … are all good in terms of objective function value but can differ considerably in terms of decision variable values. More references to NOP can be found in Kennedy and Quinn (1998). To deal with this limitation of mathematical modeling, most LP software programs perform some type of sensitivity analysis.
For example, many codes will calculate tables of objective coefficient ranges and right-hand side ranges, which provide information about how the optimal objective value will change with changes to the objective coefficients and right-hand side values, respectively, while the basic variables remain the same. Beyond this though, there are many approaches for investigating changes in solutions with modifications to the model. A summary of a few of these methods can be found in Chuang and Munro (1983). One of these methods is Proximate Linear Programming, where the right-hand side resources are only known with a certain amount of accuracy. A modification to this b-vector can be made where there are essentially two sets of constraints (with the right-hand sides being different) that are utilized to solve the problem (Gould 1972). Another of these methods is called Inexact Programming where the coefficients in the activity matrix A and the objective coefficient c-vector are not known exactly, but only within some ranges. A deterministic model can be developed in which the activity coefficients and objective coefficients are assigned either their smallest or largest possible values, depending upon the direction of the inequalities and whether the objective function is being minimized or maximized (Soyster 1973 and 1974). In the technique called Chance-Constrained Programming (CCP), it is possible to stipulate constraints so that they are met with some probability level as opposed to being met with absolute certainty. In other words, even though a feasible solution can be found, there is still a small probability that the constraint was actually violated. The original, seminal work for CCP was done by Charnes et al. (1958 and 1959). In Fuzzy Linear Programming, linear membership functions can be introduced in the LP model for the objective function and constraints, which allows for flexibility in solving the problem (Zimmerman, 1976). 
There are many other methods that can be utilized to treat uncertainty in the model, including variations and combinations of these approaches. The approach taken here is one that is very easily implemented.
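The idea is simple enough to sketch in a few lines of code. The two-variable LP below is an invented toy example, not the multinational-company model from the literature, and the 10% tolerance and the brute-force vertex-enumeration solver are likewise assumptions chosen to keep the sketch self-contained:

```python
# Toy illustration of Nearly Optimal Programming (NOP).
# The LP data here are invented for illustration only.
from itertools import combinations

# Maximize z = 3*x1 + 2*x2
# subject to  x1 +   x2 <= 4
#             x1 + 3*x2 <= 6
#             x1, x2    >= 0
obj = (3.0, 2.0)
# Constraints as ((a1, a2), b), meaning a1*x1 + a2*x2 <= b
# (nonnegativity is written in the same <= form).
cons = [((1.0, 1.0), 4.0), ((1.0, 3.0), 6.0),
        ((-1.0, 0.0), 0.0), ((0.0, -1.0), 0.0)]

def vertices(constraints):
    """Enumerate feasible corner points by intersecting constraint pairs."""
    pts = []
    for ((a1, a2), b1), ((c1, c2), b2) in combinations(constraints, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:
            continue  # parallel boundaries: no unique intersection
        x = (b1 * c2 - a2 * b2) / det
        y = (a1 * b2 - b1 * c1) / det
        if all(a * x + b * y <= rhs + 1e-9 for (a, b), rhs in constraints):
            pts.append((x, y))
    return pts

# Step 1: solve the LP -- the optimum lies at a vertex of the feasible region.
feas = vertices(cons)
z_star = max(obj[0] * x + obj[1] * y for x, y in feas)

# Step 2 (NOP): keep every solution whose objective value is within 10%
# of z*. These vertices are all "good" in the objective but can differ
# considerably in the decision variables, giving the decision maker a
# menu of alternatives.
tol = 0.10
near_opt = [(x, y) for x, y in feas
            if obj[0] * x + obj[1] * y >= (1 - tol) * z_star]
```

In this toy problem the near-optimal set contains the vertices (4, 0) and (3, 1): their objective values (12 and 11) are close, yet the decision variables differ considerably, which is exactly the kind of choice NOP offers the decision maker.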
E-Mails in the Workplace: The Electronic Equivalent of ‘DNA’ Evidence
Dr. Nadeem M. Firoz, Montclair State University, Upper Montclair, NJ
Dr. Ramin Taghi, William Paterson University, NJ
Jitka Souckova, Montclair State University, Upper Montclair, NJ
Recent technological advances have dramatically transformed the working environment. Written communication, unlike the telephone communication of the past, now leaves documentary evidence. With the growing use of e-mail, the battle between security and privacy has been heating up. Companies are under increasing pressure to monitor employees’ electronic activities, and workers should assume that their every keystroke is being watched. The extensive use of the Internet has changed the way business is done in the typical workplace. A written message to almost anyone in the world can be delivered nearly instantly through e-mail (Postini, 2004). Information for daily job tasks can be retrieved in seconds from the Internet. While these advances have aided productivity and business growth, they have also created new concerns over corporate security efforts and the privacy rights of employees. Workplace privacy is no longer just about the results of drug tests or questions about sexual orientation, though these are still areas of concern. E-mail habits and Internet surfing now dominate the privacy issue. Information technology is an integral part of the infrastructure of today’s business. Employees in every department require a computer terminal and an Internet connection to do their jobs effectively. Even employees who perform their jobs in the ‘field’ are required to carry a laptop, PDA, or other device that can electronically transmit information. This access to the world has introduced a number of new security-related issues to the workforce. One of these issues involves “the company’s right to maintain control over IT assets which provide employees with an easy way to silently perform personal activities.
Employee monitoring is a very controversial topic, ranging from monitoring web access and keystrokes to installing biometric devices to monitor physical location and door entries" (Bockman, 2004). Based on the large number of monitoring tools on the market, employers are certainly monitoring employee Internet activity. Employee monitoring occurs more frequently now than in the past. Many employees have no idea they are being monitored, and if they find out that they are, they consider the monitoring a violation of their privacy. When employees' e-mail and computer files are being monitored by their employer, both sides should know their rights and be aware of their legal positions. The employer should be familiar with the right to prosecute or dismiss an employee based on monitoring results, and the employee should know how to protect himself or herself in such a situation. It is also important to note that current legal rights concerning the monitoring and privacy issue have some "gray areas and vary depending on the court and the interpretation of the laws" (Muhl, 2003). As a result of this relatively new electronic security problem, many lawsuits and legal cases have developed from employees believing they have a right to privacy while using company electronic data transfers and the Internet. Knowledge of current laws, and of how they have been interpreted to protect the employer and company assets, will help in understanding an individual's right to privacy in the workplace. There are two main considerations in workplace monitoring. First, businesses need to protect their systems and their business interests. Second, these protection measures affect the rights of employees, and indeed their rights as citizens. 
(Lundy, 2003) The term "monitoring" includes activities in which electronic monitoring of employees occurs on a continuous basis rather than periodically or randomly; the term also covers the periodic inspection of continuous video monitoring by off-site enforcement personnel, as well as electronic identifiers or access systems such as electronic card or badge access systems. Employers monitor through sophisticated computer programs that automatically apply complex linguistic analysis to every outgoing and incoming message in the workplace. There are many different types of surveillance software for companies to choose from. Employers can use computer software that enables them to see what is on the screen or stored on employees' computer terminals and hard disks. Employers can monitor Internet usage such as web surfing and electronic mail. Employees in intensive word-processing and data entry jobs may be subject to keystroke monitoring. Such systems tell the manager how many keystrokes per hour each employee is performing, and may also inform employees whether they are above or below the standard number of keystrokes expected. Another computer monitoring technique allows employers to keep track of the amount of time an employee spends away from the computer, or idle time at the terminal (PrivacyRights.com, 2003). Several tools exist to monitor employee activities, from simple operating system logs to complex multi-user monitoring software packages. For example, software such as SurfWatch and LittleBrother allows employers to track virtually every move made by a worker using the Internet, including the specific sites visited and the time spent surfing. These products provide employers with the information they need to get an idea of how employees spend their time at work. Many of them also give the employer a mechanism for preventing users from accessing websites that the company has prohibited. 
According to an American Management Association survey, "51% of the employers use software to monitor incoming email, 39% have software to monitor outgoing email and 19% monitor the email being sent from employee to employee" (AMA, 2003). Just a quick search on the Internet for employee monitoring tools shows that the market is flooded with products that perform some type of monitoring. Sales of employee-monitoring software are worth about $140 million a year, which works out to only about $5.25 per monitored employee per year (Schulman, 2001).
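As a rough back-of-the-envelope check (assuming both reported figures are accurate), the market-size and per-employee numbers cited above imply an estimate of how many employees are monitored:

```python
# Back-of-the-envelope check of the Schulman (2001) figures cited above:
# ~$140 million in annual monitoring-software sales, at ~$5.25 per
# monitored employee per year, implies the number of monitored employees.
annual_sales_usd = 140_000_000
cost_per_employee_usd = 5.25

implied_monitored_employees = annual_sales_usd / cost_per_employee_usd
print(f"{implied_monitored_employees:,.0f}")  # roughly 26.7 million employees
```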
Does Price Limit Spill the Stock Price Volatility of the Companies with Different Fundamental Value?
Chih-Hsiang Chang, National University of Kaohsiung, Kaohsiung, Taiwan
Some stock markets have employed a number of circuit breakers to avoid non-rational overreaction, and the price limit is one of them. While price limits are widely accepted instruments for the prevention of market crashes, the question of whether price limits reduce stock price volatility has long attracted research interest. The purpose of this study is to test the volatility spillover hypothesis by examining the Taiwan Stock Exchange price limit system. The main difference between this paper and the previous literature is that we explore the impact of fundamental value on price limit performance. Companies with distinct fundamental values react differently to the shocks of good news and bad news because fundamental value is a determinant of stock price. Therefore, this paper analyzes the effectiveness of price limits in preventing volatility spillover for companies with different fundamental values. Empirical results indicate that volatility spillover is more obvious after limits are hit for companies with better fundamental value than for those with worse fundamental value. In order to prevent irrational overreaction on the part of investors, many stock markets have adopted price limits as an instrument to stabilize trading activities. However, views on price limits are quite diverse, whether in academia or in industry. Studies that support price limits point out that they can decrease stock price volatility and prevent overreactions without impeding trading activity (Brennan, 1986; Lee and Kim, 1995; Westerhoff, 2003). Those in the opposing camp think that price limits increase stock price volatility, lower market liquidity and realization, and obstruct price equilibrium (Lehmann, 1989; Coursey and Dyl, 1990; Lee, Ready, and Seguin, 1994; Hung, Fu, and Ke, 2001; Kuo, Hsu, and Chiang, 2004; Diacogiannis, Patsalis, Tsangarakis, and Tsiritakis, 2005). 
Since the market crash of October 19, 1987 (Black Monday), regulatory agencies of capitalist markets have begun to take notice of the function of circuit breakers in preventing stock prices from fluctuating excessively. Circuit breakers consist of trading halts and price limits, with the latter most employed by emerging markets to avoid frantic trading behavior. Since the founding of the Taiwan Stock Exchange Corporation on February 9, 1962, the price limit has been adopted as a stock price stabilizing measure to protect investors from suffering great losses due to drastic fluctuations in stock prices. The range of the price limit can be adjusted according to domestic and foreign political and economic events and situations. In addition to the Taiwan Stock Exchange, Japan, South Korea, Thailand, Malaysia, Spain, Greece, and Finland are some of the countries that use price limits to stipulate the range within which share prices can rise or fall within a day. Although the price limit has been in practice for many years in Taiwan, its abolition is still a topic of debate among industry, government agencies, and academia. The current price limit on the Taiwan Stock Exchange is that the share price fluctuation in a day cannot exceed 7% above or below the previous day's closing price. That range is small compared to the limits set by the stock markets of the other countries previously mentioned. In addition, major stock markets in the world do not practice price limits. Hence, there exists a need to investigate again and discuss the need for the 7% price limit on the Taiwan Stock Exchange. The purpose of imposing the price limit is to allow investors another chance to re-evaluate a stock's fundamental value. Without a doubt, factors concerning fundamental value have pivotal effects on the effectiveness of the price limit. 
Also, factors that may affect a company's value (e.g., profitability, debt servicing capacity, size, and growth capacity) may have a differentiated impact on the effectiveness of the price limit. Research by Wang, Wu, Shih, and Kuo (2000) pointed out that "it also indicates that the impact of price limits on small firm is more than large firm is." The current price limit stipulation imposed by the Taiwan Stock Exchange applies across the board to all listed companies, within 7% above or below the previous day's closing price. However, for companies of different fundamental values, the degree to which the price of a stock deviates from its actual value and the speed of adjustment to the equilibrium price may differ. Therefore, one price limit cannot satisfy the fluctuation needs of shares of different fundamental values. Thus, share prices cannot fully reflect relevant information, causing investors losses. This study aims to investigate whether the price limit of the Taiwan Stock Exchange supports the volatility spillover hypothesis. It also analyzes the influence fundamental values have on the effect of the price limit. Compared to earlier literature, this paper has the following features. First, as a company's fundamental values are a main factor influencing share prices, the share prices of companies with different fundamental values will be impacted differently by good or bad news. Therefore, this paper further investigates the differences in the stabilizing effect of the price limit on shares of differing fundamental values. Secondly, we discuss the Taiwan stock market's upper and lower price limits and their effectiveness separately, comparing whether the upper price limit has the same impact on share price volatility as the lower price limit. The time frame of this study spans a period of ten years; therefore, compared to the related literature, this paper covers a longer research period. 
The adjustment of share opening prices on ex-rights and ex-dividend days and the tick size both affect the accuracy of research results. However, earlier literature that used Taiwan's stock market as a research subject did not explain the adjustment for ex-rights/dividends or the tick size, which may have compromised the accuracy of the results. During the analysis in this research, we specifically followed the regulations of the Taiwan Stock Exchange Corporation and made the necessary adjustments for shares that reached the price limits on ex-dividend days. We further designed control group samples according to the tick size of the Taiwan Stock Exchange to raise the accuracy of the results. This research uses Taiwan's stock market as an actual case to explore the effectiveness of the price limit. To increase accuracy, this research employs Microsoft's .NET programming framework to process the samples and to solve the problem of adjusting share prices for ex-rights/dividends. The period studied spans from October 11, 1989 to September 26, 1999. The reason for choosing this period is that the Taiwan stock market's price limit was maintained at 7% throughout, eliminating the difficulty of comparing across different price limits. We have also made adjustments to opening share prices on ex-rights/dividend days according to the regulations of the Taiwan Stock Exchange Corporation. In our study, 18,184 and 18,745 of the samples reached the upper limit and the lower limit, respectively. The source of the data is the archives of the Taiwan Economic Journal.
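The sample-construction step described above, flagging days on which a stock closed at its upper or lower limit, can be sketched as follows. This is a simplified illustration, not the authors' code: it assumes a single hypothetical tick size and a simple rounding convention, whereas the actual Taiwan Stock Exchange tick size varies by price band, and reference prices on ex-rights/ex-dividend days must be adjusted first.

```python
import math

LIMIT = 0.07  # TWSE daily price limit: +/-7% of the previous close

def _floor_tick(price: float, tick: float) -> float:
    # Round down to the nearest tick (small guard against float error).
    return round(math.floor(price / tick + 1e-6) * tick, 2)

def _ceil_tick(price: float, tick: float) -> float:
    # Round up to the nearest tick.
    return round(math.ceil(price / tick - 1e-6) * tick, 2)

def limit_prices(prev_close: float, tick: float = 0.1) -> tuple[float, float]:
    """Upper and lower limit prices implied by the previous close.

    The single `tick` argument is a simplifying assumption; the actual
    TWSE tick size depends on the price level.
    """
    upper = _floor_tick(prev_close * (1 + LIMIT), tick)  # cannot exceed +7%
    lower = _ceil_tick(prev_close * (1 - LIMIT), tick)   # cannot fall below -7%
    return upper, lower

def hit_limits(prev_close: float, close: float,
               tick: float = 0.1) -> tuple[bool, bool]:
    """Flags: (closed at the upper limit, closed at the lower limit)."""
    upper, lower = limit_prices(prev_close, tick)
    return close >= upper, close <= lower
```

For example, with a previous close of 100.0 and a 0.1 tick, the band is [93.0, 107.0], and a close of 107.0 is flagged as an upper-limit hit.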
Testing Wagner’s Law Using Bounds Test and a New Granger Non-Causality Test: Evidence for Taiwan
Dr. Chiung-Ju Huang, Feng Chia University, Taichung, Taiwan
This paper examines government expenditures in Taiwan over the fiscal years 1966 through 2002 in order to test whether they follow Wagner's Law. Because the sample size is small, more traditional cointegration techniques may be unreliable. Therefore, Pesaran et al.'s (2001) bounds test for cointegrating relationships is adopted. The results of the Pesaran bounds test indicate that no cointegrating relationship exists between government expenditures and output. Furthermore, results from the new Granger non-causality testing procedure developed by Toda and Yamamoto (1995) show that there is no causal relationship between government expenditures and output. Therefore, our empirical results do not support Wagner's Law for Taiwan. The source of public
expenditure growth has been an important issue in economic literature. A number
of attempts have been made to identify the principal causes of growth in the
public sector. Wagner’s Law is one of these theories that emphasize economic
growth as the fundamental determinant of public sector growth. Therefore,
several studies have been devoted to testing the validity of Wagner's Law, which
postulates the tendency that government activities increase with economic
expansion. Empirical tests of this law on various countries have yielded
significantly different results. Although several multi-country studies
conducted by Wagner and Weber (1977), Abisadeh and Gray (1985), and Chang (2002)
conclude that most countries show trends supporting Wagner's Law, studies
conducted by Ram (1986), Afxentiou and Serletis (1996), and Ansari et al. (1997)
find no strong evidence supporting Wagner’s Law. However, the validity of
Wagner’s Law is further supported by country-specific studies, such as studies
conducted for the United States by Ganti and Kalluri (1979), Yousefi and
Abizadeh (1992), and Islam (2001), studies on Pakistan by Khan (1990), studies
on the U.K. by Gyles (1990), and studies on Japan by Nomura (1995). However,
there is still dissent among recent research. For instance, studies on Mexico
conducted by Mann (1980), Nagarajan and Spears (1990), and Lin (1995), a study
on Greece conducted by Chletsos and Kollias (1997), and a study on Taiwan
conducted by Pluta (1979) have all obtained mixed results concerning the
validity of Wagner's Law. There are even studies that simply do not support Wagner's Law: for example, studies for Canada conducted by Singh and Sahni (1984) and by Afxentiou and Serletis (1991), a study for Sweden conducted by Henrekson (1993), and a study for Kuwait conducted by Burney (2002). In general, these studies do not support Wagner's Law for under-developed or developed countries. The focus of this study will be on Taiwan, one of the most recently industrialized countries in Asia. Taiwan is also the world's third largest holder of foreign exchange reserves after Japan and China. Although a vast amount of
research has been devoted to testing the validity of Wagner’s Law on countries
around the globe, such studies on Taiwan are exceedingly rare. Of the
previously mentioned studies, only the studies presented by Pluta (1979) and
Chang (2002) have been on Taiwan. Therefore, this study attempts to examine the
validity of Wagner's Law for Taiwan using more robust and recently developed estimation methods: the bounds test proposed by Pesaran et al. (2001) and the new Granger non-causality testing procedure developed by Toda and Yamamoto (1995). The remainder of this paper is
organized as follows: Section 2 presents the study’s adopted methodologies.
Section 3 describes the data used in this study and discusses the empirical
findings. Finally, conclusions are offered in Section 4. Empirically, Wagner’s
Law investigates the long-run relationships between government size (as
generally denoted by government expenditures) and economic growth (as
conventionally denoted by output). Since there are different measures of
government size and output, there are also different empirical versions of
Wagner’s Law. Emulating Mann’s (1980) study, this study employs six different
versions of Wagner’s Law. Furthermore, concerning the use of real or nominal
data, we followed Beck's (1976, 1979, 1981, 1982, 1985) studies, which emphasize the use of real (rather than nominal) government size and show that the real size of the government sector has risen less than the nominal size. Thus, to be
consistent with most of the empirical versions of Wagner’s Law, this study uses
real terms of government expenditures. The six different models of testing Wagner's Law are presented as follows:

(1) ln GE_t = α + β ln GDP_t
(2) ln GC_t = α + β ln GDP_t
(3) ln GE_t = α + β ln (GDP/N)_t
(4) ln (GE/GDP)_t = α + β ln (GDP/N)_t
(5) ln (GE/N)_t = α + β ln (GDP/N)_t
(6) ln (GE/GDP)_t = α + β ln GDP_t

where GE = real total government expenditures, GDP = real gross domestic product, GC = real government consumption expenditures, N = population, GDP/N = real GDP per capita, GE/GDP = real total government expenditures as a share of real GDP, and GE/N = real total government expenditures per capita. The
bounds test proposed by Pesaran et al. (2001) is employed in this study because
its approach has two main advantages over common cointegration analyses (Engle
and Granger, 1987; Johansen, 1988; Johansen and Juselius 1990). First, the
bounds test procedure can be applied irrespective of whether the explanatory
variables are I(0) or I(1). Second, the bounds testing can be used on small
finite samples (Mah, 2000), as is the case in this study. No cointegrating relationship can be established among variables that are I(1) when the sample size is small (Kremers et al., 1992), and the ECM and Johansen (1988) methods are not reliable for studies like ours with small sample sizes (Mah, 2000). Furthermore, the conventional ADF test (like many other unit
root tests) suffers from poor size and power properties especially in small
samples (Harris, 1995). Since this study has a small sample size (37
observations), the cointegrating relationships for our six Wagner’s Law models
are estimated using Pesaran et al.’s approach – the bounds test, which is based
on the following unrestricted error correction model (UECM):

Δln GE_t = α_0 + Σ_{i=1..p} α_i Δln GE_{t-i} + Σ_{i=1..p} β_i Δln GDP_{t-i} + δ_1 ln GE_{t-1} + δ_2 ln GDP_{t-1} + ε_t    (7)

In the equation above, Δln GE_t and Δln GDP_t are the first differences of the logarithms of government expenditures and output, respectively; p is the optimal lag length for the UECM; ε_t is a disturbance term assumed to be white noise and normally distributed. To investigate the presence of a long-run relationship, Pesaran et al. (2001) proposed the bounds test based on the Wald, or F-statistic. The asymptotic distribution of the F-statistic is non-standard under the null hypothesis of no cointegrating relationship between the examined variables, regardless of whether the underlying explanatory variables are purely I(0) or I(1). The test is conducted in the following way: the null hypothesis is tested by considering the UECM in equation (7) and excluding the lagged level variables ln GE_{t-1} and ln GDP_{t-1}. More formally, a joint significance test is performed, where the null hypothesis is

H_0: δ_1 = δ_2 = 0.
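The joint significance test just described can be sketched numerically. The code below is an illustrative outline, not the authors' implementation: it estimates a simplified UECM by OLS (p lagged differences of each variable, no contemporaneous term) on synthetic random-walk data, and computes the F-statistic for excluding the lagged level terms; the function name and the demo data are hypothetical.

```python
import numpy as np

def uecm_bounds_fstat(ge: np.ndarray, gdp: np.ndarray, p: int = 1) -> float:
    """F-statistic for jointly excluding the lagged levels ln GE(t-1) and
    ln GDP(t-1) from a UECM estimated by OLS."""
    lge, lgdp = np.log(ge), np.log(gdp)
    dge, dgdp = np.diff(lge), np.diff(lgdp)
    n = len(dge) - p
    y = dge[p:]                               # dependent variable: d ln GE(t)
    lagged = [dge[p - i:len(dge) - i] for i in range(1, p + 1)]
    lagged += [dgdp[p - i:len(dgdp) - i] for i in range(1, p + 1)]
    levels = [lge[p:-1], lgdp[p:-1]]          # ln GE(t-1), ln GDP(t-1)
    X_u = np.column_stack([np.ones(n), *lagged, *levels])  # unrestricted
    X_r = np.column_stack([np.ones(n), *lagged])           # restricted

    def rss(X: np.ndarray) -> float:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    q = 2                                     # restrictions: delta1 = delta2 = 0
    k_u = X_u.shape[1]
    return ((rss(X_r) - rss(X_u)) / q) / (rss(X_u) / (n - k_u))

# Demo on synthetic, independent random-walk series with 37 observations,
# matching the study's sample size (the data here are hypothetical):
rng = np.random.default_rng(0)
gdp = np.exp(10.0 + np.cumsum(rng.normal(0.01, 0.05, size=37)))
ge = np.exp(8.0 + np.cumsum(rng.normal(0.01, 0.05, size=37)))
f_stat = uecm_bounds_fstat(ge, gdp, p=1)
```

The computed statistic would then be compared with Pesaran et al.'s (2001) lower (all-I(0)) and upper (all-I(1)) critical value bounds; only a statistic above the upper bound rejects the null of no cointegration.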
A Comprehensive Study on Information Asymmetry Phenomenon of Agency Relationship in the Banking Industry
Gow-Liang Huang, National Kaohsiung First University of Science and Technology, Taiwan
Hsiu-Chen Chang, National Kaohsiung First University of Science and Technology & Yu Da College of Business, Taiwan
Chang-Hsi Yu, Yu Da College of Business, Taiwan
The objective of this study is to investigate the information asymmetry phenomenon of the agency relationship in the banking industry. Information asymmetry is one of the most important sources of agency cost, and efforts should be made to eliminate it. Although information asymmetry is pervasive in the banking industry, most studies focus on the deposit market or the loan market separately. The present authors argue that the banking industry should be viewed as a whole market with linked relationships, and that a systematic, integrative standpoint should be applied to explore the problems of the information asymmetry gap. The present study therefore proposes a conceptual framework of a bilateral agency relationship model for the banking industry, encompassing five information asymmetry gaps within the bilateral agency relationship. The implications of these findings are discussed in the paper. A healthy banking system need not worry about a run, but a banking system riddled with bad loans and layered capital, one that by international standards may already have gone bankrupt many times over, must face this risk. Resolving the bad loan problem is a premise of maintaining bank health; merely transferring bad loans to an asset management company is not enough, because bad loans continue to accumulate. Financial institution bankruptcies and financial crises are similar in this respect: an unhealthy bank will ultimately close down. Recent banking literature has focused increased attention on the costs and benefits of banking relationships. If a bank's market management is to be efficient, it should consider the interests of the bank's stakeholders and the effective application of funds. The stakeholders of banks include the depositors and the borrowers. Hence, two kinds of agency relationships arise: between the depositors and the banks, and between the banks and the borrowers. 
Within the first agency relationship, the depositors provide the financial source and the banks help to manage the funds effectively. In the second agency relationship, the banks provide the funds and the borrowers manage them, mainly by way of investment and other activities. Under perfect information, market forces would enforce "good" banking practice because profit-maximizing banks would choose strategies with zero probability of bankruptcy (Kareken and Wallace, 1978). In reality, however, the financial market contains information asymmetry, and therefore banks face some probability of bankruptcy. Based on the above, we find that information asymmetry is an important issue. However, according to previous literature, the great majority of research emphasizes a single side of the agency relationship; studies that take a systematic standpoint aimed at the bilateral agency relationship (depositors, banks, and borrowers) are still lacking. In addition, most research emphasizes a single industry only, and comprehensive studies covering domestic and all industries are rare. In fact, this is a dynamic, linked agency relationship. The present paper therefore proposes a bilateral-agency-relationship model for banks to deal with this issue, and will provide the results for the reference of experts and scholars in the related field. During the 1960s and early 1970s, economists explored risk sharing among individuals or groups (e.g., Wilson, 1968; Arrow, 1971). This literature described the risk-sharing problem as one that arises when cooperating parties have different attitudes toward risk. Agency theory broadened this risk-sharing literature to include the so-called agency problem that occurs when cooperating parties have different goals and a division of labor (Ross, 1973; Jensen and Meckling, 1976). 
Specifically, agency theory is directed at the ubiquitous agency relationship, in which one party (the principal) delegates work to another (the agent), who performs that work. Agency theory attempts to describe this relationship using the metaphor of a contract (Jensen and Meckling, 1976; Eisenhardt, 1989). Agency theory is concerned with resolving two problems that can occur in agency relationships: (1) the adverse selection problem, and (2) the moral hazard problem. The problem of adverse selection refers to the misrepresentation of ability by the agent (Eisenhardt, 1988, 1989) and occurs when the principal can observe the agent's behavior but is incapable of judging the optimality of that behavior (a monitoring problem) (Mitnick, 1987). In contrast, moral hazard refers to the lack of agent effort (Eisenhardt, 1988, 1989) and arises when the principal can judge the agent's optimal behavior but is unable to observe it (an incentive problem) (Mitnick, 1987; Kurland, 1991). The agency theoretic approach, found in the economics, accounting, and finance literature (Jensen and Meckling, 1976; Fama and Jensen, 1983; Mitnick, 1987; Eisenhardt, 1989), focuses on determining the optimal contract that governs the relationship between the principal and the agent. Agency theory is organized around the objective of efficiency and attempts to make the goals of the principal and of the agent congruent. The basic assumptions underlying agency theory are that people are self-interested, rational, and risk averse. The agency theorist examines the tradeoffs necessary to reach the efficient solution between the costs of monitoring the agent's behavior (to resolve adverse selection) and the optimal output (to resolve moral hazard) under conditions of information asymmetry (Kurland, 1991).
The controversies over bimetallism provide an analogy with the current debate over the adequacy of international reserves. 
Then silver's role as a monetary metal was downgraded to fiat token coins, in part because of the continual problems of keeping both monies in circulation when their rate of exchange was fixed by the government. Gresham's law is sometimes applied to the tendency of coins with high gold content to disappear from circulation if they circulate at the same time as coins which have the same monetary value and a lower gold content. If the prices of the two coins of differing fineness were not stabilized, the coin with the lower gold content would depreciate relative to the other coin. The disappearance from circulation of the coin with the higher gold content reflects a tendency on the part of individuals to hoard, perhaps because of anticipations that the commodity value of this coin might rise to exceed its nominal value (Aliber, 1967).
Using Importance-Performance Analysis in Evaluating Taiwan Medium and Long Distance National Highway Passenger Transportation Service Quality
Yuan-Chih Huang and Dr. Chih-Hung Wu, Takming College, Taiwan
Dr. Jovan Chia-Jung Hsu, Kun Shan University of Technology, Taiwan
This research discusses the relationships among customers' characteristics, customers' traveling characteristics, and service quality. Service quality includes two levels: first, the importance degree of service quality, namely customers' expected service quality; and second, the satisfaction degree of service quality, namely customers' perceived service quality. This research used purposive sampling, sending out 1,980 questionnaires and collecting 1,950, a return ratio of 98.4%. Cross-table analysis and tests of association between customers' characteristics (sex, age, profession, educational attainment, income) and customers' traveling characteristics (origin and destination, choice of passenger transportation company, travel purpose, monthly ride frequency, trip timetable selection) showed observable relationships for the majority (22 groups) of pairings. Lastly, the mean values of expected and perceived service quality were calculated for the 24 questionnaire items and analyzed using Importance-Performance Analysis. Seven items (emergency exit facilities, seat comfort, vehicle interior cleanliness, traveling route, traveling safety, traveling steadiness, and embarkation/disembarkation convenience) fell in quadrant 1 (the maintenance reinforcement area); five items (vehicle interior noise, vehicle washroom cleanliness, station waiting lounge cleanliness, ticket price structure, and the driver's driving habits) fell in quadrant 2 (the improvement reinforcement area); three items (air-conditioning effect, vehicle interior illumination, and ticket purchase convenience) fell in quadrant 4 (the over-emphasized area); and the remaining 9 items fell in quadrant 3 (the secondary improvement area). Based on these results, the companies involved are advised to select a suitable service strategy for each quadrant, and to consider shifting resources spent on quadrant 4 items to quadrant 2. 
Since 1978, when the entire route of the Taiwan National Expressway was opened to travel, only the Public Highway Bureau (predecessor of the Taiwan Car Company) managed national highway passenger transportation; Taiwan's public highway passenger transportation industry was a controlled industry, and thus market competition was limited. Lately, under the influence of deregulation, national highway passenger transportation routes have been opened and the previous monopolized market has been broken. At present, many national highway passenger transportation routes have been opened to multiple operators, with many routes served by several operators at the same time, and market competition is intensifying day by day. In addition, because of stiff market competition, service quality has become an important aspect of the operation management of vehicle passenger transportation companies seeking competitive superiority. In the past, research related to Taiwan public highway transportation either fell within the scope of mass transportation research and city public vehicles, or discussed demand forecasting, route selection, vehicle fleet planning, and performance assessment. Foreign research on the marketing aspect of public highway passenger transportation has been scarce; it has mainly concerned choice behavior, performance assessment, and regulation at the overall operation management level (Fielding and Anderson, 1983; White, 1997; Koppelman and Wen, 1998; Yan and Chen, 2002). Over the past two decades, the service industries in the U.S. and elsewhere in the world have grown at a phenomenal rate. Consequently, services are attracting increasing attention from academicians and practitioners (Cuningham and Young, 2002; Hopkins et al., 1993). 
Although many scholars have discussed the influence of perceived value on customers' willingness to patronize, the overall pattern still has many areas deserving deeper discussion, especially regarding the incorporation of special influencing factors derived from a specific industry (Oh, 1999). Thus, in recent years there has been considerable research on mass transit service quality, but most of it takes service quality itself as the starting point; few studies use Importance-Performance Analysis for their analysis and recommendations. In addition, this research takes the national highway passenger transportation industry as its research scope, mainly because (1) in the past, before Taiwan lifted controls, the opening of the market and the strengthening of industry competitiveness required elevating travelers' perceived value and service quality in order to attract customers back, and (2) in the past, the national highway passenger transportation industry lacked the concept of transportation marketing, and little discussion was devoted to travelers' value levels and service quality. Therefore, this research studies medium and long distance national highway passenger transportation customers, examining the relationships among population statistics variables (customers' characteristics), customers' traveling characteristics, and service quality, and discusses the service quality gaps that may exist in the transportation industry. Service quality here includes two levels: first, the degree of importance of service quality, namely customers' expected service quality, and second, the degree of satisfaction with service quality, namely customers' perceived service quality. 
This research focuses on operators managing medium to long distance routes (routes exceeding 150 km in total are considered medium to long distance) as research targets and, through Importance-Performance Analysis, provides suggestions to national highway enterprises regarding the strategic utilization of company resources. Based on the above-mentioned research background and motivations, the main purposes of the study are: (1) exploring the relationship between customers' characteristics and customers' traveling characteristics, and (2) using Importance-Performance Analysis to examine the relationship between the level of customers' expected service quality (defined as customers' importance degree) and the level of customers' perceived service quality (defined as operators' performance achievement), providing suggestions to operators regarding service strategy. Nowadays, service quality is key to assuring the continuous operation of a company, and how to provide good service quality is an important topic. Therefore, this section discusses the service quality literature related to this research: first, the construction and measurement of service quality, and then a review of research on the importance-performance analysis of service quality.
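The quadrant assignment at the heart of Importance-Performance Analysis can be sketched as follows. This is an illustrative outline, not the study's code: the item names and scores are hypothetical, and the mapping of high/low importance and performance to the study's quadrant labels follows the conventional IPA grid (an assumption about the study's exact layout).

```python
# Sketch of the Importance-Performance Analysis quadrant assignment:
# each service item is placed by comparing its mean importance (expected
# quality) and mean performance (perceived quality) against the grand means.

def ipa_quadrants(items: dict[str, tuple[float, float]]) -> dict[str, str]:
    """Map each item's (importance, performance) pair to an IPA quadrant."""
    imp_mean = sum(i for i, _ in items.values()) / len(items)
    perf_mean = sum(p for _, p in items.values()) / len(items)
    labels = {
        (True, True): "Q1: maintenance reinforcement",
        (True, False): "Q2: improvement reinforcement",
        (False, False): "Q3: secondary improvement",
        (False, True): "Q4: over-emphasized",
    }
    return {
        name: labels[(imp >= imp_mean, perf >= perf_mean)]
        for name, (imp, perf) in items.items()
    }

# Hypothetical 5-point-scale means: (importance, performance)
scores = {
    "traveling safety": (4.8, 4.5),
    "washroom cleanliness": (4.6, 3.2),
    "interior illumination": (3.1, 4.4),
    "on-board magazines": (2.9, 3.0),
}
quadrants = ipa_quadrants(scores)
```

With these hypothetical scores, "traveling safety" lands in quadrant 1 and "washroom cleanliness" in quadrant 2, mirroring the study's recommendation to shift resources from over-emphasized items toward quadrant 2.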
Evaluating Ethical Decision–Making of Individual Employees in Organizations—An Integration Framework
Miao-Ling Fang, Southern Taiwan University of Technology, Taiwan
Employees face an array of moral issues in their everyday decision-making. This paper attempts to better understand the ethical considerations of employees when they face ethical dilemmas. A comprehensive review of the literature on ethical decision-making models in the workplace is presented. This article proposes an integration model containing a new set of variables and offers 17 research propositions. This study examines the influence of independent variables on the components of ethical decision making: cognitions (perceived ethical problem), moral evaluations, determination (intentions), and actions (ethical or unethical behavior). The independent variables include individual factors, situational factors, and characteristics of the moral issue itself. The relationship between independent variables and dependent variables is mediated by emotion. The moderating variable includes three types of relationship (instrumental ties, mixed ties, and rival ties). The process of individual decision-making is clearly a management issue worth further examination. Every profession possesses its own codes of ethics. Ethical conflicts are inescapable today as human interactions become increasingly frequent and complex. How to deal with conflict, and how to decide on a solution that does not discriminate against any party involved, are just two examples of ethical issues that occur frequently in the workplace. Ethical decision-making refers to a process in which individuals can freely make a decision based on an evaluation of the interests of all parties when facing ethical dilemmas. Many empirical studies support the correlation between business ethics and business performance. For example, Verschoor’s (1998) survey of the 500 largest companies in the United States showed that the 33.6% of companies that attached greater emphasis to codes of ethics in their annual reports recorded better performances. 
Morris (1997) examined 112 companies and found that companies that stress ethics have better images and reputations and yield higher long-term interests. Employees’ ethical awareness and decision-making intent have been empirically shown to influence company performance (Morris, 1997; Wu, 2000). In the absence of ethics, individuals tend to promote their self-interests at the expense of others in the organization when resources are unevenly distributed (Ye, 2000). An individual employee’s unethical behavior may bring short-term benefit but will also damage a company’s long-term interests. For instance, a sales transaction completed by immoral means may not be honored by customers. In this sense, employees’ ethical decision-making plays a predominant role in a company’s performance and thus deserves more attention and assistance from management. This paper attempts to identify the variables that affect an individual’s decision-making process in ethical dilemmas. The author's primary goal is to develop concrete and applicable implementation methods for an ethical policy through systematic analysis and discussion of existing academic works, in the hope that these approaches can someday contribute to the improvement of business management practice. To achieve these objectives, the author analyzes the procedures and variables postulated in the existing academic literature and develops an integrated model that clearly illustrates the decision-making process. Cottone & Claus (2000) adopted similar approaches but did not propose a synthesized model that can explain individual ethical decision-making and behavior. This study proposes an integration model to identify the components of the ethical decision-making process from the individual perspective. Guy (1990) argued that ethical decision-making in the workplace involves individual morality and work-related judgment. 
He summarized the characteristics of ethical decision-making as follows: (1) the decision affects two or more values; (2) the decision-maker is faced with a dilemma; (3) the process is filled with uncertainty, and unknown consequences await; (4) the power to decide is scattered among many parties or within the organization. The second feature indicates that comparison and selection among values are needed in the process; the third shows that inadequate information, including lack of control over a situation and miscalculation of interests, may lead to unethical decisions even where ethical attitudes are strong. The last characteristic means an ethical decision is the product of a power play among all interested parties, and may also mean it is beyond a single individual’s ability to make the decision. These features clearly depict the complex and difficult nature of ethical decision-making. Rest (1979) proposed the Model of Moral Action, which distinguishes four major components intrinsic to the ethical decision-making process: ethical sensitivity, prescriptive reasoning, ethical motivation, and ethical behavior. A person should first be able to identify the ethical issue, then judge the issue based on his or her personal ethics, resolve to comply with this ethical judgment, and finally engage in an ethical action. This model describes how various cognitive structures and procedures combine to produce an individual’s ethical behavior. However, people sense the same ethical dilemmas differently, and even those with ethical sensitivity may feel or understand a dilemma in different ways. For example, some may think reporting misconduct to supervisors outside the chain of command is an unethical act because it is disrespectful to the immediate superior. Others, however, believe that such an individual act, as long as it benefits the general welfare of the group, does not violate a code of ethics. 
Moreover, those who believe the reporting is morally wrong hold divergent viewpoints on the consequences the violator should suffer. Some support heavy punishment due to the seriousness of the act, while others argue that an oral reprimand is sufficient. For that reason, an understanding of an ethical dilemma should exist before the actual decision-making process begins.
Predicting Turnover Intentions: The Case of Malaysian Government Doctors
Dr. Sarminah Samad, Universiti Teknologi Mara, Malaysia
The purpose of this study was to determine the relationship of organizational commitment and job satisfaction with turnover intentions. Accordingly, the study examined the influence of organizational commitment and job satisfaction on turnover intentions. Based on the organizational commitment construct postulated by Meyer and Allen (1991), the theory of job satisfaction by Herzberg (1973) and turnover intentions as conceptualized by Bluedorn (1982), a study was conducted among 300 government doctors working in government hospitals in Malaysia. The study hypothesized that organizational commitment and job satisfaction were negatively related to turnover intentions. The results revealed that organizational commitment and job satisfaction had a negative influence on doctors’ turnover intentions. Among all facets of the independent variables, affective commitment appeared to be the most significant predictor of turnover intentions. Based on the implications of the research findings, several suggestions are put forward. Turnover intentions, organizational commitment and job satisfaction have been the focus of interest of many industrial and organizational psychologists, management scientists and sociologists. This is because empirical studies have reported that turnover intentions can reduce the overall effectiveness of an organization (Smith and Brough, 2003). Meanwhile, the literature has documented that by the 1970s about three thousand studies had been done on job satisfaction (Locke, 1976), and voluminous research has been conducted on organizational commitment (Meyer and Allen, 1997). Much of the interest in this research is due to concern for the behavioral consequences of job satisfaction and organizational commitment. 
Other topics that have attracted a great deal of interest among scholars are the relationships of job satisfaction and organizational commitment with productivity, absenteeism, turnover, retirement, participation, labor militancy, sympathy for unions and psychological withdrawal from work. Loher et al. (1985) argued that analysts have given much consideration to the antecedents of job satisfaction and organizational commitment. The literature has also highlighted that most research treats organizational commitment and job satisfaction as the ultimate criterion variables. This study, however, focused on the relationship of job satisfaction and organizational commitment with turnover intentions and the extent to which job satisfaction and organizational commitment predict an outcome of theoretical and practical interest for organizational scholars, namely turnover intentions. Turnover intention refers to an individual’s estimated probability that they will leave an employing organization (Cotton and Tuttle, 1986). The identification of factors that influence turnover intentions is therefore considered important and effective in reducing actual turnover (Maertz and Campion, 1998). Among the factors that influence turnover intentions are organizational commitment and job satisfaction. Organizational commitment has been defined and measured in several different ways, owing to the diverse definitions and measures in the scholarly literature. These definitions and measures nevertheless share a common theme: organizational commitment is recognized to be a bond of the individual to the organization. The most frequently cited concept of commitment is the three-scale model developed by Meyer and Allen (1991), which measures commitment in terms of affective, continuance and normative commitment. 
Meanwhile, job satisfaction is a combination of cognitive and affective reactions to the differential perceptions of what an employee wants to receive compared with what he or she actually receives (Cranny et al., 1992). There are a number of job satisfaction theories in organizational studies. Among the most frequently cited in organizational behavior studies is Herzberg’s two-factor theory (1973). Herzberg’s theory is based on two basic types of needs: 1) the need for psychological growth, or motivating factors, and 2) the need to avoid pain, or hygiene factors. Koslowsky (1991) and Vandenberg and Nelson (1999) revealed that both organizational commitment and job satisfaction predicted turnover intentions over time. A study conducted among MIS employees indicated that job satisfaction and organizational commitment were the most direct influences on turnover intentions (Igbaria & Greenhaus, 1992). Several reviews reveal consistent negative correlations between organizational commitment and turnover (Allen and Meyer, 1996; Mathieu and Zajac, 1990). Studies have reported that the correlations with turnover are stronger for affective commitment, and significant relationships are found for all three components of commitment (Meyer & Allen, 1997). To date there is no conclusive agreement among analysts and scholars as to whether job satisfaction or organizational commitment is the more significant and useful predictor of organizationally relevant behavior such as turnover. Hudson (1991) argued that the concept of job satisfaction lacks behavioral referents, that its link with productivity is based on a naive theory of human behavior, and that it is too individualistic. In addition, Hudson (1991) suggested that commitment is a step in the right direction, as it expresses behavioral intentions (chiefly the intention to stay in the organization). However, it suffers the same problems that beset job satisfaction. 
Hudson (1991) therefore moved away from research based on attitudes toward more behavioral research. Researchers still generally argue over the relative merits of job satisfaction and organizational commitment for explaining behavioral outcomes, including turnover intentions, and analysts typically deal with only one or the other in their analyses; considerably more attention has been placed on organizational commitment than on job satisfaction. Moreover, cross-cultural studies have revealed inconclusive findings on the nature of the relationship between job satisfaction and behavioral outcomes in the working environment. Research conducted by Cole (1971) documented that Japanese employees did not rate high on job satisfaction measures compared to employees in the United States. According to Lincoln and Kalleberg (1990), this difference is due to higher levels of commitment by Japanese employees to the economic success of their firms.
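The kind of negative relationship the study tests can be illustrated with a minimal Pearson correlation computation. The survey scores below are invented for illustration and are not the study’s data; they merely show the expected pattern of commitment and satisfaction correlating negatively with turnover intention.

```python
# Pearson correlation from first principles (no external libraries).
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 1-5 Likert scores for five doctors.
affective_commitment = [4.5, 4.0, 3.0, 2.5, 2.0]
job_satisfaction     = [4.2, 3.8, 3.5, 2.8, 2.4]
turnover_intention   = [1.5, 2.0, 3.0, 3.8, 4.4]

print(round(pearson(affective_commitment, turnover_intention), 3))
print(round(pearson(job_satisfaction, turnover_intention), 3))
```

Both correlations come out strongly negative, mirroring the direction of the study’s hypotheses; the actual study would estimate these relationships with regression on the full sample of 300 doctors.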
International Strategies and Knowledge Transfer Experiences of MNCs’ Taiwanese Subsidiaries
Yi Ming Tseng, Tamkang University, Taipei, Taiwan
This research views the international expansion activities of MNCs as a process of knowledge transfer and investigates the marketing knowledge transfer modes of MNC subsidiaries in Taiwan. Three modes of transfer are widely recognized in the literature: the global knowledge mode, the host country knowledge development mode, and the standardized knowledge transfer mode. Results show that the type of global strategy adopted by an MNC clearly explains its selection of knowledge transfer mode. Further, market similarity and strategic importance are also closely related to the selected transfer mode. Rapid changes in the nature of global competition have driven international managers and management researchers to search for innovative ways to approach new challenges, tackle problems and answer questions as to how to manage complex multinational corporations most effectively. This has meant developing new theoretical perspectives with which to examine issues such as the management of a set of foreign subsidiaries with diverse external environments and a wide range of internal skills and competencies. Researchers in organization theory (Levitt and March, 1988) as well as strategic management (Prahalad and Hamel, 1994) have identified organizational learning as one of the most important subjects for scholarly inquiry. A common thread among network theory (e.g., Ghoshal and Bartlett, 1990), organizational learning (e.g., Hedlund, 1986; 1994), and evolutionary theory (e.g., Kogut and Zander, 1993) is their focus on the multiple relationships within MNCs, and the view that the multinational organization as a whole can greatly benefit from the transfer of resources and competencies within the firm. This research examines the central role played by global strategies in the process of knowledge transfer as MNCs expand into international markets. 
By focusing on one particular type of competency, marketing knowledge, this research departs from past research that has traditionally focused on technology and other technical knowledge transfers. With only a few exceptions (Inkpen and Beamish, 1997), marketing knowledge has yet to receive proper conceptual and empirical attention as a potent source of competitive advantage that can be transferred inside MNCs. Indeed, the strategic significance of marketing knowledge to a firm’s international competitiveness warrants closer scrutiny. The motivations for this study are twofold. The first core purpose is to examine the relationship between global strategies and the modes of marketing knowledge transfer. Secondly, this study attempts to determine whether the impact of market factors explored in earlier work continues to exist when businesses enter into the knowledge transfer model. One of the most important issues in an MNC’s international business operations is its decision on global strategy. Global strategy refers to the corporate competitive principles adopted when multinational corporations compete with global competitors and local firms in worldwide markets. It comprises building and operating the global value chain activities, allocating resources, and establishing subsidiaries all over the world (Yip, 1995). Managers of MNCs must coordinate the implementation of their firms’ strategies among various business units in different parts of the world, in different time zones, different cultural contexts and different economic conditions. 
MNCs can exploit three sources of competitive advantage that are unavailable to domestic firms (Bartlett & Ghoshal, 1989; Ghoshal & Nohria, 1993; Yip, 1995). Global efficiency: MNCs can maximize “location efficiencies” by locating their facilities anywhere in the world that yields them the lowest production and/or distribution costs or that best improves the quality of the services they offer their customers. Similarly, they can build factories to serve more than one country, lowering their costs by capturing “economies of scope”. MNCs pursuing global efficiency are regarded as following a “global integrated strategy”. Multimarket flexibility: unlike domestic firms, which operate in the context of a single domestic environment, international firms can respond to a change in one country by implementing a change in another. MNCs pursuing multimarket flexibility can be regarded as following a “multidomestic response strategy”. Worldwide learning: an astute firm may learn from national differences and transfer the outcome of learning to its operations in other countries. MNCs pursuing worldwide learning can be regarded as following a “home replication strategy”. Knowledge transfer capability is one of the most important advantages of MNCs. Through the transfer and adaptation of knowledge, subsidiaries of MNCs build and develop their competitiveness over local firms. Knowledge transfers inside MNCs are also related to theories of organizational learning (Tienessan et al., 1997); that is, subsidiaries become global nodes by learning effectively and systematically from their parents. Subsidiaries can establish their knowledge system in two basic ways. In the first, and the most frequently employed, knowledge is transferred directly from the parent company. 
In this way, the knowledge transferred from the parent can be classified into two categories: knowledge that is globally developed and distributed to the global subsidiaries, and knowledge that is developed from the parent’s home market but may not be suited to other host country markets. These two categories are equally critical, but no previous research has demonstrated whether these two kinds of knowledge are also transferred in different ways. The second way for subsidiaries to build a knowledge base is to develop relevant knowledge pertaining to the host market by themselves. Although this may take much time, the end result may better correspond to local needs and might, at the same time, reduce the number of potential problems that can occur in the transfer process. Marketing knowledge is the know-how required when marketing activities are executed, and includes marketing research, channel operation, promotion, product design, marketing information systems, and so on. Because marketing knowledge is one of the most important ownership advantages of MNCs entering foreign markets, many of the MNCs that become market leaders are those that develop excellent marketing capabilities. Marketing knowledge differs from technological knowledge, which narrowly focuses on product research and development, internal design and manufacturing. Given its cumulative character, technological knowledge can usually be documented, codified and easily transmitted; indeed, new technological developments are usually built rigorously on technologies previously developed. Marketing knowledge is a different story, since it often evolves from long-term experience and trial-and-error, all the while cultivating tremendous insight into target markets, consumer behavior, and competitors. 
Furthermore, it can shape the future vision of an industry’s marketing principles, with some knowledge perhaps changing the usual rules of competition by leaping beyond conventional wisdom, a practice that can be traced back to the “strategic intent” proposed by Hamel and Prahalad (1994).
The Study of the Motivation and Performance of the Incubators’ Strategic Alliances: Strategic Groups Perspective
Dr. Wen-Long Chang, Shih Chien University, Taipei, Taiwan, R.O.C.
Jasmine Yi-Hsuan Hsin, University of British Columbia, Vancouver, Canada
This paper applies the concept of strategic groups to the motivation and performance of incubators’ strategic alliances. Surveys were conducted with 76 incubators in Taiwan. The results show that the incubators can be divided into three strategic groups according to the similarity of their resource ownership and strategic thinking: a strategic group with dominance over information resources, a strategic group with dominance over business administration resources, and a strategic group with dominance over technical and human resources. In addition, owing to the diversity of each strategic group’s resource dominance, incubators’ motivations for entering a strategic alliance vary, as does the performance of each strategic alliance. Since 1996, the Small and Medium Enterprise Administration (SMEA), Ministry of Economic Affairs, Taiwan, has taken an active role in reinforcing incubation policies to promote the start-up and innovation of small and medium enterprises, in the hope of integrating the knowledge of different sectors (government, business, academia, and research institutions) so that SMEA can assist schools and both public and private sectors to set up their own incubation centers. SMEA aims to eliminate the difficulties faced by small and medium start-ups and to strengthen their technology innovation skills. The goal is to upgrade those enterprises into knowledge industries with high added value and to enhance Taiwan’s industrial competitive edge. The number of incubators launched with the help of government or solely by the private sector had exceeded 70 in Taiwan by the end of 2004, the highest incubator density in the world. Over 1,600 enterprises have been assisted by the incubators. 
The domains of these enterprises include information technology, electrical engineering, multimedia communication, biotechnology, environmental protection, medical care, telecommunications, aviation and aeronautics, civil engineering, chemical engineering and petroleum, raw materials, storage-to-go, tourism and entertainment, and education, culture and art. The total incentive investment has exceeded 825 million USD. Within eight years, the domestic incubators have matured and gradually formed strategic groups (Lai, 2002). The depth and breadth of the industry will continue to expand. However, incubators’ business and profit-earning models are not well established, the percentage of incubators receiving government subsidy is still high, and the competition among strategic groups is expected to become more severe in the future (Chang, 2001; Hung, 2004). The majority of incubators in Taiwan are founded by schools; hence, many incubators are restricted in obtaining necessary resources by schools’ managerial bureaucracy and education policies. This further restricts the incubators’ ability to cultivate the enterprises that wish to move in. How to improve the competitive advantage of incubators through incubators’ strategic alliances, or integrated alliances among different professions, is a vital mechanism and a key factor in ensuring the steady development of the incubator industry in the future (Lai, 2002; Chang, 2004). This paper applies the concept of strategic groups to the incubator industry. We use the concept of strategic groups to compare incubators’ motivations for, and the effectiveness of, strategic alliances. The research serves as an important indicator for incubators selecting strategic alliances and can further enhance the incubator industry’s overall competitive advantage. 
A strategic group refers to enterprises that belong to the same industry and follow the same or similar strategies (Aaker, 1995; Cool & Schendel, 1987; Dess & Davis, 1984; Hunt, 1972; Peteraf & Shanley, 1997; Wiggins & Ruefli, 1995). The structures of strategic groups evolve over time, which may lead to different business performance under the competitive strategies adopted by different strategic groups; it is therefore crucial to understand the significance of strategic groups to industries. Recognizing the formation of strategic groups enables an industry to better understand resource allocation and the business, and gives an enterprise the ultimate competitive advantage. Although the importance of strategic groups to the formation of incubators’ competitive strategies is well recognized, related research is scarce. Lai (2002) studied Taiwan incubators’ business performance from a strategic-group perspective. That research shows that the business performance of incubators’ strategic groups varies with differences in resources. It also suggests that, to achieve the best business performance, incubators need to strive to accumulate core resources and to form strategic alliances with other professional groups. Later, Chiang (2003), adopting a “resource-based perspective” in studying incubators’ strategic groups, reached the same conclusion: strategic alliance is the ultimate solution for incubators to obtain resources. Indeed, many earlier scholars adopted the “resource-based perspective” to analyze strategic groups. Fuente, Zúñiga and Suárez (2004) adopted the same method to analyze the managerial performance of Spanish banks, and Gimeno & Woo (1996) adopted it to analyze the causes of trade within industries. The benefit of using the “resource-based perspective” to analyze incubators’ strategic groups is that it accommodates the various types of incubators. 
Since each strategic group has different competitive advantages and resources, each strategic alliance’s motivations and performance may vary depending on the competitive strategies adopted by incubators. In other words, understanding an incubator’s resource advantage can help incubators understand their own competitive advantage and find the best strategic alliance. Based on the above analysis, this research analyzes the incubators’ strategic groups that may exist in Taiwan by adopting the “resource-based perspective”. The first hypothesis of the research is that:
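The grouping logic can be sketched as follows. In practice the study derives its three groups from the similarity of survey responses, but this simplified assignment by dominant resource dimension (with invented incubator names and scores) conveys the idea of partitioning incubators by resource ownership.

```python
# Simplified strategic-group assignment: each incubator joins the group
# corresponding to its strongest resource dimension. The three dimensions
# mirror the groups named in the study; all data here are hypothetical.
RESOURCE_DIMS = ("information", "business_admin", "technical_human")

def dominant_group(scores):
    """scores: dict of resource dimension -> strength (e.g. survey mean)."""
    return max(RESOURCE_DIMS, key=lambda dim: scores[dim])

incubators = {
    "Incubator A": {"information": 4.5, "business_admin": 3.1, "technical_human": 2.8},
    "Incubator B": {"information": 2.9, "business_admin": 4.2, "technical_human": 3.0},
    "Incubator C": {"information": 3.0, "business_admin": 2.7, "technical_human": 4.6},
}
groups = {name: dominant_group(s) for name, s in incubators.items()}
print(groups)
```

A full analysis would cluster on all resource variables jointly (e.g. with k-means or hierarchical clustering) rather than taking the single strongest dimension, but the resulting partition plays the same role in comparing alliance motivations and performance across groups.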
The Application of a Quantile Regression to the Relationship Between Debt Financing and Equity Financing by Dual-Issue Cases
Min-Tsung Cheng, Ching Yun University, Taiwan
The allocation of financing sources contributes to the “success” of corporate financing strategy. Theoretically, what is the relationship between debt financing and equity financing? The literature on capital structure considers the two financing sources either substitutive or complementary. Hovakimian et al. (2004) assume that debt can be applied as a substitute for equity; on the other hand, a theory developed by Mehar (2005) argues that debt and equity are complementary sources of finance. This paper adopts a quantile regression methodology, similar to the work of Fattouh et al. (2005), to examine the relationship between debt and equity. Evidence shows that debt and equity are complementary sources of financing among high-equity-financing firms, partially consistent with the finding of Mehar (2005). What is the relationship between debt financing and equity financing? The literature on capital structure considers the two sources either substitutive or complementary. Hovakimian et al. (2004) identified the novel concept of “dual issues” in corporate financing behavior; that is, the practice of a firm issuing both debt and equity in the same year, in contrast to previous research that assumed the choice of only one financing instrument. Having a rare opportunity to reset their capital structure at relatively low cost, firms that follow a dynamic trade-off strategy will choose a combination of new debt and equity, as alternative sources of finance, on the assumption that debt can be applied as a substitute for equity. By offsetting the deviation from target leverage caused by the accumulation of earnings and losses, the debt ratio is kept close to the target; therefore, debt can be applied as a substitute for equity. On the other hand, Mehar (2005) argues that the leverage ratio of a company mainly reflects its operating and financial activities, including sales, profits, inventories, and working capital. 
Based on a theorem developed by Mehar, debt and equity have been shown to be complementary sources of finance. In practice, the allocation of financing sources contributes to the “success” of corporate financing strategy. Given the complicated circumstances of business, the extent of the substitutive or complementary relationship between debt and equity merits further investigation. Since most previous literature on capital structure employs ordinary least squares (OLS) techniques, the empirical results are likely to be affected by the limitations of those techniques, such as inefficient or biased estimates. When the distribution of the data is skewed, the conditional mean is likely to be influenced by outliers and become non-representative. Koenker and Hallock (2001) document that coefficients estimated by quantile regression explicate the conditional quantile functions among variables. When data are asymmetrical, the estimates of the conditional median function generated by the quantile regression approach, which is not affected by outliers, are more representative than OLS results. When the sample data are more symmetrical, the estimates produced by quantile regression and OLS will be similar, but quantile regression is also able to estimate observed points away from the central location. Consequently, using quantile regression in this study avoids the possible inaccuracies arising from OLS. Most importantly, the estimates derived using the quantile regression approach show efficiency comparable to the least squares method. This study therefore adopts a methodology recently developed by Fattouh et al. (2005), who applied quantile regression to capital structure in South Korea, to enhance the ability to draw inferences and to contribute to research on corporate financing behavior. In summary, the empirical processes are devised to examine the relationship between debt financing and equity financing. 
To further integrate the research, this study not only uses dual issues as a sample classification basis, but also incorporates the quantile regression approach to elucidate corporate financial behavior. The next section of this paper describes the data and methodology. Section 3 presents the empirical results, and Section 4 concludes the paper. The scope of the study is to apply the quantile regression technique to test the relationship between debt financing and equity financing in the financial policy of dual-issue cases. This section illustrates the selected sample and dataset, the research method, and the empirical hypotheses. The sample was collected from a large panel of quarterly financial datasets covering the 10-year period from 1995 to 2004. All selected firms were involved in dual issues, had the required financial records, and were listed on the Taiwan Stock Exchange, yielding 3,634 observations. The financial data were obtained from the Taiwan Economic Journal Database. Following the work of Mackie-Mason (1990) and Hovakimian et al. (2001, 2004), dual issues are defined as meeting both of the following requirements: (1) the firm’s change in debt (current-issue liabilities minus pre-issue liabilities) divided by pre-issue total assets exceeds 5 percent; (2) the firm’s change in equity (current-issue equity minus pre-issue equity) divided by pre-issue total assets exceeds 5 percent. Assessment of the explanatory power of a regression model is generally based on the value of the errors. To prevent positive and negative error values from offsetting each other, the techniques adopted include ordinary least squares, which minimizes the sum of squared errors, and methods based on the absolute value of errors, i.e., the minimization of absolute deviations. Notably, the absolute deviation method is the core notion of the quantile regression model. 
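The dual-issue screen defined above can be sketched as a simple predicate. The field names and sample figures are hypothetical; the 5 percent threshold follows the definition in the text.

```python
# A firm-observation counts as a dual issue when BOTH the change in debt
# and the change in equity exceed 5% of pre-issue total assets.
DUAL_ISSUE_THRESHOLD = 0.05

def is_dual_issue(pre_assets, pre_debt, cur_debt, pre_equity, cur_equity):
    debt_change = (cur_debt - pre_debt) / pre_assets
    equity_change = (cur_equity - pre_equity) / pre_assets
    return (debt_change > DUAL_ISSUE_THRESHOLD
            and equity_change > DUAL_ISSUE_THRESHOLD)

# Hypothetical firm: assets 1,000; debt rises by 80 (8%), equity by 60 (6%).
print(is_dual_issue(1000, 400, 480, 300, 360))
```

Applying such a filter to the quarterly panel is what reduces the population of listed firms to the 3,634 dual-issue observations used in the study.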
The quantile regression model introduced by Koenker and Bassett (1978) extends the view of ordinary quantiles to linear models, in which the conditional quantiles have a linear form. The conditional quantile regression model can be expressed as follows:
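The equation itself does not survive in this excerpt. For reference, the standard Koenker–Bassett form of the conditional quantile regression model is:

```latex
y_i = x_i'\beta_\theta + u_{\theta i}, \qquad
\operatorname{Quant}_\theta(y_i \mid x_i) = x_i'\beta_\theta ,
```

where \(\operatorname{Quant}_\theta(y_i \mid x_i)\) denotes the \(\theta\)-th conditional quantile of \(y_i\) given the regressor vector \(x_i\), and \(\beta_\theta\) is obtained by minimizing the asymmetrically weighted sum of absolute residuals.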
A Study on the Factors of Manufacturer Profitability: The Moderating Effect of Different Industries
Shu-Ching Chan, Jin Wen Institute of Technology, Taiwan
Wenching Fang, National Taipei University, Taiwan
Driven by governmental policies and global prosperity, the information and electronic industry is a mainstream industry in Taiwan to which investors pay close attention and job seekers are attracted. This study discusses non-information and electronic manufacturers that have been ranked as top businesses in Taiwan, together with mainstream-industry manufacturers with good business performance. The study examines strategies for steadily gaining profits in the international market. Studies suggest that the key factors leading to profitability for information and electronic manufacturers and for non-information and electronic manufacturers have changed in recent years. The effects of R&D, marketing expenditures, and the employment of professional workers on profitability are significant for Taiwan’s information and electronic manufacturers, but not for Taiwan’s non-information and electronic manufacturers. In recent years, non-information and electronic manufacturers have steadily gained profits in the international market, primarily because this group can effectively manage the costs of its value chains and develop both global logistics and resource integration. Progressing from its world-recognized role of “Made in Taiwan” to one of “Made by Taiwan”, Taiwan has long been a global player in terms of manufacturing capability. In 1998, the nation became the world’s third largest producer of information products. Having experienced the industry conditions of that period, Shih (1996) proposed the notion of the “smiling curve” and argued that the added value of manufacturing is lowest at the bottom of that curve. Therefore, if Taiwan’s businesses simply rest at this stage without making further progress, they are predestined to face a “profit squeeze”. It is necessary for the manufacturing industry to extend itself toward both ends of the smiling curve, namely R&D upstream and branding downstream. 
Compared to medium- and small-sized businesses, large enterprises have more resources and greater risk tolerance. In Taiwan, although only 2% of enterprises are large enterprises, large-enterprise sales make up 70% of total sales; of these, domestic sales account for 18% and exports for the other 82%. In the existing business structure of Taiwan, large enterprises dominate exports, whereas medium- and small-sized businesses dominate internal sales. Amidst technological development and globalization, the importance of improving R&D and marketing for export-dependent large manufacturers cannot be emphasized enough. As a result of governmental policies, the information and electronic industry can obtain more technological and financial support for R&D than other industries. In recent years, Taiwan’s economic growth has been mostly attributable to growth in the information and electronic industry. Large information and electronic companies, such as Taiwan Semiconductor Manufacturing Company (TSMC), Compal Electronics, etc., have become the focus of domestic and foreign investors. In contrast, stocks in the plastic industry, iron and steel industry, textile industry, etc., are relatively disfavored by investors. To date, only 47 companies have continued their operations in the textile industry, an industry that once had 77 publicly listed companies; among them, 32 publicly listed companies have a share price below the par value of NT$ 10. The share prices of the iron and steel industry, plastic industry, car industry, etc., are also, on average, less than one third of those in the information and electronic industry. Large information and electronic companies, such as TSMC, Hon Hai, UMC, etc., are not only favored by investors, but also attract large numbers of outstanding talent by offering stock bonuses to their employees. 
According to survey findings released by the Taiwan Job Bank, these companies are the first choice of the majority of new graduates because the jobs match their talents and interests; 60% of fresh graduates join these companies because of good benefits and high salaries. Therefore, we believe the peak in the development of the information and electronic industry in Taiwan in recent years is primarily caused by these companies’ efforts in R&D and marketing and the support of their professional technical staff. However, the means to corporate growth and profitability do not stop there. How can non-mainstream enterprises, like the so-called mainstream information and electronic manufacturers, stand on their own feet and gain profits given their limited resources and investment risks? Are R&D, marketing activities, and the employment of professional technical staff the keys to profitability? This article discusses profitability strategies for those non-information and electronic manufacturers that can steadily grow in the international market and can be ranked among Taiwan’s top 500 businesses along with information manufacturers. First, we discuss the effects of three factors, namely R&D, marketing, and staff quality, on the profitability of the non-information and electronic industry in the most recent three years, and then compare the information and electronic industry with the non-information and electronic industry. Then, through examination of publicly listed information and reports of various companies, we organize other strategies by which the non-information and electronic industry has attained steady profitability and a steady leading market position in the most recent three years. The contributions of this article can be described as follows: 1. 
Contributions to investors: Through the organization of this article, investors can grasp the current situation and future trends of the internationally competitive non-information and electronic industry, which provides viable investment choices outside the information and electronic industry. 2. Contributions to businesses: Due to a small domestic market and a shortage of natural resources in Taiwan, large enterprises can take advantage of their rich resources to expand their international markets. However, due to cultural differences and differences in resources in various countries, Taiwan’s experiences are worth referencing for large enterprises similar to those in Taiwan that also intend to develop their international markets. 3. Contributions to academia: Due to the development of information technology, the steady prosperity of the US in the 1990s aroused the interest of scholars in heated discussions of the information and electronic industry. The analysis and discussion of this article are beneficial to the balance of academic research and promote the discussion of international issues concerning the non-information and electronic industry. All these areas are worth further study by scholars. R&D is intended to refine the professional know-how related to products and manufacturing processes, including basic research, product design, manufacturing processes, and service procedures. Through product design and development and the enhancement of manufacturing processes, enterprises can upgrade their performance (Kotabe 1990a). If a manufacturer has outstanding product design, it can gain an advantage through differentiation from competitors and thus gain greater rewards. Likewise, a manufacturer can lower production costs by innovating its manufacturing processes and at the same time boost its product quality (Hitt et al. 1997). Porter (1986) suggested that this area becomes important when stepping into the international market. 
According to studies conducted by such scholars as Hufbauer (1970), Mansfield (1981), and Kotabe (1990b), when manufacturers have a stronger research and development orientation, a significant positive relationship between R&D intensity and business performance results. The majority of studies suggest that R&D intensity will affect corporate profitability (Lau 1996; Hatfield 2002), but it does not necessarily have a positive impact on corporate profitability in the short term. Boer (2002) also believes that industrial R&D activities are high-risk investments with deferred compensation.
A Study on Efficiency and Productivity of Turkish Banks in the Istanbul Stock Exchange Using Malmquist DEA
Dr. Birgul Şakar, Kadir Has University, Istanbul, Turkey
This paper studies the performance of Turkish commercial banks listed on the Istanbul Stock Exchange in terms of their ability to provide maximum outputs from a given set of inputs; i.e., Malmquist DEA analysis with output orientation has been adopted. Malmquist DEA methods have been employed to determine the effects of variable returns on bank efficiencies, and the resulting Malmquist indices have been used to evaluate changes. The model uses five input variables: i) number of branches, ii) personnel per branch, iii) share in total assets, iv) share in total loans, and v) share in total deposits. The share figures are for the whole Turkish banking sector, not the sector shares of the Istanbul Stock Exchange. The five output variables are: i) net profit-loss/total assets (ROA), ii) net profit-loss/total shareholders’ equity (ROE), iii) net interest income/total assets, iv) net interest income/total operating income, and v) non-interest income/total assets. The results of the Malmquist DEA analysis are discussed from different perspectives. An examination of the efficiency of banks listed on the stock exchange is important for several reasons. Financial markets in Turkey have undergone significant change over the last decade as a result of deregulation and globalization. These drivers of change were particularly strong over the second half of the 1990s, a period in which a series of financial reforms was introduced whose main objectives were to boost the efficiency and productivity of banks by limiting state intervention and enhancing the role of market forces. Banks moved away from simply being intermediaries toward providing a range of financial services, from insurance to funds management. All of these factors have had a significant influence on the operations of Turkish banks. 
This paper examines the effect of scale efficiency on the productivity of eleven Turkish banks listed on the Istanbul Stock Exchange, looking at their performance indicators after the recent crisis in the Turkish banking sector. It considers the scale effect on income structure and profitability. Data Envelopment Analysis (DEA) techniques have been applied to estimate scale and profit efficiency, and Malmquist productivity indices have been used to examine changes in productivity. The observation period is ten quarters between 31 December 2002 and 31 March 2005. The organization of the paper is as follows: Section II reviews the literature on Turkish banking sector studies using DEA techniques. Section III briefly describes DEA techniques and their underlying concepts. Section IV discusses the data structure, the methodology and its strengths; the selected variables, the reasoning behind their selection, and the modeling framework are also discussed. Section V presents the empirical findings and discussion. The paper concludes with a summary of findings and suggestions for future research. A number of studies have applied DEA and DEA-based Malmquist indices to examine efficiency and productivity change, respectively, in the Turkish commercial banking industry. Oral and Yolalan (1990) analyze the operating efficiency and profitability of bank branches. The results show that service-efficient bank branches are the most profitable ones, suggesting a significant relationship between service efficiency and profitability for Turkish bank branches. Zaim (1995) selects two representative years (1981 and 1990) to distinguish the pre- and post-liberalization eras and compares the efficiency scores of different organizational forms and their scale adjustment. The results indicate that financial liberalization has a positive effect on both technical and allocative efficiency, and that state-owned banks appear more efficient than private banks. 
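An output-oriented DEA score of the kind used here can be sketched as a small linear program (a generic constant-returns CCR formulation with made-up toy data, not the paper's actual model or bank data): for each decision-making unit, find the largest factor φ by which its outputs could be scaled while staying inside the frontier spanned by all units.

```python
import numpy as np
from scipy.optimize import linprog

def output_oriented_ccr(X, Y, o):
    """Output-oriented CCR efficiency for unit o.
    X: (m inputs x n units), Y: (s outputs x n units).
    Maximize phi subject to: sum_j lam_j * x_ij <= x_io    (inputs)
                             phi * y_ro <= sum_j lam_j * y_rj  (outputs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = -1.0                                   # linprog minimizes, so -phi
    A_in = np.hstack([np.zeros((m, 1)), X])       # inputs:  lam'x <= x_o
    b_in = X[:, o]
    A_out = np.hstack([Y[:, [o]], -Y])            # outputs: phi*y_o - lam'y <= 0
    b_out = np.zeros(s)
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]                               # phi = 1 means efficient

# Toy data: two units with one input each; unit 0 produces twice the output.
X = np.array([[1.0, 1.0]])
Y = np.array([[2.0, 1.0]])
phi0 = output_oriented_ccr(X, Y, 0)   # 1.0: on the frontier
phi1 = output_oriented_ccr(X, Y, 1)   # 2.0: output could be doubled
```

A Malmquist index then compares such distance-function scores across periods; a variable-returns (BCC) variant would add the convexity constraint that the λ weights sum to one.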
Yolalan (1996) analyzes the efficiency of Turkish commercial banks over the period 1988-1995 using financial ratios. The results indicate that state-owned banks are the least efficient group, followed by the private banks, and that foreign-owned banks are the most efficient. Jackson, Fethi, and Inal (1998) analyze efficiency and productivity growth in Turkish commercial banking using a DEA-based Malmquist index, evaluating the efficiency and productivity changes of each bank over the 1992-1996 period. The results show that, except for the financial crisis period of 1993-1994, foreign and private banks are more efficient than their counterparts. Yıldırım (1999) evaluates policy and performance in Turkish commercial banks in response to financial liberalization after 1980 and to macroeconomic instability. The results show that the sector did not achieve any sustained efficiency gains in the liberalized era, with continuing scale inefficiency. The less profitable state-owned banks appear more efficient than others, and there is a relationship between scale and technical efficiency and bank size. Jackson and Fethi (2000) analyze the technical efficiency of individual Turkish banks using DEA for the year 1998. They find that larger, profitable banks are more likely to operate at higher levels of technical efficiency, and that the capital adequacy ratio has a statistically significant adverse impact on the performance of Turkish banks. Denizer, Dinc, and Tarımcılar (2000) evaluate banking efficiency over the years 1970 to 1994, covering the pre- and post-liberalization environments, and investigate the scale effects on efficiency by ownership. The study analyzes both the production and intermediation approaches and assumes that banking operations in Turkey occur in a two-stage framework. The results indicate a decrease in efficiency in the post-liberalization era. 
Another finding is that the Turkish banking system had a serious scale problem due to macroeconomic instability. Cingi and Tarım (2000) analyze efficiency and productivity change in Turkish commercial banking using the DEA-Malmquist Total Factor Productivity Index over the period 1986-1996. The results indicate that three private banks are highly efficient, whereas the four state banks are not. A further finding reveals that the difference in efficiency is mainly due to scale economies.
Application of the VAIC Method to Measures of Corporate Performance: A Quantile Regression Approach
Huei-Jen Shiu, National Chengchi University
This research applies a new accounting tool for measuring value creation efficiency in the company, namely the Value Added Intellectual Coefficient (VAIC™) of Pulic (1998). Based on the 2003 annual reports of 80 Taiwan listed technology firms, it also examines the correlation of this measure with corporate resource allocation, and focuses on differences between firms in different quantiles of corporate performance. Conditional quantile regressions show that while the variables are significant throughout the distribution, there are considerable differences, including differences in sign, in their impact on firms with different degrees of performance. The empirical applications indicate that the nature of the technology industry in Taiwan is that of transforming intangible assets such as intellectual capital into high-value-added products or services, consistent with the claims of Pulic (2004). Conventional accounting systems were developed for manufacturing economies and for measuring the value of tangible assets, but they find it difficult to account for intangibles and their rate of change. In addition to accounting systems, there are several internal and external measures of intellectual capital. The Skandia Navigator was one of the first internal measures to calculate and visualize the value of intangible capital, stating that intellectual capital (IC) represents the difference between market and book value (Leif 1997). Others are human resource accounting, the intangible assets monitor, and the balanced scorecard. External measures include market-to-book value, Tobin’s Q, and Real Option theory (Shaikh 2004). The central question of these measuring systems, “Do traditional measures of corporate performance effectively capture the new emerging intellectual-based measures with the same constructs?”, has acquired new significance in the context of developing accounting. 
This empirical study applies a new accounting tool, the Value Added Intellectual Coefficient (VAIC™), developed by Ante Pulic (1998) together with his colleagues at the Austrian IC Research Centre (Pulic 2000; Bornemann 1999). VAIC™ is designed to help managers leverage their company’s potential. The key contribution of VAIC™ is to provide a standardized and consistent basis of measurement and to enable effective comparative analysis across various sectors, locally and internationally. Research on VAIC™ is motivated by growing evidence in the literature, much of it stemming from the work of Pulic (1998). Bornemann (1999) suggested that a correlation exists between intellectual potential and economic performance. Williams (2001) discovered that a firm with a high level of VAIC™ appears to reduce disclosures of intellectual property when performance reaches a threshold level, for fear of losing competitive advantage. Moreover, Firer and Williams (2004) indicated that the associations between the efficiency of value added (VA) and profitability, productivity, and market valuation are generally limited and mixed; overall, physical capital remains the most significant resource for corporate performance in South Africa. In addition, a study by Mavridis (2004) confirmed the existence of significant performance differences between various Japanese business groups. A later study by Pulic (2004) showed that under today’s conditions of value creation, quantity is not relevant. The literature related to VAIC™ from Taiwan is quite limited; the latest research, by Wang and Cheung (2004), suggested an integrated theoretical model to investigate the impact of intellectual capital on business performance. 
To investigate measures of corporate performance, this study concentrates on Taiwan listed technology firms, as these firms are intellectual-property intensive and strive to innovate products and services to enhance their comparative advantages. Using the VAIC™ index, the study examines its association with three measures of corporate performance, namely profitability, productivity, and market valuation (Firer & Williams 2003). Initially, the research tests the degree of asymmetry of the distribution of each dependent variable. Because the distributions are skewed, the conventional least squares technique is unable to fully capture the heterogeneous effects across firms. Quantile regression (Koenker and Bassett, 1978) is a technique for estimating models of the conditional median function and the full range of other conditional quantile functions. The quantile regression approach is robust to violations of the assumptions of least squares, such as non-Gaussian distributions or a relatively large proportion of outliers. Most importantly, estimates from the quantile regression approach show efficiency comparable to least squares. Lastly, the study incorporates correlation, linear multiple regression analysis, and the conditional quantile regression approach as statistical methods, and makes further comparisons. The study has two major aims. The first is to introduce the VAIC™ method as a tool for assessing the efficiency of current business. The second is to incorporate the conditional quantile regression method to evaluate the correlation of VAIC™ with measures of corporate performance. The next section of this paper describes the data and methodology. Section 3 discusses the empirical results, while section 4 presents the conclusions. 
In this study, data were collected from the 2000 to 2003 annual reports of 80 listed technology firms in Taiwan. The technology sector plays a crucial role in the economy of Taiwan, and its innovation in products and services, a driving factor for competition, is mainly accounted for by intellectual capital. The data employed in this study were obtained from the Taiwan Economic Journal Database. The method of the Value Added Intellectual Coefficient (VAIC™) was initially put forward by Ante Pulic (1998) and further developed by Manfred Bornemann (1999). It gives new insight into the measurement and monitoring of value creation efficiency in the company using accounting-based figures. VAIC™ is designed to effectively monitor and evaluate the efficiency of value added (VA) by a firm’s total resources and by each major resource component, focusing on value creation in an organization rather than on cost control (Pulic 2000, Bornemann 1999, Firer and Williams 2003). VAIC™ is defined as a composite sum of three separate indicators:
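The three indicators themselves do not survive in this excerpt; in Pulic's standard formulation they are capital employed efficiency (CEE = VA/CE), human capital efficiency (HCE = VA/HC), and structural capital efficiency (SCE = SC/VA, with structural capital SC = VA − HC). A minimal sketch with invented figures:

```python
def vaic(value_added, human_capital, capital_employed):
    """Pulic's Value Added Intellectual Coefficient: VAIC = CEE + HCE + SCE."""
    cee = value_added / capital_employed          # capital employed efficiency
    hce = value_added / human_capital             # human capital efficiency
    structural_capital = value_added - human_capital
    sce = structural_capital / value_added        # structural capital efficiency
    return cee + hce + sce

# Hypothetical firm: VA = 100, HC = 50, CE = 200 (all in the same currency unit).
print(vaic(100.0, 50.0, 200.0))   # 0.5 + 2.0 + 0.5 = 3.0
```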
A Study of the Factors Impacting ERP System Performance—from the Users’ Perspectives
Ching-Chien Yang, National Central University, Jhongli, Taiwan, ROC
Ping-Ho Ting, Tunghai University, Taiwan, ROC
Chun-Chung Wei, Chungchou Institute of Technology, Taiwan, ROC
Many companies place excessive emphasis on information technology and ignore that the most important factor in successful IS implementation is people. This research examined the factors that impact ERP system performance from the system users’ perspectives. We empirically investigate the experiences of ERP system users at middle-sized companies in Taiwan that had implemented ERP systems as of 2004. Our research found that differences in implementation planning (consultant services, education and training, and dedicated staff for ERP implementation), in the characteristics of implementing organizations (customized processes, use of partial or all system functions, and personnel), and in users’ characteristics (working age, department/segment, and position) significantly influence ERP system performance. An ERP system is an integrated enterprise computing system that automates the flow of material, information, and financial resources among all functions within an enterprise on a common database. Davenport (2000) proposed that implementing ERP systems brings many benefits to the organization, including reduced cycle time, more efficient flow of information, faster generation of financial information, enablement of e-business, and assistance in developing new organizational strategies. Many companies are implementing ERP packages as a means of reducing operating costs, enhancing competitiveness, increasing productivity, and improving customer service (Martin, 1998; Mirani and Lederer, 1998; Pliskin and Zarotski, 2000). Mabert et al. (2000), surveying US manufacturing firms, found that the benefits of ERP implementation are concentrated in quickly providing high-quality information within the firm. Many researchers have investigated the critical success factors of ERP implementation (Huang et al., 2004; Nah et al., 2003; Murray and Coffin, 2001; Bingi et al., 1999). 
However, few of those studies explored the success factors of ERP implementation from the system users’ perspectives. Many companies place excessive emphasis on information technology and ignore that the most important factor in management is people. Changing personal behavior promotes the efficient use of information (Marchand et al., 2000). From an organizational perspective, implementing ERP systems encounters resistance, since the implementation usually requires people to create new job relationships, share information, and make decisions that they have never made before (Appleton, 1997). According to a national survey of Danish experiences, a main barrier to ERP implementation is resistance to change (Deloitte and Touche, 1998). Implementing a new information system changes users’ individual jobs and daily procedures. System users are asked to execute operations or jobs differently, and they need more training on how the system will change business processes and on how to use the system (Ross, 1998; Deloitte & Touche, 1998). This study selected ERP performance measures from the related literature (DeLone and McLean, 1992; DeLone and McLean, 2003; Saarinen, 1996; Skok et al., 2001; Mirani and Lederer, 1998; Lee et al., 2002; Liberatore and Miller, 1998; Mabert et al., 2000). We propose six dimensions to measure ERP system performance from the system users’ perspectives: information quality, system functions, system quality, users’ satisfaction, use attitude, and system efficiency. The primary objective of this research is to examine whether ERP system performance is influenced by differences in implementation planning, organizational characteristics, and users’ characteristics, and thereby to provide insight into ERP implementation. The first section is the introduction. Section 2 reviews the literature on ERP systems. 
In the next section, we describe the methodology and data of this research. Section 4 presents the research results. Section 5 concludes with the findings of our study. Generally, system vendors develop ERP systems in advance, and they often offer numerous options representing best practices defined by the ERP vendors (Teltumbde, 2000). ERP vendors design their packaged ERP systems to be universal software for various industries and organizations. Many organizations use system consultants to facilitate the implementation process. Piturro (1999) argued that consultants may have experience in specific industries and comprehensive knowledge about certain modules, and may be better able to determine which suite will work best for a given company. While opinions vary with respect to what third parties should be able to control, the company should keep control and accept full responsibility for all phases of the project. Piturro (1999) suggested that a major concern stems from financial ties to the recommended software vendor and a lack of expertise and experience in ERP appropriate to the business. The learning that takes place in the organization is centralized around the system, and dependence on power users increases (Baskerville, Pawlowski, & McLean, 2000). The critical role of power users means that a comparatively large amount of organizational and technological knowledge is informally concentrated in relatively few people. Kwasi (2004) argued that the higher-level personnel who design the manner of information systems implementation may have a greater understanding of why a specific implementation plan is designed as it is. Significant knowledge gaps about ERP implementation exist between managers and end-users, so appropriate communication and training are needed for end-users. 
Kraemmergaard and Moller (2000) suggested that communication could go into more detail about how the implementation will change individual jobs and daily procedures. Good communication and training can minimize the anxieties of end-users so that they accept the new information systems and technology. A lack of user training and a failure to completely understand how enterprise applications change business processes frequently appear to be responsible for problematic ERP implementations and failures (Crowley, 1999).
Factors Constraining the Growth and Survival of Small-Scale Businesses: A Developing Countries Analysis
Stephenson K. Arinaitwe, Breyer State University, London Centre
Small-scale businesses play a crucial role in contributing to overall industrial production, export, and employment generation in developing countries, as noted by Kazmil & Farooquie (2000). They form an integral part of a growing and expanding national economy. Small-scale businesses have been a means through which accelerated economic progress and rapid industrialization can be achieved. This is why the dynamic role played by these enterprises in developing countries has been highly recognized and applauded. Despite the recognition given to small-scale businesses as potential sources of economic growth and development in developing countries, their contributions have always fallen short of expectations. Therefore, this analysis will identify and discuss the major challenges that have kept such businesses from delivering their expected benefits of poverty eradication, economic recovery, and other developmental goals to the economies of developing countries. In order to arrive at meaningful conclusions, the analysis will draw on the available literature on small-scale businesses in developing countries. It will investigate the challenges of applying assumptions about positive relationships observed within developed countries to developing nations. Technological capabilities, how the lack thereof is a considerable constraint for small-scale businesses in developing countries, and the need for technological support will be discussed. Finance and small-scale businesses, as well as the three components of a grassroots campaign, will be surveyed. Finally, balancing promotional strategies with environmental sustainability will be reviewed in order to better understand the challenges that small-scale businesses in developing countries must overcome in order to survive. By the late 1990s, over one billion people lived in abject poverty globally. Hundreds of millions of people were trying to survive on less than a dollar a day. 
In developing countries, disease, political conflict, little to no formal education, and environmental problems exacerbated an already horrifying situation. Although many developed countries, such as the United States, Canada, and Japan, enjoyed rising income levels and lower unemployment rates, developing countries around the world were suffering more than they had been a decade earlier (Woodworth, 2000). Several programs have been implemented over the past decades to assist those who are poverty-stricken, with mixed results. In the 1960s, modernization programs were implemented, in which industrialized nations attempted to jump-start developing countries’ economies. These often failed because, according to Woodworth, they contradicted indigenous cultures and values, or were too capital intensive to maintain, as illustrated by the large power dams constructed for electricity generation. In the 1970s, the Green Revolution emerged. This program attempted to superimpose Western agricultural practices and processes on developing countries. The use of methods such as large tractors and chemical fertilizers gave unexpected outcomes: negative impacts, such as rising cancer rates in indigenous people and soil depletion, occurred in areas around the globe (Woodworth, 2000). The World Bank, the United Nations, and others changed tactics in the 1980s, focusing on a Basic Needs approach. Basic items such as health care, access to clean water, suitable housing, and basic education were the core of these programs. However, these efforts were very expensive and very difficult to sustain. With the failures of these efforts, a new line of thinking has emerged. Experts now see that the traditional cultures of developing countries are not necessarily an obstacle to economic development; traditional cultures and modern cultures are not mutually exclusive. 
It is now believed that the two disparate cultures can find value in interfacing and interacting with one another (Woodworth, 2000). This is one of the critical components of small-scale business development in developing countries. In the late 1980s and early 1990s, experts saw the potential for small-scale businesses to be the catalyst of great political, social and economic development in lesser-developed countries around the world. In areas where poverty had been the standard for generations, market-based economies were emerging. As Giamartino noted in 1991, many anticipated that small business and entrepreneurship would be the leading forces of new economic development in historically tightly controlled systems. Even with this potential, however, Giamartino believed the excitement should be tempered. The positive relationship between economic development and entrepreneurship has been supported in America (Birch, 1987), but there have been far fewer studies of a similar relationship between entrepreneurship and development in developing countries. This does not necessarily negate the possibility that the positive relationship exists. Yet, according to Giamartino, "it's a broad assumption that there are similarities in economic development processes across developed and developing economies, (and that these assumptions) would seem to be both ethnocentric and unsubstantiated". The first challenge in generalizing economic development trends from developed countries to developing countries is the significant difference in the economic data available in each. In developed countries, for example, Standard Industry Classifications, employment security data, demographic data, and more are widely available (Giamartino, 1991). 
Although this information is becoming more readily available for developing nations, the quantity of data available for a developing country in Africa is nowhere near that available for the United States, United Kingdom or Japan. A second challenge in applying developed-country economic trends to developing economies "is the lack of consideration of current models of entrepreneurship development" (Giamartino, 1991). The internal and external conditions facing entrepreneurship in developing countries are significantly different from those faced in developed countries. These include: financial resources, characteristics of locations, characteristics of employees, governments, taxes, laws and regulations, as well as free trade policies and critical items such as infrastructure and the existence of enterprise zones. Even in comparing economic trends in one developing country to another, internal and external conditions will vary widely.
On Using Benefit Segmentation for a Service Industry: A Study on College Career Counseling Services
Professor Chaim Ehrman, Loyola University Chicago, Chicago, IL
The need for a marketer of consumer goods to segment the market has been well documented in the literature. Segmentation strategies can be based on demographics such as age, income, religion, location, ethnic background, education, etc. Alternatively, segmentation can be based on psychographics, in which the segmentation strategy focuses on consumer lifestyle characteristics. A third approach is to segment the market based on key benefits sought by consumers. However, the literature documents the application of segmentation strategies to products, which are tangible items. In this paper the focus is on a service, and it will be demonstrated how benefit segmentation can be used in the service industry. Universities in the 21st century face a Buyers' Market. Assume that students are "buying" higher education, and universities are "selling" higher education through their curriculum, faculty, libraries, resources, etc. In the '60s and '70s, many veterans of the armed forces became college students with the help of the GI Bill. There was a shortage of colleges, and students were grateful to be accepted by any college. Hence, it was a Sellers' Market, because the universities could be picky and choosy about whom they would accept as students. Over the next 30 years, however, many new colleges and universities opened, faster than the rate of new students. Now we have a Buyers' Market, because students can pick and choose the university of their choice, and universities are competing fiercely to attract more students. Many students select a university based on the success of its placement service or career center. This is logical, since many students go to college to select a career path for their future. Therefore, colleges and universities are very interested in enhancing, promoting and strengthening their career centers, since these are a key selling point in attracting more students. 
The Medill School of Journalism at Northwestern University had, at one time, one of the best placement records for its students (almost 80%), a factor that clearly attracted many students to the school. Students at Loyola University of Chicago were selected to identify key attributes that can be used to evaluate the performance of career centers. Subsequently, a survey asking respondents to rate their career center was distributed to 500 students, yielding 462 completed surveys. The demographic items were gender, college of study, and major. The key attributes were: 1. Helps me to discover myself and my talents. 2. Serves me in my requests for job search and mock interviews. 3. Has convenient hours, not during class time exclusively. 4. Posts location and hours in a convenient place. 5. Advertises in student papers about hours, new jobs, etc. 6. Files are current and up-to-date. The well-known approach of Martin Fishbein, also known as a Linear Compensatory Model, was used to create an attitudinal score. This is a two-step process. First, respondents were asked to rate their career counseling center's performance on these six attributes using a 1-5 scale: 1 is very poor, 2 is poor, 3 is satisfactory, 4 is good and 5 is excellent. Then each respondent was asked to rate the importance of each attribute. A quick validation of Fishbein's model was run. Data were collected on a variable called Happy: Are you happy with the Career Counseling service at your school? A correlation coefficient was computed between "Happy" and Attitude. The score was .69 with a p-value of .000, a fairly high correlation. The results of the Fishbein model are given in Table 1. The overall mean score was 3.58 with a standard error of .034. This is a disappointing score; one would hope to find a score greater than 4, which is good. 
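As a sketch, the two-step Fishbein scoring described above can be computed as follows. The attribute ratings here are hypothetical, and normalizing by total importance (to keep each score on the 1-5 rating scale) is an assumption on my part; the paper does not state its exact weighting scheme.

```python
# Illustrative sketch of the Fishbein linear compensatory model described above.
# The ratings below are hypothetical; dividing by total importance (to keep the
# score on the 1-5 scale) is an assumption, not taken from the paper.

def fishbein_attitude(performance, importance):
    """Attitude = sum of (performance rating x importance weight) over attributes,
    divided by total importance so the score stays on the 1-5 rating scale."""
    weighted = sum(p * w for p, w in zip(performance, importance))
    return weighted / sum(importance)

# One hypothetical respondent: performance and importance ratings (1-5)
# for the six career-center attributes listed above.
performance = [4, 3, 2, 4, 3, 5]
importance = [5, 5, 3, 2, 2, 4]

score = fishbein_attitude(performance, importance)
```

With per-respondent scores in hand, the overall mean of 3.58 reported above would simply be the average of these scores across the completed surveys.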
Using 2 standard error units above and below the mean score, the overall attitudinal score for the 457 respondents is between 3.546 and 3.614. This attitudinal score is barely acceptable to any university that would like to attract students in a Buyers' Market. In order to appreciate possible differences among students, market segmentation based on demographics was used. Our initial step was to segment the respondents by school (see Table 2). Unfortunately, there seems to be uniformity across schools. Next, segmentation by major was applied; results can be found in Table 3. Again, there was no appreciable difference by major. Finally, segmentation by gender was used. The mean attitudinal scores for males and females are practically identical (see Table 4). One would seem to be in a no-win situation as far as demographic segmentation is concerned. Psychographics were never collected and may be irrelevant, since the entire sample base consists of college students. In the next section, benefit segmentation will be explored.
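The standard-error band quoted above is simple arithmetic, sketched below. Note that with the reported mean of 3.58 and standard error of .034, a band of two standard errors would run from 3.512 to 3.648; the narrower interval quoted in the text (3.546 to 3.614) corresponds to one standard error on each side, so the multiplier actually applied in the paper is unclear and is left as a parameter here.

```python
# Sketch of the standard-error band used above. The mean and SE are the
# values reported in the paper; which multiple of SE was actually applied
# there is unclear, so the function takes the multiplier as a parameter.

def se_band(mean, se, k=2):
    """Return (lower, upper) bounds at mean +/- k standard errors."""
    return (mean - k * se, mean + k * se)

lo2, hi2 = se_band(3.58, 0.034, k=2)  # +/- 2 SE: roughly (3.512, 3.648)
lo1, hi1 = se_band(3.58, 0.034, k=1)  # +/- 1 SE: roughly (3.546, 3.614), the quoted range
```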
Business International: An Analysis of the International Market
Dr. Mehenna Yakhou, Georgia College and State University, Milledgeville, GA
Dr. Vernon P. Dorweiler, Michigan Technological University, Houghton, MI
Businesses have taken steps toward internationalizing their markets and production. This research focuses on two avenues of global extension: entry into national markets, which encounters national policies on imports, requiring local production for local marketing or imposing restrictions on imports. Current terms of international participation are included. The paper concludes with the decision firms face on how far to go toward globalization. The modern business era has introduced two international terms (Quintella, 1997): globalization and internationalization. This approach to business is based on the initiatives of two parties: the business corporation and the entry nation. The first is incentive-based, providing a willingness to undertake venturing the technology. The second focuses on the risks of international marketing (Sullivan, 1991). The corporation will likely need to restructure to meet conditions in the international environment. The rationale shows the impact involved with an international strategy. The risk-reward analysis requires a basic understanding of the international environment; this pertains to the national policies of the countries of entry. A useful concept is offered by Jones (2002). The corporation will need to apply its own means to the entry task: information processing, communication, transportation, and organization. Kedia (2002) provides a brief statement on each approach, with a view toward a rationale for undertaking each. Note that firms apply options allowed by national policy. Kedia (2002) describes four categories: global, international, multinational and transnational (see the Appendix for definitions). Each category indicates how far the firm chooses to engage in the international environment, with transnational as the least. Restructuring in the international arena is not ordinary restructuring, but restructuring needed to cross national borders and to empower corporate governance for local management in the international business. 
The differences involved require a transfer of authority, to encourage productivity, flexibility and responsiveness (Jones, 2002). It should be recognized that authority brings responsibility for resources, competition, and return of value. Details of restructuring are presented below. A decision to "go international" must be made on a factual basis. This is feasible through innovation of technology. To be assured of an operating basis, a comprehensive analysis is essential. Laudicina (2005) presents the three categories needed: factors, location, and usage of the product(s). Kotler (2003) provides a four-dimensional analysis. The order of the Kotler analysis is treated differently in the international approach, as (1) national place (location) and (2) global product (usage). The other two dimensions of Kotler's analysis are (3) price and (4) promotion. The international element is found in those two by adding the Global Consumer. The following description follows the Kotler analysis (see the Appendix). From the world-wide market, the corporation must determine which national location is "optimal" for its implementation efforts. There are current groupings of nations: industrialized, developed, developing, and controlled. These groupings give insight into the lack of opportunity for a company in countries of low development and in controlled markets. Note that both free-market and controlled-market conditions introduce government control. Corporations therefore have limited freedom in markets, with limited opportunity for the required effort. Industrialized nations offer both opportunity and competition. Opportunity includes product innovation to meet individual market needs. Competition is both internal to the nation and from imports from the international market. It is clear that "globalization" is an overstatement. Few, if any, corporations are truly in a global market; an exception is the recent e-mail market. Generally no market dominance is found on a global basis (Agmon, 2003). 
A first question on product is whether one form fits all consumers. The answer is two-fold: an average consumer and a special consumer. The average consumer will accept the product as-is. Note that product proliferation is the norm in free-market countries, so there is no "global" product sufficient to meet all needs. Where a national customer demands specific characteristics, the global product is inappropriate for that customer's use (Laudicina, 2005). There are situations where national infrastructure requires unique products. Examples are electrical products, and those that reflect social policy, including food, drugs, and medical-related products. To describe a single consumer as representative of a global consumer is improbable. The differences key to consumption are many. A requirement here is that those differences be accessible to business. Consumers are attracted by different marketing approaches. That difference includes nil, as certain cultures do not accept promotion beyond one-on-one. Three characteristics of a consumer show differences: age, lifestyle, and psychological makeup. Age introduces a wide spectrum of needs: children, for clothing, durables, housing; the aged, for health services, home furnishings, vacations. Lifestyle includes living location (urban, rural), culture (education, government), and income (multiple levels).
A Voice Crying in the Wilderness for Auditor Independence: Abe Briloff and Section 201 of the Sarbanes-Oxley Act of 2002
Dr. Deborah Prentice, University of Massachusetts, Dartmouth, North Dartmouth, MA
Abraham Briloff, an accounting academician and practitioner for over 60 years, is a major historical voice for the tenets of Section 201 of Title II, entitled Auditor Independence, of the Sarbanes-Oxley Act of 2002 (SOX). Section 201 is entitled Services Outside the Scope of Practice of Auditors and makes certain practices unlawful for a public accounting firm when conducting an audit. SOX came into effect in the wake of the accounting scandals of 2002. In his 1965 doctoral dissertation, The Effectiveness of Accounting Communication, and in numerous published writings, Mr. Briloff had indicated his opposition to these practices. Finally, in 2002, these practices were outlawed. This article examines Mr. Briloff's longstanding claims and suggestions and points out how they were finally resolved by Section 201. From these it is apparent that Mr. Briloff foresaw the need for the content of Section 201 of SOX. In the spring and summer of 2002, a wave of accounting scandals erupted in the United States. These were linked to the business failures of Enron, WorldCom, Global Crossing, and other large firms. High-ranking officials of these firms and a number of other leading companies admitted to intentionally misstating their accounts. Their offenses were often aided rather than hindered by the public accounting firms that audited their financial statements. The scandals brought billion-dollar financial restatements for many of the firms involved, and unleashed a series of accounting humiliations, sharp stock price corrections, and stories of corporate malfeasance in firms with previously stellar reputations. Many investors were employees who held the bulk of their life savings in company stock that became worthless, Enron being a notable example. The scandals affected major public accounting firms as well. 
As a result of the Enron scandal, the Big Five accounting firm Arthur Andersen suffered a criminal indictment and conviction, leading to the firm's swift dissolution. As a result of these occurrences, agencies in all three branches of the US government became involved. On July 9, 2002, President George W. Bush gave a speech that centered on the year's accounting scandals. Earlier, Congress and the SEC had launched a series of investigations of the corporations and accounting firms involved; many of these led to criminal charges and convictions. One of the major results of these scandals and the ensuing governmental involvement has been major new legislation, supported by the President. The most far-reaching legislation resulting from the scandals is formally entitled An Act to Protect Investors by Improving the Accuracy and Reliability of Corporate Disclosures Made Pursuant to the Securities Laws, and for Other Purposes. Its short title is the Sarbanes-Oxley Act of 2002; we will refer to this piece of legislation simply as SOX. This article focuses on one particular segment of SOX, Section 201 of Title II of the Act. Title II is entitled Auditor Independence; Section 201 is Services Outside The Scope Of Practice Of Auditors. Thus Section 201 covers a subset of the auditor independence topic. One longstanding voice for auditor independence is that of Abraham J. Briloff. The primary purpose of this article is to note the importance of auditor independence, to show a relevant excerpt from Section 201, and to demonstrate how Mr. Briloff's vision for auditor independence and his decades of "crying in the wilderness" for it are echoed in the content of Section 201. A secondary purpose is to discuss Mr. Briloff's career and articles. The purpose of this section is to explain the importance of auditor independence and of services outside the scope of practice of auditors, the focal point of this article. 
It is generally acknowledged that the recent accounting scandals would have been prevented or greatly lessened if public accounting firms had conducted their audits in an independent manner. These firms failed to heed the admonitions for auditor independence from within their own industry. For stockholders and the public to understand accurately how their businesses are performing, it is essential that managers of publicly-held firms present fairly the financial status of those businesses. Managers are expected by law to publicly report their financial results according to generally accepted accounting principles. Without this accurate reporting, the presentations of financial status and creditworthiness that publicly-held firms make, for example, to the stock and bond markets, to tax collectors, and in loan applications would lack believability. The reality is that, prior to SOX, managers had greater short-term incentives to report their financial results in ways that benefited themselves rather than the stockholders and public. To combat this misreporting of financial results, auditors serve as monitors for the stockholders and the public. Their purpose is to diligently check the firm's financial statements and issue a public report stating whether the statements appear to conform to generally accepted accounting principles. Without auditors, managers are left to be guided by their short-term incentives. Two problems arise with this system. The first is that, in the US, auditors are paid by the firm's managers rather than by the stockholders. The second is that managers may attempt to "buy off" their auditors by giving them short-term incentives as well, such as having auditors perform work unrelated to their audit. In many pre-SOX cases this brought about collusion with management and ruined auditor independence. This is why auditor independence is important. 
Auditors performing services outside their scope of practice have a greater tendency to let the stockholders and public down, weakening our financial system and society. Mr. Briloff has had a lengthy career as a distinguished accounting professor, practitioner, and consultant, devoting his entire career to the advocacy and performance of accounting. He is a Professor Emeritus at Baruch College of the City University of New York and an accounting practitioner in New York City. Mr. Briloff has operated his accounting "boutique", as he calls it (Briloff, 2004), since 1938, and still operates the firm with his daughter, Leonora. He received his B.B.A. in 1937 and an M.S. Ed. in 1941 from Baruch. Mr. Briloff's CPA certificate came in 1942, and he joined the Baruch faculty in 1944. His Ph.D. was earned in 1965 from New York University. Former Supreme Court Justice and SEC Commissioner William O. Douglas wrote the foreword to the published volume of Mr. Briloff's doctoral dissertation, The Effectiveness of Accounting Communication. Auditor independence was the centerpiece of this work and was important to future books and articles. Mr. Briloff's other books are Unaccountable Accounting: Games Accountants Play (1972), More Debits Than Credits: The Burnt Investor's Guide to Financial Statements (1976), and The Truth About Corporate Accounting (1981). For approximately 30 years Mr. Briloff wrote a column for Barron's in which he frequently discussed irregular accounting practices of companies and analyzed breaches of ethics and audit professionalism among auditing firms.
The Effects of Nonmonetary Sales Promotions on Consumer Preferences: The Contingent Role of Product Category
Dr. Shu-ling Liao, Yuan Ze University, Taiwan
Sales promotion, as a major marketing communication tool, has attracted extensive research attention, most of which has addressed price promotion and its utilitarian driving force. Nonprice promotion, which provides mixed benefits that might avoid losses in brand image and profit, has been insufficiently investigated. The present study explores the preferential effects of nonmonetary consumer promotions as moderated by product category. Results show that both product-related and reward-timing nonprice promotions play a major part in influencing consumer preferences for sales promotions. The preference for same-product sales promotion is stronger than for other-product sales promotion, and instant-reward sales promotion is preferred over delayed-reward sales promotion. The contingent role of product category is found in product-related consumer promotions. Offering the same product as the promotional benefit generates the strongest preference when the promoted product is in the convenience goods category; offering a product other than the one on promotion as the buying incentive was found to generate the strongest preference for shopping goods. Rewarding consumers with the same product rather than something different from the promoted one is a more effective match for convenience products and specialty goods, whereas providing another product as the reward is more suitable for shopping goods. Sales promotion, as a pivotal component of the marketing mix, has been heavily used as a major incentive tool to pull consumers to stores and increase short-run sales volumes. Since the 1980s, researchers have constantly proposed a variety of concepts to illustrate how sales promotion might affect consumer purchase behavior: by overcoming "consumer entropy" (Beem & Shaffer 1981), inviting consumers to engage in transactions (Kotler 1988), heightening the psychological value associated with the transactions (Thaler 1983), or providing consumers with a script of purchase behavior (Gardner & Strang 1984). 
Whatever the effects of sales promotion may be, all the convictions alleged by the preceding studies indicate that sales promotion may activate or facilitate certain consumer psychological mechanisms, based on the notion that sales promotion "affects consumer by acting on basic mental processes common to all decisions" (Schindler & Rothaus 1985). Although sales promotion has become a ubiquitous element of consumer marketing, the large portion of ineffective promotional activities indicates a great need to refine and redirect the focus of the impact sources. Numerous studies have focused on consumer attitudinal and behavioral responses to price promotion and its utilitarian benefits (see Dobson et al. 1978, Gupta 1988). The most-studied sales promotion effects span from how price promotion generally alters perceived value and brand choice (e.g., Alvarez & Casielles 2005, Darke & Chung 2005, Dawes 2004, Raghubir & Corfman 1999, Tan & Chua 2004, Wathieu, Muthukrishnan, & Bronnenberg 2004) and shifts consumption and category demand (e.g., Bell, Iyer, & Padmanabhan 2002, Njis, Dekimpe, Steenkamp, & Hanssens 2001, Pauwels, Hanssens, & Siddarth 2002) to specific scrutiny of the effects of particular types of promotional tools, such as bonus pack promotions (e.g., Gurreiro, Santos, SilveiraGisbrecht, & Ong 2004, Hardesty & Bearden 2003, Smith & Sinha 2000) and coupon promotions (e.g., Garretson & Burton 2003, Garretson & Clow 1999, Guimond, Kim, & Laroche 2001, Krishna & Zhang 1999, Kumar & Swaminathan 2005, Laroche, Kalamas, & Huang 2005, Laroche, Pons, Zgolli, Cervellon, & Kim 2003, Raghubir 2004b, Yin & Dubinsky 2004). In contrast with this ample literature on price promotions, nonprice promotions remain little known. Chandon, Wansink, & Laurent (2000) indicated that the study of sales promotion should be expanded to a more comprehensive extent and strongly suggested the need to differentiate monetary and nonmonetary promotion and their respective effects. 
As pointed out by Palazon-Vidal & Delgado-Ballester (2005), since most past sales promotion research has focused on monetary promotion and its sales impact, the differential role of nonmonetary promotions in assisting long-term brand-related effects has unfortunately been ignored. Taking into account the prevalence and potential brand effects of nonmonetary promotions in sales promotion practice, the present study proposes that different types of nonmonetary promotions may vary in their capability to create appropriate incentives and preference for consumers. Consumers' purchases of different types of products also provide various reference points for evaluating the additional value derived from nonmonetary promotions. Considering the type of nonmonetary promotion and product category together, the purpose of this study is to focus on the effects of two types of nonmonetary promotions, product-related and reward-timing related, and the moderating role of product category. Sales promotion includes all forms of marketing communication activity apart from those associated with advertising, personal selling, and public relations. The types of sales promotion aimed at final consumers mainly consist of retail promotion and consumer promotion. Consumer promotions are the forms in which manufacturers offer promotional deals directly to consumers: couponing, sampling, price packs, value packs, refunds, sweepstakes, contests, premiums, tie-ins, etc. In retail promotions, retailers also provide direct incentives to shoppers, such as price cuts, displays, feature advertising, free gifts, store coupons, contests and premiums. 
In addition to the classification based on the sponsor of the sales promotion, researchers have developed several typologies relating to the timing of rewards (instant or delayed reward; see Aaker 1973, Quelch 1989, Shimp 1997), the goal of the sales promotion (consumer franchise building or non-CFB, see Prentice 1977; trial induction, consumer attraction, and image building, see Shimp 1997), and the nature of the incentive (product-related or price-related, see Beem & Shaffer 1981; economic or psychological incentive, see Dommernuth 1989; price reduction or value addition, see Quelch 1989; monetary or nonmonetary, see Campbell & Diamond 1990). Recently, Chandon et al. (2000) proposed a benefit matrix of sales promotion, mapping promotional tools along the dimensions of utilitarian and hedonic benefits.
An Analysis of Factors Which Influence Small Businesses’ Decision to Have a Website and to Conduct Online Selling
Dr. Sumaria Mohan-Neill, Roosevelt University, Chicago, IL
Using data from a national sample of 752 U.S. small business firms, this paper addresses four general research objectives concerning firms' decisions to have a website and to conduct online selling. The first objective is to present an overview of the frequency distribution of websites among small firms, and the frequency of online selling by these firms. The second objective is to explore why some firms with a website do not use it to sell goods and services over the Internet. The third objective is to explore the reasons why firms without a website do not currently have one. Finally, the fourth objective is to address the expectations of firms without a website: what are their expectations concerning having a website for the business in the near future? The Internet bubble has burst, and much of the hype is over, but we are nonetheless faced with radical technological changes which have revolutionized the competitive marketplace, and even small businesses cannot ignore the opportunities and threats which come with the Internet. This paper presents a significant part of the picture of current technology and website use by small businesses. It utilizes data from a national sample of 752 U.S. small business firms. Previous research studies on this dataset have described the overall usage of the Internet by small firms (Mohan-Neill 2004a), and the computer and Internet usage of small businesses and its correlation with the owner's gender and education (Mohan-Neill 2004b). Differences in Internet usage by industry sector have been analyzed and reported (Mohan-Neill 2004c), and an initial analysis of the interaction between owner's gender, industry sector and Internet usage has also been reported (Mohan-Neill 2004d). A more detailed analysis of the correlation between online environmental scanning activity and firm size, industry sector and firm sales growth is currently under review for journal publication (Mohan-Neill 2004e). 
The current paper focuses on the firm's decision to have a website, and its use of the website for selling goods and services over the Internet. McCollum (1998) argued that doing business on the Internet has become a competitive necessity for many small businesses. In virtually all industries, large corporations and government agencies are telling suppliers to trade with them online or risk losing their business (McCollum 1998). Research shows that small firms use the Internet for a variety of online business activities (Mohan-Neill 2004a, Mohan-Neill 2004e). Mehling (1998) reported that small businesses were the slowest sector to embrace e-commerce. The purpose of this study is to continue building the big picture of technology and website usage by small businesses, by adding another significant building block focusing on the small firm's decision to have or not have a website. In 2000, Straub reported that a new website was launched every 7.5 seconds. The Internet has produced the double-edged sword of more customer access and more competition. Straub (2000) suggested that by creating a critical sales and procurement channel that is simple to use, an e-marketplace invites small business owners into the game. Furthermore, an e-marketplace saves the time spent researching, locating and negotiating with multiple vendors that can meet a firm's needs. Portnoy (2000) reported survey results indicating that 17% of small businesses were selling products and services over the Web, and the survey predicted that online selling would double in the next year. This prediction was made while the Internet bubble was still inflated. After the bubble burst, Chabrow (2001) argued that as the economy slows and the Internet hype cools, spending on online ventures faces greater scrutiny. Companies with greater success were using a cautious approach guided by measurable financial goals. During the last few years eBay has facilitated the growth of many small online stores and vendors. 
However, Cohen (2002) recently reported on dissatisfaction among many small eBay vendors, outlining the problems these small businesses have with eBay's restrictive policies and escalating fee structure. In a survey of 500 small business owners, most believed selling on the Web would be important to their future, and 50% said cost was the biggest barrier (Mehling 1998). The current study gives an overview of a national sample of 752 small businesses. This paper addresses issues concerning firms' decisions to have a website and to conduct online selling. It presents an overview of the frequency distribution of websites among small firms, and the frequency of online selling by these firms. It also explores why some firms with a website do not use it to sell goods and services, and why firms without websites do not currently have a site. Finally, it explores the expectations of firms without a website concerning the future development of one. One cannot overstate the importance of this area of research. The advantages of a fairly substantial national random sample of small firms can only enhance the quality and significance of the findings. Researchers who have struggled with small convenience samples over the years will have a keen appreciation for this characteristic of the study. A descriptive survey design was employed, and data were collected from a stratified random sample of U.S. small businesses. The sample size was 752 companies. A national stratified random sample of small businesses was drawn from Dun and Bradstreet files in 2001. The data were collected by the executive interviewing group at the Gallup Company, on behalf of the National Federation of Independent Business (NFIB), and the study was funded by a number of corporate benefactors. Since over 60% of employers have between 1-4 employees, a simple random sample would not yield a large enough representation of "larger" small employers. 
Small business is defined as any firm with 1-249 employees. The total sample of 752 firms is divided into three categories based on firm size. The smallest firms, with 1-9 employees, represent 352 (47%) of the overall sample. The intermediate-size firms (those with 10-19 employees) comprise 200 (27%) of the overall sample. The largest of the small businesses in the sample (those with 20-249 employees) also comprise 200, or 27 percent, of the overall sample (Figure 1).
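The sample composition reported above can be checked with a short arithmetic sketch (an illustration only; the stratum labels are paraphrased from the text and the percentages are rounded as in the study):

```python
# Stratified sample of 752 U.S. small businesses (Dun & Bradstreet files, 2001),
# broken into the three size categories reported in the study.
strata = {
    "1-9 employees": 352,
    "10-19 employees": 200,
    "20-249 employees": 200,
}

total = sum(strata.values())  # 752 firms overall

for label, count in strata.items():
    share = 100 * count / total
    print(f"{label}: {count} firms ({share:.0f}% of sample)")
```

Note that the rounded shares (47%, 27%, 27%) sum to 101% purely because of rounding.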
Maximize Audit Fees and Minimize Audit Risk: “A Recipe for Auditing Success or Failure?”
Dr. Kathie Cooper, University of Wollongong, Australia
Dr. Hemant Deo, University of Wollongong, Australia
The audit profession is always mindful of the costs of the auditing process it undertakes. When an audit firm tenders for an audit, it does so on a very competitive basis; if the firm wins the tender, the resulting consulting services are where it makes its revenue. The way to achieve this is to do a risk-based audit, which is time-based. The audit firm therefore, in most cases, tries to take on as much consulting work as it possibly can, the aim being to maximize audit fees and minimize audit costs: “the road to audit success”. The aim of this paper is to highlight some of the pitfalls of such an approach, using the recent collapse of Heath International Holdings (HIH) in Australia as a case study. The Royal Commission report into the collapse of the Australian insurer HIH demonstrates how a previously respected accounting firm, Arthur Andersen, breached its own audit manual requirements in order to keep a client. Evidence to the Royal Commission further demonstrates the folly of the decisions made by Arthur Andersen executives to retain not only a difficult client, but a maximum-risk client, at the expense of the firm’s reputation and, ultimately, its very existence. Meanwhile, the demise of Arthur Andersen as one of the world’s Big Five accounting firms in the wake of the unexpected collapses of not only HIH in Australia but also Enron and WorldCom in the USA is little compensation to those left financially worse off as a result of reliance on the viability of these companies as reflected in audited financial reports. The remaining Big Four firms have little to be happy about either, as the corporate scandals have not only impacted adversely on the reputation of the accounting profession but have potentially diminished its autonomy.
Audit risk is an inherent part of any audit process, and during the initial audit engagement the auditor has to ensure that all foreseeable risks of the client are taken into account so that there is accountability throughout the whole audit process. Lucci (2003), citing Cotton (2002), maintains that notable financial scandals in the USA, including Enron and WorldCom, cost shareholders US$460 billion. This figure almost makes the approximately AUD$5.3 billion of debts left by HIH in 2001 pale into insignificance. However, the monetary value of the losses is not the issue; rather, it is the breach of public trust that warrants consideration. As HIH Royal Commissioner, Justice Owen observed (21.1), auditors play a vital role in the financial reporting process. This is particularly so in relation to companies such as HIH, the true financial position and performance of which is a matter of national economic significance. A properly conducted audit should enable users of the company’s financial report (including regulators, shareholders, policyholders, lenders and other creditors) to rely upon the accounts with a degree of confidence. The audit process should be designed to provide the company and users of its accounts with early notice of potential risks affecting the company’s short- or long-term viability. While Justice Owen did not undertake the review of Arthur Andersen’s audit of HIH with a preconceived notion that it was deficient, the evidence to the Commission led to such a conclusion (21.6.3): “users of financial statements have varying expectations about the audit certificate. In my view Arthur Andersen’s approach to the audit in 1999 and 2000 was insufficiently rigorous to engender confidence in users as to the reliability of HIH’s financial statements. This detracted from the users’ ability to properly appreciate HIH’s true financial position.”
The role of the audit alluded to by Justice Owen was not imposed on the profession but was fashioned by it as part of its professionalisation strategy. Professional organization is a recognised strategy directed toward effecting closure of a particular occupation or area of knowledge. Closure, in turn, facilitates the achievement of hegemonic domination and possibly a monopoly over that occupation or area of knowledge. Accounting, as an organized profession, has translated its major resource, a claim to superior knowledge and skill, into hegemonic domination of not only accounting practice but also the determination of appropriate accounting methods and principles and, most importantly in the present context, a monopoly over the statutory audit function. Furthermore, according to Lee (1990, p. 138), the skill and knowledge of the professional are crisis-relevant in that they are intended to create in the mind of the client the impression that professional skills are a means of avoiding disaster. Carr-Saunders and Wilson (1933, p. 497), Johnson (1972, pp. 13-14) and Larson (1977, p. 58) adopt similar views. All three see claims to professional altruism as engendering, in the eyes of the community, the notion that professions are a stabilizing influence in society. However, unexpected corporate failures and concomitant losses such as those identified above engender the view that the audit function facilitates disaster rather than averting it. As a consequence, a pall spreads over the espoused munificence of the accounting profession. As Lucci (2003, p. 213) observes, “[a] survey of leading accounting firms in the United States demonstrates that only about half of the accountants polled believe their profession can fully recover from the disgrace brought on by the Enron and WorldCom scandals.” A further consequence of corporate failures, and of the implication of auditing in the unexpectedness of those failures, has been legislation curtailing the autonomy of the profession.
Regulatory responses to corporate scandals have traditionally been attempts to impose more stringent regulations on corporations. In the wake of Enron, HIH and other corporate debacles, the limelight has focused on those likely to be most intimately involved in mismanagement and its concealment. The US Sarbanes-Oxley Act is arguably the most far-reaching response and the most detrimental to the public prestige and status of the accounting profession. Specifically, section 101 of the Sarbanes-Oxley Act creates the Public Company Accounting Oversight Board to oversee the audit of public companies that are subject to the securities laws, and related matters, in order to protect the interests of investors and further the public interest in the preparation of informative, accurate, and independent audit reports for companies whose securities are sold to, and held by and for, public investors, and to establish or adopt, or both, by rule, auditing, quality control, ethics, independence, and other standards relating to the preparation of audit reports for issuers.
A Study on Relations between Industrial Transformation and Performance of Taiwan’s Small and Medium Enterprises
David W-S. Tai and C-E. Huang, National Changhua University of Education, Taiwan
With the liberalization and internationalization of the global economy, Taiwan’s labor-intensive traditional industries are progressively losing their competitive advantages. This research therefore studied how Taiwan’s SMEs establish competitive advantages and better performance through transformation amid environmental change. The research focused on Taiwan’s SMEs and used empirical evidence to define the relationships among industrial competitive environment, industrial transformation strategy and organizational performance. With data from 184 SMEs analyzed using LISREL, this study found that: 1. The industrial competitive environment clearly affects the industrial transformation strategy; the greater the change in the industrial environment, the greater the degree of transformation. 2. The industrial competitive environment must act through the industrial transformation strategy to affect performance. SMEs have played an important role in the process of Taiwan’s economic development, especially in Taiwan’s export business and the growth in national income. Since the 1980s, the managing environment and industrial structure have gone through a great revolution, and SMEs face impacts such as insufficient labor, rising prime costs, fast changes in technology, advances in environmental awareness, and competition from other developing countries. The state of Taiwan’s political and economic situation likewise affects the management of SMEs. Also, now that Taiwan has officially entered the World Trade Organization, businesses are facing the impacts of internationalization. How SMEs react, go through strategic transformation and quickly adjust their resources, whether to resolve a crisis or to open a new market opportunity, are rather important lessons for them. As a result, this research studies SMEs to determine their business status.
Facing management problems and seemingly unbreakable bottlenecks, SMEs must take action on industrial transformation and upgrading. As a matter of fact, before an industry decides to proceed with transformation, its character has usually already started to change and the industrial structure no longer stays the same; such a transitional process and its unusual phenomena are called an “inflection point” (Grove, 1996). Therefore, building on the theory of strategic inflection points, this research set out to discuss the most common patterns by which Taiwan’s SMEs react with strategic transformation when facing changes in the industrial competitive environment and under the effect of internal characteristics. Accordingly, the purpose of this study is to discuss which elements of the external environment and internal management difficulties affect industrial transformation, and to construct a complete matching pattern of SME transformation from the interactions among industrial managing difficulty, strategic transformation, and organizational performance. According to Taiwan’s Ministry of Economy, the definition of SMEs is divided into two groups: manufacturing and service industries. Manufacturing industry: this includes manufacturers, construction industries and mining industries, with company capital under NT$80 million and fewer than 200 employees. Service industry: this includes farming, forestry, fishing, livestock farming, water, electricity, fuel, gas, commerce and service industries, with capital under NT$100 million and fewer than 50 employees. From the viewpoints of Huang (1995) and Wu (2000), SMEs face two industrial management problems. First, manufacturing and marketing management: systematic planning and organization of the use of human resources in a business affect performance and morale.
Besides, due to limitations in employees and funding, SMEs tend to concentrate on manufacturing activities and are unable to capture the profitable added value of marketing. Second, financial structure: most SMEs are not familiar with financial systems and planning, which affects judgment and investment decisions; lacking funds, they rely on loans that tie up too much capital and are suitable only for short-term emergency use. Hence, in order to spread risk, related ventures sprang up one after another, making it impossible to pool funds, which became an obstacle to the growth of the industry. Porter (1980) proposed that the five forces of industrial competition provide a suitable structure for understanding an industry and creating competitive strategic analysis; these five forces include the threat of new entrants, threats from substitutes, the bargaining power of buyers, the bargaining power of suppliers, and rivalry among existing competitors. Grove (1996) also considered that when the factors affecting industrial competition undergo a tenfold change, the industry is facing a strategic inflection point; the power of the factors affecting the industry, the power of technology, the power of customers, the power of suppliers, the power of allied organizations and the power of business rules are the six main forces affecting the industry. Therefore, this research combines Porter’s and Grove’s concepts to discuss the present industrial competition facing Taiwan’s SMEs. Burgelman and Grove (1996) defined an inflection point as a transition in the state of industrial dynamics: a winning strategy being transformed, or present technology and techniques being replaced by new ones, changes that clearly affect the growth of profit in an industry.
A strategic inflection point indicates old structures, business methods and competitive methods being replaced by new ones, with various kinds of power going through a huge transition (Grove, 1996). Chou (1997) considers “industrial transformation” to be a large-scale, revolutionary, fundamental, comprehensive change inspired by changes in the environment and in competition: a behavioral phenomenon of trying to adjust or change the industry’s present business structure, a breakthrough out of stasis that re-establishes the industry’s vitality. Chen (1995) classified transformation strategy into “comprehensive transforming strategy: transforming into a new type of business”, “partial transforming strategy: keep part of the previous business, but mostly transform into new types of business” and “multi-angle business management: the previous business remains as it was, but new business is added to the system”. Therefore, this research combines the work of Chen (1995) and Wu (2000) and classifies industrial transformation strategy into three main strategies: overseas investment strategy, domestic partial transforming strategy and domestic multi-angle strategy. Overseas investment strategy is defined as the industry relocating its production line, or even the whole company, to another country in consideration of markets, materials and labor; partial transforming strategy means keeping part of the original structure while partially transforming into new business; and multi-angle transforming strategy means maintaining the old manufacturing business while adding new elements or products to enhance the business or lower the risk.
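The Ministry of Economy size thresholds quoted earlier (capital under NT$80 million and fewer than 200 employees for manufacturing; capital under NT$100 million and fewer than 50 employees for services) amount to a simple classification rule, sketched below. This is only an illustration of the definition as stated in the text; the function name and interface are hypothetical.

```python
def is_sme(sector: str, capital_ntd: int, employees: int) -> bool:
    """Classify a firm under the SME definition quoted in the text
    (Taiwan Ministry of Economy).

    sector: "manufacturing" (incl. construction and mining) or
            "service" (incl. farming, forestry, fishing, utilities, commerce).
    capital_ntd: company capital in NT dollars.
    employees: number of employees.
    """
    if sector == "manufacturing":
        # Capital under NT$80 million and fewer than 200 employees.
        return capital_ntd < 80_000_000 and employees < 200
    if sector == "service":
        # Capital under NT$100 million and fewer than 50 employees.
        return capital_ntd < 100_000_000 and employees < 50
    raise ValueError(f"unknown sector: {sector!r}")

# A 150-person manufacturer with NT$60 million capital qualifies;
# the same firm in the service category would not (employee cap of 50).
print(is_sme("manufacturing", 60_000_000, 150))  # True
print(is_sme("service", 60_000_000, 150))        # False
```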
The Impact of Message Framing and Involvement on Advertising Effectiveness - The Topic of Oral Hygiene as an Example
Chia-Ching Tsai, I-Shou University, Taiwan
Ming-Hung Tsai, E. C. K. Hospital, Taiwan
In this study, we used an oral hygiene advertisement to investigate the impact of message framing and involvement on advertising effectiveness. We found that negatively framed messages were more effective than positively framed messages under high-involvement conditions, but the reverse was obtained under low-involvement conditions. In a society with advanced communications technology, advertising has become one of the main tools industries use to communicate with consumers; how to frame messages so as to raise the effectiveness of communication is thus an important subject to explore. Message framing can be categorized as: (1) positive messages emphasizing the advantages or benefits the product can bring to consumers; and (2) negative messages emphasizing the losses or disadvantages consumers may suffer for not using the product (Meyerowitz & Chaiken, 1987; Maheswaran & Meyers-Levy, 1990). However, some scholars think that messages of identical content, if framed differently, will have different impacts on consumers (Rothman, 1993). Meyerowitz and Chaiken (1990) discovered in their research that negatively framed messages produce the best advertising effect on high-involvement consumers; conversely, positively framed messages have a better advertising effect on consumers of low involvement. Furthermore, Smith (1996) pointed out that most of the topics in studies on advertising message framing are health related, such as breast cancer, testicular cancer, and skin cancer. When decision makers face this type of topic, they all want to avoid the patient’s losses, which means this type of message quickly brings the unpleasant aspects to decision makers’ minds. On the topic of oral hygiene, however, the losses patients suffer are not as high as those of sufferers of the aforementioned diseases, and, for most of the people in the test, oral hygiene is for maintaining health.
Its purpose is different from breast cancer prevention, whose main purpose is disease detection. Therefore, this study aims to explore the impact of different types of message framing (positive/negative) and involvement (high/low) on advertising effectiveness regarding the topic of oral hygiene. Message framing means using a positive or negative way to communicate an advertising message. According to past studies, there are two types of message framing. The first is the negative or positive way the message itself is communicated: positive messages emphasize consumers’ benefits from using a product, while negative messages emphasize consumers’ losses for not choosing that product (Meyerowitz & Chaiken, 1987; Maheswaran & Meyers-Levy, 1990). The second is the negative or positive valence of the message content itself: it uses the positive aspect of a product or idea (for example, beef containing 75% lean meat) to communicate positive messages, or vice versa (for example, beef containing 25% fat) (Levin & Gaeth, 1988). The framing used by Levin and Gaeth was specially designed for an experiment; normally, very few products would actually disclose their negative aspects to consumers in practice. Therefore, this study uses the first type of message framing. Past studies suggest that positive message framing produces the best advertising effect. In Levin and Gaeth’s study (1988), results show that positive messages are better rated by participants. Eagly and Chaiken (1993) also believe positive message framing is more persuasive. However, some studies have pointed out that although positive message framing gets the expected reaction more easily, in actual practice the outcome may still differ depending on the situation (Gaeth et al., 1990; Levin & Gaeth, 1988). Rothman (1993) believes that messages of identical content will produce different impacts on consumers if the messages are framed differently.
For example, in health-related topics, negatively framed messages are more persuasive in dealing with early detection or high-risk behavior issues (e.g., promoting Pap smears) (Meyerowitz & Chaiken, 1987). Conversely, positively framed messages are better in dealing with health maintenance or low-risk behavior issues (e.g., promoting the use of condoms for AIDS prevention). Hence, message framing does affect consumers’ buying behavior, and the advertising effect of negative or positive framing will vary with the scenario. Involvement is the level of interest in, or the status within the self-structure of, an object that an individual holds in mind. Zaichkowsky (1985) defines involvement as the level of association a person has toward a particular thing based on the person’s basic needs, values, and interests; the level is affected by personal, physical, and situational factors. According to the behavior a person exhibits in dealing with the object of involvement, Zaichkowsky (1985) divided involvement into involvement with products, with advertisements, and with purchase situations: 1. Involvement with products: the priority or subjective opinion a consumer has toward a product. 2. Involvement with advertisements: the reaction or message processing the consumer has for an advertisement, that is, the level of concern or the psychological condition the consumer has toward it; usually, the higher the involvement with an advertisement, the more attention the consumer pays to the advertising message. 3. Involvement with purchase situations: the level of concern the consumer has in a particular purchase. This is usually linked to involvement with products, but the two are not equivalent; the higher a consumer’s involvement with the purchase situation, the more product-related information he will gather in the purchasing process, and the more time he will spend deliberating.
Therefore, this study explores the topic of consumers and oral hygiene to find out how different levels of involvement affect the advertising impact of message framing. According to Zaichkowsky (1985), involvement with products is usually linked to involvement with purchase situations; therefore, this paper uses involvement with advertisements to manipulate consumers’ involvement. Moreover, Petty, Cacioppo, and Schumann (1983) discovered that high-involvement consumers adopt a central route to process messages: their attitudes are formed through pondering related information and deduction, a deliberate, rational processing procedure. Low-involvement consumers adopt the peripheral route: attitude is formed through cues in the communication context (e.g., background music, the advertising spokesperson) and simple inference, with no further consideration given to the content of the message; it is an intuitive rather than a deliberative processing procedure.
ISO 9000 and Financial Performance in the Electronics Industry
Dr. Philip W. Morris, CPA, CFE, Sam Houston State University, Huntsville, TX
Over the last two decades, businesses have increasingly shown an interest in quality and quality-related issues. The conventional wisdom states that quality is an important component of survival for today’s companies. The quality movement has seen the development of an international quality standard known as ISO 9000. Although developed in the late 1980s, ISO 9000 became increasingly popular, in the U.S. and internationally, during the 1990s. From an academic standpoint, the link between quality and financial performance is still a poorly researched area (Wruck and Jensen, 1994; Easton and Jarrell, 1999). Likewise, the link between ISO 9000 and financial performance is still undetermined. This study was designed to help better explain the link between quality, as indicated by ISO 9000 certification, and financial performance. Specifically, this study examined the financial performance of U.S. firms in the electronics industry. ISO 9000 certification served as an indication of quality management practice, and financial data from Compustat were used to determine financial performance. Firms that became ISO 9000 certified (quality firms) were anticipated to have financial performance superior to that of firms not certified to ISO 9000 standards. The results failed to support the hypotheses. Quality-related issues continue to be of concern to both management and managerial accountants. However, the literature has suffered from some major problems. In much of it, the anecdotal benefits of quality are listed as unquestionable fact (Garvin, 1988). Consequently, most of what is written about quality is grounded in coherent argument rather than in empiricism. Much of the literature about quality is by people who are either quality consultants or who have some type of connection to quality-related businesses (Cole, 1998).
Research in the area of quality management practices suffers from multiple definitions (Wruck and Jensen, 1994; Matta et al., 1998; Easton and Jarrell, 1999), varying degrees of implementation (Reed et al., 1996), and a lack of specific implementation dates (Easton and Jarrell, 1998). For instance, what is total quality management (TQM)? What are the factors that make up TQM? How does one decide whether an organization is following a TQM philosophy? Should the researcher take management’s word on it? Does the researcher decide, and if so, how? Furthermore, when did the company incorporate TQM practices into its management practices? Can a researcher look at a company and say it successfully implemented a TQM program on July 16, 1998? The development of ISO 9000, however, allows many of these difficulties to be overcome. Why should ISO 9000 certification translate into quality? ISO 9000 is a quality standard; it provides guidelines that are generic and can be applied to any type of organization. ISO 9000 relies heavily on documentation: in order to become certified, a firm must document its processes and follow that documentation (Stamatis, 1995). Critics argue that this reliance on documentation is a shortcoming of ISO 9000. Proponents argue that the documentation process leads to better processes through greater communication throughout the organization (Joubert, 1998). As communication increases and processes are documented, inefficiencies are highlighted and brought to the attention of management. The result should be elimination of the inefficiencies, a reduction in costs, and an increase in quality (Fine, 1986; Deming, 1982; Gilmore, 1990). These quality improvements should not be a one-time event, since the certification process is ongoing: even after a firm is certified, it must go through surveillance audits once or twice a year (Stamatis, 1995).
These surveillance audits are less thorough than the original audit, but they are conducted by an independent third party, and, as in any other type of audit, the detection of one deviation sends the auditors looking for more. In this manner, the firm has an incentive to continue to comply with the ISO 9000 guidelines, because the independent third-party auditor has the power to revoke the company’s certification. In addition to the surveillance audits, each firm must undergo a complete re-certification audit every three or four years (Taormina, 1995; Peach, 1997; Johnson, 1998; Patterson, 1995). From a research standpoint, ISO 9000 offers some major definitional and timing advantages. A firm is “certified” by an independent third party; therefore, the type and degree of quality practice implementation is known. Furthermore, the type of certification (i.e., ISO 9001, ISO 9002, or ISO 9003) and the exact date the certification was awarded are also known (Peach, 1997; Patterson, 1996; Stamatis, 1995; Johnson, 1998). This eliminates the definition and implementation-date problems mentioned previously concerning research on quality management practices. If a company is certified to ISO 9001, it complies with some minimum level of quality management practice, and this compliance has been verified by an independent third party. In addition, knowing the date at which the certification was issued allows the researcher to pinpoint dates and provides for comparability. On the other hand, ISO 9000 certification is not a perfect solution for these research problems. For instance, two firms that are both certified to ISO 9000 will not have exactly the same quality management practices; one may have much stronger practices than the other. Still, both companies have reached a certain minimum level of quality practices (Reiman and Hertz, 1996).
Likewise, the date of certification is only an indication of when the firm was officially certified by the independent third party; the firm had satisfied the certification requirements by that point in time, but may have reached that level earlier. Several other shortcomings are possible with ISO 9000. ISO 9000 is a process standard and not a product standard. Therefore, one cannot say that the products of an ISO 9000 certified company are of higher quality than those of a non-certified firm; one can only say that the processes of the firm meet a minimum standard (Cole, 1998). Another possible shortcoming is that certification does not cover an entire company. Instead, certification is usually for a plant, factory, or facility; therefore, a company with multiple plants may have only one of them certified. A firm may also have multiple plants certified: Du Pont, for instance, has over 100 facilities certified (Simmons and White, 1999). Clearly, this is a problem when researching financial performance using firm-level financial data; however, it is not insurmountable. Some researchers (Anderson et al., 1999; Simmons and White, 1999) consider the first certification of one of the firm’s facilities to have an impact on the financial performance of the entire firm, the idea being that becoming ISO 9000 certified is such a major undertaking that it could have quality impacts on the entire firm (Papps, 1995). Accordingly, when a firm has multiple sites certified, researchers have treated the time of the first certification as the time of interest (Anderson et al., 1999; Simmons and White, 1999).
A Framework for Interpreting the Antecedents of CEO Compensation: An Organizational Adaptation Perspective
Chen-Ming Chu, Chung Yuan Christian University, R.O.C.
Hsiu-Hua Hu, Ming Chuan University, R.O.C.
Nai-Tai Chu, Chung Yuan Christian University, R.O.C.
This study provided a framework for interpreting the antecedents of CEO incentive compensation from an organizational adaptation perspective, encompassing strategic choice and environmental determinism. The purpose was to develop a better understanding of the key determinants that influence the incentive structures for CEOs and other top decision-makers in organizations. The results revealed that external environmental factors had the strongest influence on CEO incentive compensation, in particular technology intensity and industrial life cycles. Consequently, companies operating in high-tech and newly emerging industries were found to provide a higher ratio of incentive-based compensation. Executive compensation is perhaps the most crucial strategic factor at the organization's disposal, given that it can be used to direct managerial decisions and is likely to have a significant effect on the company’s future. Debates continue on how to optimize the incentive structures of Chief Executive Officers (CEOs) and other top decision-makers in organizations. Criticism also abounds in the popular business press about CEOs being overpaid; when benchmarked against peers and their firms’ performance, many executives appear to be rewarded at an excessive level. Companies have tried all sorts of methods for providing incentives to executives, such as bonuses, stock, stock options, phantom stock plans, employee stock ownership plans, deferred compensation and so on. The growing interest in such pay-for-performance schemes has come from the realisation that the interests of top executives and shareholders are often not aligned, and that contracts sometimes need to be designed to induce executives and managers to work more closely in the company’s interest. Given the power and influence of a CEO on critical strategy-related decisions, it is imperative that CEOs be compensated on a basis consistent with the goals of the organization.
As a result, a substantial portion of their total compensation package will be variable and tied closely to the achievement of specific business objectives and corporate financial goals, as well as the attainment of the executive's individual performance objectives. In order to provide a better understanding of CEO incentive structures in organizations, this study provided a framework for interpreting the antecedents of CEO incentive compensation from an organizational adaptation perspective, encompassing strategic choice and environmental determinism. The compensation literature concurs that most firms nowadays offer some form of incentive-based compensation in order to motivate and reward their top executives toward increased performance and to align their interests with those of the shareholders. Incentive compensation schemes such as pay-for-performance systems, which appeal to notions of fairness and equity by rewarding employees according to their contributions to the company, motivate employees to work harder toward achieving corporate goals and objectives (Lawler, 1988). If pay is fixed and not linked to work performance, it can easily become a hygiene factor (a source of dissatisfaction) without any motivator function (satisfaction) for employees (Herzberg, Mausner & Snyderman, 1959). A common problem, however, is that incentive systems are often designed and administered in ways that seriously undermine their capacity to motivate employees, due to a lack of expectancy. Expectancy theories of motivation (Behling & Schriesheim, 1976; Nadler & Lawler, 1977; Vroom, 1964) state that, to motivate employees, incentive systems must establish a strong relationship between employee effort and performance (expectancy), clearly link performance to rewards (instrumentality), and make rewards large enough to justify the effort required to earn them (valence).
To better influence the valence and expectancy components of motivation, Milkovich and Milkovich (1992) suggested that pay should be tied to performance criteria that are within the influence of individual employees, meaning that the amount of pay based on an employee's contributions can vary considerably from one person to the next. Under different contingency relationships, pay earned may also have different symbolic meanings and indirectly affect motivation (Mahoney, 1991). Aside from the motivational aspect, businesses also adopt incentive compensation programs in order to minimize agency costs and maximize efficiency. Lambert, Larcker, and Weigelt (1993) highlighted the increasing use of agency theory in compensation-related studies, in which the firm consists of a principal, who supplies the capital and earns a profit, and an agent (in our study, the CEO), who provides his labor in exchange for compensation. Agency theory assumes that agents are motivated by self-interest, are rational actors, and are risk averse. An agency problem occurs, however, when the principal and agent have incongruent goals and different risk preferences (Eisenhardt, 1988; 1989). The principal can address this problem by: (1) constantly monitoring the agent's job performance; or (2) making a contract based on the probable outcomes of the agent's behavior (Demski & Feltham, 1978; Eisenhardt, 1988). When it is difficult to precisely define and monitor the requisite behavior, the principal will instead tie compensation to performance through an outcome-based contract that aligns the interests of both parties (Stroh, Brett, Baumann, & Reilly, 1996). Put simply, the contract makes the agent's compensation an increasing function of performance (Abowd, 1990; Lambert, Larcker & Weigelt, 1993). Gomez-Mejia, Tosi and Hinkin (1987) also found that performance was a key determinant of CEO compensation.
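The outcome-based contract described above can be sketched as a pay schedule that is an increasing function of measured performance. The functional form, parameter names, and figures below are invented for illustration; none of the cited papers specifies them:

```python
def agent_compensation(performance, base_salary=500_000, incentive_rate=0.02):
    """Outcome-based contract sketch: a fixed base plus an incentive
    component that rises with measured performance (e.g., a profit or
    shareholder-return metric). Performance below zero earns only the
    base, reflecting the agent's limited downside in a risk-averse
    contracting setting."""
    return base_salary + incentive_rate * max(performance, 0.0)

# Compensation is monotonically non-decreasing in performance, which is
# the alignment property the outcome-based contract is designed to create.
print(agent_compensation(10_000_000))
print(agent_compensation(20_000_000))
```

In practice, the incentive component is delivered through the instruments listed earlier (bonuses, stock, options), but each is still an increasing function of some performance outcome.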
An Overview of Knowledge Management Assessment Approaches
Dr. Martin Grossman, Bridgewater State College, Bridgewater, MA
While knowledge management (KM) continues to gain popularity as a corporate strategy, the acceptance of standardized KM assessment approaches has lagged. The development of metrics to assess a firm's knowledge-based assets is inherently problematic due to the intangible nature of such resources. Nonetheless, assessment is of vital importance both for valuation purposes and to let managers determine whether particular KM initiatives are working. There has been a recent surge of interest in the area of knowledge management assessment, and a host of new methods and frameworks have emanated from both the academic and practitioner communities. And yet, with all of the different alternatives available, there is a dearth of empirical evidence to suggest that one approach is more appropriate than another, and little consensus has emerged. This paper provides an overview of the knowledge management assessment landscape and suggests some areas for future development. Although the study of knowledge has its roots in antiquity, the field of 'knowledge management' as a self-conscious discipline is a recent phenomenon. Peter Drucker was one of the first management gurus to laud the centrality of knowledge in the organizational context (Drucker, 1994), stressing that the collective knowledge residing in the minds of its employees, customers, suppliers, etc., is the most vital resource for an organization's economic growth, even more than the traditional factors of production (land, labor and capital). The criticality of knowledge-based assets to the firm is often reflected in the disparity that may exist between a company's market capitalization and its book value. Companies such as Microsoft and Cisco, for example, are valued at levels remarkably disproportionate to the actual hard assets held by these companies.
Such valuations can be attributed to intellectual capital (IC), defined as the sum of everything everybody in a company knows that gives it a competitive edge in the marketplace (Stewart, 1991). The field of knowledge management continues to gain momentum as it enters its second decade. According to one estimate, 81% of the leading companies in Europe and the U.S. are utilizing some form of KM (Becerra-Fernandez et al., 2004). Indeed, KM is being adopted by some of the world's largest and best-known corporations, such as Accenture, Cable & Wireless, DaimlerChrysler, Ernst & Young, Ford, Hewlett Packard, and Unilever (Rao, 2005). A survey of CEOs of U.S. companies found that knowledge management was judged to be one of the most important trends in today's business environment, surpassed only by globalization (MacGillivray, 2003). Knowledge management is not only being adopted at the corporate level; it is being recognized as an important aspect of national economic growth and is being taken seriously by international development institutions (Malhotra, 2003; Passerini, 2003). Over the past decade, we have seen KM emerge as an academic discipline, with more and more universities and colleges offering specialized courses and programs in the subject. Accreditation and curricula standardization bodies have acknowledged the importance of knowledge management skill-sets in today's hypercompetitive knowledge-based economy, and have advocated its inclusion in information systems (IS) curricula (Gorgone et al., 2005; Hunt, 2004). Additionally, we are seeing increasing activity in academic research relating to KM and a growing number of institutions offering KM degree programs at both the undergraduate and graduate levels (Stankowsky, 2005; Sutton, 2002). Enabling information technologies that foster collaboration and the sharing of knowledge also hold a key position in the KM landscape.
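The market-to-book disparity discussed above is sometimes used as a crude first proxy for a firm's intellectual capital. A toy sketch with invented figures (not actual data for any company named in this paper):

```python
def ic_proxy(market_cap, book_value):
    """Stewart-style rough proxy for intellectual capital: the excess of
    market capitalization over book value. A crude estimate, since market
    prices also reflect expectations, sentiment, and other intangibles."""
    return market_cap - book_value

# Invented numbers (in $ billions): a firm valued far above its hard assets.
print(ic_proxy(market_cap=300.0, book_value=60.0))
```

More refined IC valuation methods exist precisely because this residual conflates knowledge assets with everything else the balance sheet omits, which is one motivation for the assessment frameworks this paper surveys.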
Vendors are offering a new breed of tools and techniques, broadly classified as knowledge management systems (KMS), that facilitate or map the flow and transfer of knowledge. Examples are intranets and extranets, groupware, data warehousing and data mining tools, search engines, content management systems, enterprise knowledge portals, online Communities of Practice, and social network analysis (Rao, 2005). With this groundswell of activity, it is easy to overlook the fact that KM is still an emerging discipline which lacks a solid theoretical foundation. Much work still needs to be done to formalize the frameworks, taxonomies, and procedures that are necessary to serve practitioners and that are critical to solidifying KM's position as a unique and valuable discipline. There are many examples of KM failures documented in the literature, often attributed to non-technical factors such as lack of management buy-in, knowledge hoarding, lack of trust, etc. (Stankowsky, 2005). Measurement is perhaps the least developed aspect of KM because of the inherent difficulty of measuring something that cannot be seen or touched. However, if the discipline of KM is to survive and make a long-lasting contribution, it will need to achieve greater levels of standardization and better metrics to assess its effectiveness. The frequently quoted maxim 'you can't manage what you can't measure' underlies much current management thinking. Motivated to a great extent by the quality movement in the U.S., the incorporation of metrics into disciplines such as project management and software engineering (e.g., Mean Time Between Failures, Lines of Code) has become commonplace. Especially in today's post-dot-com, hypercompetitive environment, managers are hard pressed to justify investments in technology and demonstrate solid returns. This is particularly true in the area of knowledge management, which has been dismissed by some as just another IT-related fad (e.g., Wilson, 2002).
The prevailing view is that successful knowledge management involves deep cultural transformation within the organization, requiring the deliberate actions of management as well as employees. Like any other organizational management system, the chances of success are greatly enhanced if there is a systematic process in place that allows for its measurement. Firestone and McElroy (2003) consider the development of a comprehensive metrics system to be one of several critical success factors necessary to allow KM to evolve to the next level. The ‘new knowledge management’, as it is called, requires a more formalized assessment framework since the present ad hoc approach is too slow and uncertain. Similarly, Bose (2004) regards measurement as one of four key enablers underlying KM strategy, along with culture, technology, and infrastructure. While all four are critical, measurement is perhaps the most difficult to manage due to the inconsistent understanding of the underlying KM concepts, the existence of so many different frameworks, and a lack of a unified terminology. The need for a clear-cut roadmap for organizations embarking upon a KM initiative is evident, and is reflected by the rising popularity of the Chief Knowledge Officer (CKO), a new top level corporate job function whose mandate is to manage organizational knowledge resources as part of overall corporate strategy. Among the most important functions of the CKO is the identification and justification of the metrics the organization should use to implement an effective KM strategy. With over 25% of Fortune 500 companies currently employing CKOs and another 43% with plans to do so within the next few years (Bose, 2004), KM assessment is likely to remain a vital topic for some time to come.
Research Trends on Patent Analysis: An Analysis of the Research Published in Library’s Electronic Databases
Dr. Kuei-Kuei Lai, National Yunlin University of Science and Technology, Taiwan
Mei-Lan Lin, National Yunlin University of Science and Technology & Far-East College, Taiwan
Shu-Min Chang, National Yunlin University of Science and Technology & Nan Kai Institute of Technology, Taiwan
Research on patent analysis is often applied in the management of technology, and the topics of this arena are becoming increasingly diversified. The various themes studied in patent analysis are easily obtained through a library's electronic databases. We not only used a library's electronic database but also applied a citation technique to figure out the relationships among the target articles. Two hundred and fifty-three target articles on the subject of patent analysis were retrieved from a library's electronic system. According to our investigation, research on patent analysis has been continuous and has been extensively discussed for the past ten years. Patent citation and patent statistics are the two major keywords for retrieving related papers. The journals Research Policy and Scientometrics contain most of the papers that are relevant and specific to patent analysis. Narin, F. and Meyer, M. both have relatively high publication productivity. In terms of citation frequency, our results show that Griliches (1991) has the highest citation rate. In addition, the rate of cooperative publishing is as high as 61%. Some articles specific to patent analysis build on previous research. From an evolutionary perspective, this study brings new insights into the intellectual development of patent analysis and also makes it easier to understand specific research topics. Patent analysis is an important topic in the management of technology. Academics and corporations acquire information on technological development and competitive intelligence from patent databases. The number of patents issued is often used as a measure of technological R&D output; indeed, the simplest way to measure R&D productivity is to count the number of patents. Griliches (1990) offered additional insight into patents: patents can measure inventive activity not only as output but also as input. In other words, patent statistics can act as economic indicators.
Furthermore, patent statistics can also serve as technological indicators when patent information is used to judge technological change (Basberg, 1987). One can transform patent data into valuable information, or into intelligence, through patent analysis. Patent analysis generally serves two purposes: competitive analysis and technology trend analysis (Liu & Shyu, 1997). Patent information has a high strategic impact and has been studied from multiple dimensions, so it is not easy for a newcomer to gain the entire picture of patent analysis, its concept formulation, and the evolution of knowledge diffusion. From a strategic evolutionary point of view, the science system traces the development of scientific trends and over time reshapes the fitness landscape; employing citation analysis is one way to understand this process of knowledge accumulation. We define patent analysis as a set of methods for analyzing patent data. Because patent documents follow largely standardized formats, patent analysis can be applied consistently across the globe, across industries, and across companies. Basberg (1987) synthesized the earlier patent literature into three types: one relates to legislation and the functioning of the patent system; the other two are the rationale of the system and patents as technical information. Regarding the third type, he also pointed out three groups of patent statistics used as technology indicators. Patent analysis in strategic planning has been dissected with respect to the propensity for innovative activities and the research trajectories between science and technology (Pavitt, 1985). Tsuji (2001) systematically classified the patent analysis literature into three groups: international comparison, econometric analysis, and technology changes. In addition, the use of patent statistics as indicators of innovative activities was based on the three perspectives of economics, bibliometrics, and descriptive comparisons for policy uses.
In addition, Ernst (2003) emphasized the value of patent information for strategic planning, and he built a conceptual framework showing the functions of patents in technology management. Patent information is a useful source in five areas: patent analysis can be employed in competitor monitoring, assessing the attractiveness of technologies, technology portfolio management, the identification of external sources for knowledge generation, and human resource management. Similarly, many studies have exemplified numerous applications of patent analysis, especially in technology competition analysis, investment evaluation, patent portfolio management, research management, product scope surveillance, corporate valuation, and mergers and acquisitions (Ashton and Sen, 1988; Breitzman and Mogee, 2002). These studies are concerned with the relationship between technological change and economic development, the diffusion of technology, and the analysis of the innovation process. After reviewing the literature on patent analysis, we propose a taxonomic framework in terms of different levels of analysis. Ten research areas are systematically induced through four dimensions. According to the analysis scope and prior literature, we synthesize the areas of research application, from patent analysis to policy making and international comparison, science and technology, knowledge spillovers, competitive intelligence, technology licensing, corporate strategy, business function, technology development, and product management. Some relevant papers are listed in Table 1. Interestingly, some studies explore the citation relationship between science and technology and attempt to identify linkages between papers and patents in national and international comparisons (Meyer, 2000; Verbeek et al., 2002). In fact, using bibliometric analysis to map an intellectual structure is not of recent origin.
In the 1960s, Price (1965) and Kessler (1963) proposed the concept of the research front and the technique of bibliographic coupling to estimate content-relatedness. Furthermore, Yagi (1965) developed a citation matrix for scientific papers to examine the internal connections among papers. In another case, Burton (1995) used citation matrices to group journals, defining a clique in terms of the strength of the relations between two actors. The similarity, or strength of relations, between two articles reflects the mechanisms of the intellectual science network. Reference, journal and author networks have frequently been studied using cluster analysis and MDS (Multidimensional Scaling) (White and Griffith, 1981; Hargens, 2000; He & Hui, 2002). From a dynamic perspective, Braam, Moed, & van Raan (1991) took time-dependent factors in the citation trajectory into account. Based on contingency and citation theory, Scharnhorst (1998) considered the historical dimension of the dynamics of the evolution of science, which allows a better understanding of scientific trends in the citation landscape through the selection and mutation of evolutionary dynamics. In the light of science and technology, the technique of citation analysis is also used to investigate their affiliation. Lai & Wu (2005) proposed a patent classification system for learning the relationships among technological groups and the evolution of a technology group. In general, it is clear that the proper use of bibliometric citation analysis can help us interpret future trends in specific research topics.
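The bibliographic coupling technique mentioned above can be illustrated in a few lines: two papers are "coupled" when their reference lists overlap, and the size of the overlap serves as the strength of their relation. The paper IDs and reference lists below are invented for illustration:

```python
# Invented reference lists keyed by paper ID.
references = {
    "P1": {"R1", "R2", "R3"},
    "P2": {"R2", "R3", "R4"},
    "P3": {"R5"},
}

def coupling_strength(paper_a, paper_b):
    """Bibliographic coupling strength: the number of references the
    two papers cite in common (set intersection of reference lists)."""
    return len(references[paper_a] & references[paper_b])

print(coupling_strength("P1", "P2"))  # P1 and P2 share R2 and R3 -> 2
print(coupling_strength("P1", "P3"))  # no shared references -> 0
```

The resulting pairwise strengths form exactly the kind of similarity matrix that the cluster analysis and MDS studies cited above take as input.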
Effects of Job Satisfaction and Perceived Alternative Employment Opportunities on Turnover Intention: An Examination of Public Sector Organizations
Dr. Ing-San Hwang, National Taipei University, Taiwan
Jyh-Huei Kuo, National Taipei University, Taiwan
This study analyzes the effects of job satisfaction and perceived alternative employment opportunities on turnover intention. With the support of the Taiwan government, data were collected from personnel in various government departments; the present research surveyed 259 executives and staff employed in the government. The results show that job satisfaction alone does not have a significant relationship with turnover intention. However, the interaction between job satisfaction and perceived alternative employment opportunities does have a negative effect on turnover intention. In fact, perceived alternative employment opportunities have a positive effect on turnover intention. The conclusion suggests that more reliable measures should be developed when discussing turnover intention in public sector organizations. The study of personnel turnover has attracted academic attention in the field of human resources management for several decades. It is widely believed that a significant amount of turnover adversely influences organizational effectiveness (Hom and Griffeth, 1995; Hom and Kinicki, 2001). By identifying the determinants of turnover, researchers can predict turnover behaviors more precisely and managers can take measures in advance to prevent turnover. Among the determinants of turnover, job satisfaction plays a major role in most theories of turnover (Lee et al., 1999) and operates as the key psychological predictor in most turnover studies (Dickter, Roznowski, and Harrison, 1996). According to Hom and Kinicki (2001), testing theories of how loss of job satisfaction progresses into job termination has dominated turnover research over the past 25 years. The correlation between job satisfaction and turnover has been demonstrated in many meta-analytic findings (Trevor, 2001). However, such bivariate relationships do not address the importance of interactions in turnover prediction.
The primary focus of this study is to go one step further and to investigate the interactions of factors in turnover prediction. The public sector is often recognized for having a low turnover rate, especially when compared to that of the private sector. However, it might be asked whether a low turnover rate really equates to greater job satisfaction. Are there other exogenous variables affecting this relationship? Since public sector organizations in Taiwan have different recruitment, training, remuneration and pension fund systems from the private sector, it is clear that an environment variable should be taken into consideration. This study incorporates perceived alternative employment opportunities as the proxy variable for the environment when developing the analytical framework. By doing so, the explanation of turnover will be more complete. With the support of the Taiwan government, this research surveyed 259 executives and staff of government departments. The results should be of use to other public sector organizations when investigating their determinants of turnover. The remainder of this paper is organized as follows. Following this introduction, the next section provides a literature review and develops hypotheses. That is followed by a discussion of the research methodology adopted in this study. The descriptive statistics of the variables used in the analysis are then presented, while the penultimate section provides the results of the analysis and an explanatory discussion. The implications of the findings are provided in the closing section. The relationship between job satisfaction and turnover is one of the most thoroughly investigated topics in the turnover literature. Job satisfaction has long been recognized as an important variable in explaining turnover intention.
It is defined as the positive emotional response to a job situation resulting from attaining what the employee wants and values from the job (Locke et al., 1983; Olsen, 1993). This implies that job satisfaction can be captured either by a one-dimensional concept of global job satisfaction or by a multi-dimensional, multi-faceted construct that captures different aspects of a job situation, which can vary independently and should therefore be measured separately. Additionally, job satisfaction is the extent to which employees like their work. Porter and Steers (1973) argued that the extent of employee job satisfaction reflects the cumulative level of "met worker expectations". That is, job satisfaction reflects the extent to which employees expect their job to provide a mix of features (such as pay, promotion, or autonomy) for which each employee has certain preferential values. The range and importance of these preferences vary across individuals, but when the accumulation of unmet expectations becomes sufficiently large, there is less job satisfaction and a greater probability of withdrawal behavior (Pearson, 1991). Busch et al. (1998) also pointed out that those who are relatively satisfied with their jobs will stay in them longer, reducing personnel turnover, and that such staff are likely to be less absent. Trevor (2001) summarized the extant literature and noted that empirical estimates of job satisfaction's correlation with turnover ranged from -0.18 to -0.24, meaning that job satisfaction is negatively correlated with turnover. This study adopts turnover intention in preference to turnover as the dependent variable, because turnover intention is highly correlated with turnover, and the adoption of turnover may suffer from survival bias and thereby lead to an incorrect conclusion. Accordingly, turnover intention was chosen as the better analytical variable in this research.
Based on the above considerations, the following hypothesis is proposed: H1: Job satisfaction negatively affects turnover intention. Previous studies suggest a stable negative relationship between job satisfaction and turnover. However, job satisfaction alone has been found to account for a small percentage of the total variance in a turnover model, less than 15 per cent (Blau and Boal, 1989). As is typical of most research on turnover, the focus was on members leaving rather than entering the organization. Moreover, as in much turnover research, attention was concentrated on members voluntarily leaving the organization (Price, 1977). When employees consider leaving the organization, they will consider their attitudes toward their present jobs and also evaluate what possibilities there are in the external environment. That is, they need to know the alternative employment opportunities in the labor market. Some studies have used the employment rate to represent the condition of the labor market. Price (2001) proposed a mediating process between labor market opportunities and turnover, noting that propositions without intervening processes are often incomplete, and so offered mediating variables where appropriate. The inference is as follows: more opportunities produce greater employee awareness of alternative jobs in the environment; employees then evaluate the costs and benefits of these alternative jobs; and, finally, if the benefits of the alternative jobs appear to be greater than the costs, employees quit their jobs. In the public sector, the employees of an organization also need to consider labor market conditions. However, the employment rate may not be an appropriate variable for evaluating the labor market. In Price's (2001) reflections on the determinants of voluntary turnover, the environment variable, opportunity, is also proposed.
Opportunity is the availability of alternative jobs in the environment, and is the type of labor market variable emphasized by economists. For public sector employees, alternative job opportunities are limited to some specific organizations, so the total employment rate for a country or a local geographical area may not serve as a good explanatory variable. Some scholars therefore propose perceived alternative employment opportunities as the analytical variable (Hulin et al., 1985; Steel and Griffeth, 1989).
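The moderation argument developed above is typically operationalized with a product (interaction) term in a model of turnover intention. The sketch below uses invented coefficients solely to show the mechanics; they are not estimates from this study:

```python
def turnover_intention(satisfaction, alternatives,
                       b0=3.0, b_sat=-0.1, b_alt=0.4, b_int=-0.3):
    """Linear model with a product term: the interaction coefficient
    b_int lets the effect of satisfaction depend on the level of
    perceived alternatives. All coefficient values are invented
    for illustration only."""
    return (b0 + b_sat * satisfaction + b_alt * alternatives
            + b_int * satisfaction * alternatives)

# Effect of raising satisfaction from 2 to 4, at high vs. low
# perceived alternatives. With b_int < 0, satisfaction reduces
# turnover intention more strongly when alternatives are plentiful.
effect_high_alt = turnover_intention(4, 4) - turnover_intention(2, 4)
effect_low_alt = turnover_intention(4, 1) - turnover_intention(2, 1)
print(effect_high_alt, effect_low_alt)
```

This is why a bivariate satisfaction-turnover correlation can appear weak in the public sector: when perceived alternatives are scarce, even dissatisfied employees may report little intention to leave.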
Return on R&D Investment Across High-Tech Products’ Life Cycle
C. Catherine Chiang, North Carolina Central University, Durham, NC
In this paper, I model the return on research and development (R&D) spending across different stages of a product's life cycle and show that, for a high-tech product, the return on R&D is heterogeneous across product life cycle stages. Specifically, the return is highest in the mature stage, before the innovator firm offers any price concession on the product, and lowest in the decline stage, when successive price concessions are provided. The model developed in this paper also provides specific guidelines for product pricing and development strategies so that a high return on R&D can be achieved. In this paper, I build a simple but stylized model to find a theoretical relationship between a firm's return on R&D investment and its product life cycle stages. Specifically, I model the return on R&D spending across the stages of a product's life cycle and show that, under assumptions that resemble the market environment of high-technology industries, the return on R&D is highest during the mature/saturation stage and lowest in the decline stage. I further identify specific conditions under which the highest return on R&D can be achieved at the mature stage or at the saturation stage. For a high-tech firm, the life cycle stage of its product(s) is an important indicator in setting its product development and marketing strategies. Because a high-tech company differs from other companies in the amount (and intensity) of its R&D spending, to the managers and shareholders of a high-tech firm, the return on R&D is a more important factor in valuing the firm than the return on other assets. The objective of this paper is to build a model of the return on R&D across the different stages of a product's life cycle and find the conditions under which the return on R&D would be highest.
Although the extant research has argued and shown heterogeneity in firm performance across different product life cycle stages, the conclusions are based solely on empirical evidence (Agarwal 1997; Chiang and Mensah 2004; Rink et al. 1999). No theoretical link has been established between the return on R&D investment and product life cycle stages. A theoretical model relating return on R&D to product life cycle stage is of interest to academics because it provides a mathematical structure to explain the observed business phenomenon and helps further research in the area. A theoretical link is also of interest to technological (marketing) managers because it suggests the best development (pricing) strategy at each stage of a product's life cycle. The model developed in this paper provides insights and has important practical implications for firms engaging in high-tech product development. It suggests that firms may achieve a higher return on R&D by acquiring firms whose products are closer to the mature stage rather than buying firms with products that are very early in the product life cycle. It also provides specific guidelines on pricing strategy as a product proceeds to the later stages of its life. The remainder of the paper is organized as follows: Section 2 provides a brief review of the current literature on life cycle studies. Section 3 describes the characteristics of the life cycle stages of a typical high-tech product. Section 4 presents the model, and Section 5 summarizes and concludes. Most product life cycle research aims at documenting the revenue pattern throughout a product's life. Although various patterns have been documented, the majority of products follow a bell-shaped revenue curve across the four stages of a product's life (introduction, growth, mature, and decline), with a peak at the stage of maturity (see Rink and Swan 1979 for a literature review).
In addition to documenting the revenue pattern of products or firms, a number of studies examine the heterogeneity of firm performance across different life cycle stages. For example, Anthony and Ramesh (1992) test whether the stock market response to sales growth and capital investment is a function of firm life cycle stage. They find that, from the growth to the stagnant stage, the market response coefficients of unexpected sales growth and unexpected capital investment decline monotonically. Greenstein and Wade (1999) examine the product life cycle in the commercial mainframe computer market and find that a product's failure rate is a function of product life cycle stage. In an industry study of prepackaged software firms, Chiang and Mensah (2004) find that the market valuation of R&D spending is higher in the growth and mature stages of the product life cycle than in the introduction and decline stages. In summary, current life cycle research either empirically documents products' revenue curves or provides evidence that firm performance varies across life cycle stages; however, no mathematical structure has been developed to explain the documented business phenomenon and to identify manageable factors that can help firms attain their targeted performance. A mathematical model contributes to life cycle research because it provides a basic structure from which more complicated life cycle situations can be analyzed, either by factoring in more parameters or by altering some of the assumptions. A mathematical model is also of interest to executives because it locates the point, and identifies the conditions, at which the best performance can be reached. The location and conditions, in turn, provide useful guidelines for executives to design better product development and pricing strategies. To understand the life cycle stages of a high-tech product, it is necessary to know the market structure of a typical high-tech industry.
First of all, entry into high-tech industries is not free. The primary entry barrier is perhaps the high labor cost of skilled engineers: highly skilled software engineers are costly to find, and competition for them is intense given the already competitive product market environment. Although patents can deter entry into some industries, they are not effective in others because frequent technological innovations give some products a short life span. Another characteristic of a high-tech product is that once the product is developed, improving it often requires little marginal cost. For instance, incorporating additional features into a software product sometimes only requires porting or altering program code from another existing product. Similarly, for high-tech products that are software-based, output can be expanded almost instantaneously and at little additional cost (Liebowitz and Margolis 1999). In other words, although the initial R&D outlays may be significant, the cost at the growth and mature stages is relatively low.
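A toy numerical sketch consistent with the qualitative argument above (a large upfront R&D outlay, near-zero marginal cost, bell-shaped unit sales, and price concessions in decline). All figures are invented, and this is not the author's actual model:

```python
# Stylized life-cycle sketch: per-stage return on R&D = stage revenue
# divided by the initial R&D outlay, with marginal cost assumed ~0.
rd_outlay = 100.0
stages = [            # (stage, units sold, unit price)
    ("introduction",  20, 10.0),
    ("growth",        60, 10.0),
    ("mature",       100, 10.0),   # sales peak, price still unconceded
    ("decline",       40,  4.0),   # successive price concessions
]

returns = {name: units * price / rd_outlay for name, units, price in stages}
best = max(returns, key=returns.get)
worst = min(returns, key=returns.get)
print(returns, best, worst)  # return peaks at maturity, bottoms in decline
```

Under these assumptions the per-stage return is highest at maturity (peak volume at full price) and lowest in decline (shrinking volume and a conceded price), mirroring the paper's claim.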
A Research Study on Students' Level of Acceptance in Applying E-Learning for Business Courses - A Case Study of a Technical College in Taiwan
Kai-Wen Cheng, National Kaohsiung Hospitality College, Taiwan, R.O.C.
The twenty-first century is the era of the knowledge economy, so both academic units and enterprises now devote great effort to the development and promotion of e-learning. A review of the archived literature reveals several research studies on e-learning with regard to courses in languages, biology, and geography; however, there have been few studies on business courses. For teachers of business courses, it is vital to utilize e-learning to deliver those courses more effectively. This research therefore used a technical college in Taiwan to survey students' level of acceptance in applying e-learning to business courses. The purpose of the study was to provide a clear reference for developing and promoting e-learning in all business courses. Peter F. Drucker remarked that we are in a time of great change, that great changes will continue to the year 2020, and that the evolution of these times will be unpredictable (Hamel, 2000). The rise of a knowledge economy will serve as a key factor contributing to this great change (Wu, 2002). In the epoch of the knowledge economy, society is transforming into a new structure in which the "binary digit" has become the foundation and basis for thinking (Chou & Yang, 1998). Academic units and enterprises have veered from traditional in-class learning to limitless e-learning to meet the demand for more learner-centered environments (Chen, 2002). Nevertheless, e-learning is still in its first stages and has not expanded on a large scale (Chen, 2002). A review of the literature from prior years shows that relevant research on e-learning as it applies to business courses is scanty. Thus, this research aimed to survey students' level of acceptance of applying e-learning to business courses, with the intention of providing a useful reference for developing future academic units and promoting the use of e-learning in business courses.
Peterson, Marostica & Callahan (1999) pointed out that the letter "e" in e-learning may have several meanings. Exploration: the tools the Internet provides for learners to explore information. Experience: the learning experiences the Internet provides for learners in all areas, encouraging learners to explore self-learning. Engagement: the innovative ways of learning the Internet provides that help learners engage in cooperation and cultivate an awareness of community. Ease of use: the availability of easy, digitized learning environments and the handy tools the Internet provides to learners; this environment transforms lecture content into on-line courses that are then available for transmission. Empowerment: the content, manner, and degree of progress of learning the Internet provides to learners, so that they can fully control the content, manner, and pace of learning by themselves. This research reviewed the literature on e-learning and sorted out the staple definitions of e-learning in terms of content and techniques as follows. From the literature, we can determine that e-learning means learners gaining knowledge through the individual use of electronic or digital media, such as computers, tapes, CDs, the Internet, etc. There are two modes of e-learning, namely on-line learning and off-line learning. On-line learning means that learners achieve learning through the medium of the Internet or an intranet, a term equivalent to web-based learning (WBL). Off-line learning means that learners learn by way of a standalone computer, with the learning material stored on disks or CDs; the concept is equivalent to computer-based learning (CBL). To define the research scope of how students apply e-learning to business courses clearly, this study included all students who had used any of the above-mentioned modes of e-learning.
This research adopted a questionnaire survey in three stages. 1. Stage One: Based on the purpose of the research, we reviewed the literature to design the questionnaire content with a view to attaining the goal of the research. After completing the design of the questionnaire, we obtained input and opinions from professional scholars in the related field. 2. Stage Two: The research then adopted a purposive sample of 5 students from each of the 9 departments in the case school. All 45 students had taken business courses. These students served as the pilot-test subjects for the questionnaire, allowing us to examine the quality of its items; inferior items were either discarded or revised. 3. Stage Three: The research again adopted purposive sampling, drawing 20 students from each of the 9 departments in the case school, excluding the 45 pilot-test students. All 180 students had taken business courses, and they undertook the formal survey using the questionnaire. 1. The Content of the Questionnaire: The questionnaire was designed in two parts. (1) Relevant Personal Information: This part of the questionnaire sought basic information, such as the interviewee's gender, school system, computer skills, and whether they had applied e-learning to business courses. This information became the foundation for the statistical analysis. (2) Survey of the Students' Level of Acceptance in Applying E-learning to Business Courses: Students' opinions play an important role in the success of applying e-learning to business courses. This part of the questionnaire focused on exploring the students' level of acceptance of applying e-learning to business courses. Based on the purpose of the research, the items of the questionnaire covered three dimensions. The 15 items listed (see Table 1) constitute "the rating scale for the students' level of acceptance in applying e-learning to business courses".
This questionnaire adopted Likert's five-point response scale, ranking responses from "strongly agree", "agree", "neutral", "disagree", to "strongly disagree" and scoring them from 5 to 1. However, if the statement was negative, the scoring was reversed: if the student marked "strongly agree", the score would be 1, and if the student ticked "strongly disagree", the score would be 5. The first draft of the questionnaire was reviewed by three professional scholars in the related field. They discussed the content of the first draft and provided their viewpoints for revision, so the pilot-test questionnaire attained expert validity before the pilot test was undertaken. (1) The Pilot Test of the Questionnaire: First, 45 copies of the pilot-test questionnaire were distributed and 45 valid copies were retrieved, so the retrieval rate for valid questionnaires was 100%. (2) Analysis of the Pilot-test Questionnaire: After retrieving the pilot-test questionnaires, we used the SPSS 10.0 statistical package to perform item analysis and reliability analysis and to delete non-discriminating items. A. Item Analysis: Following Chiou (2005), this research applied the missing value test, descriptive statistical test, extreme-groups test, and homogeneity test to "the rating scale for the students' level of acceptance in applying e-learning for business courses". The purpose was to delete items with a low rate of discrimination. The results of the item analysis showed that the discrimination of Questions 2, 13, and 14 of the pilot-test questionnaire was inferior. B. Reliability Analysis: This research adopted an internal consistency analysis to conduct the reliability test. Cronbach's α for "the rating scale for the students' level of acceptance in applying e-learning for business courses" turned out to be 0.8042.
That outcome indicates that the reliability of the rating scale as a whole is good. Nevertheless, the reliability analysis showed that if Question 13 or Question 14 were deleted, Cronbach's α would rise to 0.8131 or 0.8076, respectively.
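The scoring and reliability steps described above can be sketched computationally. The Python sketch below (with invented item scores, not the study's data) reverse-scores negatively worded Likert items and computes Cronbach's α, including the α-if-item-deleted check used to evaluate Questions 13 and 14.

```python
from statistics import pvariance

def likert_score(response, reverse=False):
    # Map Likert labels to 5..1; negatively worded items are reverse-scored
    scale = {"strongly agree": 5, "agree": 4, "neutral": 3,
             "disagree": 2, "strongly disagree": 1}
    s = scale[response.lower()]
    return 6 - s if reverse else s

def cronbach_alpha(items):
    # items: one list of respondent scores per questionnaire item.
    # alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

def alpha_if_deleted(items, j):
    # Scale reliability with item j removed, as reported for Questions 13 and 14
    return cronbach_alpha([it for idx, it in enumerate(items) if idx != j])

# Invented scores for 5 respondents on 3 items (illustrative only)
items = [[1, 2, 3, 4, 5], [2, 2, 3, 4, 4], [1, 3, 3, 4, 5]]
alpha = cronbach_alpha(items)
```

An α above 0.80, as in the study, is conventionally read as good internal consistency; recomputing α with each item removed shows which items weaken the scale.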
Study of the Financial Planning Behaviors of Chinese Senior Citizens
Dr. Yao-Tsung Tsai, Nan Kai Institute of Technology, Taiwan
The goal of this study is to identify the relationships among senior citizens' financial planning behaviors, their selection of financial planning tools, and their selection of financial planning services. First, based on the relevant literature on financial planning services, we formulated and distributed the surveys used in our study. Subsequently, we used the participant observation method to interview senior citizens about their financial planning behaviors and to survey their selection of planning tools, in order to determine the senior citizens' thinking processes and logic and to further understand the suitability and problems of existing financial planning service models. Finally, we applied statistical analyses such as factor analysis and multiple regression analysis to develop a new financial planning service model that financial institutions can use to design the guidelines and bylaws of the financial planning services they offer in the future. Furthermore, we believe that this model can serve as a reference for financial institutions or the government when making important strategic decisions. An old Chinese adage states that "a person's real life begins at age 70". However, the actual meaning of this adage is more symbolic than realistic. In other words, if no adequate financial planning has been done at a young age, becoming a senior citizen marks the beginning of a person's misery. Chinese tradition places key emphasis on filial piety, and supporting one's parents is a social norm with which each person should comply. Since Chinese people are family-oriented, the family is responsible for providing care to its members at each stage of their lives. Therefore, for an extensive period of time, the society paid no special attention to providing proper care and social benefits for senior citizens.
However, as the society has evolved from an agricultural society into an industrial and commercial one, the number of large family clusters has decreased while the number of small families has increased. Therefore, the responsibility for providing care for senior citizens has shifted from families to the entire society. As a result, we have studied senior citizens' financial planning behaviors in order to identify financial planning models that are unique to Chinese senior citizens and to help find solutions for social issues related to these senior citizens. A senior citizen can be defined as a person who is older than 65 and is currently retired. According to Article 5 of the Civil Service Retirement Act, any civil service worker who has been on the job for at least 5 years must retire from his or her current position after reaching age 65. In the private sector, according to Article 54 of the Basic Labor Standards Act, an employer can force an employee older than 60 into retirement. Based on a study by the Department of Statistics, Ministry of the Interior, Taiwan officially became an aged society by the end of 1994. Due to the prolongation of average life expectancy, the population age distribution has shifted toward older ages. The percentage of the population aged 65 or older increased from 14.1% in 1994 to 19.1% in 2003. In countries such as Thailand, Singapore, and China, the percentage of the population aged 65 or older is also showing an upward trend, whereas in the United States this percentage remained steady during the same period (Statistics Yearbook, Ministry of the Interior, 2004). It is therefore clear that an aging population is a growing trend in oriental countries.
In another statistical report published by the Ministry of the Interior (2005), among people aged 65 or older in Taiwan, 693 thousand senior citizens receive senior farmer subsidies, 726 thousand receive senior citizen welfare subsidies, 149 thousand receive mid- or low-income senior citizen living subsidies, 92 thousand receive handicapped subsidies, and 96 thousand receive veteran support subsidies. Overall, more than 1.756 million senior citizens, or 80% of that population, receive subsidies from the government. With advances in medical technology, the average life expectancy of senior citizens has also increased. This increase in life expectancy might cause more problems and burdens for the society, and it highlights the importance of senior citizens' financial planning. Financial planning service is a process of assisting clients to seek financial success and to reach their financial goals. In years past, the average income in Taiwan was low, and for the majority of people financial planning meant balancing income and expenses and finding additional income sources. However, with the continuing accumulation of personal wealth, the emphasis of financial planning has shifted to evaluating the performance of the financial plan. Currently, financial institutions play the role of providers of various financial products, while clients play the role of seekers of those products. Financial planning advisors, then, play the role of mediators between the financial institutions and the clients. These advisors not only analyze the clients' financial needs but also recommend appropriate financial products to help the clients reach their financial goals. Therefore, providing all-around financial planning services allows these financial planning advisors to earn the clients' trust.
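The five subsidy counts reported above are consistent with the 1.756 million total; a quick check, with the figures (in thousands) taken directly from the text:

```python
# Subsidy recipients in thousands, as quoted from the
# Ministry of the Interior (2005) report discussed above
subsidies = {
    "senior farmer": 693,
    "senior citizen welfare": 726,
    "mid/low-income living": 149,
    "handicapped": 92,
    "veteran support": 96,
}
total_thousands = sum(subsidies.values())  # 1756 thousand, i.e. 1.756 million
```

(Note that the 80% figure assumes each recipient appears in only one category; the text does not say whether the categories overlap.)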
To summarize the above discussion: with the changes in population distribution, society and family structures, and health treatment methods and technologies, we believe that the senior citizens of the future will have higher education levels, better health, and more personal wealth. These future senior citizens will thus demand higher standards for the quality of their independent lives. Therefore, our research attempts to provide all-around, life-long financial planning for these senior citizens. By considering senior citizens' financial planning needs, such as retirement planning, personal asset planning, and investment planning, we have made suggestions, based on our research results, on how to provide proper financial planning for senior citizens. We believe that our suggestions can serve as a reference for government agencies and financial institutions when they provide financial planning services for senior citizens in the future. Generally speaking, personal financial planning is the execution of a coordinated and integrated long-term financial plan that is customer-oriented. The ultimate goal of such a plan is to achieve financial success (Dawes, 1998). Blazevic & Lievens (2004), based on their research results, indicated that the key factors of financial planning services include harmonious cross-functional interfaces, organizational diversity, and participative decision-making at each level. Innovation and symmetrical communication should be based on a balance of information needs. In fact, the quality of communication with clients directly influences the quality of the financial planning and the financial institution's performance.
In a study of financial asset purchasing behavior, asset formation, and asset validation in Spain, Plath & Stevenson (2005) pointed out that "power-prestige", "retention-time", "distrust-anxiety", and "quality" are the four major phases of the ideal structure for financial planning services. However, in addition to the financial planning behaviors mentioned above, there are other key elements of financial planning that require further discussion. We have summarized these additional factors as follows:
Integrating the Role of Sales Agent into the Branding Model in the Insurance Industry
Hui-Chen Chang, National Taipei University, Taipei, Taiwan
The use of salespersons to build effective and enduring customer relationships is a strategic choice in many industries. This study explores the joint influence of brand equity and sales agents in the service industry and develops a positive customer relationship maintenance model in order to identify and investigate how antecedent variables influence relationship marketing outcomes. The potential effects of customers' relationship maintenance motivations on subsequent relationship attitudes and behaviors are examined using a sample drawn from the customer databases of three major insurance companies in Taiwan. The results provide support for the model and indicate the important role of the sales agent in customer relationships in the insurance industry; that is, the salesperson's attributes, product and non-product attributes, customer benefits, customer affect, and trust of salesperson and firm are observed to contribute significantly to relationship marketing outcomes in services. Managerial implications are also provided. Building and maintaining enduring relationships with customers of service industries is increasingly important in the field of relationship marketing. A critical challenge is to identify and investigate how antecedent variables influence relationship marketing outcomes. This study develops an integrated customer relationship model based on Aaker's (1996) definition of brand equity, Keller's (1998) definition of brand associations, and other literature on relationship retention determinants. Currently, firms enhance the customer-salesperson link to increase brand loyalty. However, this creates a dilemma for managers between loyalty to the company brand and commitment to the salespeople (Reynolds & Beatty, 1999; Reynolds & Arnold, 2000). This study addresses several marketing issues. First, how do companies manage their sales agents in order to fortify corporate brand equity?
To answer this question, this study explores the relationships among brand attitude, agent attitude, and loyalty, and then discusses the antecedents of brand attitude and agent attitude. The second issue addressed by this study is the identification of the mutual influences between sales agents and corporate branding. Most earlier research focused on isolating the effects of sales agents or of corporate investment (e.g., Reynolds & Arnold, 2000). In this study, however, both the sales agents' and the corporation's benefits, together with their reciprocal effects, are considered, as is the question of whether one is more important than the other for developing positive attitudes. The insurance industry in Taiwan was the research object of this study. This industry was selected for two reasons: first, because the insurance industry is a mature market, the concept of brand leadership is more important than other kinds of marketing manipulation; second, the financial services industry is moving rapidly toward diversification, so the results of this study will help related financial companies to understand the sales agent's effect and the importance of marketing implementation. To develop the research framework, the extant literature on brand building and brand equity was reviewed. Marketing programs that link strong, favorable, and unique associations to the brand create a positive brand image. Keller (1998) emphasized what consumers need and want in brand equity. Along these lines, Keller further classified brand associations into three major categories of increasing scope: attributes, benefits, and attitudes. Attributes are the descriptive features that characterize a product or service (Keller, 1998), which Keller further classified as product-related and non-product-related attributes.
Changing the form of product-related attributes can change the consumer's attitude toward the product (Chernev & Carpenter, 2001) and further change the consumer's behavior (Pritchard & Howard, 1997; Maxham, 2001). Wulf, Odekerken-Schroder and Iacobucci (2001) provided evidence that the customer's perceived quality of non-product-related attributes indirectly generates attitudinal and behavioral loyalty. Benefits can be further divided into three categories. Functional benefits are the intrinsic advantages of product or service consumption and usually correspond to product-related attributes. Symbolic benefits relate to underlying needs for social approval or personal expression and to outer-directed self-esteem. Experiential benefits relate to what consumers feel when they use the product or service and can correspond to both product-related and non-product-related attributes, such as usage imagery (Keller, 1998). Consumers' brand attitudes generally depend on the attributes and benefits of the brand. Chaudhuri and Holbrook (2001) investigated the psychological condition of customers, that is, their brand attitudes, and classified brand attitudes into two types: brand trust and brand affect. Brand trust is defined as the consumers' degree of reliance on the quality of the product or service provided by the brand. As Sirdeshmukh, Singh, and Sabol (2002) noted, trust in a relationship exchange can be divided into faith in the firm and faith in the salesperson; trust comes as a result of competence, benevolence, and problem-solving skills. Consequently, the strength of brand trust is reflected in the degree of customer purchase intention and customer recommendation (Kennedy, Ferrell & LeClair, 2001; Verhoef, Franses, & Hoekstra, 2002).
Brand affect is a positive emotion derived from a particular brand; it affects the extent of purchase intention and behavior and depends on the degree of delight (Barone, Miniard & Romeo, 2000; Chaudhuri & Holbrook, 2001; Chaudhuri, 2002). Much of brand equity lies in the degree of consumers' loyalty toward the brand. Oliver (1999) defined brand loyalty as "a deeply held commitment to rebuy or repatronize a preferred product/service consistently in the future". This definition emphasizes the two distinct aspects of brand loyalty: behavioral loyalty and attitudinal loyalty (Aaker, 1991; Pritchard & Howard, 1997; Chaudhuri & Holbrook, 2001; Oliver, 1999). Attitudinal loyalty is the commitment of customers to the brand-customer relationship. Some researchers have argued that commitment is the nucleus of attitudinal loyalty, a point of view that differs from theories which posit that commitment is an antecedent of loyalty (Morgan & Hunt, 1994; Pritchard, Havitz & Howard, 1999; Verhoef, Franses, & Hoekstra, 2002). This manner of thinking requires research in this area to be close to flawless in order to distinguish between brand attitude and purchase loyalty (Gruen, Summers & Acito, 2000).
Integrating Management Concepts and Telecom Facilities Using Ideas from Public Law
Dr. Yi-Chun Lin, Central Police University, Taiwan, R.O.C.
Dr. In-Chung Chang, National Chiao Tung University, Taiwan, R.O.C.
The telecom business in Taiwan was originally exclusively controlled by the Directorate General of Telecommunications (DGT) under the Ministry of Transportation and Communications (MOTC). In 1996, Taiwan completed revisions to the Telecommunications Act and a reorganization of the DGT. Since then, the DGT has aggressively opened Taiwan's telecom market and established an environment of fair competition. In 2001, the domestic telecom market was fully liberalized. However, disputes over telecom infrastructure still existed in the market. This study analyzes management concepts such as: (1) the concept of property; (2) scarcity of resources; (3) public benefit; (4) management efficiency; and (5) fair competition. Moreover, four Taiwanese telecom companies involved in the fixed-network industry are taken as examples to explain the gains and losses associated with sharable facilities in the telecom market. After the Telecommunications Act was published and implemented in January 1996, the DGT was restructured and re-established, with telecom supervision segregated from operation; at the same time, the market was opened up and private-sector investment was encouraged. Telecommunications enterprises were classified into Type I and Type II. Type II telecommunications enterprises were fully opened up after the Telecommunications Act was passed; because Type I telecommunications enterprises involve scarce resources such as frequency spectrum and land utilization, a special-permit method was adopted for them. According to the planning schedule, full deregulation was to be achieved within 5 years, starting from 1996. On March 18, 2000, three fixed-network operators entered the market, and full deregulation was achieved. As of December 2004, there were 99 Type I telecom operators in Taiwan, 10 more than in the year before.
Over the same comparison period, the number of Type II telecom operators increased by 84 to 539. Mobile communication service in Taiwan has also developed since private operators entered the market at the beginning of 1998, and the number of subscribers rose sharply. Following Taiwan Cellular's acquisition of Mobitai, the number of major 2G service providers in Taiwan fell to three (Chunghwa Telecom, Taiwan Cellular, and FarEasTone) in 2004. Through the vigorous opening of the telecom market, the government has enabled Taiwan to respond nimbly to Internet developments and enhance its competitiveness in this area. With more open markets, Taiwan's online population, both individuals and businesses, has steadily increased. The government has released a series of deregulation policies and measures in an efficient manner to promote market competition. However, prior to the formal opening of the wireless communication market in 1997, the original DGT was sub-divided in 1996, splitting into the newly established DGT and the state-run Chunghwa Telecom Co., Ltd. (CHC). CHC inherited more than 90% of the real property of the original DGT and began as a carrier with significant market power (a dominant carrier). In May 1999, the MOTC announced regulations aimed at managing fixed-network operators, ensuring a legal basis for resolving disputes in the telecommunications business. Furthermore, since March 2000 a new class of Type I telecommunications enterprise has emerged: the so-called fixed-network telecommunication service operators.
In July 2001 these fixed-network operators formally launched various local and international communication services. Meanwhile, the main facilities, including the machinery, apparatus, cables, lines, and other related equipment used for telecommunications, were owned by CHC, which caused unfair competition: the new entrants could not compete with the dominant carrier (CHC). During the initial opening stage, the existing owner, Chunghwa Telecom, received some of the capital belonging to the government, giving it ownership of considerable quantities of essential facilities (bottleneck facilities) and thereby handing it significant market power. However, the essential telecom facilities were not opened along with the telecom service market, so the new entrants had to establish their own infrastructure facilities from scratch. The new entrants have difficulty bridging such gaps in competition, since establishing the infrastructure involves numerous difficulties, and disputes have arisen as a result. In the period April 1-15, 2004, four approved companies were visited to learn about their operating experiences. Their opinions were summarized and analyzed, demonstrating the following transitional issues associated with government-sponsored telecom liberalization. The nature of infrastructure development in Taiwan's human-geographical environment: the dominant carrier is unwilling to share its bottleneck facilities with the new entrants, and this pattern of duplicating essential facilities wastes re-investment and causes environmental devastation. The issue of land use for telecom: can the land earmarked by the government in Taiwan for telecom use satisfy the requirements of the existing four companies?
Environmental issues: when developing linear or point-shaped new telecom facilities in protected land areas, sharable facilities should be considered in relation to forest, water, and soil conservation. Difficulty in acquiring licenses for telecom structures: most of the land allocated for telecom facilities is controlled by the state, and the relevant government authorities prohibit rebuilding in protected-environment zones. Cable-running issues: the fixed-network enterprises have no choice but to run cables along ditch edges to meet the requirement of one million subscribers, but the relevant government authorities cut these cables owing to the lack of a legal basis. No fixed standard of fees: currently, land rights are determined by each county in Taiwan, but local governments vary in their recognition of the relevant regulations, so no standard charges exist for current fixed-network operators. The issue of equipment space: it is difficult to find space for telecom pipelines and equipment, and for cables running from outdoor telecom networks into buildings, since the supervising authorities do not allow fixed-network owners to run cable lines on the outer walls of buildings. Pipeline holes too narrow to be useful: when the new entrants ask to share pipelines, the dimensions of the original holes in the pipelines make it hard to accommodate new cables.
Given the above issues related to the establishment of telecom facilities, it can be summarized and concluded that the essential facilities have the features of incomplete property; that various government authorities hold the rights of way; that rights are unequal and the market structure saturated; and that confusion is caused by uncertainty over whether these telecom facilities should be classified as "public facilities" or "business facilities". Additionally, no proper planning exists for underground pipelines, nor is there any clear timing for sharing the essential facilities owned by the dominant carrier.
Cross-Analysis of Management Principles and Telecom Facility Usage Based on Concepts of Law
The Application of Quality Function Deployment (QFD) in Product Development - The Case Study of Taiwan Hypermarket Building
Shih-shue Sher, Feng Chia University, Taiwan
In order to gain competitive advantage, a business must depend on continuous innovation and creativity in its products. Hence, the research and development of new products becomes a focal topic. Quality Function Deployment (QFD) is a useful tool for the research and development of new products. QFD is a systematic method for transferring customers' thinking into the design, manufacturing, and costing processes of products, services, and parts. It uses a two-dimensional matrix transformation technique to deploy those requirements, for the purpose of guaranteeing the quality of the product and service, which in turn meets the customers' requirements and provides customer satisfaction. This research uses large-scale supermarkets as its observation subjects to investigate the application of QFD to supermarket architecture. Data were collected through mail questionnaires and face-to-face interviews, and seventy-eight valid questionnaires were returned. A house of quality was built after analyzing the data with the QFD technique. The results revealed (1) that the model developed through QFD can capture customer demand and develop the quality required by customers, and (2) that QFD is a mature form of logical thinking and is applicable to the service industry. In order to acquire consumers' identification and competitive advantage, firms must continuously change their management and accelerate product innovation. Therefore, for all companies, the development of new products has become one of the most important issues. Since the hypermarket was introduced from Europe in 1990, it has changed the environment of the retail business over the past decade. At the initial stage, MAKRO was the only company in the Taiwan market. However, its business model was the so-called warehouse model, its customers were retail stores, and its sites were mostly located in industrial districts. Hence, its buildings resembled factories or warehouses, instead of having complex or sophisticated layout designs.
After the government relaxed this restriction and admitted hypermarkets into commercial districts, RT-MART, CARREFOUR, and GEANT successively entered the market, and the end consumer became their main target. At present there are 108 hypermarkets in Taiwan (TCFA, Taiwan Chain Store and Franchise Association, 2004), all of which focused on expanding locations and store numbers as they entered the market. But as consumer behavior changes, the warehouse model can no longer satisfy consumers' needs, and hypermarkets have begun to rethink the substance of their buildings, including the building's function, purpose, and value for consumers. Quality Function Deployment was originally developed in Japan in the late 1960s. It is the process that transforms the voice of the customer into design, components, manufacturing, and cost (Akao & Mizuno, 1978). Its purpose is to satisfy demand and create value with customers. In other words, by blending in the voice of the customer at the product design and planning stage, firms can consequently increase their sales, performance, and market share. Through verification by many transnational companies from the United States, Japan, and Europe, appreciation of QFD has become quite high in manufacturing; in contrast, it has not yet received the same attention in services. By deploying QFD as firms develop new locations, we are able to offer them suggestions about what customers really need. Firms can consequently add the voice of the customer to the initial planning and evaluation phase and benefit the research and development of future hypermarket facilities. Furthermore, we can simultaneously verify whether QFD can be applied to non-manufacturing industries. The purposes of this paper are therefore as follows: using QFD to change the concept of architectural design while drawing the design notion back to a function-based orientation.
From the design-planning stage to the construction stage, using QFD to increase the quality of the building engineering; and using QFD to transform customer needs systematically into design, ensuring they remain workable at the manufacturing stage. QFD is the complete process that uses a systematic method to transform the voice of the customer into the design, components, process, and cost of a product or service, in which the transformation is deployed step by step through a dual-matrix technique. The purpose of QFD is to let firms focus on customer needs when planning or designing a product, as they deal with a series of research and manufacturing activities. In addition, QFD designs in advance and thereby shapes an attractive product or service that customers are willing to purchase again. Therefore, when the product or service is at the planning or construction stage, all related units, including marketing, design, engineering, manufacturing, and quality certification, have to team up and cooperate. In conclusion, QFD is a structured, systematic method carried out through cross-team cooperation: it first identifies customer needs, then decides the relevant skills and evaluates the functions of the product or service. It is expected to shorten the new-product development period and make the product fit to enter the market. Its basic method uses the house of quality (HOQ) to derive the product's quality requirements. The foundation of the HOQ is function-oriented; it is the part of the quality system that can specify precisely what quality consumers want.
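The relationship-matrix step of the house of quality described above can be sketched as follows. This is a minimal illustration, not data from the study: the customer needs, importance weights, technical characteristics, and 9/3/1 relationship strengths are all hypothetical values chosen for demonstration.

```python
# Minimal sketch of the House of Quality (HOQ) relationship-matrix step.
# Conventionally 9 = strong, 3 = moderate, 1 = weak, 0 = no relationship.
# All names and numbers below are hypothetical illustration values.

customer_needs = ["easy to find goods", "comfortable aisles", "clear signage"]
importance = [5, 4, 3]  # customer-rated importance on a 1-5 scale

technical_chars = ["floor layout", "aisle width", "sign design"]

# relationships[i][j]: strength linking customer need i to characteristic j
relationships = [
    [9, 3, 3],
    [3, 9, 0],
    [1, 0, 9],
]

# Technical importance: for each characteristic, sum importance * strength
# over all customer needs.
tech_importance = [
    sum(importance[i] * relationships[i][j] for i in range(len(customer_needs)))
    for j in range(len(technical_chars))
]

# Relative weights tell the design team where to focus first.
total = sum(tech_importance)
for name, score in zip(technical_chars, tech_importance):
    print(f"{name}: raw={score}, relative={score / total:.1%}")
```

The characteristic with the highest relative weight is the one the cross-functional team would prioritize in the design phase.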
Assessment of Internet Marketing and Competitive Strategies for Leisure Farming Industry in Taiwan
Dr. Chen-Ling Fang, National Taipei University, Taiwan
Dr. Ting Lie, Yuan Ze University, Taiwan
Internet marketing has brought the Taiwan leisure farming industry into modernization and globalization. In Taiwan, most leisure farms have used Internet marketing to some extent in promoting their business. Although using homepages to provide information on location, facilities, products, and services is popular, few studies have examined the effectiveness of Internet marketing and its level of usage by tourists. This study aims at evaluating the Internet usage behavior of leisure farm tourists and identifying marketing strategies for different consumer groups. Two hundred and thirty-two valid questionnaires collected from 20 leisure farms were analyzed. The results showed that tourists perceived the amount of information provided by leisure farm websites, the function they demanded most, to be inadequate. Six factors affecting tourists' choice of leisure farms were also identified, and cluster analysis yielded three consumer groups that varied in their choice emphases. Internet marketing employs the web to market products and services to customers: promotion, advertising, transactions, and payment can all be done through web pages, and users can conveniently access information anywhere with a computer connected to the Internet. In Taiwan, an increasing number of leisure farms have used Internet marketing to promote their business, providing information on location, facilities, products, and services through their homepages. As a result of the nationwide two-day weekend introduced in 1999, demand for local leisure areas in Taiwan has increased. According to the Tourism Bureau (2000) of the Ministry of Transportation and Communications, from 1986 to 2000 there was a two-fold increase in Taiwan's leisure population, and the number of leisure locations in Taiwan more than doubled.
Leisure farms not only provide places for leisure tourists, but also enhance their appreciation of Mother Nature and make them more aware of the need to protect environmental resources. Internet marketing can be employed more extensively to boost competitiveness in the tourism industry. According to the Council of Agriculture of Taiwan, 67 leisure farms applied for financial assistance from the Council between 1992 and 2000. Given the higher number of applications submitted in 1991 and 1992, those who ceased to apply were mostly out of business or lacked the intention to reinvest. The main problem behind their management difficulties has been the lack of human resources and capital. This study therefore seeks ways to utilize Internet marketing to lower marketing costs while increasing competitiveness. There are three objectives: i) to analyze the current practice of Internet marketing in Taiwan's leisure farming industry; ii) to understand the demand for Internet marketing from the perspective of tourists; and iii) to identify marketing strategies for leisure farms. The rest of the paper is structured as follows: the next section reviews the current literature on leisure farming and Internet marketing; methodology, results, and discussion are then presented. The emergence of the Internet in 1969 marked the beginning of a hyper-speed era. The Internet is a medium as well as a channel that has changed the traditional business model. Compared with traditional marketing strategies, Internet marketing has additional benefits, such as customer relationship management (CRM), direct marketing, and electronic transactions, all of which reduce operating and social costs. Heskett (1986) suggested that a successful service company needs to transform its uniqueness into competitive advantages in order to maintain its competitive edge in the market.
Internet marketing is one way to achieve this goal. It includes banner advertisements, sponsorship advertisements, interstitial advertisements, e-mail marketing, viral marketing, and so on. Compared with traditional marketing, these channels differ in advertising content, media, timing, and promotion methods. General businesses as well as the tourism and leisure farming industries apply these Internet marketing channels for sales and marketing purposes. Lu (1997) studied the effect of Internet marketing applications in Taiwanese businesses; the results showed that information comprehensiveness, internationalization, and the ability to accept new technologies were positively correlated with the application of Internet marketing. Yung (1998) studied consumers' perception of Internet marketing in the tourism industry, showing that 93% of consumers would consider using the Internet to purchase traveling products, and suggested targeting differentiated traveling products at different consumer groups. Data collection in this study was divided into two stages: (i) classifying leisure farms according to their size, operational performance, and Internet usage level; and (ii) surveying leisure farm tourists on their perception of, and demand for, Internet marketing. According to Zheng (1998), Taiwan's leisure farms can be classified along two dimensions: the level of environmental resource usage and the selection of natural or artificial resources. These two dimensions can be used to define the products and/or services provided by leisure farms and their selection of market segments. Twenty leisure farms were selected, at which 270 visiting tourists were surveyed, with 232 valid questionnaires collected.
Cluster analysis was utilized to evaluate the attributes and the demand of tourists for Internet marketing, and factor analysis was employed to identify the marketing strategies for leisure farms.
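As a rough illustration of the clustering step, the sketch below groups respondents by factor scores with a plain k-means loop. The data, the choice of two factors, and k = 3 (echoing the three consumer groups the study reports) are invented for demonstration; a real analysis would use a statistical package on the actual survey responses.

```python
import random

# Hypothetical factor scores for six respondents (e.g. two factors such as
# price sensitivity and nature orientation), invented for illustration.
scores = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1), (0.8, 0.2), (0.5, 0.5), (0.4, 0.6)]

def kmeans(points, k, iters=20):
    """Plain k-means: alternate nearest-center assignment and mean update."""
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        # Recompute each center as its group mean; keep old center if empty.
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

random.seed(0)  # deterministic initial centers for the demo
centers, groups = kmeans(scores, k=3)
for c, g in zip(centers, groups):
    print(f"center={c}, members={len(g)}")
```

Each resulting group would then be profiled against its members' choice emphases to derive a targeted marketing strategy.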
A Kano Two-dimensional Quality Model in Taiwan’s Hot Spring Hotels Service Quality Evaluations
Yao-Hsien Lee, Chung Hua University, Taiwan
Tung-Liang Chen, Chung Hua University, Taiwan
Hot spring recreation has become an important leisure activity in Taiwan's tourism industry. Accordingly, related issues such as service quality, pricing, marketing, and safety/security have been extensively investigated. The empirical results of this study show that, of the 23 examined items of hot spring hotel service quality, 15 belong to the indifferent quality element defined by the Kano two-dimensional model. We also adopt the quality improvement methodology proposed by Matzler and Hinterhuber (1998) and find that improving the surroundings of a hot spring hotel and its decorative expression can increase consumer satisfaction. There are statistically significant differences in satisfaction among customers with different demographics and different traveling modes. Based on our findings, we suggest that the hot spring hotel industry in Taiwan segment its target consumers and provide them with different services by considering the relevant demographics identified in the paper.
According to the quarterly national economic trends for the Taiwan area, growth rates rose steadily from 2001 to 2004: -2.18%, 3.59%, 3.31%, and 5.87%, respectively. Average national income was 11,639 USD in 2001 and 12,528 USD in 2004. The official survey of Taiwanese travel indicates that the average domestic travel frequency per citizen was 4.01 in 1999 and 5.39 in 2003, and that the proportion of domestic travelers traveling during weekends rose from 56.2% in 1999 to 60.9% in 2003. This shows that economic growth and rising national income have increased the travel activities of the Taiwanese. In addition, in 1998 the government passed legislation requiring two days off every other week. The regulation has completely changed the leisure concept of the Taiwanese, and as a result traveling frequency and expenditure have increased. These impacts on the tourism industry have encouraged providers of tourist facilities and services to meet the demand of domestic tourists. There are many hot springs in Taiwan, distributed over a geographically wide area, and they are considered an important tourism resource by the Taiwanese. Hot springs are believed to have medically curative effects, and hot spring tourism is nowadays one of the most popular choices for domestic travel. Quality management of the hot spring tourism product is becoming an increasingly important management function, since it is crucial for creating a good reputation for the quality of the products and services offered. Hot spring tourism service providers are therefore more likely to succeed if they can be depended upon to deliver higher service quality than their competitors. However, most of the literature on service quality is based on the traditional one-dimensional quality model: if a service provider delivers what consumers expect, the consumers are satisfied; if not, they are dissatisfied. But this is not always the case. In contrast to the one-dimensional quality literature, Kano's two-dimensional quality model develops the concept that the sufficiency of a service quality element may not affect consumer satisfaction, and may sometimes result in consumer dissatisfaction or no feeling at all. In this paper we utilize the Kano two-dimensional quality model to analyze the total service quality of hot spring hotels. Using questionnaires to collect consumers' evaluations of the service quality they experienced, we apply the Kano two-dimensional quality model to investigate the service quality characteristics of hot spring hotels. The model allows us to observe quality differences and recognize the potential demands of consumers, so that we can suggest the key points for improving consumer satisfaction in hot spring hotels. We also discuss the relationship between the consumer profile and the service quality elements and satisfaction the consumer experienced in the hot spring hotels. The paper is organized as follows: Section 2 reviews the related literature, Section 3 states the methodology, Section 4 presents the results and discusses managerial implications, and Section 5 concludes. Generally speaking, consumers have expectations about product and service quality; similarly, tourists have expectations about tourist attractions. As a rule, "satisfaction" is used to measure tourists' feelings about the quality of outdoor travel. Miller (1977) considers consumer satisfaction the result of expectation and cognition. Lam and Zhang (1999) suggest that when consumer demand is satisfied, consumer satisfaction is achieved. Oliver (1981) indicates that it is the final psychological state resulting from disconfirmed expectancy relative to the initial consumer expectation. Gunderson, Heide, and Olsson (1996) argue that it is a guest's post-consumption judgment of a product or service, which can in turn be measured by assessing the guest's evaluation of performance on specific attributes. Studies of outdoor leisure activities generally use travelers' actual satisfaction to measure leisure quality (Manning, 1986). Baker and Crompton (
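The classification logic of the Kano two-dimensional model discussed in this abstract can be sketched as follows. The functional/dysfunctional evaluation table is the standard Kano table; the sample responses are hypothetical, and the two coefficients at the end are the satisfaction-increment and dissatisfaction-decrement indices commonly used alongside the improvement methodology of Matzler and Hinterhuber (1998).

```python
from collections import Counter

# KANO_TABLE[functional][dysfunctional] -> category, per the standard table:
# A = attractive, O = one-dimensional, M = must-be,
# I = indifferent, R = reverse, Q = questionable.
KANO_TABLE = {
    "like":      {"like": "Q", "must-be": "A", "neutral": "A", "live-with": "A", "dislike": "O"},
    "must-be":   {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "neutral":   {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "live-with": {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "dislike":   {"like": "R", "must-be": "R", "neutral": "R", "live-with": "R", "dislike": "Q"},
}

# Hypothetical answers for one quality item: each pair is (functional answer
# to "How do you feel if the hotel HAS this attribute?", dysfunctional answer
# to "...does NOT have it?").  Not data from the study.
responses = [("like", "dislike"), ("neutral", "neutral"), ("like", "neutral"),
             ("must-be", "dislike"), ("neutral", "live-with")]

counts = Counter(KANO_TABLE[f][d] for f, d in responses)
a, o, m, i = (counts[c] for c in "AOMI")
total = a + o + m + i

si = (a + o) / total    # satisfaction-increment index
di = -(o + m) / total   # dissatisfaction-decrement index (reported negative)
print(f"categories={dict(counts)}, SI={si:.2f}, DI={di:.2f}")
```

An item is usually assigned the category with the highest count; items with large |SI| or |DI| are the leverage points for improving consumer satisfaction.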
Compelling Claims on Multinational Corporate Conduct
Ahmed S. Maghrabi, King Abdelaziz University, Jeddah, Saudi Arabia
This empirical study notes that multinational corporations are, by definition, those with production facilities in more than one country. Beyond economics, the social and public objectives of multinational corporations are an intriguing phenomenon. In the twenty-first century, many corporations in the Western world have expressed social objectives, such as those inherent in the conceptual philosophies of "social marketing," "social responsibility marketing," and "societal marketing," in their mission statements, policy statements, and advertising themes, following the high visibility of social objectives in the non-profit sector of the Western economy. These aims are emerging as a moving force in other parts of the world throughout the transnational corporate structure. The twenty-first century has witnessed the globalization of business at astonishing speed. This speed has played an important role in the life of multinational corporations, forcing them to enter markets all over the world. To remain competitive and retain market leadership, these corporations must do everything possible in their long-range planning to maintain production facilities in more than one country. The new century has also brought multinational corporations new trends that pose great challenges, as companies have to interact with different cultures. There is compelling evidence to suggest that U.S. multinational corporations (MNCs) operating in areas such as transportation, mass media, tourism, publishing, sports, consumer durables and non-durables, and information technology, among others, have contributed immensely to making the world a "global village" (McLuhan and Powers, 1989).
Additionally, nowadays many corporations in America express social objectives, such as those inherent in the conceptual philosophies of social responsibility marketing and societal marketing, in their mission statements, policy statements, and advertising themes. Moreover, after the high visibility of these social objectives in the non-profit sector of the American economy, these aims are emerging as a moving force in other parts of the world throughout the transnational corporate structure. These social objectives and goals have motivated many social scientists and researchers to collect data and conduct empirical studies, under the general title of social accounting inquiries, to assess whether corporations are formulating new social goals in response to a popular movement. The purpose of this study is to measure the extent of rising claims on multinational corporate conduct through a survey of an educated younger population of Saudi society. Multinational corporations can have both positive and negative effects on society. They occupy a unique position in international markets and have production facilities in more than one country. Multinational corporations also show a high degree of sensitivity to their contribution to the communities in which they are located and to society in general (Anderson, 1985). In addition, Sethi (1987) indicated that multinational corporations should provide jobs and other benefits through investment, even investment under repressive conditions. Multinational corporations today are simply expected to assume social and political responsibilities in addition to creating jobs and capital (Anderson, 1985). But Sethi (1987) also noted that multinational corporations might build plants and sell products that cause considerable shifts in consumption patterns and adversely affect both local industry and consumer welfare.
Moreover, Anonymous (1985) pointed out that companies in continental Europe are starting to take strategic involvement in their communities seriously. One of the factors driving this trend is the globalization of business, with multinational companies at the forefront of change. Vance and Paderon (1993) indicated that multinational corporations have a moral responsibility and a set of correlative duties acquired as business institutions. These moral duties include the responsibility to assist all employees, including expatriate managers; to avoid the semblance of discriminatory treatment; to encourage full-status integration into a global economy; to foster personal enlightenment and self-enrichment; to help individuals develop useful, marketable skills; to contribute to the development of a greater and more functional national labor skill base; and to encourage a long-term focus on creating enduring value for a maximum number of stakeholders, rather than short-term, short-sighted profit for only a few. Nevertheless, the role of multinational corporations since the end of the 1960s in shaping the world economy, particularly in less developed parts of the globe, has been closely scrutinized (Blond, 1978). In addition, multinational companies whose subsidiaries concentrate on manufacturing activities play an increasingly important role in the economic life of many developing countries (Blond, 1978). MNCs will continue to be controversial in their roles, emphases, directions, and priorities as long as they constitute the dominant forces in international business operations. For this reason, MNCs have drawn ever-increasing interest from the popular media and scholarly research. Their role in developing countries should remain the focus of various groups for years to come, because the aspirations and expectations of developing countries are important for the peace and stability of our world (Ali and Al-Shakhis, 1991).
Moreover, d'Aquino (1996) indicated that multinational companies are accelerating the exchange of innovations and people across open borders, and that they are well advised to devote time and resources to helping develop consensus at the multinational level to advance the cause of social progress. Multinational corporations will have to become more sensitive to all aspects of technology transfer. They will also have to consider not only profit maximization but also the stability of society and its cultural values on one hand, and growth accompanied by a more equitable distribution of income on the other. Only when multinational corporations and the countries involved are in harmony, and can stand the test of public scrutiny, will the corporation be able to survive and grow in the new international environment. Investments will have to be justified simultaneously for their economic efficiency, political legitimacy, and moral sufficiency (Sethi, 1987). The purpose of this research is to assess and analyze Saudi students' responses to, and evaluation of, whether corporations are formulating new social goals. This research intends to answer questions regarding the contribution of MNCs in seven dimensions: as an institution; citizen as an initiative or rational unit; legitimacy of the policy makers; personal rationality; social responsibility; legitimacy of the outsiders; and welfare of the society.
Copyright 2000-2016. All Rights Reserved