The Business Review, Cambridge

Vol. 3 * Number 2 * Summer 2005

The Library of Congress, Washington, DC   *   ISSN 1553-5827

Most Trusted.  Most Cited.  Most Read.

All submissions are subject to a double-blind review process


The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various business-related fields around the world to publish their work in one source. The Business Review, Cambridge will bring together academicians and professionals from all business-related fields to interact with members inside and outside their own particular disciplines. The journal provides opportunities for researchers to publish their papers as well as to view the work of others. All submissions are subject to a double-blind peer review process. The Business Review, Cambridge is a refereed academic journal that publishes scientific research findings in its field under ISSN 1553-5827, issued by the Library of Congress, Washington, DC. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread before submission; a service such as www.editavenue.com can be used for professional proofreading and editing. The journal will meet the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure our publications provide our authors with publication venues that are recognized by their institutions for academic advancement and academically qualified status.

The Business Review, Cambridge is published twice a year, in Summer and December. E-mail: jaabc1@aol.com; Website: BRC. Requests for subscriptions, back issues, and changes of address, as well as advertising, can be made via our e-mail address. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.

Copyright 2000-2017. All Rights Reserved

An Evaluation of State Income Tax Systems and their Impact on State Spending and Revenue - A Multi-State Study

Professor Demetrios Giannaros, University of Hartford, CT

 

ABSTRACT

The primary objective of this study is to carry out a multi-state public finance behavioral impact comparative analysis regarding the introduction of the income tax system in various states.  The emphasis of this multi-state study is to determine whether the introduction of an income tax system, a politically controversial issue, resulted in significantly higher levels of state spending or taxation after the introduction of such a revenue system.  We use econometric techniques (such as interaction variable analysis) to evaluate whether a significant structural change in state public policy on spending and taxation materialized after the introduction of the income tax in ten states.  Our results do not show behavioral consistency across the ten states under study subsequent to the introduction of the income tax.  In the last few years, about forty-five state governments in the USA have struggled to cover unexpectedly large budget deficits.  The economic debate on the issue of state budget deficits revolved around the level of state government spending and taxing, the appropriate form of taxation, the impact of such forms of taxation on politicians’ behavior, and the relative stability (steady stream of revenue) of alternative tax systems.  Some of the discussion and debate turned to the system of taxation and its impact on the budget, spending, taxing, and the state economy.  This study attempts to evaluate whether the introduction of the income tax system resulted in behavioral changes in the taxing and spending of state legislatures.  For this purpose, we use econometric techniques to determine whether there were structural changes after the introduction of the state income tax in ten different states.  Critics of the state income tax system proclaimed that politicians would use the income tax as a way to expand the size of state government, that is, to “tax and spend.”  Proponents, on the other hand, viewed it as a fair and stable taxation system that does not necessarily result in bigger government.  More specifically, some proposed that the income tax system is a hindrance, while others saw it as a stable and fair revenue-raising system.  The proponents of the income tax suggested that a system of taxation based on a proportionately more progressive income tax regime, relative to other, more regressive taxes, would create a more stable state revenue stream and avoid excessive fluctuations in state revenue.  The debate was so potent that some strongly believed the income tax would significantly harm the state economy and individuals.  The leading study quoted by the opposition at the time, an econometric study conducted by Prof. Thomas Dye (1990), predicted that the introduction of the income tax would have both negative economic and budget implications, with increased spending and taxation.  I became interested in studying this issue using econometric analysis in 2003, at the peak of the recent state budget deficits and debate.  A similar study has already been completed for the State of Connecticut (see Giannaros, 2004).  In that study, I did not find any significant statistical change in spending or taxation behavior after Connecticut introduced the income tax.
The purpose of this project is to evaluate the same issues in a number of other states that have introduced income tax systems over the last thirty years.  This is done in order to determine whether my findings in the case of Connecticut can be substantiated elsewhere and whether the overall findings allow us to reach a theoretical conclusion on this controversial public policy issue.  The next logical step in evaluating the behavioral changes of politicians when an income tax system is in place is to determine whether the same results can be obtained in other states that introduced a state income tax system during the 20th century.  Thus, this study uses similar econometric techniques to carry out a structural break estimation and analysis that allows us to determine whether the critics of the state income tax system are correct in their assumption regarding changes in the behavior of politicians with respect to taxation and spending.  This is the first time that the issue of state income tax impact on state government spending behavior is thoroughly studied using econometric techniques.  The overall results should have substantial taxation policy implications and may assist in resolving some of the misconceptions that exist in this public policy arena.  Thus, the purpose of this project is to carry out a multi-state comparative analysis using econometric techniques (such as interaction variable analysis) that allow us to evaluate whether a significant shift in state spending and taxation materialized after the introduction of the income tax.  That is, the primary emphasis of this multi-state study is to determine whether the introduction of an income tax system resulted in significantly higher levels of state spending or taxation as a percentage of the state economy or state income.  Moreover, we evaluate whether state tax systems relying on income tax revenue provide a relatively steady or stable stream of revenue.  Our multi-state study uses time-series analysis and econometric methods to analyze the spending and tax policy behavior of state governments after they introduced the income tax.  For this purpose, I use time-series data for the period 1944-2000 and determine whether there are any statistically significant changes in spending and taxing behavior during the second half of this period, when the income tax system is in effect.
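To illustrate the kind of interaction-variable (structural break) regression described above, the following is a minimal sketch in Python. It assumes an annual series of a state's spending share and a hypothetical income-tax adoption year; the file name, column names, and adoption year are illustrative and do not represent the study's actual data.

    # Illustrative structural-break test for one state (hypothetical data layout):
    # regress the spending share on a time trend, a post-income-tax dummy, and
    # their interaction; significant dummy/interaction terms would indicate a shift.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("state_finance.csv")          # columns assumed: year, spend_share
    df["post"] = (df["year"] >= 1991).astype(int)  # income-tax adoption year (example)
    df["trend"] = df["year"] - df["year"].min()
    df["trend_post"] = df["trend"] * df["post"]

    model = smf.ols("spend_share ~ trend + post + trend_post", data=df).fit()
    print(model.summary())  # t-tests on 'post' and 'trend_post' flag a structural change

In a sketch of this kind, a significant coefficient on the post-adoption dummy or on its interaction with the trend would correspond to the structural shift in spending behavior that the study tests for.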

 

Leadership Competencies: Can They Be Learned?

Dr. Stewart L. Tubbs, Eastern Michigan University

Dr. Eric Schulz, Eastern Michigan University

 

ABSTRACT

There is a substantial body of research evidence regarding the importance of leadership development to organizational success (Charan, Drotter and Noel, 2001; Fullmer and Goldsmith, 2001; McCall and Hollenbeck, 2002; McCauley, Moxley and Van Velsor, 1998; Viceri and Fulmer, 1997). However, there remains a controversy about whether or not leadership can, in fact, be learned. In this paper leadership is defined as “influencing others to accomplish organizational goals” (Tubbs, 2005). Based on the model presented in this paper, the rationale is advanced that some aspects of leadership are more or less fixed at a young age while others can be developed even well into adult life. This paper describes the model and explains which aspects of leadership are fixed early in life and which are more able to be developed. A taxonomy of leadership competencies is also presented. Approximately $50 billion a year is spent on leadership development (Raelin, 2004). Yet one of the most frequently asked questions of leadership scholars is whether leadership can, in fact, be taught and learned. The answer seems to be a qualified yes. In other words, some aspects of leadership are more likely to be learnable and others are less so. For the purposes of this paper, leadership is defined as “influencing others to accomplish organizational goals” (Tubbs, 2005). Leadership is often discussed in terms of competencies (Boyatsis, 1982; Goleman, Boyatsis and McKee, 2002; Whetton and Cameron, 2002). Competency is a term that describes the characteristics that lead to success on a job or at a task (Boyatsis, 1982). Competencies can be described by the acronym KSA: knowledge, skills and abilities. The model in Appendix A shows that leadership competencies can be represented by three concentric circles. These three circles describe three distinct aspects of leadership. The innermost circle includes an individual’s core personality. The second circle includes an individual’s values. The outermost circle represents an individual’s leadership behaviors and skills (i.e., competencies). The authors contend that (1) the attributes in the innermost circle are more or less fixed at a young age and are unlikely to be changed as a result of leadership development efforts; (2) a person’s values are somewhat more malleable than personality characteristics, yet more stable and perhaps more resistant to change than behaviors; and (3) the behaviors represented in the outermost circle are the most likely to be changed through leadership development efforts. Each of these circles is discussed below. Personality represents the accumulation of enduring physical and mental attributes that provide an individual with his or her identity.  These attributes result from the interaction of heredity and environmental factors.  Determinants of personality can be grouped in four broad categories: hereditary, cultural, familial and social interactions.  Each of these perspectives suggests that an individual’s personality is a relatively enduring characteristic formed early in life.   Genetic specialists argue that components of an individual’s personality are in large part hereditary (Holden, 1988).  Personality is also affected by an individual’s culture, because culture directs what an individual will learn and shapes the context in which behavior is interpreted (Hofstede, 1984).
While culture dictates and restricts what can be taught, a person’s family plays a key role in the constitution of an individual’s personality development.  The overall social context created by parents is vital to personality development (Levinson, 1978).  Besides family influences on personality, social interactions in the environment affect personality by dictating what is acceptable and customary in the social group. An individual’s core personality is a relatively permanent characteristic of that leader.  It is formed by hereditary, cultural, familial and social interactions.  Research findings indicate that individual personalities differ along dominant personality dimensions, the attribution of events impacting the individual, and the preferred manner of resolving unmet needs.   In sum, the personality research suggests that an individual’s core personality, the innermost circle, is formed early in life and, once acquired, is rather immutable. While personality is certainly a strong influence on behavior, an individual’s values also strongly shape behavior (Rokeach, 1965). Witness the recent U.S. presidential election: exit polls showed that, more than any other factor, “moral values” shaped voters’ choice of candidate. Similarly, the scandals in American corporations have resulted in a loud outcry for an increased emphasis on business ethics in American business schools. The strongly held value is that individuals and businesses that perform in ethical ways are much more likely to succeed in the long run. The outermost circle in the model shown in Appendix A describes the competencies associated with effective leadership. The fifty competencies are clustered under seven metacompetencies, each of which is discussed below.  Leaders can gain the respect of followers by demonstrating their knowledge of the entire organization. Behaviors can include the use of systems theory to show the realization that changes in one part of the organization often can and do impact other parts of the system. Effective utilization of technology such as the Internet and an organizational intranet are other such behaviors. Acting in a way that demonstrates global sensitivity is another skill. Utilizing effective compensation plans is another critical organization-wide competency. Demonstrating an overarching commitment to ethical practices is still another “big picture” competency.  Demonstrating a compelling and achievable vision and a decisive pursuit of that vision is more likely to lead to organizational success. Showing inclusiveness and respect for diversity is another competency that can lead to organizational success. Overcoming obstacles and adversity will also most likely result in organizational success. Attitudes include demonstrating appropriate self-confidence and confidence in others as well.

 

Foreign Direct Investment in Post-Soviet and Eastern European Transition Economies

Nataliya Ass, Glasgow Caledonian University, Glasgow, UK

Professor Matthias Beck, Glasgow Caledonian University, Glasgow, UK

 

ABSTRACT

Foreign Direct Investment (FDI) is generally considered central to the development of transition economies. Based on the ‘benign model’ of FDI and development (1), it is often assumed that the attraction of FDI is not only a key to the long-term integration of these economies in the global market, but also represents one of the most effective means for short-run economic recovery. This paper argues that unstable countries, in particular post-Soviet states, can attract riskier investors who prefer relatively weak political regimes over stronger ones and who reduce their investment inputs once host states become more assertive. Using a panel dataset for 27 countries, including post-Soviet and Eastern European states, for the years 1998-2002, this paper examines the relationships between the extent of Foreign Direct Investment (FDI) and country risks and other economic indicators via conventional LSDV and GLSE regression models. In this context, it is noted that the ability to attract FDI, in general, corresponds with indicators of stability and policy consistency in a country, whereby a strong negative correlation exists between FDI as a percent of GDP and levels of economic risk in all countries. It can also be observed that FDI, for most countries, is negatively related to per capita GDP, which indicates that foreign investors are likely to reduce levels of investment in a country once a certain level of prosperity and, possibly, political stability is reached. Furthermore, the paper notes that, for certain groups of countries, in particular European post-Soviet republics, FDI is positively related to debt and negatively related to trade balance. This is likely to reflect some of the adverse effects of FDI which arise when host governments overspend as a consequence of expected revenue inflows. In recent years Foreign Direct Investment (FDI) has become increasingly important for transition and developing economies. This emphasis on FDI is closely related to the ongoing globalisation of world economic processes and the concomitant increase in international capital flows. Among transition economies, in particular, there is a belief that the process of building a compatible economic infrastructure requires the mobilization not only of national resources, but also the involvement of investment sources situated beyond the borders of the domestic economy (2). In this context, large-scale diversified international investment activity has often been considered one of the main levers for the long-term development of a highly integrated economy (3). Recipient states with positive attitudes towards FDI typically expect that the utilisation of foreign investment will provide them with access to contemporary technologies and management, contribute to the creation of national investment markets, increase the efficiency of both production factors and goods markets, maintain macroeconomic stabilisation, and facilitate the solution of social problems which might have arisen during the transitional period (4). In his work, Moran (1998) identifies these assumptions with regard to FDI in transition economies as the ‘benign model’ of FDI and development. In accordance with this model, the host country expects that “under reasonably competitive conditions FDI should raise efficiency, expand output, and lead to higher economic growth in the country” (5).
Moreover, the model emphasizes the fact that “the additional supply of capital should lower the relative return to capital, while the additional demand for labour should bid up the wages of workers, thereby equalizing the distribution of income and improving (quite probably) health and education throughout society” (6). In recent years, some empirical research has argued that “FDI is not necessarily an indication of good health for an economy; to the contrary, riskier countries with less developed financial markets and weak institutions tend to attract less capital, but more in the form of FDI” (7). Hausmann and Fernandez-Arias (2000), for instance, suggest that the proposition “that capital inflows tend to take the form of FDI – share of FDI in total liabilities tends to be higher – in countries that are safer, more promising and with better institutions and policies” (8) is misleading. They find that, while capital flows tend to go to countries that are safer, more developed, more open, more stable and have better and more advanced institutions and financial markets, the share of FDI in total capital flows to these countries tends to be lower than the share of other forms of capital flows. Implied in this analysis is the view that FDI is not necessarily an indication of good health; rather, countries that are riskier, poorer, more volatile, more closed, less financially developed, with weaker institutions and with more natural resources, tend to attract a greater share of capital in the form of FDI (9).  It can be argued that, contrary to the expectations of host states that FDI will always improve economic welfare as a result of the attraction of ‘benign FDI’ (10), it is often ‘malign FDI’ (11) that targets unstable transition and developing countries with high levels of country risk and political instability, which are often experiencing a lack of progress in the implementation of structural reforms. According to Moran’s ‘malign model’ of FDI and development: “instead of filling the gap between savings and investment, Multinational Enterprises (MNEs) may lower domestic savings and investment by extracting rents and siphoning off the capital through preferred access to local capital markets and local supplies of foreign exchange. Instead of closing the gap between investment and foreign exchange, they might drive domestic producers out of business and substitute imported inputs. The MNE may reinvest in the same or related industries in the host country and extend its market power. The repatriation of profits might drain capital from the host country. MNEs’ use of “inappropriate” capital intensive technologies may produce a small labour elite while consigning many workers to the ranks of the unemployed. Their tight control over technology, higher management functions and export channels may prevent the beneficial spillovers and externalities hoped for in more optimistic scenarios” (12). Following Hausmann and Fernandez-Arias (2000) and Elo (2003), it can be argued that the risky investment profile of post-Soviet countries is likely to lead to a situation where a greater share of FDI is ‘malign’. This means that investment either does not contribute to the host state’s growth, or even negatively influences the overall development of the host state.
A comparative analysis of the relationship between the volumes of FDI inflows and recipient country indicators is applied to five different groupings of countries: (1) all CEE countries, including post-Soviet states, the Baltic states, and Central European accession and non-accession countries; (2) all post-Soviet states except the Baltic states; (3) Central European accession and non-accession countries and the Baltic states; (4) Central European post-Soviet states, including Belarus, Moldova, Russia and Ukraine; and (5) Central Asian states.
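As an illustration of the LSDV approach mentioned above, the sketch below estimates a country fixed-effects regression of FDI (as a percent of GDP) on risk and macro indicators in Python; the data file and column names are hypothetical stand-ins for the paper's panel, not its actual variables.

    # Illustrative LSDV (country fixed-effects) panel regression of FDI/GDP on
    # risk scores and macro indicators; column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    panel = pd.read_csv("fdi_panel.csv")   # columns assumed: country, year, fdi_gdp,
                                           # econ_risk, gdp_pc, debt, trade_balance
    lsdv = smf.ols(
        "fdi_gdp ~ econ_risk + gdp_pc + debt + trade_balance + C(country)",
        data=panel,
    ).fit()
    print(lsdv.summary())  # country dummies absorb time-invariant country effects

The country dummies play the role of fixed effects, so the remaining coefficients capture within-country associations between FDI intensity and the risk and macro indicators, which is the kind of relationship the paper reports.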

 

Emotional Intelligence:  Are Successful Leaders Born Or Made?

Von Johnson, Woodbury University, CA

 

ABSTRACT

Is leadership a ‘natural-born’ set of skills, or can personality traits that are common to superior leaders be taught?  This paper accepts the idea that heightened emotional intelligence plays a key role in leadership success, but can emotional intelligence be successfully acquired and retained through training programs? We will examine this argument from three perspectives. First, the paper will present the historical framework and contemporary thinking surrounding ‘emotional intelligence’.  Second, we will briefly examine the results of various case studies where emotional intelligence measures are tested in the workplace.  Third, the paper will conclude with remarks from the consultant community charged with executive training and enhancing leadership by teaching ‘emotional intelligence’. Are successful leaders born or made?  If a leader possesses superior knowledge in his or her field, coupled with an academically acquired set of business skills and ‘common sense’, what ingredients are missing that set the exceptional leader apart from the pack?  Is there really an ‘emotional intelligence’ that complements and accentuates the leader’s tool chest?  If so, what role does emotional intelligence (EI) play in the mix of acquired skills, learned experience and personality that describe the ‘star performer’?  Ultimately, if these ingredients are quantifiable and easy to understand, can training programs be developed and implemented with positive, measurable results?  The roots of emotional intelligence are found in twentieth-century psychological research. As early as the nineteen-forties, David Wechsler wrote about the ‘non-intellective’ versus the ‘intellective’ components of personality (Wechsler, 1940), referring to the individual’s affective, social and personal traits.  Wechsler later clarified his position, adding that the non-intellective factors of personality are essential ingredients for predicting an individual’s capacity to succeed in life (Wechsler, 1943).  Wechsler concluded that total intelligence on a human level could not be measured without regard for non-intellective ingredients. Wechsler’s work gained support from other researchers including his contemporary, Robert Thorndike, who described ‘social intelligence’ as “the ability to understand and manage men and women, boys and girls – to act wisely in human relations” (Thorndike & Stein, 1937, p. 228).  Still later, Howard Gardner (1983) suggested that social intelligence was composed of ‘intrapersonal’ and ‘interpersonal’ intelligence.  Gardner argued that the measure of these two factors stood side by side with traditional measures (e.g., IQ, or Intelligence Quotient).  Gardner assigned five key abilities that compose social intelligence:  self-awareness, managing emotions, motivating oneself, empathy and handling relationships. In his doctoral dissertation, Bar-On (1988) coined the term ‘emotional quotient’ (EQ) as analogous to intelligence quotient (IQ).  Bar-On described an array of emotional and social traits and skills the individual enlists to cope with environmental demands.  This model includes (1) the ability to be aware of, to understand and to express oneself; (2) the ability to be aware of, to understand and to relate to others; (3) the ability to deal with strong emotions and control one’s impulses; and (4) the ability to adapt to change and to solve problems of a personal or social nature.
Bar-On’s model of psychological well-being and environmental adaptation includes five areas:  interpersonal skills, intrapersonal skills, adaptability, stress management and general mood (Bar-On, 1997). In 1990, Mayer and Salovey coined the term ‘emotional intelligence’ and used it to describe a “form of social intelligence that involves the ability to monitor one’s own and others’ feelings and emotions, to discriminate among them, and to use this information to guide one’s thinking and actions” (Salovey & Mayer, 1993, p. 433). More specifically, emotional intelligence is the ability to perceive emotions, to access and generate emotions to assist thought, to understand emotions and emotional knowledge, and to reflectively regulate emotions to promote emotional and intellectual growth (Mayer & Salovey, 1997).  This approach placed emotional intelligence among other ‘intelligences’ that were subject to objective testing.  As with other ‘intelligences’, the abilities described by Mayer and Salovey conveyed and supported the idea that emotional intelligence is distinct from other intellectual abilities, improves over time and is not a simple set of ‘preferred behaviors’ (Mayer, Salovey, Caruso & Sitarenios, 2001; Mayer, Caruso & Salovey, 1999). The entry of emotional intelligence into the business environment really began with Daniel Goleman’s 1995 best-selling book, Emotional Intelligence.  Goleman asserts a theory that mastering the domains of Self-Awareness, Self-Management, Social Awareness, and Relationship Management translates into success in the workplace (Goleman, 2001). Within these four domains lie the competencies that determine success in the workplace.  For instance, the domain of Self-Awareness contains the learned competency for ‘accurate self-assessment’.  According to Goleman, these competencies are learned rather than genetically acquired, an important distinction from previous theories. Goleman starts with the idea that mastering specific abilities within emotional intelligence is important, but he ends with the notion that EI is a foundation for learned competencies, which enable greater effectiveness in the workplace (Goleman, 2001). Another important distinction of Goleman’s work is the grounding of his theories in the workplace.  Goleman’s EI domains and competencies are listed in Table 1 (Goleman, Boyatis and McKee, 2002).

 

African American Women Striving to Break through Invisible Barriers and Overcome Obstacles in Corporate America

Dr. Linda Joyce Gunn, Indiana University Northwest, Gary, IN

 

ABSTRACT

The purpose of this descriptive study is to determine the role of African American women in corporate America, specifically to provide some basic information about the attitudes toward and perceptions of barriers that prevent and preclude African American women from reaching executive positions that involve formulating policy, and to offer suggestions for the removal of those barriers.  The results indicate that there are barriers to the advancement of African American women in corporate America, particularly in their ability to formulate policy.  Organizations must institute changes to eliminate these barriers by considering the five predictors that determine whether African American women can formulate policy at the highest level of the organization: 1) the opportunity to implement policy that impacts the department, 2) the opportunity to implement policy that impacts the entire organization, 3) earning an advanced degree, 4) the availability of an African American male mentor, and 5) the availability of an African American male role model.  If African American women are to be in positions of power: 1) there must be an opportunity to implement policy that impacts the organization, 2) the organization must be willing to sever business relationships with clients and vendors if they do not respect the organization’s employees, and 3) the individual must make a commitment to remain in the organization.  Finally, organizations must take the initiative to eliminate barriers by accepting employees for who they are and what they contribute to the organization, capitalizing on the applicable talents each employee possesses, providing opportunities for employees to succeed, and providing the same opportunities to African American females as they do for white males and white females. Barriers [refer to factors that] hold people back and separate or hinder them from attaining certain goals; in particular, they impede corporate progress.  The implication is that there is something on the other side of the barrier that can and should be attained.  Barriers must be considered as an explanation for the low numbers of African American women in corporate America.  Barriers, in terms of professional advancement in the corporate environment, have not been utilized as explanations of women’s positions in the corporate arena (Domhoff and Dye, 1987). The purpose of this study is to determine if there are indeed barriers to the advancement of African American women in corporate America, particularly in their ability to formulate policy. Persons in power have line positions, formulate policy, and control operating budgets because these functions can be seen as vehicles for action.  Power is the ability to do, and as such, it means that one has access to those vehicles.  That a person has been given a formal title does not necessarily mean that power, which is expected to be included with the title, comes along as a package deal (Kanter, 1977).  This is the challenge for African American women in the corporate environment.  They are bestowed with the titles of manager, supervisor, director, and vice president, but beyond that, little if anything else is provided.  Herein lies the problem.  The goal of the study is to determine if African American women are involved in establishing policy that impacts the entire corporation.  Another goal is to provide suggestions to eliminate the barriers. Both the study and the analysis were descriptive and exploratory.
This study examines attitudes toward and perceptions of barriers and obstacles that prevent African American women from reaching positions of power that include formulating policy in the corporate environment.  The researcher submitted 1,600 stamped, addressed envelopes, one copy of the cover letter, and a clean copy of the questionnaire to the National Black MBA Association office headquartered in Chicago, Illinois.  The researcher was not made aware of the names or addresses of the subjects receiving the questionnaire, due in large part to the organization’s commitment not to violate the privacy of its members. The basis of the questionnaire was derived from the attitudes and perceptions of focus group participants.  As the literature shows (Stewart and Shamdasani, 1990), focus groups are useful in assessing and clarifying concepts and definitions at the beginning stages of questionnaire design.  The focus group participants were carefully chosen based on their positions within their respective organizations.  A set of questions was then asked at each session, with each participant responding and providing feedback based on the responses of others.  The continuous (audio-recorded) feedback provided a sense of quality control in the data collection process because the participants served as a check and balance to weed out false or extreme views (Patton, 1990), providing an effective way of gathering qualitative information.  Each participant’s statements were written verbatim by the researcher with the goal of transforming the statements into questions for a questionnaire.  The questionnaire was developed from written statements addressing issues raised in the focus groups and the literature.  The labels for the IVs (independent variables) and DVs (dependent variables) are variations of barriers discussed in the literature review. The purpose of this descriptive study was to explore this area of inquiry and to specify the organizational changes needed to enable African American women to be in positions of power that include formulating policy in the corporate environment.  This purpose was achieved by the use of a self-administered questionnaire using a 5-point Likert-type scale.

 

Global Terrorism: Past, Present & Future

Dr. Jack N. Kondrasuk, University of Portland, Portland, OR

Dan Bailey, University of Portland, Portland, OR

Matt Sheeks, University of Portland, Portland, OR

 

ABSTRACT

Terrorism has a deep history, with instances of terrorist-type activities recorded in the Bible; it presently involves many of the world’s countries, with every continent but Antarctica recording one or more terrorist attacks in the last year, and it has many facets to examine. Those facets include knowing about the perpetrators and their goals, the targets, the weapons, the events, and the effects of those events. Presently, the United States is the top target in the world and is likely to remain so for the foreseeable future. Al Qaeda is probably the top terrorist organization aiming at the U.S., with intentions of using more lethal weapons and producing severe damage. The future will probably see a significant reduction in ideological terrorism and increases in single-issue terrorist groups attacking major cities with weapons of mass destruction. To better prevent and respond to terrorism in the world, it is necessary to understand its origins. For the U.S., September 11, 2001, when the Twin Towers in New York were destroyed by terrorists and over 3,000 people were killed, marked a rude awakening to terrorism. In Spain, the bombings in Madrid significantly changed political policies. Australians found how terrorism could impact them from the bombing of a tourist restaurant in Bali. Russians were reminded of terrorism in their midst when hundreds were killed at a school in Beslan. Israel was reminded of terrorism every time a suicide bomber blew up a restaurant, while Palestinians thought of terrorism when one of their leaders was assassinated. People throughout the world need to understand terrorism. The purpose of this paper is to enable us to understand terrorism so as to be better able to deal with it. To this end, this paper will look at the origins of terrorism in the world, how they have led to present-day terrorism, and, very importantly, what we can expect regarding terrorism in the future. Before we can answer questions about origin, prevalence and future, we must define our topic: “terrorism.” However, defining the term is difficult to do. Different people and different countries have different views of the same behaviors. A “terrorist” to one is an insurgent, guerilla, militant, rebel, revolutionary, freedom fighter, warrior, soldier, or hero to another. At the core of terrorism is uncertainty. It only seems fitting that there is also ambiguity and uncertainty regarding a definition for terrorism itself. The U.S. Department of State, while conceding that there is no universally accepted definition of “terrorism,” uses the definition of “premeditated, politically motivated violence perpetrated against non-combatant targets by sub-national groups or clandestine agents, usually intended to influence an audience” (U.S. Department of State, 2004a, p. 1). The Department of State goes on to describe foreign, as opposed to only intra-nation, terrorist groups and lists 40 Foreign Terrorist Organizations (FTOs). The recent European Union definition regards “terrorism as violent crimes aimed at seriously destabilizing or destroying the fundamental political, constitutional, economic or social structures of a country or an international organization” (Treanor, 2004).
Considering different definitions of “terrorism,” the main aspects of global “terrorism” seem to include 1) a non-military group of people united in a common political, religious, or ideological cause, 2) who operate in a clandestine way, 3) without a publicly known headquarters, 4) committing or threatening to commit acts of significant violence, 5) against those who oppose their views, and 6) who are mainly civilians. Terrorists also 7) do not abide by civil laws or formal rules of conduct such as the Geneva Convention, 8) employ uncertainty and other psychological weapons to produce fear, 9) seek to influence others to take a particular political course of action, and 10) are based outside the country (or operate in more than one country) and could be referred to as a “Foreign Terrorist Organization” (FTO). This tends to exclude intra-country groups and typical workplace violence. Therefore, the use of “terrorism” in this paper seeks to employ these 10 facets. It is possible to trace terrorism to biblical times in the Middle East. In Chapter 34 of Genesis in the Bible, the “Rape of Dinah,” around 1850 BC, might be considered one of the first documented cases of terrorism (Wilkins, 1981).  It has also been stated that “the first terrorist campaign was launched in 48 A.D. by members of a Jewish sect called the Zealots, who sought to drive the Roman occupiers out of Palestine” (Maxwell, 2003, p. 14). A Shiite Islamic group that arose in Persia in the 11th century was known as the Assassins.  They sought government rule by their Islamic sect and killed opposing government leaders and members of other Muslim sects, and occasionally Christians (Laquer, 2002). In more modern times, Israel and Palestine have seen continuing disagreements and attacks on each other’s lands and interests. For instance, Al Fatah, emerging under the leadership of Yassir Arafat, destroyed an Israeli water pump installation in 1964.  Al Fatah was responsible for numerous attacks and also participated in training terrorists worldwide during the 1960s and 1970s (Kushner, 2003). Al Fatah joined the Palestine Liberation Organization (PLO) in 1969. The PLO is probably the best-known Palestinian terrorist organization, because the PLO was an umbrella organization for various Palestinian terrorist organizations. Arafat was the PLO chairman from 1969 until his death in 2004. The PLO was involved in the fighting during the Lebanese civil war in 1975, siding with the Lebanese National Movement, a left-wing Arab nationalist organization (Kushner, 2003).

 

“Winning and Losing Research and Business Methodologies for US Government Contracts”

Dr. Shawana P. Johnson, Global Marketing Insights, Strongsville, OH

Dr. Steven M. Cox, Meredith College, Raleigh, NC

 

ABSTRACT

In 1969 the United States government began the Earth Resources Technology Satellite (ERTS) Program, later changing the name to the Landsat Program in 1975.  During the Carter administration, consideration was given to commercializing the operation of the system and transferring distribution to a private company. Congress passed the Land Remote Sensing Act of 1984, providing legislative authority to transfer the satellites to the private sector.  At the same time, a request for proposal (RFP) was issued for companies to bid on the contract to manage the Landsat program. EOSAT, a joint venture between RCA (which was shortly thereafter purchased by General Electric Aerospace) and Hughes Aircraft, was started in 1984 for the sole purpose of competing on the Landsat procurement. One of the first tasks facing EOSAT, after winning the contract, was how to establish a true marketing and sales function to ensure that EOSAT could become a viable commercial entity.  Since the initial EOSAT staff was technical in nature, the EOSAT Board of Directors decided to subcontract all of the marketing and sales functions to the Earth Satellite Corporation (Earthsat). In an agreement between EOSAT and Earthsat, Earthsat received an exclusive contract to sell all of EOSAT’s Landsat data. After a year of difficulties, the EOSAT Board of Directors voted to end the arrangement with Earthsat.  Since no detailed succession planning had been done prior to the decision by the Board of Directors, the problem for EOSAT was what to do next: outsource again, build an in-house sales capability, do no marketing and sales, or something else?  In 1969 the United States government began the Earth Resources Technology Satellite (ERTS) Program, later changing the name to the Landsat Program in 1975.  The first satellite, Landsat 1, was successfully launched in 1972. It soon became apparent that information about the surface of the earth from Landsat 1 could serve a variety of private sector and government needs.  The US government continued the program, designing, building, launching, and managing Landsats 2-5.  During the Carter administration, consideration was given to commercializing the operation of the system and transferring distribution to a private company.  The first step in this process was the transfer of control of the satellites to the National Oceanic and Atmospheric Administration (NOAA), a part of the Department of Commerce (DOC), in 1981.  Landsats 4 and 5 were then launched in 1982 and 1984, respectively. Taking the lead from the Carter administration, in 1983 the Reagan administration began the process of transferring image distribution and operational control from the government to the private sector.  It was believed that the steady growth in satellite image usage would lead to a viable commercial industry and the private development of earth observation satellites.  Congress soon passed the Land Remote Sensing Act of 1984, which provided legislative authority to transfer the satellites to the private sector.  At the same time, a request for proposal (RFP) was issued for companies to bid on the contract to manage the Landsat program.  Seven companies initially showed interest in the program; eventually the DOC selected two, the Earth Observation Satellite Corporation (EOSAT) and a consortium of Kodak and Fairchild, for final bidding.  EOSAT, a joint venture between RCA and Hughes Aircraft, won the bid and began operation in 1985.
Under NASA, and later NOAA, distribution of satellite data was done in two ways.  First, all data received in the United States were archived by the US Geological Survey (USGS) and distributed free of charge to US government agencies and sold at $2,500 per image to the private sector and foreign governments.  Second, NOAA licensed ground stations in several countries to receive, process, and distribute the data received by that ground station after the payment of a licensing fee of $600,000 per year and a nominal royalty on each image sold.  Responsibility for the management of the ground station relationships was also transferred to EOSAT. EOSAT, a joint venture between RCA and Hughes Aircraft, was started in 1984 for the sole purpose of competing on the Landsat procurement.  RCA and Hughes had combined to build the Landsat satellites, RCA building the satellite and Hughes building the camera.  They were under contract with NOAA to build the follow-on satellites, Landsats 6 and 7, when EOSAT was awarded the contract to manage Landsats 4 and 5.  At the time EOSAT assumed responsibility for the Landsat program, the US government employees responsible for the operations, production, and customer service functions were transferred to EOSAT as permanent employees.  However, because the US government did not actively market satellite data as part of its mission, no marketing or selling function other than an order-taking group housed in South Dakota was in place at the time of the transfer.  Since the internet was unknown during this period, few individuals outside of the government and academia had ever seen Landsat imagery, let alone knew of its uses. One of the first tasks facing EOSAT was how to establish a true marketing and sales function to ensure that EOSAT could become a viable commercial entity.  Since the initial EOSAT staff was technical in nature, the EOSAT Board of Directors decided to subcontract all of the marketing and sales functions to the Earth Satellite Corporation (Earthsat).  Earthsat, a leading processor of satellite and other spatial data, made maps and other ‘added-value products’ for the private sector and government agencies, including the military, and had a well-established technical marketing and sales function.  In an agreement between EOSAT and Earthsat, Earthsat received an exclusive contract to sell all of EOSAT’s Landsat data.

 

Relative Efficiency of Computer and Computer Services Companies

Seetharama L. Narasimhan, University of Rhode Island, Kingston, RI

Allan W. Graham, University of Rhode Island, Kingston, RI

 

ABSTRACT

This paper analyses the relative efficiency of US computer and computer services companies using Data Envelopment Analysis (DEA).  DEA provides an integrated non-parametric approach to performance measurement based on a specified set of inputs and outputs. Our objective is to investigate US computer companies and rank them into different groups based on their performance.  The companies we study include Apple, IBM, Hewlett-Packard, Dell, Gateway, Unisys, Computer Science Corporation, Arrow Electronics, and Affiliated Computer Services.  Our evidence suggests the sample companies fall into three categories: firms that are consistently good performers (IBM and Affiliated), firms that are improving (Apple and Dell), and firms whose performance is erratic and/or declining over the five-year period (Gateway, CSC, Unisys, Arrow Electronics and HPQ). Studies cited in The Economist (June 10, 2000) report that the growth reported for the U.S. economy is actually concentrated in the computer industry itself.  Consequently, the computer industry is critical to the health of the U.S. economy, which makes it important that we understand the performance and productivity of its members. Ranking computer industry members in terms of productivity or operational performance requires that we address an important factor, which is that the industry is sometimes difficult to define.  Some ranking schemes compare very divergent companies under the umbrella of “IT” industry classifications. Business Week (2004) constructs its InfoTech100 to include not only computer manufacturers but also firms that use computers for the conduct of their business (e.g., cell phone providers).  We constrain our examination to the domestic computer manufacturers and related service providers, which are basically firms that provide personal computers and laptops, as well as other computer solutions, to business, government, and home-user markets. We find that the firms in our sample appear to fall into three categories: firms that are consistently good performers, firms that are improving, and firms that are declining. We also find that firms that ranked relatively well in prior studies are less well ranked in our sample period of 1999 to 2003.  The rest of this paper is organized as follows. In Section 2, we provide a brief overview of the DEA model.  In Section 3, we discuss our sample and present our results.  We conclude the paper in Section 4. DEA is a mathematical programming technique that provides an assessment of the operating efficiency of each member of a set of similar organizations, relative to the others. The DEA methodology identifies an efficient frontier, which consists of the most efficient decision-making units (DMUs).  The procedure is based on the notion that no other unit or linear combination of units can generate the same amount of outputs for the given inputs (Charnes et al., 1994). Charnes et al. (1978) developed the DEA methodology, which defines a nonparametric relationship between multiple outputs and multiple inputs. Firms can lie along the efficient frontier (and have a relative efficiency of 1) or inside the efficient frontier (with a relative efficiency of less than 1).  Fried et al. (1993) provide successful applications of the DEA methodology for comparing relative efficiencies in banks, restaurants, school districts, ferries, and hospitals.
Soteriou and Stanrinides (1997), in their efficiency study of banks, point out that efficiency can be measured as technical efficiency (that is, productivity), as price efficiency, or as some combination of the two. These different ways to specify the model can also be thought of as production-based efficiency or market-based efficiency. Technical efficiency is the use of the various factors of production, such as labor, materials, and overhead, as inputs. We examine technical efficiency for the large computer manufacturers as well as computer consulting firms that offer solutions to their customers.  Using DEA, the efficiency of unit i is defined as E_i = \frac{\sum_{j=1}^{n_0} u_j O_{ij}}{\sum_{j=1}^{n_1} v_j I_{ij}}, where O_{ij} represents the value of unit i on output j, I_{ij} represents the value of unit i on input j, u_j is a nonnegative weight associated with output j, v_j is a nonnegative weight associated with input j, n_0 is the number of output variables, and n_1 is the number of input variables.  The values of inputs and outputs are known. Therefore, the DEA problem consists of maximizing the weighted sum of outputs when all input and output values are known and specified.  This amounts to determining the values of the decision variables u_j and v_j in order to maximize the weighted output of each unit subject to constraints: the weighted sum of outputs cannot exceed the weighted sum of inputs for any unit, that is, \sum_{j=1}^{n_0} u_j O_{kj} \le \sum_{j=1}^{n_1} v_j I_{kj} for every unit k. To prevent unbounded solutions, we impose the normalization that the weighted sum of inputs for unit i equals one, \sum_{j=1}^{n_1} v_j I_{ij} = 1, which automatically constrains the weighted sum of outputs of unit i to be at most one. A study by Chen and Ali (2004) uses the computer industry to demonstrate a modified DEA method called “Malmquist Productivity”.  In their application, they use eight large computer and office equipment manufacturers for the period 1991 to 1997.  The computer manufacturing component of their sample overlaps with ours and includes the firms Apple, IBM, HP, and Compaq.  Their model consists of three inputs (assets, shareholders’ equity, and number of employees) but only a single output, revenue. During their sample period, Apple and Compaq appeared to be the best performers, whereas IBM and HP showed improvement at the end of the sample period.
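The linear program above can be solved unit by unit with any LP solver. The following is a minimal sketch of this CCR-style formulation in Python using scipy.optimize.linprog; the three-unit, two-input, two-output data are invented purely for illustration and are not the paper's sample.

    # Minimal CCR-DEA sketch using scipy.optimize.linprog; the two-input /
    # two-output data below are made up purely for illustration.
    import numpy as np
    from scipy.optimize import linprog

    outputs = np.array([[90., 12.], [75., 15.], [60., 8.]])   # O[k, j]: unit k, output j
    inputs  = np.array([[50., 30.], [45., 35.], [40., 20.]])  # I[k, j]: unit k, input j

    def ccr_efficiency(i):
        n_out, n_in = outputs.shape[1], inputs.shape[1]
        # variables: [u_1..u_n_out, v_1..v_n_in]; maximize weighted outputs of unit i
        c = np.concatenate([-outputs[i], np.zeros(n_in)])      # linprog minimizes
        # normalization: weighted inputs of unit i equal one
        A_eq = np.concatenate([np.zeros(n_out), inputs[i]]).reshape(1, -1)
        b_eq = [1.0]
        # for every unit k: weighted outputs cannot exceed weighted inputs
        A_ub = np.hstack([outputs, -inputs])
        b_ub = np.zeros(len(outputs))
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=(0, None), method="highs")
        return -res.fun                                        # efficiency score in (0, 1]

    for i in range(len(outputs)):
        print(f"unit {i}: efficiency = {ccr_efficiency(i):.3f}")

Each call maximizes the weighted outputs of one unit subject to the normalization and ratio constraints described above, so a score of 1 places that unit on the efficient frontier and a score below 1 places it inside the frontier.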

 

Service Quality Perceptions: An Assessment of Restaurant and Café Visitors in Hamilton, New Zealand

Dr. Asad Mohsin, The University of Waikato, Hamilton, New Zealand

 

ABSTRACT

Growing competition in the hospitality sector and the need to remain customer-focused impose the need to provide excellence in service and quality in order to retain customers and attract new ones.  Restaurants and cafes in Hamilton (the fourth-largest city in New Zealand) are no different in experiencing this competitive environment.  This study attempts to assess the service quality perceptions of restaurant and cafe goers in Hamilton.  It draws upon the responses of 340 respondents to examine their expectations and actual experiences of dining out in a restaurant or cafe.  The findings, in revealing the actual experience of restaurant goers, indicate mostly above-average performance but also suggest gaps in actual experience across gender and age groups.  The findings are expected to help the owners of restaurants and cafes address those gaps and improve customer satisfaction, thereby encouraging repeat business and improving profits.  Customer satisfaction has long been a feature of interest to researchers and business owners.  In the contemporary hospitality business world, the true measure of success lies in an organization’s ability to satisfy customers continually (Gabbie and O’Neil 1996).  In other words, service quality impacts organizational profits as it is directly related to customer satisfaction and customer retention, and thereby to the development of customer loyalty (Baker and Crompton 2000; Zeithaml and Bitner 2000).  Business gurus state that it costs a lot more to attract new customers than to retain current ones (Rosenberg and Czepiel 1983; Oliver 1999).  There is a strong likelihood that repeat customers develop excellent loyalty towards the business.  However, having repeat customers is linked to customer satisfaction and a feeling of delight through quality products and service.  This holds true for restaurants and cafes too.  Most business executives agree that sound business strategies include a concern for quality (Getty and Getty 2003).  It is of interest to note, however, that despite all the attention the concept of quality has received in the published literature, there is still no consensus on the definition of quality.  According to Getty and Getty (2003), managers from different functional areas in firms tend to view the concept of quality from differing perspectives; this in turn impacts the achievement of quality.  The hospitality industry on an international level can easily be considered a multi-trillion dollar industry.  In the last two decades, the hospitality industry has witnessed exponential growth and increased competition (Lee, Barker and Kandampully 2003).  Superior quality of service is one crucial factor within the control of the hospitality industry that can add value to its product and lead to customer delight and loyalty (Lee et al. 2003).  A desired objective of all service marketers should be to provide a ‘zero-defect’ service, but the unique characteristics of services make this difficult (Berry and Parasuraman 1991).  Perceived service quality has been defined as a judgment or attitude relating to the superiority of a service (Zeithaml and Bitner 2000).  Much of the published literature on service quality has been built around the SERVQUAL model (see, for example, Parasuraman et al. 1988).  The application of SERVQUAL has been further refined (see Parasuraman et al. 1991).  There still remains, arguably, no one ideal method acceptable as a global measure of service quality.
Nevertheless, customer satisfaction can be seen as an important indicator of repeat business and customer loyalty.  Although Soriano (2002) suggests there is no guarantee of a satisfied customer’s repeat visit, it is almost certain that a dissatisfied customer will not return.  Perhaps total satisfaction is no longer wholly sufficient for a customer to return, while a delighted customer is perhaps more likely to return.  Since quality can mean different things to different customers and is an experience, it is more difficult to define or measure.  This has been more widely noted and addressed in studies that have examined the nature of the tourist and leisure experience (see, for example, McIntosh 1998).  Alternative instruments based on the performance approach to ensure customer expectations are met include SERVPERF, which measures customer satisfaction. Likewise, DINESERV and LODGESERV serve the same purpose of finding out customers’ perceptions in relation to the services offered by the provider (cited in Mohsin 2003).  Pizam and Ellis (1999) have advocated that, unlike material products or pure services, most hospitality experiences are an amalgam of products and services.  They further state that satisfaction with a hospitality experience such as a hotel stay or a restaurant meal is a sum total of satisfactions with the individual elements or attributes of all the products and services that make up the experience.  Hence, based on a review of the published literature, it is important to measure service quality perceptions and customer satisfaction across all attributes of all the services.  This study attempts to undertake such a task and measure the perceptions of restaurant goers in downtown Hamilton, the fourth-largest city in New Zealand. The outcome of such research is important for restaurant operators in Hamilton to comprehend their strengths and weaknesses in relation to the achievement of customer satisfaction. Parasuraman et al. (1988, 1991) identified the following five generic dimensions (RATER) of service quality (SERVQUAL) required in service delivery to facilitate customer satisfaction: reliability, assurance, tangibles, empathy, and responsiveness.
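As a simple illustration of the expectation-versus-experience gap analysis described above, the sketch below computes mean perception-minus-expectation gap scores by gender and age group in Python; the survey file and column names are assumed for illustration and do not reflect the study's actual instrument or data.

    # Illustrative gap-score computation in the spirit of SERVQUAL: mean
    # (perception - expectation) by gender and age group; column names are assumed.
    import pandas as pd

    resp = pd.read_csv("diner_survey.csv")     # columns assumed: gender, age_group,
                                               # expectation, perception (Likert 1-5)
    resp["gap"] = resp["perception"] - resp["expectation"]
    print(resp.groupby(["gender", "age_group"])["gap"].mean().round(2))

Negative mean gaps for a group would correspond to the kind of shortfall between expectation and actual experience that the study reports across gender and age groups.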

 

Health Care Delivery in OECD Countries, 1990-2000: An Efficiency Assessment

Dr. Sam Mirmirani, Bryant University, Smithfield, RI

Targol Mirmirani, University of Massachusetts, Amherst, MA

 

ABSTRACT

The health care delivery system of OECD nations for the period 1990-2000 is analyzed. Outputs are life expectancy and infant mortality. Inputs include per capita health care expenditure; population-adjusted physicians and hospital beds; protein intake; alcohol consumption; the percentage of children with measles inoculation; and school life expectancy. Using linear programming under Data Envelopment Analysis, performance efficiencies of member nations are ranked (top, bottom, and most improved) and evaluated. Rapid and persistent inflation of health care costs has been burdening national economies for a number of years. This problem is of particular concern to industrialized countries, where support for vital services such as national security and education is in growing public demand. In 1984, OECD countries had a mean health care expenditure per capita of $870.00 (with purchasing power parity adjustment); this figure rose to $1,983.00 in 2000. The United States, for example, has maintained per capita health care expenditure at twice the overall OECD mean over the past 20 years. While controlling costs is the priority, both developed and developing nations are trying to improve access to and quality of health care services for their citizens. The majority of research in the area of efficiency measurement has focused on the firm/organizational level. In the health care sector, as increasingly more resources are poured in, it is equally important to know the relative efficiency of the entire (macro) system. However, an important aspect of macro-level analysis is to have input and output measures that are consistent, definable and uniform across different systems. Among the various methods of efficiency assessment, researchers in the field of business management have frequently used Data Envelopment Analysis (DEA). The robustness of DEA has been the main reason for its wide popularity. Although there are published studies related to a comparative analysis of health care systems, none, at the present time, has applied the DEA method in assessing macro-level efficiencies. The objective of this paper is to apply the DEA technique to the measurement of health care efficiencies of OECD countries and to analyze differences among them with respect to their inputs and outputs. The organization of the paper is as follows: a review of the literature on health care systems is provided; the DEA methodology is elaborated; empirical testing and analysis are presented; and finally, a summary and conclusion are offered. On a regional level, Mirmirani and Li (1995) applied the DEA method in assessing the efficiency of health care delivery in the six states of the New England region (USA). They found that the State of Massachusetts scored the highest level of relative efficiency in the period 1985-1990. In a comparative analysis of health care systems, we have found a number of fairly recent studies. In a five-nation (New Zealand, UK, US, Canada, and Australia) study, Blendon et al. (2003) find that a significant number of citizens are dissatisfied with their health care system. With a focus on OECD countries, Anell and Willis (2000) suggest that instead of expenditure measures, using a resource profile is a more desirable alternative for an international comparison of health care systems. In another OECD-based study, Anderson et al. 
(2003) suggest that differences in health care spending patterns between the United States and the rest of the member nations are mostly explained by higher prices in the US.  Efficiency measurement, whether at the micro or macro level, is a valuable tool for health care policy and planning. There are various techniques for measuring the efficiency of health care services. If a benchmarking approach is undertaken, techniques such as ratio analysis, unit cost analysis, stochastic frontier analysis and DEA are common tools used for efficiency measurement. With a sample of 191 countries worldwide, Evans et al. (2001) applied the regression technique to conduct a comparative efficiency analysis of national health care systems.  Using life expectancy as a health output, as well as health expenditures and average schooling as inputs, they conclude that, while increased resources result in improved health, a more efficient use of resources can also contribute to the overall health care of a nation. As mentioned earlier, the large majority of DEA studies focus on micro-level applications, with very few on macro-level applications. For example, Dimelis and Dimopoulou (2002) use DEA to evaluate the productivity growth of countries in the European Union. They suggest that because DEA does not impose any constraints, it “frees” the user from assigning a priori assumptions about weights used in the model. In another study, Cherchye (2001) uses DEA to assess the macroeconomic policy performance of OECD countries. DEA’s robustness and its ability to allow researchers to assess the relative rankings of the observed decision making units (countries) are the motivation for its application. Within the health care sector, DEA gained much popularity in the 1990s. Hollingsworth et al. (1999) provide a thorough review of various applications. More recent applications of DEA to hospital efficiency measurement can be found in Hofmarcher et al. (2002), Giokas (2002), and Bhat (2001), the last of which also elaborates on its extensions, strengths and limitations. DEA’s comparison with regression analysis is applied to managed care organizations by Nyhan and Cruise (2000) and to primary care by Giuffrida and Gravelle (2001).
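As a concrete illustration of the linear-programming formulation underlying DEA, the following is a minimal sketch of an input-oriented, constant-returns (CCR) envelopment model solved with scipy. The toy input and output numbers are purely illustrative and are not the paper's OECD data; treating infant mortality through a simple "100 minus rate" transformation is likewise an assumption made only for the example.

# Minimal input-oriented CCR DEA sketch (illustrative data, not the study's dataset)
import numpy as np
from scipy.optimize import linprog

# rows = decision making units (countries); columns = inputs
# (e.g., per-capita spending, hospital beds per 1,000 population)
X = np.array([[1983.0, 3.1],
              [870.0, 2.4],
              [1500.0, 4.0]])
# columns = outputs (e.g., life expectancy, 100 - infant mortality rate)
Y = np.array([[77.0, 93.0],
              [75.0, 95.0],
              [78.0, 96.0]])

def ccr_efficiency(o, X, Y):
    """Efficiency of DMU o: minimize theta s.t. the peer combination uses no more
    than theta times DMU o's inputs while producing at least DMU o's outputs."""
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    c = np.r_[1.0, np.zeros(n)]                       # variables: [theta, lambda_1..n]
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])      # sum_j lambda_j x_ij - theta x_io <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])       # -sum_j lambda_j y_rj <= -y_ro
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun                                    # theta = 1 means relatively efficient

for o in range(X.shape[0]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")

Solving this program once per country and per year is what produces the efficiency rankings (top, bottom, most improved) described in the abstract.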

 

Monetary Policy as Company Guiding Principle

Dr. José Villacís González, San Pablo CEU University, Madrid, Spain

 

ABSTRACT

A company is a productive unit that transforms, or should transform, money into production. In fact, a company should generate money in order to convert it into production and so balance the amount of money and production. Such equilibrium between the amount of money generated and production is called the price equilibrium level. The essence of monetary policy is the ability of the monetary authorities to create money that can in turn be converted into production. This policy’s greatest asset is knowing how such alchemy comes about. There are several guiding principles or philosophies that apply to monetary policy. From an internal and emotional point of view, we can mention carefulness and fear. In a practical sense, the following can be added to the list: the amount of money created, and the route by which that money is supplied. Finally, monetary policy techniques themselves complete the list of guiding principles.  Companies convert money into income through production. Income is a flow of money arising from production through the companies’ activities. Income, as a money flow, will call for production; this is the classic form of Say’s Law. The medium of income is money, but money does not necessarily entail the creation of income. A company is a loop of contracts whereby factors are purchased and products are sold. Production factors are acquired in the purchasing process, and all of them are circulating factors, including those that macroeconomics considers fixed factors. Before they are actually sold, the products are also circulating capital.  Money is a production factor that can be purchased in the money market, ultimately coming (although not all of it) from the monetary authorities. In fact, it is really a rental because it has to be returned, and the rent to be paid is the interest rate.  The company purchases present money with future money, and since future money does not yet exist, such a purchase is accompanied by a promise of future production. Such is the essence of the matter, because that future production will then repay the original debt. And if such a transaction is completed, there will be equilibrium between real and financial magnitudes. We can at any time compare the cost-efficiency of the circulating capital for which a rent (or interest) is to be paid. If such a comparison were possible, then a group of companies would require a certain amount of money from the system. So, it is the companies’ productivity that partially determines the creation of money. Usually, accounting and financial analysis books rank assets from the most fixed to the most liquid. Money is the most liquid asset. We will categorize assets according to their closeness to or remoteness from production. Therefore, we will identify two basic categories. One is primary assets, such as machinery, raw materials, energy, and labor; these assets directly or indirectly convert inputs into final or semi-final products. The other category is wealth deposits, which may or may not be transformed into demand for primary assets. If they are transformed into demand for primary assets, they end up becoming production, and there is equilibrium between newly generated money and new production.  The opposite is pure speculation: production fails and equilibrium disappears. Yet we should not necessarily believe that the gap between a bigger money stock and a smaller production level will bring about inflation, because it is precisely the speculating activity that such a gap is financing. 
The value of a share portfolio is usually measured from a very broad approach that includes our securities and our physical assets. Such a broad approach considers money the most liquid component of wealth and apparently the least cost-effective. And we say the ‘least cost-effective’ because the rent of wealth is measured by its ability to obtain money inputs simply for being held in stock, and this is quite a naïve mistake. Money is said to have the advantage of being liquid but not cost-effective. Money is highly cost-effective: it is the most cost-effective of all assets due to the simple fact that it represents the entire existing wealth universe.  With a static approach, wealth can be measured at any time. This measurement is key to monetary policy because, in the end, monetary authorities wish to know what companies would do when faced with an increase in the money supply.  We think this is the moment to state that money’s perennial vocation is to circulate, and this circulatory operation involves the sale and purchase of goods, services and financial assets. By its very nature, money is restless and volatile, and therefore we cannot rest on a merely static quantitative valuation of money.  By analogy, it is not vital to know the volume of blood in our bodies, but we do need to know how it circulates and what that circulation means. The reason why we place emphasis on money is that Keynesians refer to money as additional wealth that should be measured, whereas monetarists (not all) believe that money’s immediate purpose is purchasing. If the stock of money increases, it will seek to purchase and not slumber in a static cornucopia of wealth. Companies are constantly adjusting their income. They usually generate positive income. Paying for production factors means building links for generating income, and the profit is income. Part of that profit is dividends or distributed profits, and another part is corporate savings (reserves, depreciation funds, etc.). Such corporate savings are kept in a diverse portfolio that will not generate income at first, although it will generate a yield. Successive increases in the amount of money will produce greater income and greater yield, but not all of it will be converted into retained income. A part, we insist, will turn into demand for production factors, which will normally develop into production and aggregate demand, and another part will temporarily go into a reservoir from which the company’s production system will drink.  If the company uses everything that enters the system, at the same time and in the same amount, to finance its productive activity, then monetary policy is effective. This is basically the key to monetarism. This statement is perfectly compatible with the permanent existence of previous savings in the company placed in a broad portfolio.

 

Viral Marketing: New form of Word-of-Mouth through Internet

Dr. Palto R. Datta, London College of Management Studies, London, UK

Dababrata N. Chowdhury, London College of Management Studies, London, UK

Dr. Bonya R. Chakraborty, London College of Management Studies, London, UK

 

ABSTRACT

This article sets out to examine the documentary evidence and expert opinions as to the extent to which a form of marketing via the internet that has become very prevalent and fashionable in the last few years, namely Viral Marketing, is really just an electronically extended version of what is generally regarded as one of the oldest forms of influence on a person’s decision to buy a product or service, traditionally known as Word of Mouth, as claimed by Hanson (2000). The means by which we consider this question is, first, to review the literature relating to the various aspects of Word-of-Mouth communication in order to achieve as full an understanding as possible of how it operates, how effective it is as a means of marketing, and the various factors which affect its operation. In particular, we have been especially interested to find out to what extent an essentially natural and spontaneous form of communication can be deliberately instigated, encouraged and controlled as a marketing tool and means of promotion. The second section of the article comprises a thorough analysis and overview of the Viral Marketing revolution in order to understand its means of operation, effects, potential and, finally, links with its predecessor, Word of Mouth. It is an inescapable conclusion that Viral Marketing has grown from the concept of word of mouth and has taken it to totally different levels of complexity, geographical spread and possibilities of manipulation. So it cannot really be described as ‘simply word of mouth on the net’, but even so the intimacy of the old-fashioned word-of-mouth recommendation still makes it a power to be reckoned with. Word-of-Mouth, “a form of interpersonal communication among consumers concerning their personal experiences with a firm or a product” (Richins, 1984), has undoubtedly always been a powerful marketing force (Sundaram et al., 1998). It is the only promotional method that is both the means and the end (Cafferky, 1997), thousands of times as powerful as conventional marketing (Silverman, 2001), seen as a dominant force in the marketplace for services (Mangold, 1999), and very important in shaping consumers’ attitudes and behaviours (Brown and Reingen, 1987). Katz and Lazarsfeld (1955) found the influence of word-of-mouth on consumer buying behaviour to be stronger than that of advertising or personal selling, and it is very important in the diffusion of new products (Rogers, 1983). Nevertheless, the current body of research provides little insight into the nature of word-of-mouth in the service marketplace (Mangold, 1999), and despite considerable attention to Word-of-Mouth there is little literature on the topic (see Sundaram et al., 1998). Sundaram et al. (1998) further asserted that most studies have focused on the consequences of word-of-mouth, the flow of word-of-mouth within the marketplace, and the moderating role of social and situational factors in the persuasiveness of word-of-mouth. However, Richins (1998) found that most word-of-mouth research has concentrated on negative rather than positive word-of-mouth, while Krishnamurthy (2001) pointed out that “WoM was never seen as the dominant paradigm for conducting marketing action”.  
The advent of the internet has changed the way both marketers and consumers pass on or receive messages about products and services. It has introduced a new platform for traditional word-of-mouth communication (Granitz and Ward, 1996) and “a new realization among marketers to adopt a win-win strategy with consumers” (Krishnamurthy, 2001), and word-of-mouth has recently received renewed (Anderson, 1998) and considerable (Sundaram et al., 1998) attention in the marketing literature.  This section organizes the theoretical body of knowledge on word-of-mouth contributed by marketing scholars.  The term word-of-mouth is used to describe verbal communications (either positive or negative) between groups such as the product provider, independent experts, family and friends, and the actual or potential consumer (Helm and Schlei, 1998). According to Schiffman and Kanuk (1994), word-of-mouth is “the informal flow of consumption related influence between two people”, while Mangold et al. (1987) describe word-of-mouth communications received from individuals as personal. Arndt (1967), one of the earliest researchers on word-of-mouth, characterized it as oral, person-to-person communication between a receiver and a communicator whom the receiver perceives as non-commercial, regarding a brand, product or service. However, Buttle (1998) asserted that word-of-mouth need not be brand, product or service focused; it can be organization focused. He further stated that in the electronic age it need not be “face to face, direct, oral or ephemeral”. Cox (1967) refers to WOM advertising quite simply as nothing more than a conversation about products, and Westbrook (1987) defined word-of-mouth (WOM) as “informal communications directed at other consumers about the ownership, usage, or characteristics of particular goods and services or their sellers”. All of the above definitions agree on some points: word-of-mouth is informal communication directed at a receiver about the communicator’s brand, product, service or organizational experiences, and it is personal communication, whereas information from other sources (e.g., radio, television, newspapers) is regarded as non-personal.

 

Are There Any Long-run Benefits from Equity Diversification in Two Chinese Share Markets: Old Wine and New Bottle

Dr. Tsangyao Chang, Feng Chia University, Taichung, Taiwan

Dr. Zong-Shin Liu, Feng Chia University, Taichung, Taiwan

Mei-Ying Lai, Feng Chia University, Taichung, Taiwan

 

ABSTRACT

This study provides evidence that there exist long-run benefits for investors from diversifying in two Chinese share markets over the period January 5, 1995 to May 31, 2001.  The evidence is based on tests for pairwise cointegration between the Shanghai and Shenzhen A-share and B-share stock price indexes, using five cointegration tests, namely the PO, HI, JJ, KSS, and BN approaches.  The results from these five tests are robust and consistent in suggesting that these two Chinese share markets are not pairwise cointegrated with each other.  These findings could be valuable to individual investors and financial institutions holding long-run investment portfolios in these two Chinese share markets. This study aims to explore whether there exist any long-run benefits from equity diversification for investors who invest in two Chinese share markets, namely those of the Shanghai and Shenzhen Stock Exchanges.  Recent empirical studies have employed cointegration techniques to investigate whether there exist such long-run benefits from international equity diversification (see Taylor and Tonks, 1989; Chan et al., 1992; Arshanapalli and Doukas, 1993; Roger, 1994; Chowdhury, 1994; Kwan et al., 1995; Masih and Masih, 1997; Liu et al., 1997; Kanas, 1999).  According to these studies, asset prices from two different efficient markets cannot be cointegrated.  Specifically, if a pair of stock prices is cointegrated, then one stock price can be forecast by the other stock price.  Thus, such cointegration results suggest that there is no gain from portfolio diversification. In this study, we test for pairwise long-run equilibrium relationships between two Chinese share markets by employing five cointegration tests, namely the PO, HI, JJ, KSS and BN approaches. (1) The findings of all five tests suggest that the two Chinese share markets are not pairwise cointegrated with each other.  The finding of no cointegration can be interpreted as evidence that there are no long-run linkages between these two Chinese share markets and, thus, that there exist potential gains for investors from diversifying across them.  These results are valuable to investors and financial institutions holding long-run investment portfolios in these two Chinese share markets. The motivations for this study are threefold.  First, China is a rapidly expanding emerging market, and the rapid growth of the Chinese economy has attracted the attention of international investors.  Second, the government policy of gradual relaxation of restrictions on foreign investment in Chinese share markets has further enhanced the importance of the Chinese share markets to international equity investors.  Third, the last decade has seen a significant increase in the integration of world capital markets.  In light of pressure for incorporating developing-economy stock markets into global investment strategies, studies on thin security markets have increased in importance.  Empirical results from stock markets such as the Chinese share markets are of great importance to global fund investors who may be planning to invest in these two Chinese share markets. The remainder of this study is organized as follows.  Section II describes the data used.  Section III presents the methodologies employed and discusses the findings.  Finally, Section IV concludes. 
Daily closing price indexes for A-shares and B-shares from both the Shanghai and Shenzhen Stock Exchanges are used in this study, and the period extends from January 5, 1995 to May 31, 2001.  Data are collected from the Core Pacific Securities Investment Trust Co. Ltd, Taiwan.  All series are measured in natural logs.  Figures 1 and 2 show the plots of the A-share and B-share price indexes from both the Shanghai and Shenzhen Stock Exchanges.  We examine how these four stock markets are correlated with each other.  The summary statistics and correlation matrices for the four stock market index returns (or log price changes) are presented in Table 1.   The average daily index returns are 0.05%, 0.07%, 0.05% and 0.07% for the Shanghai A-share, Shanghai B-share, Shenzhen A-share and Shenzhen B-share markets, respectively, over this sample period.  Regarding the standard deviation, we find that the Shanghai A-share market has the highest daily standard deviation, at 3.02%, whereas the Shanghai B-share market has the lowest, at 2.65%, over the sample period.  Table 1 also shows that index returns for each market are leptokurtic, since the relatively large value of the kurtosis statistic (larger than three) suggests that the underlying data are heavy-tailed and sharply peaked about the mean when compared with the normal distribution.  The Jarque-Bera test also leads to the rejection of normality in the daily returns data for these four markets.  Regarding the correlation matrix, we find that all the correlations are positive and significant.  The highest contemporaneous correlation is between the Shanghai A-share and Shenzhen A-share markets, while the lowest is between the Shanghai A-share and Shenzhen B-share markets.
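For readers who want to reproduce this kind of pairwise check, the sketch below runs an Engle-Granger style residual-based cointegration test on each pair of log price indexes. It is a simplified stand-in for the PO, HI, JJ, KSS and BN battery the study applies, and the file name and column labels are hypothetical.

# Pairwise cointegration sketch on log price indexes (hypothetical file/columns)
import numpy as np
import pandas as pd
from itertools import combinations
from statsmodels.tsa.stattools import coint

prices = np.log(pd.read_csv("china_indexes.csv", index_col="date",
                            parse_dates=True))  # hypothetical file
# columns assumed: shanghai_a, shanghai_b, shenzhen_a, shenzhen_b

for a, b in combinations(prices.columns, 2):
    tstat, pvalue, _ = coint(prices[a], prices[b])  # Engle-Granger type test
    verdict = "cointegrated" if pvalue < 0.05 else "not cointegrated"
    print(f"{a} vs {b}: t = {tstat:.2f}, p = {pvalue:.3f} -> {verdict}")

Failure to reject for every pair corresponds to the study's conclusion of no pairwise cointegration and hence potential diversification gains.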

 

Trade Relations and Stock Market Interdependence: A Correlation Test for the U.S. and Its Trading Partners

Dr. Steven ZongShin Liu, Feng Chia University, Taiwan

Dr. Tsangyao Chang, Feng Chia University, Taiwan

Dr. Kung-Cheng Lin, Feng Chia University, Taiwan

Mei-Ying Lai, Feng Chia University, Taiwan

 

ABSTRACT

This paper examines the trade relation hypothesis, which, in some recent studies, argues that differences in trade relations among countries can significantly explain (predict) differences in stock market interdependence and stock price transmission mechanisms. Based on trade relations between the U.S. and its major trading partners and applying, for the first time, a correlation test with a bootstrap procedure, the results indicate that the hypothesis hardly holds as a general rule. Regarding stock market interdependence, it holds only in some countries or in some specific trade relations. The initial responses of foreign stock markets to shocks in the U.S. market are correlated with foreign countries’ trade dependence upon the U.S.; however, their response patterns over time do not exhibit noticeable differences. The interdependence among national stock markets has been investigated by numerous studies (e.g., Arshanapalli and Doukas, 1993; Eun and Shim, 1989; Friedman and Shachmurove, 1997; Jeon and Von Furstenberg, 1990; Von Furstenberg and Jeon, 1989; Cha and Oh, 2000; Chan et al., 1992; Chowdhury, 1994; Dekker et al., 2001; Janakiramanan and Lamba, 1998; Masih and Masih, 2001; Rogers, 1994; Sheng and Tu, 2000; Chen et al., 2002; Elyasiani et al., 1998). In general, except in some emerging markets with severe restrictions on foreign investment, such as those of Taiwan and South Korea, a substantial amount of interdependence has been evidenced among national stock markets, especially after the October 1987 crash of the New York Stock Exchange. Moreover, markets with close geographic and economic proximity exert more significant influences over each other. Beyond documenting stock market interdependence, recent research is attempting to explore what economic fundamentals determine it. However, studies on the issue so far have been few in number, and the results are either contradictory or unsatisfactory. The earlier study of Von Furstenberg and Jeon (1989) finds that only a few demand-side economic events, such as exchange rates and oil and gold prices, have significant effects on the daily stock price changes of four major stock markets (the U.S., Japan, Germany, and the UK). They claim that national stock market interdependence may simply reflect contagious market shocks, unrelated to economic fundamentals. More recently, Karolyi and Stulz (1996) report that U.S. macroeconomic announcements, interest rate shocks, and industry factors have no significant effect on the correlations between U.S. and Japanese share returns. Serra (2000) also provides evidence that industry factors do not seem to affect cross-market correlations of 26 emerging markets. In contrast, Chen and Zhang (1997) demonstrate that the cross-country return correlations of Pacific-Basin markets are significantly related to trade activities. Cheung and Lai (1999) find that the long-term stock price co-movements of three EMS countries (France, Germany, and Italy) can be partly attributed to the co-movements of money supply, dividends, and industrial production, but the explanatory power is far from strong. Pretorius (2002) shows that, among possible fundamental factors, only bilateral trade and industrial production growth differentials are significant in influencing the correlations between 10 emerging markets; his model explains about 40% of the variation in the correlations. Moreover, Bracker et al. 
(1999) investigate the extent of stock market co-movements between pairs of 9 well-established stock markets and find that bilateral import dependence, geographic distance, market size differentials, and real interest differentials are the significant factors. Soydemir (2000) argues that differences in the stock market response patterns of three emerging countries (Mexico, Argentina, and Brazil) are consistent with differences in their trade flows with the U.S. For example, because Mexico has strong trade ties with the U.S., its stock price transmission pattern in response to shocks in the U.S. market is much more predictable.  Knowing the fundamental determinants of stock market interdependence and stock price transmission has important implications for policy makers and investors (Pretorius, 2002). From previous studies, it seems that trade relations are a relatively more significant factor in determining stock market interdependence, and differences in stock market interdependence and stock price transmission mechanisms could, at least partially, be explained by differences in trade relations among countries. The underlying economic foundation is that trading activities link the cash flows of trading partners, thereby making their stock markets more highly correlated (Chen and Zhang, 1997). If two countries have tighter trade relations, then their stock markets should be more interdependent (correlated), and stock price transmission patterns should be more predictable (Bracker et al., 1999; Pretorius, 2002; Soydemir, 2000). The purpose of this study is to examine the robustness of such previous findings, referred to here for simplicity as the trade relation hypothesis: that differences in trade relations among countries can significantly explain (predict) differences in stock market interdependence and stock price transmission mechanisms. Three reasons provide the rationale for the aim of this study. The first is to change the coverage of the stock markets explored. Among previous studies that support the explanatory power of trade relations, Chen and Zhang (1997) explore Asia Pacific markets, while Soydemir (2000) and Pretorius (2002) more specifically investigate several emerging markets. Unlike any of those, this study delves into the stock markets of the U.S. and its 10 major trading partners, which mostly consist of developed countries and the Asian Newly Industrialized Economies (ANIEs). The reasoning for this is that trade relations among these countries are all well established, rather than weak, and these trade relations differ in degree. This enables us to examine the robustness of the trade relation hypothesis, i.e., whether it holds as a general rule or just in special cases, such as in emerging markets.
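The kind of bootstrap correlation test the abstract describes can be sketched as follows: resample country pairs with replacement and build a confidence interval for the correlation between trade dependence on the U.S. and stock market co-movement with the U.S. The two arrays below are illustrative placeholders, not the paper's data.

# Percentile-bootstrap confidence interval for a cross-country correlation
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholders: one entry per trading partner
trade_dependence = np.array([0.31, 0.18, 0.25, 0.12, 0.27, 0.09, 0.22, 0.15, 0.20, 0.11])
market_correlation = np.array([0.55, 0.40, 0.48, 0.30, 0.52, 0.28, 0.45, 0.35, 0.41, 0.33])

def bootstrap_corr_ci(x, y, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the Pearson correlation."""
    n = len(x)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)              # resample pairs with replacement
        stats[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

lo, hi = bootstrap_corr_ci(trade_dependence, market_correlation)
print(f"observed r = {np.corrcoef(trade_dependence, market_correlation)[0, 1]:.3f}, "
      f"95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")

A confidence interval that excludes zero would be evidence for the trade relation hypothesis; an interval straddling zero would be consistent with the paper's conclusion that the hypothesis does not hold as a general rule.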

 

Testing the Safe-Haven Hypothesis for Selected African Currencies

Hsu-Ling Chang, National Yunlin University of Science & Technology and Ling Tung College

Hsioa-Ping Chu, National Chung Hsing University and Ling Tung College

Dr. Tsangyao Chang, Feng Chia University, Taichung, Taiwan

 

ABSTRACT

In this empirical note we test whether the U.S. dollar increases in value during times of uncertainty.  Our test of this “safe-haven” hypothesis is based on an ARMA-GJR-GARCH-M model for the currencies of twenty-two selected African countries.  We examine the period from January 1980 to May 2004.  Our empirical results support the safe-haven hypothesis for six countries only: Botswana, Central African Republic, South Africa, Zimbabwe, Burundi, and Burkina Faso. Doroodian and Caporale (2000) argue that the dollar is a “safe-haven” currency, experiencing increases in value in times of uncertainty.  Using a symmetric GARCH-in-mean model, Doroodian and Caporale (2000) obtain results consistent with the safe-haven hypothesis for the currencies of Egypt, El Salvador, Malaysia, and Mexico.  Chang and Caudill (2004) extend Doroodian and Caporale’s (2000) empirical work to test the “safe-haven” hypothesis using an ARMA-GJR-GARCH-M model for the currencies of six Asian countries (Malaysia, Philippines, Singapore, South Korea, Taiwan and Thailand).  They examine the period from January 1979 to August 1999, and their empirical results support the safe-haven hypothesis for two countries only: South Korea and Taiwan.  The safe-haven hypothesis predicts that increases in the conditional volatility of these foreign currencies increase the value of the U.S. dollar.  Thus, a positive relationship between exchange rate uncertainty and the value of the dollar would confirm the insurance aspect of investing in dollar-denominated assets.   In this study, we further extend Chang and Caudill’s (2004) empirical work to test the “safe-haven” hypothesis, using the same ARMA-GJR-GARCH-M model, for the first time for the currencies of twenty-two selected African countries.  We examine the period from January 1980 to May 2004 to see whether the “safe-haven” hypothesis holds in these twenty-two selected African countries.  This empirical study is organized as follows.  Section II discusses the data and presents summary statistics.  Section III describes the methodology employed and discusses the empirical results.  Finally, Section IV presents the conclusions. This empirical note investigates whether the U.S. dollar acts as a safe haven for the currencies of twenty-two selected African countries: Botswana, Central African Republic, Cote d’Ivoire, Gabon, Ghana, Kenya, Madagascar, Mali, Mauritius, Niger, Nigeria, Rwanda, Senegal, Sierra Leone, Tanzania, Uganda, South Africa, Zambia, Zimbabwe, Burundi, Burkina Faso, and Lesotho.  The period examined is January 1980 to May 2004.  This study uses bilateral exchange rates, and the exchange rate is defined as the national currency per U.S. dollar.  The exchange rate returns, R, are calculated as the first difference of the logarithms of consecutive exchange rates, $R_t = \ln(E_t) - \ln(E_{t-1})$.  All data were obtained from the AREMOS database of the Ministry of Education, Taiwan.  Descriptive statistics for exchange rate returns are reported in Tables 1.A, 1.B, 1.C, and 1.D.  The sample means of the exchange rate returns series are positive for all countries except Niger, Sierra Leone, Zimbabwe, and Burundi.  The sample standard deviations range from 27.25% for Cote d’Ivoire to 1.10% for Botswana.  Both the skewness and kurtosis statistics indicate that the distributions of the exchange rate returns are non-normal, a finding consistent with most of the literature.  
The Ljung-Box statistics for both 6 and 12 lags applied to returns [denoted by L-B(Q=k)] and squared returns [denoted by L-B(Q=k)-Square] indicate that significant linear and nonlinear dependencies exist in all currencies, with the exception of three countries (Cote d’Ivoire, Gabon, and Burkina Faso), and these nonlinear dependencies can be satisfactorily captured by Engle’s (1982) autoregressive conditional heteroskedasticity (ARCH) models.  Finally, all exchange rate returns series are found to be stationary according to the ADF and P-P statistics reported in Table 2.  Using a symmetric GARCH-in-mean model, Doroodian and Caporale (2000) obtain results consistent with the safe-haven hypothesis for Egypt, El Salvador, Malaysia, and Mexico.  However, recent empirical evidence indicates that an asymmetric GARCH model may be more appropriate for modeling the conditional variance, because the impact of news in the market may be asymmetric.  In particular, good and bad news may have different impacts in predicting future volatility.  There are several asymmetric GARCH models, such as the EGARCH model of Nelson (1992) and the GJR-GARCH model of Glosten, Jagannathan, and Runkle (1993).  In a comparison of these models, Engle and Ng (1993) suggest that the parameterization of the GJR-GARCH is the more promising approach.  Chang and Caudill (2004) extend Doroodian and Caporale’s (2000) empirical work to test the “safe-haven” hypothesis using an ARMA-GJR-GARCH-M model for the currencies of six Asian countries (Malaysia, Philippines, Singapore, South Korea, Taiwan, and Thailand), and their empirical results support the safe-haven hypothesis for two countries only: South Korea and Taiwan.  In this study we further extend Chang and Caudill’s (2004) empirical work and employ an ARMA(p, q)-GJR-GARCH(1, 1)-in-mean model to test the safe-haven hypothesis for twenty-two selected African currencies.  Our model is defined as follows:
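The abstract is truncated before the equations, so as a hedged point of reference the standard textbook ARMA(p,q)-GJR-GARCH(1,1)-in-mean specification is sketched below in LaTeX. Whether the conditional standard deviation or the variance enters the mean equation, and the exact lag orders, are assumptions rather than the authors' precise parameterization.

\begin{align}
R_t &= \mu + \sum_{i=1}^{p}\phi_i R_{t-i} + \sum_{j=1}^{q}\theta_j \varepsilon_{t-j}
      + \delta \sqrt{h_t} + \varepsilon_t, \qquad
      \varepsilon_t \mid \Omega_{t-1} \sim N(0, h_t), \\
h_t &= \omega + \alpha\,\varepsilon_{t-1}^{2} + \gamma\, I_{t-1}\,\varepsilon_{t-1}^{2}
      + \beta\, h_{t-1}, \qquad
      I_{t-1} = \begin{cases} 1 & \text{if } \varepsilon_{t-1} < 0, \\ 0 & \text{otherwise.} \end{cases}
\end{align}

Under this reading, with R measured as local currency per U.S. dollar, a positive and significant in-mean coefficient (delta) would indicate that higher conditional volatility is associated with dollar appreciation, which is the safe-haven prediction, while a nonzero asymmetry term (gamma) captures the differing impact of good and bad news.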

 

Long-run Benefits from International Equity Diversification between Taiwan and Its Major Trading Partners: Nonparametric Cointegration Test

Chin-Wen Mo, Feng Chia University, Taichung, Taiwan

Dr. Tsangyao Chang, Feng Chia University, Taichung, Taiwan

 

ABSTRACT

This study attempts to re-investigate whether there exist long-run benefits from international equity diversification between Taiwan and its major trading partners, Japan and the USA, using the more powerful nonparametric cointegration test developed by Bierens (1997), over the July 1, 1999 to December 31, 2004 period.  The results from this test suggest that the Taiwanese stock market is pairwise cointegrated with the Japanese and the US stock markets.  These findings should prove valuable to individual investors and financial institutions holding long-run investment portfolios in these markets. The purpose of this study is to explore whether there exist long-run benefits from international equity diversification for Taiwanese investors who invest in the equity markets of Taiwan’s major trading partners, namely Japan and the USA.  To explore this issue, recent empirical studies have employed cointegration techniques to investigate whether there exist such long-run benefits from international equity diversification (see Taylor and Tonks, 1989; Chan et al., 1992; Kasa, 1992; Arshanapalli and Doukas, 1993; Roger, 1994; Chowdhury, 1994; Arshanapalli et al., 1995; Masih and Masih, 1997; Kanas, 1998a, 1998b, 1999; Chang, 2001; and Chang and Caudill, 2005).  According to these studies, asset prices from two different efficient markets cannot be cointegrated.  If a pair of stock prices is cointegrated, then one stock price can be forecast from the other stock price.  If that is the case, there are no gains from portfolio diversification. In the above studies, the Johansen cointegration test (Johansen, 1988; Johansen and Juselius, 1990) has been widely employed.  This popular cointegration test is built on a linear autoregressive model and implicitly assumes that the underlying dynamics are linear.  However, there is ample empirical evidence against the linear paradigm.  Theoretically, there is no reason to believe that economic systems must be intrinsically linear (see Barnett and Serletis, 2000).  Empirically, many studies show that financial time series such as stock prices exhibit nonlinear dependencies (see Hsieh, 1991; Abhyankar et al., 1997).  The Monte Carlo simulation evidence in Bierens (1997) indicates that the standard Johansen cointegration framework suffers from a mis-specification problem when the true nature of the adjustment process is nonlinear and the speed of adjustment varies with the magnitude of the disequilibrium.  The work of Balke and Fomby (1997) also suggests a potential loss of power in standard cointegration tests under a threshold autoregressive data generating process.  Motivated by these considerations, in this study we re-examine the issue of stock market integration for the markets of Taiwan and its major trading partners, Japan and the USA, using the more powerful nonparametric cointegration test developed by Bierens (1997).  The results from this test suggest that the Taiwan market is pairwise cointegrated with the markets of Japan and the US during the period from July 1, 1999 to December 31, 2004.  Our finding of cointegration can be interpreted as evidence that long-run linkages exist in the Taiwan-Japan and Taiwan-US markets and, thus, that there are no potential gains for Taiwanese investors from diversifying in the Japanese and US equity markets, or vice versa.  This result should prove valuable to individual investors and financial institutions. 
The main reason for choosing these two markets is that strong international trade ties exist between Taiwan and these two countries.  For the year 2003, the share of Taiwan’s exports going to these two countries was 26.3% and the share of Taiwan’s imports coming from them was 38.8% (see Table 1).  There are two motivations for this study.  First, Taiwan is a rapidly expanding emerging market, and a significant number of Taiwanese investors have adopted diversification benefits as the primary criterion in investing outside Taiwan.  Second, the rapid growth of the Taiwanese economy has attracted the attention of international investors, and both Dow Jones and Morgan Stanley have included Taiwanese stocks in their international indices since September 1997.  This suggests that the issue of international linkages to the Taiwan share market is of practical interest to a number of international investors.  The remainder of this note is organized as follows.  Section II presents a discussion of the data.  Section III presents the tests for cointegration and discusses the findings.  Section IV concludes the paper. The data used in this study are daily closing stock price indices from July 1, 1999 to December 31, 2004.  Starting the data from 1999 limits the adverse effects of the 1997 Asian financial crisis.  We use the Weighted Index for Taiwan, the Nikkei Stock Average for Japan, and the Dow Jones Industrial Average for the US.  Data are collected from the Taiwan Stock Exchange and the AREMOS database of the Ministry of Education, Taiwan.  All indices are based on local currencies and all series are measured in logs.  Following Chowdhury (1994), we match the three time series by omitting some observations.  For example, seasonal festival or holiday entries (and others) are omitted to guarantee that all three markets have entries on a given date.  According to Chowdhury (1994), this procedure solves the problem of the data gap caused by holidays and other nonworking days.
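The holiday-matching step described above amounts to keeping only the dates on which all three markets traded, which can be done with an inner join on the date index. The file names and column labels below are hypothetical.

# Align three daily index series on common trading dates (hypothetical files)
import numpy as np
import pandas as pd

taiwan = pd.read_csv("taiex.csv", index_col="date", parse_dates=True)["close"]
japan = pd.read_csv("nikkei225.csv", index_col="date", parse_dates=True)["close"]
us = pd.read_csv("djia.csv", index_col="date", parse_dates=True)["close"]

# Inner join drops any date missing from at least one market (holidays, festivals).
panel = pd.concat({"taiwan": taiwan, "japan": japan, "us": us}, axis=1, join="inner")
log_panel = np.log(panel)  # all series measured in natural logs, as in the study
print(log_panel.shape, log_panel.index.min(), log_panel.index.max())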

 

Nonlinear Short-Run Adjustments in US Stock Market Returns: 1871-2002

Dr. Tsangyao Chang, Feng Chia University, Taichung, Taiwan

Jennifer Chi-Chen Chiu, Feng Chia University, Taichung, Taiwan

Dr. Chien-Chung Nieh, Tamkang University, Taipei, Taiwan

 

ABSTRACT

Using the more powerful nonparametric cointegration test of Bierens (1997), we find that no rational bubbles exist in the US stock market over the 1871 to 2002 period.  The application of a logistic smooth transition error-correction model, designed to detect nonlinear short-run adjustments to the long-run equilibrium, provides empirical support in favor of noise trader models in which arbitrageurs are reluctant to immediately engage in trade when stock returns deviate substantially from their fundamental value. Over the past several decades, many studies have investigated the relationship between stock prices and dividends from both theoretical and empirical points of view (see, for example, Campbell and Shiller, 1987; Caporale and Gil-Alana, 2004; Han, 1996; McMillan, 2004; Taylor and Peel, 1998).  From a theoretical point of view, stock price valuation models assume that stock prices depend upon the present value of discounted future dividends, where the discount rate is equivalent to the required rate of return.  This means that stock returns can be predicted by the dividend yield and implicitly assumes that dividends and stock prices are cointegrated, with log returns dependent upon log dividends minus log stock prices.  As such, returns could be modeled by a linear error-correction model (see Campbell and Shiller, 1987).  However, this relationship cannot be expected to hold exactly, and deviations may arise due to time variation in the required rate of return, speculative bubbles and fads, and the omission of other relevant variables such as retained earnings.  Recent research has suggested that this relationship may be better characterized by a nonlinear model.  For example, theoretical models of the interaction between noise and arbitrage traders suggest that small and large returns may exhibit differing dynamics, as arbitrageurs must be aware of the potential for noise traders to drive returns further away from equilibrium before correction.  More specifically, this suggests that the dynamics governing small return deviations from fundamental equilibrium differ from those governing large return deviations.  This study contributes to this line of research by comparing the estimation performance of a linear error-correction model and a nonlinear error-correction model, specifically a logistic smooth transition error-correction model (LSTECM), for US stock market returns over the 1871 to 2002 period.  The LSTECM is able to capture market dynamics which differ between large and small returns and allows gradual movement between regimes, which is consistent with the “stylized facts” of slow mean reversion in asset returns (see Campbell et al., 1997; McMillan, 2004).  The study is organized as follows.  Section II describes the data used in this study.   Section III presents the methodologies used in this paper and the empirical results.   Section IV concludes our paper.  Annual US Standard and Poor’s stock price index and dividend data are analyzed over the 1871 to 2002 period and are taken from Shiller’s Web site, http://aida.econ.yale.edu/~shiller.  A description of the time series can be found in Shiller (2001).  Summary statistics are reported in Table 1. A. Unit Root Tests.  Recently, there has been a growing consensus that financial time series data might exhibit nonlinearities and that conventional tests for stationarity, such as the ADF unit root test, have low power in detecting the mean-reverting tendency of the series.  For this reason, stationarity tests in a nonlinear framework must be applied.  
We use the nonlinear stationarity test advanced by Kapetanios et al. (2003) (henceforth, the KSS test).  The KSS test is based on detecting the presence of non-stationarity against a nonlinear but globally stationary exponential smooth transition autoregressive (ESTAR) process.  The model is given by

\Delta Y_t = \gamma\, Y_{t-1}\left[1 - \exp\left(-\theta Y_{t-1}^{2}\right)\right] + \varepsilon_t,     (1)

where Y_t is the data series of interest, \varepsilon_t is an i.i.d. error with zero mean and constant variance, and \theta \geq 0 is the transition parameter of the ESTAR model, which governs the speed of transition.  Under the null hypothesis \theta = 0, Y_t follows a linear unit root process, but under the alternative \theta > 0, Y_t follows a nonlinear stationary ESTAR process.  One problem with this framework is that the parameter \gamma is not identified under the null hypothesis.  Kapetanios et al. (2003) use a first-order Taylor series approximation of \{1 - \exp(-\theta Y_{t-1}^{2})\} under the null hypothesis \theta = 0 and then approximate equation (1) by the following auxiliary regression:

\Delta Y_t = \delta\, Y_{t-1}^{3} + \sum_{i=1}^{k} \rho_i\, \Delta Y_{t-i} + \varepsilon_t.     (2)

In this framework, the null and alternative hypotheses are expressed as \delta = 0 (non-stationarity) against \delta < 0 (nonlinear ESTAR stationarity).  Table 2 reports the KSS nonlinear stationarity test results.  These results indicate that both stock prices and dividends are integrated of order one.  For the sake of comparison, we also incorporate the Augmented Dickey-Fuller (ADF), Kwiatkowski et al. (1992, KPSS) and Phillips and Perron (1988, PP) tests into our study.  Tables 3A and 3B report the results of the non-stationarity tests for the U.S. stock price and dividend series using the ADF, KPSS and P-P tests, respectively.  The test results indicate that the stock price index and dividends are both nonstationary in levels and stationary in first differences, suggesting that stock prices and dividends are integrated of order one, I(1).  On the basis of these results, we proceed to test whether these two variables are cointegrated using Bierens’s (1997) nonparametric cointegration approach.
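As a practical illustration of the auxiliary regression in equation (2), the sketch below computes the KSS t-statistic on the cubic term for a demeaned series and compares it, conceptually, with the tabulated critical value (about -2.93 at the 5% level for the demeaned case, per Kapetanios et al., 2003). The lag length and the use of a single random-walk example are assumptions made only for the demonstration.

# KSS nonlinear stationarity test: t-statistic on the cubic term (sketch)
import numpy as np
import statsmodels.api as sm

def kss_statistic(y, lags=1):
    """t-statistic on delta in: dY_t = delta*Y_{t-1}^3 + sum_i rho_i*dY_{t-i} + e_t."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                      # demeaned case
    dy = np.diff(y)
    y_lag3 = y[:-1] ** 3                  # cubed lagged level aligned with dy
    X = [y_lag3[lags:]]
    for i in range(1, lags + 1):          # lagged differences to absorb serial correlation
        X.append(dy[lags - i:len(dy) - i])
    res = sm.OLS(dy[lags:], np.column_stack(X)).fit()
    return res.tvalues[0]                 # compare with the KSS critical values

# Illustrative use on a random walk (should NOT reject the unit root null):
rng = np.random.default_rng(1)
rw = np.cumsum(rng.standard_normal(500))
print("KSS t-statistic:", round(kss_statistic(rw), 3))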

 

Does PPP Hold in African Countries? Further Evidence based on More Powerful Nonlinear (Logistic) Unit Root Tests

Chi-Wei Su, Feng Chia University, Taichung, Taiwan

Dr. Tsangyao Chang, Feng Chia University, Taichung, Taiwan

 

ABSTRACT

In this study we use the more powerful nonlinear (logistic) unit root test advanced by Leybourne et al. (1998) to investigate whether Purchasing Power Parity (PPP) holds true for twenty-two selected African countries for the period January 1980 to December 2003.  We strongly reject the null of a unit root process for six of the twenty-two countries, compared to only one rejection from the traditional ADF, DF-GLS, PP, KPSS, and NP unit root tests.  These empirical results indicate that PPP holds true for six countries: Central African Republic, Cote d’Ivoire, Kenya, Madagascar, Uganda, and Lesotho. Over the past decades, many studies have been devoted to investigating the stationarity of the real exchange rate, as it has important implications for international finance.  Studies on this issue are critical not only for empirical researchers but also for policymakers.  In particular, a non-stationary real exchange rate indicates that there is no long-run relationship between the nominal exchange rate and domestic and foreign prices, thereby invalidating purchasing power parity (PPP).  As such, PPP cannot be used to determine the equilibrium exchange rate, and invalid PPP also disqualifies the monetary approach to exchange rate determination, which requires PPP to hold true.  Empirical evidence on the stationarity of real exchange rates is abundant but inconclusive thus far.  For details about previous studies, see Rogoff (1996) and Sarno and Taylor (2002), who survey the theoretical and empirical literature on PPP and the real exchange rate.  Recently, there has been a growing consensus that macroeconomic variables such as exchange rates exhibit nonlinearities and, consequently, that conventional unit root tests, such as the Augmented Dickey-Fuller (ADF) test, have low power in detecting mean reversion of the exchange rate.  A number of studies have provided empirical evidence on the nonlinear adjustment of exchange rates in developed countries (Baum et al., 2001), the G7 countries (Chortareas et al., 2002), the Middle East (Sarno, 2000), and Asian economies (Liew et al., 2003, 2004).  However, the finding of nonlinear adjustment does not necessarily imply nonlinear mean reversion (stationarity).  As such, stationarity tests based on a nonlinear framework must be applied. This empirical study contributes to this line of research by determining whether a unit root process characterizes real exchange rates in Africa.  We test the non-stationarity of real exchange rates for 22 selected African countries using the nonlinear (logistic) unit root test of Leybourne et al. (1998) (henceforth, the LNV test).  This study makes several contributions.  First, previous empirical studies have concentrated on developed countries and Asian countries; very few such studies have been done on African countries.  Second, while previous studies are able to reject the linearity of exchange rate behavior based on linearity tests, they can draw no conclusion on the nonlinear stationarity of these exchange rates, with the exception of Chortareas et al. (2002) and Liew et al. (2004).  This study hopefully fills this gap in the literature.  Third, to our knowledge, this study is to date the first to utilize the LNV nonlinear stationarity test on African real exchange rates.  We find that the LNV test strongly rejects the unit root process for six of the countries examined, while the traditional unit root tests such as the ADF, PP, KPSS, DF-GLS, and NP tests lead to no rejection at all, with the exception of Cote d’Ivoire.  
The plan of this paper is as follows.  Section 2 discusses the theoretical model of the real exchange rate and the theory of purchasing power parity.  Section 3 presents the data used in our study.  Section 4 briefly describes the LNV test and our empirical results.  Section 5 concludes the paper. Our bilateral real exchange rate is defined as the nominal exchange rate adjusted by the ratio of foreign (U.S. in our case) to domestic price levels:

Q_t = E_t \left( P_t^{*} / P_t \right),     [1]

where E_t is the nominal exchange rate defined in local currency units per U.S. dollar, Q_t is the real exchange rate, and P_t and P_t^{*} are the domestic and foreign price levels.  We use the consumer price index (CPI) in our study.  Taking the logarithm of both sides of Eq. [1] and rearranging the terms yields:

q_t = e_t + p_t^{*} - p_t,     [2]

where lower-case letters denote natural logarithms.  From a statistical point of view, the validity of the purchasing power parity (PPP) hypothesis reduces to a unit root test of q_t.  The presence of a unit root in the real exchange rate series would imply that PPP does not hold in the long run.  If PPP holds, it implies that the nominal exchange rate adjusts to correct for inflation differentials.  Nonstationarity in real exchange rates has many macroeconomic implications.  For example, Dornbusch (1987) has argued that if the real exchange rate depreciates, it could bring a gain in international competitiveness, which in turn could shift employment toward the depreciating country.  Therefore, it is important to establish the empirical validity of the purchasing power parity theory.  Another important implication of nonstationarity in the real exchange rate is that unbounded gains from arbitrage in traded goods are possible.  In fact, Parikh and Williams (1998) have noted that a nonstationary real exchange rate can cause severe macroeconomic disequilibrium that would lead to real exchange rate devaluation in order to correct the external imbalance. This empirical study uses monthly bilateral real exchange rates (defined as in equation [1]) for 22 selected African countries over the January 1980 to December 2003 period.  The data are obtained from Bloomberg, and summary statistics are given in Table 1.  Jarque-Bera test results indicate that all 22 bilateral real exchange rate series are non-normal, except for those of Gabon and Lesotho.  Figure 1 plots the actual values and fitted smooth transition of the real exchange rate for South Africa, a leading country in terms of political and economic status in Africa.  Due to space constraints, we do not report the figures for the rest of the countries, but they are available upon request.
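The construction in equation [2] can be sketched in a few lines, followed here by a conventional ADF test as a linear baseline (the LNV logistic smooth-transition test itself is not reproduced). The file and column names are hypothetical.

# Log real exchange rate q_t = e_t + p*_t - p_t, plus a baseline ADF test
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

data = pd.read_csv("south_africa.csv", index_col="date", parse_dates=True)  # hypothetical
q = (np.log(data["rand_per_usd"])      # nominal rate, local currency per USD
     + np.log(data["us_cpi"])          # foreign (U.S.) price level
     - np.log(data["domestic_cpi"]))   # domestic price level

stat, pvalue, *_ = adfuller(q.dropna())
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
# Rejecting the unit root null is consistent with long-run PPP; the paper's point is
# that a nonlinear (logistic) test may reject where this linear test does not.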

 

Are Real Estate and Stock Markets Related? The Case of Taiwan

Dr. Tsangyao Chang, Feng Chia University, Taichung, Taiwan

Chaiou Chung Huang, Feng Chia University, Taichung, Taiwan

Dr. Ching-Chun Wei, Providence University, Taichung, Taiwan

 

ABSTRACT

This paper studies the long-run relationship between the real estate and stock markets, using the standard cointegration tests of Johansen and Juselius (1990) and Engle and Granger (1987) as well as the fractional cointegration test of Geweke and Porter-Hudak (1983), in the Taiwan context over the 1986Q3 to 2001Q4 period.  The results from both kinds of cointegration tests strongly indicate that these two markets are not cointegrated with each other.  In light of risk diversification, it is recommended that investors and financial institutions include both assets in the same portfolio. For portfolio investors who want to diversify across the real estate and stock markets, fully understanding the long-run relationship between these two markets is central.  It is quite apparent, after all, that if the two markets do not share a long-run relationship, then jointly holding such assets in the same portfolio would likely offer gains in terms of risk reduction. Previous empirical studies have employed cointegration techniques to investigate whether there exist such long-run benefits from diversification (see Kwan et al., 1995; Masih and Masih, 1997).  Yet what exactly have they empirically shown?  According to these two studies, asset prices from two different efficient markets cannot be cointegrated.  To be more precise, they claim that if a pair of asset prices is cointegrated, then one asset price can be forecast (i.e., is Granger-caused) by the other asset price.  Thus, such cointegration results indicate that, with regard to reducing risk, few if any gains are obtained from such portfolio diversification.  This study contributes to this line of research by revisiting the issue and exploring whether there are any long-run benefits from asset diversification for those who invest in Taiwan’s real estate and stock markets.  Unlike other studies, here we test for cointegration using the standard cointegration tests of Johansen and Juselius (1990) and Engle and Granger (1987) as well as the fractional cointegration test of Geweke and Porter-Hudak (1983).  With the results from the three tests combined, we determine that these two asset markets are not, in fact, pairwise cointegrated with each other.  The finding of no cointegration can be interpreted as clear-cut evidence that there are no long-run linkages between these two asset markets and, thus, that potential gains are present for investors who diversify across them over this sample period.  These results are valuable to investors and financial institutions holding long-run investment portfolios in these two asset markets. The remainder of this study is organized as follows.  Section II presents a review of some previous literature.  Section III presents the data used.  Section IV presents the methodologies used and discusses the findings.  Finally, Section V concludes. Identifying the relationship between stock prices and real estate prices has been widely debated in the literature, within academic circles and among practitioners alike.  Although the current literature on the relationship between the real estate and equity markets tends to show conflicting viewpoints, much of the empirical evidence seems to support the view that the two markets are segmented.  Goodman (1981), Miles et al. (1990), Liu et al. (1990) and Geltner (1990), for example, document the existence of such segmentation between various real estate markets and stock markets.  In direct contrast, Liu and Mei (1992), Ambrose et al. 
(1992) along with Gyourko and Keim (1992), report contradictory results, claiming that real estate and stock markets are if fact integrated.  The predicament faced here, therefore, is whether the two markets are segmented or integrated.  Our primary objective then is to ascertain whether any significant relationship does exist between these markets and, if so, to determine what implications it may have for active market traders.  One simple motivation behind our study is that our findings can yield considerable insight for both investors and speculators that may facilitate forecasting future performance from one market to the other. The data sets used here consist of quarterly time series on real estate price index (lresp) and stock price index (lstkp) for the 1986Q3 to 2001Q4 period.  To avoid the omission bias, we also incorporate real interest rate (liret) into our study.  Real interest rate and stock price index are obtained from the AREMOS database of the Ministry of Education of Taiwan.  The real estate price index are collected and constructed by Hsin-Yi Real Estate Inc.  An examination of the individual data series makes it clear that logarithmic transformations are required to achieve stationarity in variance; therefore, all the data series were transformed to logarithmic form. Descriptive statistics for both real estate and stock markets returns are reported in Table 1.  We find that the sample means of the real estate price returns are positive (1.67%), whereas stock price returns are negative (-0.161%).  Both the skewness and kurtosis statistics indicate that the distributions of both markets returns are normal.  The Jung-Box statistics for 4 lags applied to returns and square returns indicate that no significant linear or non-linear dependencies exist in either market.
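As a brief illustration of the two standard cointegration tests named above, the following sketch applies the Engle-Granger and Johansen procedures to a pair of log price series using statsmodels. It is a minimal example on a hypothetical data file, not the authors' code, and the fractional (Geweke and Porter-Hudak) test is omitted because it is not part of the standard statsmodels toolkit.

```python
# Minimal sketch: Engle-Granger and Johansen cointegration tests on
# log real estate (lresp) and log stock (lstkp) price indices.
# The CSV file name and column names are hypothetical placeholders.
import pandas as pd
from statsmodels.tsa.stattools import coint
from statsmodels.tsa.vector_ar.vecm import coint_johansen

df = pd.read_csv("taiwan_quarterly.csv", parse_dates=["quarter"], index_col="quarter")
lresp, lstkp = df["lresp"], df["lstkp"]

# Engle-Granger (1987): residual-based test on the static regression of lresp on lstkp.
t_stat, p_value, crit = coint(lresp, lstkp, trend="c")
print(f"Engle-Granger t = {t_stat:.3f}, p = {p_value:.3f}")

# Johansen trace test (constant term, 4 lagged differences for quarterly data).
jres = coint_johansen(df[["lresp", "lstkp"]], det_order=0, k_ar_diff=4)
for r, (trace, cv95) in enumerate(zip(jres.lr1, jres.cvt[:, 1])):
    print(f"H0: rank <= {r}: trace = {trace:.2f}, 95% critical value = {cv95:.2f}")
# Failing to reject rank = 0 in both tests points to no cointegration,
# consistent with the paper's conclusion that the two markets are segmented.
```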

 

An Empirical Note on Testing the Wagner’s Law for China: 1979-2002

Jungfang Liu, Feng Chia University, Taichung, Taiwan

Dr. Tsangyao Chang, Feng Chia University, Taichung, Taiwan

Dr. Yuan-Hong Ho, Feng Chia University, Taichung, Taiwan

Dr. Chiung-Ju Huang, Ph.D., Feng Chia University, Taichung, Taiwan

 

ABSTRACT

In this note we empirically test Wagner’s Law for China, using annual data over the 1979 to 2002 period.  To estimate the long-run relationship between government spending and income, we use a robust estimation method known as the Unrestricted Error Correction Model (UECM) Bounds Test.  Empirical results from the UECM Bounds Test indicate that there is no long-run relationship between government spending and income in China.  Furthermore, Toda and Yamamoto’s (1995) Granger-causality test results also show that Wagner’s Law does not hold for China over this period. Over the past two decades a vast amount of research has been devoted to testing Wagner’s Law -- which postulates that as economic activity grows, government activity tends to expand -- for both industrial and developing countries.  This test is more than an intellectual exercise and has important implications for the link between the economy and the size of government.  Empirical tests of this law have yielded results that differ considerably from country to country.  Among the many multi-country studies, Wagner and Weber (1977) test the law for 34 countries over the 1950-1972 period.  Abisadeh and Gray (1985) cover the period 1963-1979 for 55 countries, and their findings support the proposition for wealthier countries but not for the poorest countries.  Ram (1986) covers the period 1950-1980 for 63 countries and finds limited support for Wagner’s Law.  Afxentiou and Serletis (1996) examine six European countries over the period 1961-1991 and find no evidence supporting Wagner’s Law for the countries studied.  Ansari et al. (1997) study three African countries -- Ghana, Kenya, and South Africa -- and also find no evidence supporting Wagner’s Law.  On the other hand, Chang (2002) studies three emerging Asian countries (South Korea, Taiwan, and Thailand) and three industrialized countries (Japan, the USA, and the United Kingdom) over the period 1951-1996 and finds that Wagner’s Law holds for the countries studied, with the exception of Thailand.  There are also many country-specific studies: for example, Singh and Sahni (1984), Afxentiou and Serletis (1991), and Biswas et al. (1999) study Wagner’s Law for Canada; Nagarajan and Spears (1990), Murthy (1993), Hayo (1994), and Lin (1995) find mixed results concerning the validity of Wagner’s Law for Mexico; Vatter and Walker (1986), Yousefi and Abizadeh (1992), and Islam (2001) study the law for the USA; Gyles (1990) for the UK; Nomura (1995) for Japan; Singh (1996) for India; and Burney (2002) for Kuwait.  In general, the country-specific studies, with few exceptions, have found support for Wagner’s Law.  While previous studies focus mostly on industrial and developing countries, this note attempts to contribute to this line of research by using a more robust and recently developed estimation method -- the Bounds Test proposed by Pesaran et al. (2001), based on the unrestricted error correction model (UECM) -- together with Toda and Yamamoto’s (1995) Granger-causality test to re-examine Wagner’s Law for China over the 1979 to 2002 period.  Several factors make China a most interesting arena in which to test Wagner’s Law.  First, China has made remarkable economic progress over the last few decades; to cite one example, its average annual economic growth rate in the past decade (1990-2002) was 9.48%.
Second, by the end of 2002, China had become the world‘s fourth largest trading country, with foreign exchange reserves estimated at US$286.4 billion.  Third, China started its Open-Door policy in the late 1970s, so sufficient data are available for researchers to evaluate the effects of economic liberalization on various economic phenomena.  Last but not least, previous studies have not examined this issue for China; this note is the first study to test Wagner’s Law for China. The remainder of this paper is organized as follows.  Section II presents the data used.  Section III describes the methodology used and discusses the empirical findings.  Finally, Section IV concludes. Our empirical analysis employs annual data on real GDP (1995=100), real government spending, and population for China over the 1979 to 2002 period.  All the data used in this study are taken from the IMF’s International Financial Statistics.  All the data series are transformed to logarithmic form to achieve stationarity in variance. Following Mann’s (1980) study, five different versions of Wagner’s Law are examined empirically using annual time-series data on China over the 1979 to 2002 period.  The five versions relate the following variables, where Rtge = real total government spending, RGDP = real GDP, Rtge/Pop = real total government spending per capita, Rtge/RGDP = the ratio of real total government spending to real GDP, and RGDP/Pop = real GDP per capita. Kermers et al. (1992) have shown that with small samples it can be impossible to establish a cointegrating relation among variables that are integrated of order one, I(1).  Mah (2000) also states that the ECM and the Johansen (1988) method are not reliable for studies with small sample sizes, such as those of the previous studies.  Finally, the conventional ADF test (like many other unit root tests) suffers from poor size and power properties, especially in small samples (Harris, 1995).  Since our study has a very small sample size (24 observations), the cointegrating relationships for our five versions of the Wagner’s Law model are estimated using the recently developed Bounds Test of Pesaran et al. (2001), which is based on the following unrestricted error correction model (UECM):
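The UECM itself is not reproduced in this excerpt. Under the standard Pesaran et al. (2001) setup, and using the variables defined above, the bounds-test equation for, say, the version relating Rtge to RGDP would take a form along the following lines (the lag orders p and q are illustrative assumptions, not the authors' stated choices):

```latex
\Delta \ln Rtge_t = \alpha_0
  + \sum_{i=1}^{p} \beta_i \, \Delta \ln Rtge_{t-i}
  + \sum_{j=0}^{q} \gamma_j \, \Delta \ln RGDP_{t-j}
  + \delta_1 \ln Rtge_{t-1} + \delta_2 \ln RGDP_{t-1} + \varepsilon_t
```

The bounds F-test then examines the joint null hypothesis that the level coefficients are zero (delta_1 = delta_2 = 0, i.e., no long-run relationship) against the lower and upper critical-value bounds tabulated by Pesaran et al. (2001); the remaining versions substitute the per-capita and share variables defined above in place of Rtge and RGDP.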

 

Examining Task Social Presence and Its Interaction Effects on Media Selection

Dr. Bo K. Wong, Lingnan University, Tuen Mun, Hong Kong, China

Dr. Vincent S. Lai, The Chinese University of Hong Kong, Hong Kong, China

 

ABSTRACT

Social presence has been recognized as an important theory in the area of media selection since 1976, but conflicting research results are found in the literature that deals with this theory. Carlson and Davis believe that task descriptions are incomplete in current research, and that not all of the influences on media selection have been thoroughly considered. The major objective of this research is to study the potential interaction effects between task social presence and other situational variables on media selection decisions. Specifically, task social presence and its interaction effects with the physical proximity of communicators, recipient availability, urgency, and the direction of communication are examined. A policy-capturing technique was adopted to collect data from 208 knowledge workers on media selection behavior in a medium-sized financial company. A total of 840 usable scenarios from 162 completed questionnaires were analyzed using the LISREL statistical technique. The results indicate that all of the tested interaction effects significantly influence the choice of media of knowledge workers. In particular, telephone and voicemail are found to be common substitutes for face-to-face meetings in most situations. The implications for researchers and suggested future research directions are discussed. Our findings should arouse further interest in the significance and implications of the interaction effects in this research area. Although social presence has been recognized as an important theory in the area of media selection since 1976 (Short et al.), recent research has shown that this rational-choice model cannot by itself fully explain the empirical findings on the use of communications technology (Te’eni 2001), or the conflicting research results that are found in the literature. Some evidence supports the concept that individuals usually prefer a rich medium when performing a task with a high social presence, whereas other studies find that individuals who perform a task with a low social presence often also choose a rich medium. As summarized by Carlson and Davis (1998), this could be because individuals may have to respond not only to the task social presence, but also to other variables, such as job pressures, geographic dispersion (Steinfield and Fulk 1986b), or individual differences (Treviño et al. 1990), simultaneously. Under certain circumstances, these variables may dictate an individual’s preference for media with a high social presence, even though the tangible cost is considered to be higher than that of “lean” media, which are therefore deemed to be inefficient for tasks of low equivocality (Schmitz and Fulk 1991). Carlson and Davis (1998) believed that task descriptions are incomplete in current research, and that not all of the influences on media selection have been thoroughly considered.  One possible explanation could be that the direct relationship between task social presence and media selection has been oversimplified. Our research is triggered by the fact that the interaction effects of two predictors can be a powerful explanation for media choice, as is shown in the research of Straub and Karahanna (1998).
In their study on the interaction effect between task social presence and recipient availability on media selection, they not only found that such an effect exists, but also identified acceptable alternative media, such as the telephone and voicemail, when a task calls for a medium with a high level of social presence but the recipient is unavailable for face-to-face communication. Their findings are deemed to have advanced a new determinant of media choice, to have provided an in-depth understanding of why individuals choose a particular communication medium, and to have given insight into management approaches to the redesign of organizational communications portfolios. The major objective of our research is to re-examine the relationship between the theory of task social presence and media selection by exploring the potential interaction effects of the studied variables. Specifically, task social presence and its interaction effects with the physical proximity of communicators, recipient availability, urgency, and the direction of communication are examined. All of these variables have previously been recognized separately as key predictors of media choice in many research studies. We believe that interaction effects do exist between these predictors, and that they can influence an individual’s choice of media.  A recognized theory in media selection is that of social presence, which is defined as the extent to which an individual psychologically perceives other people to be physically present when interacting with them (Short et al. 1976). The theory states that communicators assess the degree of social presence that is required by a task and fit it to the social presence of the medium. When a task is interpersonally involved and sensitive, such as responding to or dealing with complaints, a medium with a high social presence, such as face-to-face interaction or the telephone, will be selected. For a less sensitive task, such as a straightforward information exchange, a medium with a low social presence, such as e-mail or fax, will be chosen.  The urgency that is associated with a task-related communication is another important determinant of the type of medium that is selected (Picot et al. 1982). Individuals tend to choose real-time, synchronous media such as face-to-face meetings and the telephone (Saunders and Jones 1990; Steinfield and Fulk 1986a & b; Treviño et al. 1987; Wijayanayake and Higa 1999) to send urgent messages. In particular, Steinfield and Fulk (1986b) found that managers were more likely to use the telephone when under time pressure, regardless of the relative ambiguity of their task situation. Their study concluded that message content plays a less important role when situational constraints are high.
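The interaction effects discussed above were estimated with LISREL on policy-capturing scenarios. As a simpler, purely illustrative way to see what such an interaction term means statistically, the sketch below fits a moderated regression on simulated scenario data; the variable names, coefficient values, and the OLS approach are assumptions for illustration, not the authors' model.

```python
# Minimal illustration of testing an interaction effect (moderated regression),
# standing in for the paper's LISREL analysis. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 840  # number of usable scenarios, as in the study
df = pd.DataFrame({
    "social_presence": rng.integers(1, 8, n),      # task social presence rating, 1-7
    "recipient_available": rng.integers(0, 2, n),  # 1 = available for face-to-face contact
})
# Simulated preference for a rich medium: the effect of social presence is
# stronger when the recipient is available (a positive interaction).
df["rich_media_pref"] = (
    1.0 + 0.6 * df.social_presence * df.recipient_available
    + 0.2 * df.social_presence + rng.normal(0, 1, n)
)

# The '*' term expands to both main effects plus their product (the interaction).
model = smf.ols("rich_media_pref ~ social_presence * recipient_available", data=df).fit()
print(model.summary().tables[1])  # a significant product term indicates an interaction effect
```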

 

International Financial Integration and Economic Growth - A Panel Analysis

Xuan Vinh Vo, University of Western Sydney, Australia

 

ABSTRACT

This paper employs a new panel dataset and a wide assortment of indicators, both de jure and de facto measures, to proxy for international financial integration and to investigate the relationship between international financial integration and economic growth. Using data for 79 countries covering the period from 1980 to 2003, our analysis indicates a weak relationship between international financial integration and economic growth. Our data also show that this relationship does not change even when we control for different economic conditions.  With the development of financial markets and the increased degree of international financial integration around the world, many countries, especially developing countries, are now trying to remove cross-border barriers and capital controls, relax policies on capital restrictions, and deregulate their domestic financial systems. This paper empirically examines the growth impacts of international financial integration, and it contributes to the existing literature on the impacts of international financial integration on economic performance in a number of ways.  Firstly, we examine an extensive array of international financial integration indicators, both de jure and de facto. We examine the IMF’s official restriction dummy variable (1) as well as the newly developed capital restriction measures of Miniane (2004). Furthermore, we explore various measures of capital flows, both aggregate and disaggregated, including total assets and liabilities, total liabilities, FDI, portfolio, and total capital flows as shares of GDP (a total of 18 de facto indicators). Moreover, we consider measures of capital inflows alone as well as measures of gross capital flows (inflows plus outflows) to proxy for international financial integration, because capital account openness is defined both in terms of receiving foreign capital and in terms of domestic residents having the ability to diversify their investments abroad. In addition, we examine a wide array of international financial integration proxies because each indicator has advantages and disadvantages (2).  Secondly, we develop and examine a large number of new measures of international financial integration, both flow and stock measures, especially in disaggregation. As proxies for international financial integration, we first examine the flow measures of capital flows. In this regard, we use FDI inflows (as a share of GDP), FDI inflows and outflows (as a share of GDP), portfolio investment (equity and debt) inflows (as a share of GDP), gross portfolio investment inflows and outflows (as a share of GDP), and gross private capital flows (3) (as a share of GDP). In addition, we use the stock measures of these indicators. Since we want to measure the average level of openness over an extended period of time, these stock measures are useful additional indicators. Furthermore, stock measures are less sensitive to short-run fluctuations in capital flows associated with factors that are unrelated to international financial integration, and may therefore provide more accurate indicators of international financial integration than capital flow measures. In particular, we examine both the accumulated stock of liabilities (as a share of GDP) and the accumulated stock of liabilities and assets (as a share of GDP).
Furthermore, we break down the accumulated stocks of financial assets and liabilities into the stock of FDI and the stock of portfolio inflows and outflows in assessing the links between economic growth and a wide assortment of international financial integration indicators. Some authors have previously used a subset of these indicators: Lane and Milesi-Ferretti (2002), for example, were the first to compute the accumulated stock of foreign assets and liabilities for an extensive sample of countries, and Edison et al. (2002) use this dataset in their study measuring international financial integration. We advance on these studies by carefully computing, developing, and investigating these additional international financial integration indicators, as well as by covering a larger number of countries in our dataset. As a result, we believe that our empirical investigation provides a far more complete picture of the relationship between international financial integration and economic growth than previous studies that used a small subset of these indicators and a smaller number of countries.  Thirdly, since theory and some past empirical evidence suggest that international financial integration will only have positive growth effects under particular institutional and policy regimes (Edison et al. 2002), we examine an extensive array of interaction terms. Specifically, we examine whether international financial integration is positively associated with growth when countries have well-developed banks, well-developed stock markets, well-functioning legal systems that protect the rule of law, low levels of government corruption, sufficiently high levels of real per capita GDP, high levels of educational attainment, prudent fiscal balances, and low inflation rates. Thus, we search for the economic, financial, institutional, and policy conditions under which international financial integration boosts growth. Fourthly, we use newly developed panel techniques that control for (i) simultaneity bias, (ii) the bias induced by the standard practice of including lagged dependent variables in growth regressions, and (iii) the bias created by the omission of country-specific effects in empirical studies of the international financial integration-growth relationship. Since each of these econometric biases is a serious concern in assessing the growth-international financial integration nexus, applying panel techniques enhances the confidence we can have in the empirical results. Furthermore, the panel approach allows us to exploit the time-series dimension of the data instead of relying on purely cross-sectional estimators.
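To make the indicator construction described above concrete, here is a minimal pandas sketch of how the flow and stock de facto measures (FDI inflows over GDP, gross flows over GDP, and accumulated stocks of assets plus liabilities over GDP) could be built from a country-year panel. The file name, column names, and the five-year averaging are hypothetical assumptions, not the author's actual dataset or procedure.

```python
# Sketch of constructing de facto financial-integration proxies from a
# country-year panel. The CSV and column names are hypothetical placeholders.
import pandas as pd

panel = pd.read_csv("ifi_panel.csv")  # columns: country, year, gdp, fdi_in, fdi_out,
                                      # portfolio_in, portfolio_out, assets, liabilities
panel = panel.sort_values(["country", "year"])

# Flow measures (shares of GDP).
panel["fdi_in_gdp"] = panel.fdi_in / panel.gdp
panel["gross_fdi_gdp"] = (panel.fdi_in + panel.fdi_out) / panel.gdp
panel["gross_portfolio_gdp"] = (panel.portfolio_in + panel.portfolio_out) / panel.gdp

# Stock measures: accumulated liabilities, and accumulated assets plus liabilities,
# relative to GDP -- less sensitive to short-run fluctuations than the flow measures.
panel["stock_liab_gdp"] = panel.groupby("country").liabilities.cumsum() / panel.gdp
panel["stock_total_gdp"] = (
    panel.groupby("country").assets.cumsum()
    + panel.groupby("country").liabilities.cumsum()
) / panel.gdp

# Five-year averages of the indicators, a common convention in growth regressions (assumption).
panel["period"] = (panel.year // 5) * 5
five_year = panel.groupby(["country", "period"]).mean(numeric_only=True)
print(five_year.head())
```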

 

Understanding Consumer Involvement Influence on Consumer Behavior in Fine Restaurants

Dr. Theodoro Peters, FEI – Fundacao Educacional Inaciana – Sao Paulo/Brazil

 

ABSTRACT

This work examines the influence of involvement on consumers’ perceptions of services. Starting from the concept of involvement and its influence on the perception of service performance, and based on qualitative research with fine-restaurant managers and consumers, the article interprets and treats consumer involvement as a way to better understand consumer behavior in fine restaurants. The study was conducted in a large metropolitan area – the city of Sao Paulo, in southeastern Brazil – and deals with involvement as a function of the subject, the object (the reason for going), and the situation (the motivation for going) in the fine-restaurant setting.  Marketing has long recognized the importance of measuring the quality and value perceived by consumers and their interdependence with satisfaction (Zeithaml, 1988). Relevant marketing benefits accrue to business activity, translated into consumer loyalty and patronage, which are particularly relevant in the present period of intensified competition; this justifies business managers’ concern with consumer value perception and its insertion into marketing strategies (Bateson, Hoffman, 1999; Cronin Jr., Brady, Hult, 2000; Holbrook, 1999; Woodruff, 1997). Woodruff (1997) suggested enriching consumer value theory by deepening knowledge of product usage in different situations – how consumers form the preferences that reflect desired value – and by exploring the linkage between consumer preferences for desired value, evaluations of received value, and overall consumer satisfaction.  Among the research directions in the consumer value field, Woodruff (1997) called for a better understanding of how consumers perceive value in different contexts, the criteria they rely on, and their relative importance, so that the strategic concern with consumer retention becomes a question of context. In this sense, there is a need for new methods to collect and analyse data connected to particular issues of consumer value.  Parasuraman (1997) supported these suggestions, proposing to broaden conceptual and empirical research on consumer value to contextual factors, such as whether the product is a good or a service and whether the consumer is new or established. At the same time, it is necessary to develop consistent psychometric measures of the construct. Given the construct’s complexity and richness, these studies can be seen as challenges on the way to its complete operationalization and to the development of a standard scale able to encompass its nuances. Cronin Jr., Brady, and Hult (2000), studying the interrelations among quality, value, satisfaction, and consumer behavioral intentions, verified the need to measure all three variables, given their complex and comprehensive effect on behavioral intentions. Marketing theory on consumer-perceived value places quality, cost, sacrifice, and satisfaction in a relationship of interdependence, with consequences for consumer behavior in terms of attitudes, intentions, spontaneous communication, and loyalty.  The consumer is always in search of value, and the perception of value may be affected by his or her involvement with the object and the situation. Involvement is the most important factor framing the kind of decision process adopted by the consumer. It is the degree of perceived personal importance and/or interest evoked by a stimulus (or stimuli) in a specific personal situation.
A person acts deliberately to minimize risks and maximize the benefits obtained from buying and use (Engel, Blackwell, Miniard, 1995). Involvement is best conceived as a function of subject, object, and situation. The starting point is always the person and his or her underlying motivations, in the form of needs and values. Involvement is activated when the object (a product, service, or promotional message) is perceived as an instrument for meeting important needs, goals, and values. Nevertheless, the perceived ability of the object to satisfy those needs varies from one situation to another, so all three factors should be taken into consideration. Zaichkowsky (1985) specifies these three categories as: personal – interests, values, or inherent needs that motivate a subject toward the object; physical – object characteristics that create differentiation and heighten interest; and situational – something that temporarily amplifies the relevance of, or interest in, the object. Involvement, then, is the reflection of strong motivation, in the form of the high perceived personal relevance of a product or service in a particular context, and it takes the form of a scale that varies from high to low. Many factors can determine the degree of involvement, including personal factors (e.g., self-image), product factors (e.g., risk in buying and use – physical, psychological, performance, and financial risk; the greater the perceived risk, the greater the probability of high involvement), and situational (or instrumental) factors, which operate on a temporary basis, changing with time, with the intended use, or with social pressures.  Restaurants appear to be strongly influenced by situational variables, as analysed in this study. The process starts with the client, depends on his or her motives, and leads to a specific restaurant that fulfills, or is at least intended to fulfill, this conjunction of subject/situation/object, translating particular needs and values.  When involvement is high, consumers usually become more careful in searching for and processing information, and the substance of selling appeals in advertising and promotion is likely to matter more than the way the appeals are expressed and visualized. Furthermore, consumers are more prone to notice and perceive differences in attributes among the various alternatives, which induces greater brand loyalty and increases reliance on word-of-mouth communication.  In these situations, another factor that affects consumer behavior is the person’s disposition, which may strongly influence information processing and the resulting assessment.

 

Strategic Implications of Surging Chinese Manufacturing Industries: A Case Study of the Galanz

Dr. Gloria L. Ge, Griffith University, Australia

Dr. Daniel Z. Ding, City University of Hong Kong, Hong Kong

 

ABSTRACT

Recent years have witnessed the surge of Chinese manufacturing industries, and China has become the world’s factory floor. Yet little is known about Chinese manufacturing firms, let alone their strategic choices. This paper presents a case study of one of the most successful manufacturers in China, the Galanz Group. As the world’s largest microwave oven manufacturer, Galanz has developed strategies that underpin its success both in China and in overseas markets. By examining these strategies, domestic Chinese firms can learn how to build market share in overseas markets, while international firms can learn to compete better in the Chinese marketplace. China is now the world’s fourth largest industrial producer, behind the U.S., Japan, and Germany. China makes more than 50% of the world’s cameras, 30% of its air-conditioners and televisions, 25% of its washing machines, and nearly 20% of all refrigerators (Leggett and Wonacott 2002). Nearly half of all the goods China sends overseas each year are made by foreign companies, such as Motorola and Philips. Foreign investment continues to soar; it hit a record $52.74 billion in 2002 (China Daily, 15 January 2003), and China surpassed the United States to become the largest recipient of foreign investment in the world. When foreign companies such as Philips Electronics and Motorola started investing in China, most of them aimed to sell products to a billion Chinese consumers. However, it has turned out to be much easier and more profitable to use China as a manufacturing base and export center than to sell goods inside the country. Today, Philips operates 23 factories and produces about $5 billion worth of goods in China each year, nearly two-thirds of which is exported overseas. In the domestic market, more and more multinational corporations (MNCs) are being beaten by their Chinese counterparts. How can MNCs not only use China as an important export base, but also compete with their domestic counterparts in the country? Among the millions of Chinese manufacturers, some have grown very fast in recent years and are becoming increasingly important international competitors: for example, Haier in the refrigerator industry, TCL and Changhong in the TV industry, Gree and Chunlan in the air-conditioner industry, and Galanz in the microwave oven industry. Ten years ago, many Chinese did not even know what microwave ovens were. In 2003, a private Chinese company, Guangdong Galanz Enterprise, produced 16 million microwave ovens. It controlled more than 60 percent of the domestic market and around 35 percent of the overseas market. Galanz has become the largest microwave oven producer in the world, and one third of new microwave ovens are produced by Galanz. Most of the companies listed above did not exist 20 years ago, yet they have surpassed their foreign peers in China and gained momentum in overseas markets. How have these companies grown so fast? What kinds of strategies do they use to increase their domestic and international market share? What can other Chinese firms learn from them? To answer these research questions, this study uses the case study method to examine the processes, strategies, and influences whereby the Galanz Enterprise Group, a leading Chinese microwave oven producer, has developed into an influential Chinese company. The paper draws strategic implications from Galanz’s successful expansion for both domestic and foreign firms.  In less than 30 years, Galanz grew from a small down-products factory into the largest microwave oven producer in the world.
There are three major stages in Galanz’s growth. In the first stage, from 1978 to 1992, Galanz was founded and grew steadily in the down industry. At this stage, Galanz also set up three joint ventures with overseas companies. After six years of solid and steady growth, Galanz had accumulated the experience and assets necessary for the next stage of development. In the second stage, from 1992 to 2000, Galanz moved from the down industry to the microwave oven industry and created a miracle in China’s manufacturing history. From 2000, Galanz made its second leap by successfully expanding into the air-conditioner and other home appliance industries. The following is a more detailed description of the three stages in Galanz’s history. Galanz’s predecessor was the Guizhou Down Product Factory, which was founded by Mr. Leung Qingde and another 10 residents of Guizhou town in September 1978. Its main product was duck down; by the end of 1979 its annual production value was around RMB 468,000 (approximately US$300,000 at the official exchange rate of US$1 = RMB 1.56 in 1979) and it had 200 employees. From 1983 to 1988, the Guizhou Down Product Factory formed three joint ventures with companies from Hong Kong and the United States: the South China Woolen Mill, the Huali Garment Factory, and the Huamei Industry Company. By June 1992, when it was officially renamed the Galanz Enterprise Group, the company’s annual sales had reached RMB 30 million (approximately US$5.54 million at US$1 = RMB 5.51) and its total assets were RMB 1,800 million (approximately US$326.7 million at US$1 = RMB 5.51).

  

Supply Chain Coordination with Asymmetric Information

Dr. Gang HAO, City University of Hong Kong, Hong Kong

 

ABSTRACT

Making effective contract decisions that retain healthy partnerships among independent chain parties while improving chain value is critical to supply chain success. Contract studies in the literature have largely assumed a simple two-party chain structure involving a single supplier and a single buyer, or a serial contractual relationship along the chain. There has, however, been a growing industrial trend of outsourcing or subcontracting to external parties whose advantages derive from scale, focus, and location. When outsourcing is involved, the supplier needs to initiate and enable contracts with both the buyer and the outsourcing party. This study provides an optimal contracting framework that allows integrated analysis and systematic tradeoffs among all three contracting parties. Simultaneous consideration of multiple contracts in a single framework brings both contracting complexity and extra room for improving chain coordination and efficiency.  Supply chain management has been increasingly recognized as a core competitive strategy for meeting the twin goals of reducing cost and improving service. Companies all over the world are pursuing supply chain management as a powerful means of building sustained competitive advantage. The key to supply chain success is effective partnering and coordination among chain parties, who are independently managed entities seeking to maximize their own profits. It is through the collaborative pursuit of chain-value optimisation that the chain parties, and ultimately the customer, benefit. Hence, making effective decisions that retain healthy partnerships while improving chain value becomes critical. As almost all B2B transactions or interactions are in practice governed by contracts, a good understanding of contractual forms and their economic implications for all parties under varied chain scenarios is therefore a most important part of supply chain management.  The research related to supply chain contracting is enormous. Most studies in the literature, however, have assumed a simple two-party chain structure in which a single upstream supplier provides a single product to a downstream buyer, the latter in turn serving market demand. Other research involves multiple parties under a serial contractual system, where each party contracts in sequence with its consecutive downstream party. Driven by global industrial trends in recent years, outsourcing or subcontracting orders to external parties has become a prevalent practice for improving an enterprise’s agility and efficiency. When outsourcing is involved, the supplier has to initiate and enable contracts with both the buyer and the outsourcing party. Given this new contractual setting, the supplier needs to take the lead in resolving three basic problems: How can optimal contracts be derived from which all parties ultimately benefit? What contract forms and terms should be offered to each of the other parties? And, as the supplier may not have complete information on the other parties’ cost structures (information asymmetry), when and how should the supplier pursue additional information? This leads to three key issues in designing supply chain contracts: the mechanism employed for contract design, the type of contract, and the knowledge held about the other parties.  The simple two-party or serial contracting frameworks are clearly too limited to cope with the complex interactions among multiple parties and to deliver overall contracting efficiency.
This research aims to extend previous contract studies by providing a framework that allows integrated analysis and systematic tradeoffs among the middle-agent company, its outsourcing party, and its downstream buyer. Simultaneous consideration of multiple contracts in a single framework brings in both contracting complexity and extra room for improving chain coordination and efficiency. To the best of our knowledge, the study of the above questions under an integrated optimizing framework has not yet been attempted in the existing supply chain contract literature. Using our framework, we attempt to address a number of important managerial questions, such as: What are the opportunities for the supplier to trade off and balance the benefits of all parties?  For example, when demand decreases, should the supplier sacrifice unit margin so as to maintain volume, or should he do the opposite? How should parties be compensated when they sacrifice for the benefit of the chain? What is the value to the supplier of obtaining additional information on the other parties’ internal cost structures?  What is the value to the supplier of offering different contract schemes (e.g., a constant unit wholesale price, linear contracts with side payments, and non-linear contracts)? When should each contract type be offered? Combining the previous two questions, when should a supplier focus on obtaining additional information and when should he focus on offering more sophisticated contracts? What are the impacts of different contract types and of information asymmetry on each party’s profits? When multiple buyers and multiple outsourcing parties are involved, how should orders best be allocated through optimal contracting decisions?  This research was triggered by the supply chain management challenges faced by a Hong Kong based multinational group company whose businesses comprise export trading, retailing, and distribution. The company manages the supply chain for high-volume, time-sensitive consumer goods, including sourced garments, fashion accessories, toys and games, sporting goods, home furnishings, handicrafts, shoes, travel goods, and tableware. As a supply chain manager across many producers and countries, the company aims to provide the convenience of a one-stop shop for customers through a total value-added package: from product development, through raw material sourcing, production planning and management, quality assurance, and export documentation, to shipping consolidation. For a company of this nature, searching for, interacting with, negotiating with, and ultimately contracting with other independent chain parties are its core management activities. Designing and implementing contracts that effectively retain healthy partnerships along the chain and improve system efficiency is critical to the company’s business and even its survival. As a middle agent that does not itself produce or sell products, the company needs to interact with and initiate contracts with both its outsourced supplier (often a chain of suppliers) and the buyer in order to conduct its business. In the company’s supply chain, two successive chain parties (for example, party A and party B) are connected via the middle-agent company with two contracts, i.e., a contract between the middle agent and party A and a contract between the middle agent and party B, instead of a contract directly between A and B.
Given its middle role, the company is required to balance and trade off the conflicting interests of the independent parties at both ends while ensuring its own profit. Up to now, these complex yet important activities have been managed by a group of senior managers using their extensive experience and subjective judgment. Most of the time these managers have to negotiate and coordinate back and forth between the upstream and downstream parties over detailed contract terms, which is neither efficient nor effective. It is therefore highly desirable for the company to develop a framework that allows joint consideration of, and systematic tradeoff analysis among, all parties -- the middle-agent company and the two consecutive chain members -- in order to optimize the total chain benefit and improve overall chain coordination.
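As a purely illustrative aside on why the choice of contract scheme matters for chain value, the sketch below compares an integrated chain with a simple constant-wholesale-price contract in a textbook two-party newsvendor setting. This is not the paper's three-party asymmetric-information model, and every parameter value (price, cost, wholesale price, demand distribution) is a hypothetical assumption.

```python
# Illustrative two-party newsvendor: integrated chain vs. wholesale-price contract.
# This is a textbook example of double marginalization, not the paper's model;
# all parameters (price, costs, demand distribution) are hypothetical.
from scipy.stats import norm

p, c = 10.0, 4.0          # retail price and production cost per unit
mu, sigma = 100.0, 30.0   # normally distributed demand

def expected_profit(q, unit_cost, unit_revenue):
    """Expected profit of a newsvendor ordering q units against normal demand."""
    z = (q - mu) / sigma
    expected_sales = mu - sigma * (norm.pdf(z) - z * (1 - norm.cdf(z)))  # E[min(D, q)]
    return unit_revenue * expected_sales - unit_cost * q

def optimal_q(unit_cost, unit_revenue):
    """Critical-fractile order quantity for the newsvendor."""
    critical_ratio = (unit_revenue - unit_cost) / unit_revenue
    return mu + sigma * norm.ppf(critical_ratio)

# Integrated (coordinated) chain: one decision maker facing production cost c.
q_int = optimal_q(c, p)
profit_int = expected_profit(q_int, c, p)

# Decentralized chain: supplier charges wholesale price w > c, so the buyer orders less.
w = 7.0
q_dec = optimal_q(w, p)
profit_buyer = expected_profit(q_dec, w, p)
profit_supplier = (w - c) * q_dec
print(f"integrated profit {profit_int:.1f} vs decentralized total {profit_buyer + profit_supplier:.1f}")
# The gap is the coordination loss that richer contract forms (side payments,
# non-linear schemes) discussed above attempt to recover.
```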

 

Diagnosing Qualitative Issues of Enterprise Systems Adoption: The Cases in Indian Context

Dr. Vineet Kansal, Arab Open University, Kuwait

 

ABSTRACT

A range of factors has strongly influenced and encouraged the widespread adoption of Enterprise Systems (ES). However, there is a widespread belief, and an emerging consensus, that ES have in many cases failed to provide the expected benefits. The increasingly hyped role of, and dependency on, ES and the uncertainty of these large investments have created a strong need to monitor and evaluate ES performance. It is worthwhile to analyze organizations' corporate practices concerning ES in the Indian context with a view to developing a better vision of the current state of ES software. This paper reports on three case study applications. The cases were developed using interview and observation techniques. The Situation-Actor-Process (SAP) framework of the SAP-LAP paradigm was used to analyze the cases. Based on extensive interactions and brainstorming sessions with ES practitioners, a relative ranking method analogous to 'VED' ('Vital', 'Essential', and 'Desirable') in the context of selective inventory management was adopted; the rankings 'Normal' (N), 'Important' (I), and 'Critical' (C) are akin to the notion of 'VED' in reverse order. A synthesis was performed across the management context, the situation factors, the role of the actors, and the processes used in ES. The resulting learning issues, in conjunction with the conclusions of the study, may help in identifying potential key areas in ES adoption in the Indian context. As the pace of change accelerates in the twenty-first century as a result of technological opportunities, the liberalization of world markets, demands for innovation, and continually shortening life cycles, organizations feel that they have to continuously readjust and realign their operations to meet these challenges. This pace of change has increasingly forced organizations to be more outward looking, market oriented, and knowledge driven. A useful tool that businesses are turning to in order to build strong capabilities, improve performance, make better decisions, and achieve a competitive advantage is ES software (Al-Mudimigh et al., 2001). ES software packages are customizable, standard solutions that have the potential to link up and automate all aspects of a business, incorporating the core processes and main administrative functions into a single Information Technology (IT) architecture. Organizations have invested heavily in ES with expectations of improved business processes, better management of IT expenditure, increased customer responsiveness and, generally, strategic business improvement. India is a huge country with a very large small and medium enterprise sector. Given that there are three million registered small and medium enterprises (SMEs), the scope in India is tremendous. The IDC 2001 report on the ES market in the country reveals that, although awareness and current adoption of ES are high among medium enterprises, the SME segment will fuel the growth of the ES market in the coming years; most of the organizations adding an ES solution each year will come from the SME segment. According to NASSCOM (2002), the ES market segment in India is growing by more than twenty percent. Alongside the unorganized and small players, around ten major players account for 52 percent of the total market. SAP contributes 17 percent of total revenues, followed by PeopleSoft, Oracle, Baan, and JD Edwards.  The ES market has come to be recognized as one of the most dynamic segments in terms of growth and potential.
In fact, it has redefined the delivery of enterprise-wide solutions as branded products. This has, in turn, enabled vendors to command a premium for their services.  Traditionally, organizations in India depended more on IT professionals than on business professionals for commercial software development. ES places greater value on domain knowledge of the business functions than on IT skills. This calls for a mindset change, which is a challenge. In spite of growth in the ES market, recent research shows growing dissatisfaction with ES and indicates that they have failed to deliver the anticipated benefits (Ross et al., 1999). Several studies have identified the extent of the so-called 'damage' done by ES, the critical success factors of ES, and the possible causes of poor benefits realization from ES. Furthermore, it has also been identified that a high percentage of ES benefits are intangible, which increases the difficulty of financial evaluation (Akkermans et al., 2002; Mabert et al., 2000; Somers et al., 2001). To better understand these intangible issues and the nature, scope, and impact of ES in the Indian context, three case studies have been developed and analyzed. These were prepared with a focus on the following questions: Why did the company decide on an ES solution? What was the process for selecting the ES? What was the internal and external position of the company? What resources were used, and what pitfalls were encountered and benefits realized? What lessons were learnt? These questions helped in developing a better vision of the current state of ES software in the Indian environment.

 

The Exploratory Study of Competitive Advantages of Hsin-Chu City Government by Using Diamond Theory

Dr. Lieh-Ching Chang, Hsuan Chuang University, Taiwan

Cheng-Ted Lin, Hsuan Chuang University, Taiwan

 

ABSTRACT

This study discusses the competitive advantages of nations and of local governments. Local governments should consider how to use competitiveness theory and, as enterprises do, develop strategies that are useful in practice. The purpose of this study is to help local government improve its competitive advantages based on Michael Porter’s theory in The Competitive Advantage of Nations (1990). Moreover, this study also applies a global competitiveness index and SWOT analysis to the Hsin-Chu City Government of Taiwan. With the changing international environment, the improvement of government functions and efficiency depends not only on efforts within a state but also on the promotion of overall national competitiveness relative to other countries around the world. Besides depending on the leadership of, and efforts made by, the central government, each local government should further implement national advancement policies and connect itself with the world. Summing up the observations of international political scientists on the globalized knowledge economy, Coyle, the financial editor of the British newspaper The Independent, pointed out that central government has historically been strong but that local government is now the trend, and that in the future the “municipal government” will replace the central government; this viewpoint, besides reflecting the governing predicament of central governments, points to a new tendency in future global democratic development (Mary Yang, 2003).  However, Taiwan currently provides very little research on county and city development or analysis of county and city competitive advantage. Moreover, competitiveness is an ambiguous concept that connotes a rather complicated composite system. The motive of this research into local competitiveness in Taiwan is therefore to explore whether an evaluation system can be established, via an objective and comprehensive set of indexes and on the basis of trends in competitiveness evaluation, to provide concrete and complete information about county and city competitive development. Such information could be used by county and city governments as a reference for their administration, industrial investment, and the population’s choice of living environment, and by the central government as a reference for its policies on public and industrial development, financial distribution, regional equilibrium, and land development, with the purpose of aggregating county and city competitiveness into overall national competitiveness. In this paper, local government is regarded as a service industry, in the hope that local governments will take serving the people as their starting point. The operational performance of local governments is analyzed from the perspective of business management, and public value, metropolitan competitiveness, diamond theory, and the national competitiveness index established by the IMD (International Institute for Management Development) in Lausanne, Switzerland are adopted to evaluate the Hsin-Chu City Government and compare it with other local governments in Taiwan and in other countries, in order to explore whether the competitiveness of the Hsin-Chu City Government is better than that of other cities and to identify policies that will promote the sustainable development of local industries.
Moreover, the strengths and weaknesses of the Hsin-Chu City Government will be pointed out. The author hopes that this research will offer some effective suggestions for promoting the overall competitiveness of the Hsin-Chu City Government.  Porter’s (1990) research on national competitive advantage is mainly aimed at identifying the sources of international competitive advantage by using the value chain. He examines ten important trading nations, identifies four environmental factors that influence international competitive advantage, and constructs the diamond competitive model. The analysis of the competitive advantage of the major nations covers the following six dimensions: 1. production (factor) conditions; 2. demand conditions; 3. firm strategy, structure, and rivalry; 4. related and supporting industries; 5. government; and 6. opportunity (chance). This theory explains why a nation can achieve international success in a particular industry: the reason lies in the environment created by the home country, which may promote or hinder the creation of competitiveness. These factors may form industrial clusters which, relative to rival nations, create value-chain strength for the industry, that is, international industrial competitiveness. Furthermore, the diamond system is one of mutual reinforcement; through the interplay of its factors, a self-reinforcing effect is produced, making it difficult for foreign rivals to damage or imitate it (Porter, 1990). In explaining the role that the economic environment, organizations, institutions, and policy play in a country’s economy, Porter sums up the “diamond determinants of national advantage,” with which he analyzes how a country establishes competitive advantage in a particular field. The key factors in the diamond system are all interdependent, for the effect of any factor depends on the cooperation of all the other conditions. The diamond system is thus one of interaction, within which each factor may reinforce or transform the performance of the other factors. For industries that depend primarily on natural resources or that require a lower level of technology, competitive advantage can be obtained as long as one or two factors are met, but it will not last long, owing to the fast pace of change in the industry or the influence of international rivals (Porter, 1990). Possessing every advantage of the diamond system does not necessarily mean possessing competitive advantage, for these factors must be matched to form a self-reinforcing organizational advantage that rivals find impossible to replace. The diamond system is also one of two-way reinforcement, in which the effect of any factor may influence the status of any other. This theory emphasizes that if an industry wants to establish international competitive advantage, it must be equipped with such factors as production, demand, related and supporting (resource supply) industries, firm strategy and rivalry, opportunity, and government.

 

Reverse Engineering: A Technology Transfer Tool

Dr. Alireza Lari, Fayetteville State University, Fayetteville, NC

Dr. Nasim Lari, North Carolina State University, Raleigh, NC

 

ABSTRACT

Reverse engineering has always been considered as a kind of industrial piracy in which a company copies someone else’s product, enters in the same market, sometimes competitively, and threatens the original innovator. In this paper another aspect of reverse engineering is considered in which companies, mostly in less developed countries, try to use reverse engineering as a tool for survival rather than competition. These are mostly companies who first try to obtain the technology thru formal channels of technology transfer, but due to problems that exist in transfer of technology, become discouraged and start looking for ways to do reverse engineering. This reverse engineering does not put these countries in a position to compete with the innovators but helps them to increase their industrial knowledge and satisfy their market needs. Their market does not overlap with the innovator’s market and as a result this type of reverse engineering may be considered as an instructive tool for the global economy. Today, business is global. Even if you are the owner of a small business and have suppliers and customers who are solely domestic, you are likely to have some sort of foreign competition. There are many foreign companies competing for consumer dollars in the U.S. market, but US-based companies also enjoy marketing their products and services throughout most of the world. While foreign companies are competing for the dollars of some 275,000,000 consumers in the United States, US companies are selling products and services to a market of more than 6,118,000,000 (over 6 billion) consumers around the world (Haag et al. 2002).  One of the major effects of globalization is the flow of innovation and new products to less developed countries (LDCs). For a long time, most of the LDCs have relied on imported innovation and technology of products from multi-national companies (MNC). Their development has mostly been in the assembly of products and not much in product development. Their R&D activities have mostly concentrated on process improvement rather than product development and innovations.  Globalization has created localization pressures. Localization pressures include both country-specific factors and MNC-specific factors. Country-specific factors include three primary pressures: trade barriers, cultural differences and nationalism. MNC-specific factors include four principal pressures that either limit an MNC’s ability to respond to globalization pressures or facilitate its ability to be locally responsive: organizational resistance to change, transportation limitations, new production technologies, and just-in-time manufacturing. The primary pressures mentioned above have significant effects on globalization, technology transfer and the degree of cooperation between MNC and LDCs. Globalization pressures on one side and nationalism on the other have forced LDCs to think about the future with respect to technology and innovations and what will eventually happen to them when the innovative countries try to capture all the markets around the world. Will they still have any industries or will their industrial structure be decided by MNC and they will produce whatever those countries decide? Do they become the exporters of cheap labor and raw materials and importers of consumer and industrial goods? They are also concerned about the political issues that follow this economical dependence. There have been cases where industrial activities have been stopped in LDCs due to political situations. 
That is why during the 70s and 80s the concept of localization of industries and increasing local content of products were real issues in transfer of technology agreements. Even for the countries that have relied on imported technology, the transfer of technology agreements has always been a control tool for the transferor of technology.  In spite of the mentioned facts, in most cases, even if the country has the intention to go after the development of products and innovation, the transfer of technology has not been very effective. A group of political, social, and cultural problems have placed obstacles along the path of proper technology transfer. Above all, if the product development knowledge is transferred, then there is not much profit for the transferor of technology. If developing countries can produce their own products and after a while are able to localize production with a high degree of local contents, then the owner of technology will lose a market.  Advancement of technology is a complex process, which takes a lot of time. The introduction of cellular communications as a commercial service came thirty-six years after AT&T initially announced the development of such a concept. And sixteen years after the first demonstration of the computer mouse, it was actually shipped with a PC (Schneiderman, 2003).  Historically, a technology transfer (TT) agreement was viewed by the licensor as an instrument to send “some technical information from the existing files in the form of documents,” to provide “some technical assistance” as the supplier saw fit, to sell “some equipment and machinery,” and to supply “components” as the need arouse. In most cases, the supplier’s mentality was one of superiority and without regard for the recipient’s real needs. Nowadays, less developed and developing countries are very well aware of what they are bargaining for in a TT deal. This awareness has made the selection of the proper licensor a very technical and complicated process. It has also made the TT less attractive to the innovators. In order to classify the important factors involved in the selection of the best licensor of technology, more than fifteen technology transfer agreements signed between Middle-Eastern countries on one side and European, Japanese, and Korean companies on the other were studied (Lari, 2002). The general expectations of the licensees are classified below. Payments. This includes all the money that is paid to the licensor under the titles of: technical assistance fee, dispatch of engineers for assistance, training costs, etc. Localization. National integration and production are universally desirable issues for the recipient of technology. However, this may not be too desirable for the technology owner. Technical assistance and technical documents. The supplier should provide the recipient with all the necessary documents to avoid future conflicts and problems. To ensure a smooth technical assistance and transfer of technology, the following should be considered in the selection of a licensor: documents and technical information, training and dispatch of experts, and R&D. Capabilities. Licensees should make sure that the licensor has enough experience in technology transfer.  Rights Granted. In the past, there have been attempts on the part of some licensors for their own protection to overly restrict the rights granted to recipients in technical assistance agreements. Today, the recipients have become more demanding with regard to their rights. 
Some of the rights requested by licensees are as follows: changes in design, subcontractors, export of the product, trademarks, buyback, and future support. Supply of Machinery and Equipment. Theoretically, the supplier should not prevent the recipient from seeking other sources of raw materials and spare parts for equipment. However, this is an area that can generate a lot of money for the licensor.

 

Farmer Adoption of ICT in New Zealand

Dr. Stuart Locke, University of Waikato, Hamilton, New Zealand

 

ABSTRACT

The adoption of information communication technology (ICT) by farmers in New Zealand is investigated in this paper.  Government has initiated a number of policies to expand the availability of broadband internet coverage to rural regions from 2000 onwards.  Since the end of 2004 all schools in New Zealand (NZ) have had access to broadband, and the local communities and businesses, including farmers, in the vicinity of the schools now have broadband access too.  Results of in-person questionnaire surveys of farmers conducted in July 2003 and July 2004 regarding ICT usage are reported and discussed. The importance of high-level information communication technology penetration into the business and household sectors of New Zealand has been stressed in successive government reports, culminating in a digital strategy (MED 2004).  "This Strategy provides an ambitious plan for the development and implementation of policies aimed at achieving the ideal of all New Zealanders benefiting from the power of ICT to harness information for social and economic gain" (p1).  Components of the strategic initiative include legislative reform, e-government implementation, e-learning programmes for developing capabilities, and several others.  The telecommunication networks covering landline, mobile and satellite systems are owned and operated by the private sector.  Government has, as part of its Digital Strategy, invested in a project known as PROBE.  This provincial broadband extension project was to ensure "all schools and their surrounding communities have access to broadband by the end of 2004" (p95).  The topography raises difficulties, with an alpine range running the length of the South Island and glaciers flowing to the west.  The majority of the population live in urban centres, principally on the North Island.  The sparseness of population in the high country farming areas, combined with the difficult terrain, has meant that wireless internet rather than landlines is being used in these areas as part of the PROBE initiative. Government believes that there is e-regional development potential from improved ICT adoption and has a range of initiatives to develop e-regions (NZTE).  The e-regions initiative's focus is on building relationships between the public and private sectors, based around regional needs, to make best use of broadband technology.  NZTE (1) seeks to work with regions and local authorities to help ensure they benefit from related synergies should government invest in an advanced network initiative (MED, 2004).  Farmers are assumed to benefit, but there are no data cited in this discussion: "Because of the way technology is moving, farmers will be capturing a lot of data that will need to be whizzed around the place" (Cuzens, 2004). The rural cliché that you can lead a horse to water but you cannot make it drink is pertinent to the broadband usage discussion.  Low levels of broadband uptake are surprising given the rapid growth of uptake of Internet services. Growth in the number of connections to the Internet since its commercial origins in the mid-1990s has been rapid.  Howell and Marriott (2001) observe that this growth has been significant in NZ, with more than 50% of households and 95% of businesses connected by 2001.  Broadband services are available to around 80% of residential addresses, but fewer than 3% subscribe (OECD, 2001a).  In 2001 New Zealand users ranked among the most intensive users of the Internet in terms of number of hours of use per month (OECD, 2001). 
Yet, as Howell (2002) observes regarding the uptake, regulatory and supply-side considerations seem unable to answer the question of why uptake has been so slow. The traditional economic view suggests that ICT will contribute to productivity growth through three processes (Schreyer 2000). On the production side, the manufacture of ICT goods increases total factor productivity.  The decreasing price of ICT goods relative to other capital prices results in a substitution of ICT in production processes.  Locke (2004) investigated growth in the New Zealand small and medium enterprise (SME) sector related to ICT and found it was largely due to cost reduction through substitution rather than expansion of productive size.  Schreyer also observes that there can be advantages to business through improved business-to-business transactions.

 

Prepare for E-Generation: The Fundamental Computer Skills of Accountants in Taiwan

Dr. Yu-fen Chen, National Changhua University of Education, Changhua, Taiwan

 

ABSTRACT

The study aimed to explore the fundamental computer skills required of accountants for the E-generation in Taiwan, and also to examine the proficiency levels of these computer skills currently possessed by accountants, in order to serve as crucial references and suggestions for education authorities, schools, faculties and curricular planners. Literature review, expert meetings and questionnaire surveys were utilized for gathering data concerning the items of fundamental computer skills of accountants and the proficiency levels of these computer skills possessed by accountants in Taiwan.  Data collected from the questionnaire surveys were analyzed through statistical methods including frequency distribution, t-test, one-way ANOVA, and Scheffe's method.  The study developed 3 major categories and 17 subcategories as a research framework consisting of the items of fundamental computer skills of accountants for the E-generation in Taiwan. The progress of information technology has not only triggered changes in the way businesses are managed; new graduates entering the job market are also invariably required to possess rudimentary computer skills. The Ministry of Education has directed that training manpower, namely intermediate professional technicians, should be a key objective in all vocational junior colleges [4]. In 2000, the Chinese Computer Skills Foundation indicated that junior college graduates are not only required to be fluent in the domain of conventional accounting knowledge but also need to be proficient in computer skills, such as computer operating systems, system software, word processing software, spreadsheet software, packaged commercial accounting software, database software, graphic software, presentation software, multimedia software, Internet software and so forth. A survey in 2002 also found that the top four technology skills for new accounting hires to possess, in order of importance, were spreadsheet software (e.g., Excel), Windows, word-processing software (e.g., Word), and the World Wide Web [8]. In an effort to strengthen the vocational education system, the education authorities have moved to introduce a series of educational reforms, including the upgrade of excellent vocational junior colleges to the full-fledged status of institutes of technology, and the establishment and implementation of a certification system. A survey of business owners conducted by China Times on June 9, 1998 indicated that 52% of the respondents would require interviewees to possess some kind of computer skills, and as many as 70% of the businesses would require interviewees to possess computer skills particularly when hiring for administrative and accounting positions. Hence, for educators, a compelling question has emerged as to how best to design a preferred curriculum for accounting students and how best to enhance their computer skills. This study focused on the crucial task of exploring which items of fundamental computer skills are required of accountants in Taiwan. The study includes the objectives below: to explore the items of fundamental computer skills required of accountants in Taiwan; to examine the proficiency levels of these computer skills possessed by accountants in Taiwan; and to propose tangible suggestions as references for planning the computer programs of accountancy departments in vocational colleges or schools in Taiwan. The methods of the current study consisted of literature review, expert meetings and questionnaire surveys with which to attain the research objectives.  
The respondents of the questionnaire surveys were requested to rate each competency on a five-point Likert-type scale from unimportant (1) to important (5). Literature review: the literature review was conducted by examining publications pertaining to computer skills required of accountants as references for drafting the contents of an initial pilot-test questionnaire. Expert meetings: two expert meetings were held. The first expert meeting, on October 21, 2001, was intended to examine the contents of the initial draft of the pilot-test questionnaire. The second, on February 24, 2002, examined the results of the first pilot-test round and developed the contents of the formal questionnaire. Questionnaire surveys: First pilot-test round: the first pilot-test round was conducted from December 5 to 15, 2001, through telephone interviews carried out by the marketing research center of the Department of Statistics at Fu Jen Catholic University in Taiwan. Second pilot-test round: the analysis of responses from the first pilot-test round led to the conclusion that preliminary question 3, pertaining to "launch a web site," should be deleted or modified, as indicated by internal consistency analysis and correlation coefficient analysis. In addition, the reliability analysis showed that the Cronbach alpha (α) of the category of other computer-related skills was below 0.7, offering insufficient reliability. As a result, some of the questions in this category were discussed and modified in the second expert meeting, and an ensuing round of telephone interviews by the marketing research center of the Department of Statistics at Fu Jen Catholic University followed as the second pilot-test round. Formal questionnaire survey: a formal questionnaire was concluded from the second pilot-test round, with which a formal questionnaire survey was conducted from April 11 to 30, 2002. The samples for the two pilot-test rounds and the formal survey were drawn from 1,800 businesses randomly selected from the industry and commerce registry published by China Credit Rating Corporation in April 2001 [5][6][7]. From these, a valid sample of 100 businesses was obtained across the two pilot-test rounds, and a valid sample of 400 businesses was obtained for the formal survey.
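Since questionnaire categories were retained or revised according to whether their Cronbach alpha (α) reached the 0.7 reliability threshold mentioned above, a minimal Python sketch of how that coefficient is computed from item-level Likert responses may be helpful; the response matrix below is entirely hypothetical and is not drawn from the study's data.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]                         # number of items in the category
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical five-point Likert responses from six respondents to four items
scores = np.array([[5, 4, 4, 5],
                   [3, 3, 2, 3],
                   [4, 4, 5, 4],
                   [2, 2, 3, 2],
                   [5, 5, 4, 4],
                   [3, 2, 3, 3]])

print(round(cronbach_alpha(scores), 2))  # values below 0.7 suggest weak internal consistency

A category whose alpha fell below 0.7 would, as in the study's second expert meeting, be a candidate for deletion or revision of its items.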

 

Who’s Responsible for New Medical Treatment Development: A Model of the Ethical Interaction Among Major Stakeholders

Dr. C. Michael Richie, University of South Carolina Aiken, Aiken, SC

Dr. Michael “Mick” Fekula, University of South Carolina Aiken, Aiken, SC

Dr. David S. Harrison, University of South Carolina Aiken, Aiken, SC

Dr. Pavel Smirnov, International Institute of Management, Sarov, Russia

 

ABSTRACT

This paper proposes a model to examine the interactions between stakeholders who play critical roles in the development, delivery, and receipt of cutting-edge medicines and procedures targeted at debilitating and life-threatening illnesses.  A multi-level approach is used to position the various stakeholders, and an ethical dilemma framework is imposed upon those positions.  The application of a meta-level dilemma is used to analyze cross-level relationships in ethical terms.  The conclusion suggests that organization-level stakeholders represent the best solution for achieving more effective medical treatment and procedures.  However, this deduction points to the need for collaborative efforts among all stakeholders in order to diffuse the risks related to significant amounts of invested capital.  Advancements in medical technology have grown geometrically during the last fifty years. Many illnesses that were once fatal are now routinely prevented, detected, or cured, such as pneumonia and polio. With the discovery of treatments that we now take for granted, maladies that once threatened the daily quality of life exist now only as curiosities in some research laboratories. The current medical research environment continues to make startling breakthroughs daily, with the promise of more and better outcomes. Nowhere is the future of medical treatment more promising and exciting than in the area of gene therapy. Simply put, many ailments are caused by the lack of certain gene coding in the human genome. Without certain genes, the body cannot fight the daily onslaught of bacterial and viral infection. Gene therapy identifies the missing genes for certain conditions and transplants the missing genes into patients' bodies.  This is certainly an oversimplified explanation of a process that has taken several decades to develop; however, existing technology can determine and replace missing gene code that will alleviate many serious and life-threatening conditions. Unfortunately, the research and development of gene therapy has not yet found its way to the patient. Although the basic knowledge exists, the Food and Drug Administration (FDA) has not approved commercial use of gene therapy because of clinical setbacks (U.S. Department of Energy, 2005). The short-lived nature of the therapy, immune disorders, and multigene disorders also pose challenges to the progress of gene therapy.  Conversely, recent developments suggest the potential to treat Parkinson's Disease (Ananthaswamy, 2003), blood disorders like thalassaemia, and cystic fibrosis (Penman, 2002). While the evidence suggests that some illnesses could be prevented or cured by this process, many continue to plague humanity because treatments are not yet routinely available to society. Despite the remaining treatment challenges and clinical concerns, research is not the only obstacle to making gene therapy readily available to the general public.  It is a sad commentary that the crucial and often missing variable in this equation is capital. As can be imagined, the initial cost of this type of medical research is staggering, with no promise or projection of the potential for success. Even after successful development in the laboratory, the costs of animal and human trial testing are staggering.  The timeline from discovery, to test trial, and eventually to societal approval can be as long as ten years.  
To the direct cost of such ventures must be added the financial risk associated with other firms taking the same initiative and possibly entering the marketplace first.  Compounding the basic financial equation is the expected return from such an investment. Even though development, testing, and mass production are possible, there might not be enough demand to justify the investment if the number of stricken individuals is small.  For example, if only a tiny fraction of the general population is stricken by a fatal disease, it is less likely that investors will commit capital to develop, produce, and market the remedy.  From a social responsibility perspective it is difficult to defend continued investment in the manufacture of products like alcohol and tobacco when there is little financial commitment to delivering cures for debilitating or deadly diseases.  Since rational investors seek to maximize returns at reasonable risk, investing in firms like Anheuser-Busch appears sounder than investing in projects to develop experimental drugs.  At issue here is the identification of those entities that society holds responsible for medical progress. While it is easy to lay that responsibility at the feet of the medical practitioner or researcher, this project poses a model that identifies the intense interaction between the following stakeholders: society, government, the healthcare industry, drug manufacturers, researchers, and patients.  Those stakeholders represent the active participants in the search for better and more effective medical treatment and procedures.  Since no single stakeholder can be accountable for the entire system, the authors propose to focus upon the interactions necessary to maintain a feasible approach to proper treatment development and delivery.  The proposed model will be used to examine interactions in order to identify and develop the means to integrate the effort of all stakeholders through symbiotic relationships.  The identification of incentives for action among the crucial players will promote participation in this game of true life-and-death consequences.  Figure 1 poses the initial framework for conceptualizing the relationships between stakeholders.

 

Enterprise Valuation for Closely Held Firms

Dr. Thomas A. Rhee, California State University, Long Beach, CA

 

ABSTRACT

Valuing a firm when its shares are not publicly traded is quite difficult.  However, the value of a firm depends on three economic factors: (1) the production technology the firm employs, (2) economic conditions facing the industry, and (3) the volatility of the market as a whole.  These economic factors are manifested in the firm's asset betas.  Any anticipated change in asset betas will change the firm value, and we forecast firm values accordingly.  Empirical studies suggest that short-run betas converge.  The convergence requires certain restrictive assumptions about the parameters underlying asset betas.  When betas are assumed to follow a certain Wiener process, a firm's asset beta will converge over time, however, to a level much lower than the industry beta. The value of a firm is a direct function of the firm's asset beta.  The asset beta is generally known as the unlevered beta, which typically is obtained from the levered equity beta.  However, if a firm is not a public company, or does not have a long enough history on the exchanges even if its shares are publicly traded, it is difficult to measure equity betas statistically.  This necessitates examining the components of asset betas directly.  Fortunately, there are a number of studies that examine factors determining betas, but very few studies are available today that describe why and how betas may change over time under uncertainty. (1)  Empirical research has shown that, generally, equity betas tend to converge to unity over time.  When conducting financial analyses for companies, various cross-sectional analyses also seem to suggest that company betas converge to the industry beta.  The purpose of this paper is to reiterate the factors determining asset betas and to offer a theoretical structure for the possible dynamic behavior of betas over time. The value of a firm, as manifested in the firm's asset beta, depends on three economic factors: (1) the production technology in terms of the firm's capital-output ratio (or asset turnover ratio) and the productivity of labor, (2) economic conditions facing the industry, and (3) the volatility of the market as a whole.  In this paper, we focus on how these economic factors will generate a series of dynamic behaviors of firm values.  We assume that a firm's asset betas could also possess Martingale properties and follow a particular Wiener process.  When these processes are run through a large number of Monte Carlo simulations, we find that short-run betas converge in the long run, but not necessarily to the level of industry betas; furthermore, the convergence itself requires certain restrictive assumptions about the parameters underlying asset betas.  Under some highly plausible assumptions about those parameters, we particularly find that a firm's beta can converge in the long run to a level much lower than what the industry average betas may suggest. The economy: assume that each industry produces a single output and there are n industries in the economy.  There are n_j firms in industry j.  If V_ij represents the capital value of firm i in industry j, then the total capitalization in industry j is V_j = Σ_i V_ij, the capitalization summed across all industries is V = Σ_j V_j, and hence m = Σ_j n_j, where m is the total number of firms in the economy. The return on the market portfolio: if each firm is identified as producing a single output, it belongs to a specific industry, and the return on the firm's asset at time t is defined as follows:
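The simulation machinery itself is not reproduced in this abstract; as a rough illustration of the kind of Monte Carlo experiment described, the following Python sketch simulates many sample paths of an asset beta that follows a simple mean-reverting Wiener-type process (all parameter values are hypothetical, not the author's) and reports the average long-run level, which can settle below an assumed industry beta when the reversion target lies below it.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: reversion speed, long-run target, volatility of the beta process
kappa, theta, sigma = 0.5, 0.8, 0.2
beta0, industry_beta = 1.2, 1.0           # assumed starting firm beta and industry beta
dt, years, n_paths = 1.0 / 12, 30, 10000  # monthly steps over thirty years

betas = np.full(n_paths, beta0)
for _ in range(int(years / dt)):
    shock = rng.standard_normal(n_paths)
    # Euler step of a mean-reverting Wiener-type process for beta
    betas += kappa * (theta - betas) * dt + sigma * np.sqrt(dt) * shock

print(f"average simulated long-run beta: {betas.mean():.2f} (industry beta assumed: {industry_beta})")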

 

Agency Costs and Valuation Effects of International Franchising Agreements

Dr. C. Pat Obi, Purdue University Calumet, Hammond, IN

 

ABSTRACT

This study documents the financial market response to the decision of American firms to franchise overseas.  Conventional event study methodologies are used to determine the magnitude of post-entry risk-adjusted returns. Agency theory is then employed as a way to explain the motivation for firms to seek such opportunities abroad. To minimize agency costs, franchisors seemingly charge a higher initial fee in relation to royalties, in comparison to what they would charge domestically. In effect, a high bonding cost is created between the franchisor and the overseas franchisee, since the latter has a disproportionate financial stake in the venture. This study is designed to provide an empirical verification of this notion. The study is an attempt to evaluate the wealth effects of international franchising agreements between U.S. firms and foreign business units. The current global market structure presents American firms with greater international investment opportunities than ever before. As a result, the international portion of the U.S. gross domestic product has grown dramatically from a low of only eight percent in 1990 to a current high of almost 15 percent. The primary avenues of entry for American firms seeking new opportunities overseas are joint venture and franchising agreements.  The literature documents that among the external factors responsible for the growth of international franchises are the level of domestic saturation, competition in the home market, new market opportunities in emerging economies, especially in the former eastern bloc and South East Asia, as well as regional economic integrations such as the European Union and the North American Free Trade Agreement.1 According to Burton and Cross (1995), international franchising is a foreign-market mode of entry that involves a relationship between the entrant, the franchisor, and the host country business unit, the franchisee. In terms of their valuation effects, international franchising agreements are established when the mode of entry requires the host country business entity to take one of three organizational forms: a franchisee, a master franchisor, or a joint venture. This study considers the valuation effects of franchising agreements. The empirical inquiry does not deal with the resource and agency incentives for international franchising as documented in Shane (1996). Rather, it is concerned with the financial market response to the decision to franchise abroad.  We measure the performance of franchising firms by the magnitude of post-entry risk-adjusted returns in comparison to pre-entry performance. Any sustained change in investor wealth profile will be marked by abnormal stock returns around the announcement period. To this end, conventional event study models are used to calculate excess returns for the portfolio of firms in the sample around the announcement time. In addition to resource-based explanations of franchising, Shane (1996) added that agency theory might help explain why some franchisors seek opportunities overseas. To minimize agency costs, franchisors charge a higher initial fee in relation to royalties, in comparison to what they charge their domestic counterparts. In effect, a high bonding cost is created between the franchisor and the overseas franchisee, since the latter has much at stake. In many cases, the fee paid by the franchisee is more than half the total investment, which is a huge part of the franchisee's wealth. 
Failing to abide by the strict format laid down by the franchisor may result in the termination of the agreement.  It could thus be argued that high price bonding encourages franchisors to seek international franchisees. There is also a risk dimension to the bonding cost argument. If a high bonding cost signals inordinate risk in the overseas market for U.S. investors, then the required rate of return on the firm's equity would increase, causing stock prices to fall. Often, this would happen if investor risk perception over the investment horizon exceeds the risk premium captured by the bonding cost. The motivation to franchise overseas may be explained in part by the magnitude of the bonding cost implied in the agency relationship between the U.S. firm (principal-franchisor) and the overseas business entity (agent-franchisee). Agency problems arise because, under the behavioral assumption of self-interest, agents do not invest their efforts unless such investment is consistent with maximizing their own welfare. Bonding cost is a type of agency cost, which represents the lack of flexibility (such as a no-escape clause) that is built into the franchise contract. Although the intent is to ensure specific performance toward the fulfillment of the terms of the contract, a negative investment outcome is possible if the quality of management and the market environment turn out to be unfavorable.  Franchising firms can encourage overseas franchisees to act in the best interest of the franchisor's shareholders by creating a set of incentives, constraints, and punishments. However, these tools would only be effective if the franchising firm could observe all the actions (or efforts) of the franchisee. Moreover, additional exogenous problems exist in foreign markets, such as unfavorable government policies, labor problems, a different set of rules governing market competition, and adverse market conditions.2  These factors further complicate the enforcement of franchising terms.
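As a rough sketch of the conventional event study machinery referred to above, the following Python example estimates a market model over a pre-announcement window and cumulates abnormal returns over the announcement window; the return series are simulated placeholders, not the study's sample.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily returns: 250 estimation days followed by an 11-day event window
market = rng.normal(0.0004, 0.010, 261)
stock = 0.0002 + 1.1 * market + rng.normal(0.0, 0.012, 261)

est_mkt, evt_mkt = market[:250], market[250:]
est_stk, evt_stk = stock[:250], stock[250:]

# Market model R_stock = alpha + beta * R_market, fitted on the estimation window
beta, alpha = np.polyfit(est_mkt, est_stk, 1)

# Abnormal return = actual return minus the market-model prediction
abnormal = evt_stk - (alpha + beta * evt_mkt)
car = abnormal.sum()  # cumulative abnormal return over the event window

print(f"alpha = {alpha:.5f}, beta = {beta:.2f}, CAR = {car:.4f}")

In an actual event study the announcement-window abnormal returns would be averaged across the portfolio of franchising firms and tested for statistical significance.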

 

Information Security Awareness Status of Full Time Employees

Dr. Eyong B. Kim, University of Hartford, CT

 

ABSTRACT

Based on the framework provided by HIPAA security regulations section 5 (i) (ii), questionnaire items were developed to measure the security awareness level among full-time employees. The survey was conducted on a sample of sixty-three full-time workers in the Northeast region. The main finding is that information security awareness among full-time workers is not at an acceptable level. Surprisingly, many full-time workers had not received any information security training at work, so they violated many information security procedures unknowingly. Even when they had received security training, the training seemed to be a "once-done-then-forgotten" event, resulting in a lack of knowledge about recent information security issues. In addition, the preventative measures learned from security training did not seem to be properly implemented in end users' everyday computer usage. The author recommends that security awareness be addressed through ongoing programs that focus on users' behavioral change instead of merely transferring knowledge about information security. It was reported that overall financial losses due to computer crimes dropped significantly from $456 million in 2002 to $202 million in 2003, based on the survey conducted by the Computer Security Institute (CSI) in San Francisco. These surveys have been conducted jointly by CSI and the FBI on about 500 large corporations and government agencies (503 respondents in 2002 and 530 in 2003). In spite of this decrease in financial losses, information security remains a very important issue for most companies. For example, the huge demand for training to help network professionals be prepared for things like computer viruses, intrusion detection, cyber terrorism and denial-of-service attacks was the No. 1 IT training trend in 2004, according to attendees of Colloquium 2004, hosted by the nonprofit Computing Technology Industry Association (Oakbrook Terrace, Ill.). Based on the CSI/FBI survey 2004, the second-most expensive computer crime among survey respondents was denial of service, with a cost of $65 million, up 250 percent from 2002's losses of $18 million. In addition to hacking and viruses, theft of proprietary information is a serious threat to the CSI/FBI survey respondents, causing the greatest financial loss ($70 million, with the average reported loss being approximately $2.7 million). The CSI/FBI survey results illustrate that information security threats to large corporations and government agencies come from both inside and outside their electronic perimeter. In fact, forty-five percent of respondents detected unauthorized access by insiders. Because threats come from both directions, many security experts have suggested installing proper security systems that can protect networks from external and internal threats and providing security workshops or training sessions to make end users aware of information security threats. There are generally two approaches to improving information security: one relies on technology, the other on improving security awareness among users. Until now, companies have mostly utilized technology-based security methods (e.g., firewalls, anti-virus programs), while security awareness programs for end users are often ignored or minimally employed. 
Because the vast majority of organizations view security awareness training as important, the Computer Security Institute began including two questions regarding security awareness programs in business starting with the 2004 CSI/FBI survey [Gordon et al, 2004]. With this much attention being paid to security awareness programs, this study investigates the current state of security awareness among full-time employees in order to help develop a more effective security awareness program. Information security threats can basically be categorized into three areas: external threats, internal threats and environmental threats. Most end users have heard about external threats, which include malicious programs and hacking, thanks to today's media coverage. Among malicious programs, viruses, worms and Trojan horses are the most common. Many different types of hacking activities are reported; among them, the most prevalent are denial-of-service attacks, spoofing, digital snooping, spamming and phishing. Internal threats can be more disastrous to a single company because equipment or software malfunctions may cause massive data loss, and employee errors (or attacks) may go undetected. Environmental threats are damages from fire, water, power loss, riot or war. Among these threats, technologies can be used effectively against external threats or power-loss situations. Against hardware malfunction, fault-tolerant systems or backup sites have been used most widely. Employee error may be the most serious issue in information security because it can invite almost all external attacks as well as hardware and software malfunctions. It is well known that an intruder usually exploits a security loophole created by a negligent employee. For example, instead of guessing a password, intruders often call an operator and obtain it by pretending to be an employee who has just been laid off. Thus, security awareness training or workshops for end users are very important to minimize the possible damage from most of these threats. However, it is also true that security awareness is one of the least satisfactory aspects of computer security [Schultz, 2004]. To provide good information security, Wade [2004] suggested helping employees understand what they need to do and implementing safeguards to minimize the damage if any threat becomes reality.

 

E-Learning, IT and the Physically Challenged

Dr. Leonard Presby, William Paterson University, Wayne, NJ

 

ABSTRACT

Online instructional courses typically take advantage of the Web, which has facilitated course delivery. A group of learners that is sometimes not considered when a course is developed, however, is the physically challenged. This paper examines how one can incorporate multimedia in an IT class in order to help students, with particular consideration for the physically challenged. It shows that the learning process of all students can significantly improve when a personalized CD video is included.  Considerable increases in student interest and involvement in learning have resulted. Information Systems/Technology (IS/IT) is one business course that is needed for AACSB accreditation. Even those schools that do not pursue the accreditation route nevertheless provide students with a technology course. The goal of such a course is to help all business students learn how to use and manage information technologies to support business processes, conduct electronic commerce, improve business decision making and gain competitive advantage. Without understanding how technology affects business processes and workflow, one cannot anticipate whether it will produce the required results (Greengard, 2003). Course content is not completely uniform from one university to another, but all colleges face the challenge of maintaining a current course in the face of a rapidly changing environment (Srinivasan, Guan, and Wright, 1999). With the advancement of IS, more topics need to be covered. For example, not long ago topics like security and ethical challenges in E-Business were not typically discussed. Enterprise and global management did not receive much treatment either. And of course, the Internet, intranets and extranets played a minimal role as well.  It becomes apparent that in a one-semester IT course not all topics that were once covered can now get appropriate handling and equal time. Therein lies the problem. What is disturbing is that many students do not have a good handle on application software, like Excel and Access. Do students really come into this course with a working knowledge and understanding of the computer? Can we assume our high-tech society has prepared students with knowledge of technology prior to signing up for their IS course? Educators are challenged by another issue. There are numerous students taking an MIS course who have disabilities such as visual impairments, hearing impairments, physical disabilities, and cognitive disabilities. Some of these potential students who have access to these technologies cannot fully participate because of the inaccessible design of courses. There is a need to help these learners as well as the traditional ones. This paper offers a look at the inclusion of a video CD, provided to students free of charge, to be used as a complement to both traditional classroom and online delivery. Course delivery, as well as assignments, is provided on the CD. It was the hope and expectation of this paper that with this new inclusion, more material could be effectively integrated into the course, students would respond positively to it, and students who have learning problems would be helped as well. This group would be able to learn material at their own pace, while questions could be handled via email, discussion boards and chat rooms. Internet-based education has grown exponentially. Companies report using online education to train not only their employees in skills and the corporate culture, but also their customers and business associates. 
Despite some initial misgivings, academia is embracing online education, with universities in the United States now offering academic degrees delivered through online education at the associate, bachelor, master, and, increasingly, doctorate levels. The Massachusetts Institute of Technology has gone so far as to offer all of its course notes through the Internet free of charge. Universities are also catering to nontraditional students with flexible admission requirements and class schedules. Everyone will undeniably agree that productivity tools are important and necessary in this high-tech environment. Understanding spreadsheets, utilizing HTML, creating a database and dealing with the WWW are all technical concepts that need to be used, understood and maintained. Could there be an alternate and easier way to address some of the important required topics in an IT course? Blackboard, for example, is a course management tool that combines the best of the web in an easy-to-use package, allowing one to give tests and record grades while implementing significant electronic innovation in teaching and learning. Cyber delivery in universities appears to be successful, with opportunities for continued growth (Mintu-Winsatt and Myers 2002). Can a new online approach help physically challenged students by providing them a helpful way to learn the material? Can they take advantage of one or more of the assistive technologies that are available? The curriculum in an IT class covers numerous topics. These topics establish a strong foundation that equips students to understand problems in today's business world. One of the topics that seems to be getting the short end of the stick in the MIS curriculum is application software. Not too long ago, there was an obvious need to cover the basics of the computer and its varied applications. Most students did not own their own computers and had minimal exposure to them and their many applications. Their knowledge of word processing and spreadsheets, for example, came either through some short course or through a class like accounting or writing.  Many MIS texts included application exercises either in the text or through supplements.  Depending on the professor's preference, hardware and software were sometimes covered. Most texts aim to meet the challenge of maintaining an up-to-date course in the face of a rapidly changing environment (Srinivasan, Guan and Wright 1999). Textbooks typically recognize the importance of some basic topics by including them in the first few chapters of the text. Professors often take class time to teach and to reinforce the students' understanding of these topics.  One finds popular textbooks slowly moving away from covering applications and hardware/software. O'Brien, for example, in the text Introduction to Information Systems, 12th edition, 2005, poses questions that deal with spreadsheet analysis as early as chapter 2, and database questions are asked before the topic is even covered. The assumption seems to be that either students know this material or they have taken a course dealing with it. Unfortunately, there is usually not sufficient time to spend on this matter.  (One text that is beginning to reverse the trend is Haag, Management Information Systems for the Information Age, 2005, McGraw Hill, which introduces topics on web page design, database design and spreadsheet analysis.)

 

The Czech Republic: An in-depth look at its Global Posture

Dr. J. Kim DeDee, University of Wisconsin Oshkosh, Wisconsin

Lynda S. DeDee, University of Wisconsin Oshkosh, Wisconsin

 

ABSTRACT

Despite the interest in global business, scholars give limited attention to the transition economies of Central and Eastern Europe (CEE). This paper provides a framework for the Czech Republic, a key nation in the transition from command economics to market-driven economics, as applicable to U.S. firms. Crafting a set of competitive advantages requires management to identify the variability of evolving markets, to understand the needs of those markets, and then to deliver products and services at the right quality, quantity, price, and timing to meet or surpass most demands (McDougal, 1989; Yeoh & Jeong, 1995; Morris & Paul, 1987). Entrepreneurial firms intent on a global presence establish ventures that engage in traditional or emerging international markets, which, in certain cases, could involve the ongoing transition economies of Central and Eastern Europe (CEE). Taking advantage of the growing potential there, while managing for real and perceived risk, calls for approaches that may differ significantly from standard U.S. practice. Considering the merits and differences of the market potentials of CEE countries, the Czech Republic (CR) stands out as a leader in the ongoing transition from centralized government command economics to market-driven demand economics. The authors first examine how the government of Czechoslovakia, and later the Czech Republic, when given the opportunity, laid the cornerstone for a private sector economy and later acted as a host country for free market capitalists worldwide. Next, the most salient long-range characteristics of the Czech transition economy are identified, and finally we explore important dimensions of the Czech market and its implications for U.S. business. From a historical viewpoint, both perceived and real attributes of a nation can be argued as necessary for understanding that nation as a potential market. The Czech Republic is land-locked. Because it is at the center and the top of Europe, critical distribution routes have crisscrossed it for centuries. In 1620, Czechoslovakia lost its independence to the Hapsburgs, who ruled for almost 300 years. Immediately following the end of WWI, the Czechs, Moravians, and Slovaks built the only east European parliamentary democracy, which lasted through 1938, when the Nazis forced Czechoslovakia to cede the Sudetenland to Germany. By 1946, the Czechoslovakian Communist Party was voted in, and by the end of 1948 the Soviet Communist Party had seized power and held absolute control over all aspects of the Czech economy. The Czechs, however, differed from other buffer states in that they never fully adopted Communist thought (Central Intelligence Agency, 2000). By 1968, discontent with the token reforms of party chief Antonin Novotny led to his being replaced by the Slovak Alexander Dubcek, who took major steps toward political, social, and economic reform. In 1969, the Soviets forced Dubcek to resign and replaced him with Gustav Husak. Throughout the 1970s and the 1980s, the Soviet-style centralized economy stressed heavy electronic, chemical, and pharmaceutical production, nationalized the retail trade, agricultural and consumer product sectors, curtailed imports from the West, and let the Czechoslovakian economy stagnate (Encyclopedia Britannica, 2000). The Velvet Revolution (a.k.a. the November Events or simply November) that took place between 17 November and 29 December 1989 is recognized as the backbone of the free market economy and the democratic political developments of the new Czechoslovakia (Tradeport, 2000). 
On 19 November, the Civic Forum (CF) was established as the main political entity of the Czech people who wanted government reform. Headed by dissident Vaclav Havel, the CF demanded the resignation of the Communist government, investigations into rampant police brutality, and the release of prisoners of conscience. The CF was joined by some 750,000 students, factory workers, professionals and lay people who held massive demonstrations in Prague on 25 and 26 November. As long as these constituents remained interactive, the Communist regime was doomed.  Prime Minister Ladislav Adamec was forced to hold talks with the CF and Vaclav Havel, following which a new coalition government was formed.  On 10 December Czechoslovakian President Gustav Husak was forced to name a new government and announce his resignation. Following this, a joint session of both houses of the Federal Assembly elected Vaclav Havel as President. With little experience or time, the new government concentrated on the issues of human rights and freedoms, private property ownership, and business law, and on laying the platform for the first elections in Czechoslovakia since 1946. On 5 July 1990, 96 percent of the voting population voted, and Vaclav Havel was re-elected President of Czechoslovakia. In the next nationwide election (1992), Czechs voted overwhelmingly for the Civic Democratic Party led by then-Federal Finance Minister Vaclav Klaus, a strict Milton Friedman monetarist, Margaret Thatcher politician, and author of Czechoslovakian reform and privatization. Vaclav Havel resigned as president.  On 1 January 1993, Czechoslovakia split into the Czech Republic and Slovakia, the constitution of the Czech Republic went into effect, and its government became a parliamentary democracy (Charter of Fundamental Rights and Freedom, 2000; Central Intelligence Agency, 2000). Similar to U.S. law, and in direct contrast to the Soviet centralized model, the Czech government is now constitutional with a freely elected two-house Parliament. Any of the numerous political parties must garner at least five percent of the popular vote to gain a seat in Parliament, which, in a joint session, elects the President for not more than two continuous five-year terms. Comparable to much of Europe, the Czech government has a Prime Minister, a Deputy Prime Minister and cabinet Ministers with executive power to manage and coordinate the budgets. During its short history, the Czech Parliament has gone through numerous right-to-left alterations.  In the first election (1992), the Civic Democratic Party was voted in by higher-educated and urban business people intent on building a free market economy with little state intervention, strengthening local and regional governments, developing strong ties with Western Europe and the U.S., and gaining EU membership (Political System-Czech Republic, 2000). In the 1996 elections, the Civic Democratic Party, Christian Democratic Union, and Civic Democratic Alliance formed a right-to-center government, but by December 1997, due to widespread corruption, weak corporate governance, increased debt, and disagreements on the level of government control, it, along with Vaclav Klaus, was forced to resign. 
In the early elections of June 1998, a left-of-center, three-party government formed, consisting of the Czech Social Democratic Party, which set out to fight corruption, create new financial policy, and restart economic growth; the Freedom Union, the modern and conservative party, aimed toward continued privatization and lower direct taxes; and the Communist Party, intent on a state-regulated economy, increased taxes on the rich, and a 35-hour work week (Political System-Czech Republic, 2000). The result was a government splintered into political gridlock (1999-2001) at a time when the Czechs needed critical legislation for entry into the EU. After the 2002 elections, the government remained slightly left-of-center, and Vaclav Havel left the presidency, ending a 13-year period in which he had served as a stimulus to the end of totalitarian rule across the continent. In February 2003, Vaclav Klaus, the driver of Czech reform and privatization and a staunch opponent of Havel, was elected to the presidency by a one-vote margin on the third ballot in Parliament. Klaus, a complex, high-ranking politician before and after Communism, had been prime minister from 1993 to 1997, when he, along with his cabinet, was forced out in disgrace. By 2004, as a Euro-realist, Klaus warned the Czechs of the consequences of EU membership and of being a small country surrounded by a Europe striving to become a super-state. Because of his voice for the Czech people, his popularity soared above 70% (BBC News, 2004; Country Briefings, 2003; National Center of Policy Analysis, 2003).

 

Monitoring and Control of PERT Networks

Dr. Wayne Haga, Metropolitan State College of Denver, CO

Dr. Kathryn Marold, Metropolitan State College of Denver, CO

 

ABSTRACT

This research involves attempts to develop better methods for monitoring and controlling projects.  This is accomplished by creating a list of critical dates at which the manager should review the project to decide whether activities need to be crashed. Crashing refers to bringing in additional resources to shorten the completion time of an activity. The traditional method of crashing PERT networks ignores the stochastic nature of activity completion times, reducing the stochastic model to a deterministic (CPM) model and simply using activity time means in calculations. A new method of crashing PERT networks is proposed.  A computer simulation model is used to identify "crash points" for each activity in the network.  The crash point for an activity is the point at which the expected value of cost overruns given that the activity is crashed exceeds the expected value of the cost overruns if the activity is not crashed.  Crash points are determined by making a backward pass through the network, such that each crash point is based on decisions that will be made regarding crashing at later points in the project.  Cost overruns are calculated by specifying a penalty function for late completion of the project. A simulation program was written in the C++ programming language to implement and test the newly proposed crashing method.  It was found that this method could greatly reduce project overrun costs. This work was initially explored by Haga (1998) as part of his doctoral dissertation. There has never been a more critical time to develop methods of improving project management. Trends toward offshoring and reductions in funding for team projects warrant a re-examination of crashing networks to monitor and control project development.  Although Grygo (2002) suggests that the recent economic downturn might be the best thing that has happened to project management, corporations are still wrestling with time overruns that cripple the organization's budget, damage its reputation, and tax its cash flow with paid-out penalties.  Of the software projects that are successfully completed, it is estimated that fifty percent are not as successful as they should be (Keil, 1995).  The habit of project managers building time buffers into non-critical paths that feed into critical ones in a project network has resulted in almost universally late completion dates (Roe, 1998).   When crashing project management networks was introduced in the mid-1960s, it was meant to be the answer to the time overruns that had become ubiquitous in project management.  It met with limited success because of the stochastic nature of projects and the uncertainty about which path would become critical.  It also ignored the human factors that play such a big part in successful project management. Using a simulation program to crash networks promises to alleviate the problems of deterministic methods of crashing and to allow project teams to "do more with less." Since most organizations have far more projects than their available capacity can support, crashing a network using simulation techniques seems a desirable way to shorten the project life and to guard the reputation of the organization (Roe, 1998).  The topic of this research is the monitoring and control of projects through crashing a project with the aid of a simulation program.  A list of critical review dates for the project manager is specified.  
Handling time overruns by more accurately predicting how crashing activities will reduce the expected completion time promises significant savings for organizations.  A simulation program was first written in C++ by Haga in 1998. The traditional PERT method of crashing uses only the activity time means in calculating the critical path, reducing the stochastic model to a deterministic model.  A single critical path is thus calculated and used, whereas in reality there may be numerous possible critical paths realized.  For a large network, the probability that any given path will be the critical one may be very small.  As a result, the traditional PERT method of calculating the time to complete the project is almost always too low.  The extent to which the completion times of projects are underestimated is examined for a variety of networks by Klingel (1966).   The traditional method of crashing PERT networks has been to convert the model to a deterministic model by using the means of the activity times.  The network is then crashed in a series of iterative steps until the expected completion time of the project is acceptable, the cost of crashing exceeds the benefits, or all activities have been crashed as much as possible.    This research was initially reported by Haga (1998) as part of a doctoral dissertation, and partial results were reported by Haga and O'Keefe (2001) and by Haga and Marold (2004).  Crashing networks with the traditional algorithm developed in the 1960s has been practiced for several decades, but projects continue to finish late and over budget. The traditional crashing algorithm for the CPM (Critical Path Method) model is not suitable for the PERT model for the following reasons:  At best there is only a 50% probability of the project being completed by the target date, since mean activity times are used to calculate the completion time of the project.  This best-case scenario occurs only if there is a single dominant critical path in the network.   If there are numerous possible critical paths, the probability may be much less than 50%.  If there are penalties for late completion of the project, this could be extremely costly.  The complete distribution of project completion time needs to be considered when crashing. All decisions regarding which activities to crash are made before the start of the project.  The algorithm does not provide for monitoring the project and making decisions regarding crashing after the project has started. An improved method for crashing PERT networks could save a company a significant amount of money in crashing and overrun costs.   Even if there are no direct costs in the form of penalties for late completion of projects, there are likely to be costs due to loss of business because of a damaged reputation.  There is a definite need to improve the crashing technique for projects and thus aid organizations in a true time of need.
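The authors' C++ program is not reproduced here; the following minimal Python sketch, with an entirely hypothetical four-activity network and penalty figures, illustrates the underlying simulation idea: draw random activity durations, take the longest path as the completion time, and estimate the probability of missing a target date and the expected overrun penalty, which a crashing rule could then compare with and without shortening a given activity.

import numpy as np

rng = np.random.default_rng(42)
n = 100000  # number of simulated project realizations

# Hypothetical (optimistic, most likely, pessimistic) durations in days for four activities
activities = {"A": (4, 6, 10), "B": (3, 5, 12), "C": (5, 7, 9), "D": (6, 8, 15)}
durations = {name: rng.triangular(*params, n) for name, params in activities.items()}

# Toy network with two paths, A -> C and B -> D; the completion time is the longer path
completion = np.maximum(durations["A"] + durations["C"], durations["B"] + durations["D"])

target = 18.0             # hypothetical contractual deadline (days)
penalty_per_day = 1000.0  # hypothetical late-completion penalty

prob_late = (completion > target).mean()
expected_penalty = (np.maximum(completion - target, 0.0) * penalty_per_day).mean()

print(f"P(late) = {prob_late:.2%}, expected overrun penalty = {expected_penalty:,.0f}")

Using the full distribution of completion times in this way, rather than only the mean activity durations, is what distinguishes the simulation approach from the deterministic CPM-style crashing described above.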

  

Diffusing Technologies: Factors Effecting Adoption Decisions

Dr. Jifu Wang and Dr. Ronald J. Salazar, University of Houston-Victoria, TX

 

ABSTRACT

In this article, we investigate some of the precursors of and responses to the rapid technological innovation being undertaken by China's machine tool industry.  Following Caselli and Coleman (2001), we investigate the process of technology adoption using an industry approach.  Specifically, we analyze the approach taken by a large Chinese manufacturer of computer numerical control (CNC) machine tools.  The industry has been identified as a key strategic component of China's plan to join the global economic powers and to supply its rapidly growing industrial base.  Drawing on the technology diffusion literature, we observe the possible influences of corporate governance systems, tariffs and trade barriers.  The article suggests directions for further empirical investigation.  Stiglitz (2004) has pointed out that globalization can be seen as a threat to traditional values.  In doing so, his work enriches a growing body of research (Parente, 1994; Young, 1992; Benhabib, 1994; Caselli, 1996, 2001; Stiglitz, 1987) concentrating on the economics of globalization.  The dramatic growth observed (Durlauf, 1991) in the radical opening of the Chinese economy to outside influences forms the backdrop to the research we report.

 

Companies Prefer Liquidity

José Villacís González, University San Pablo CEU, Madrid, Spain

 

ABSTRACT

Companies are consumers of money at first and then consumers of production factors (inputs). They later sell the products and regain the money. During that period of time, which is called the company's average maturing period, the working capital is destroyed and production is created. Production is basically a destruction process. Everything is destroyed except for money, which is transformed. Such is the final conclusion of this paper. Money is a fully liquid means of payment that represents the monetary universe, and it is purchased by the company as such. It is then transformed step by step into less liquid segments, still remaining money until it reaches financial assets, which are money substitutes. These constitute the portfolio, which can be unpredictably large and heterogeneous, and even dysfunctional. The production and sale of products is followed by regaining fully liquid money. Money goes back to its original simple status of liquidity.  The company's financial activity relies on this metabolic process that transforms money from the most liquid to the least liquid segments and finally returns it to its liquid status. This process simultaneously represents the consumption-destruction process and the selling of production at a price. These activities describe the real processes of the economic system. The company's portfolio sits in between, with its role of money distributor for the working capital in production.

 

Why Does HRM Need To Be Strategic? A Consideration of Attempts to Link Human Resources & Strategy

Dr. Martin Wielemaker and Dr. Doug Flint, The University of New Brunswick, Fredericton, NB

 

ABSTRACT

There is a move in HRM to position itself as strategic, based on the increased importance of people in gaining competitive advantage according to the Resource Based View of the firm. However, we argue that even if an organization’s human resources are deemed important, i.e. strategic, that does not automatically elevate the HRM function (embodied by the HR manager) from a supportive to a strategic level. Not surprisingly, the field of HRM has proposed a number of suggestions and tactics to capitalize on the proposed new role of human resources in organizations, such as the use of strategy discourse in HR, the use of performance measures, and the use of integrative tools such as the balanced scorecard or strategy map. Yet these tactics still fall far short of situating the HR manager in the room where strategy is formulated. This leads to the question of whether HRM should focus its efforts on a superior way to make HR strategic or whether it should accept the support role it currently plays in executing strategy.

 

Importance of Portal Standardization and Ensuring Adoption in Organizational Environments

Dr. Prasad Kakumanu and Marc Mezzacca, University of Scranton, Scranton, PA

 

ABSTRACT

Enterprise information portals provide delivery mechanisms that overcome information barriers between the technical, functional, and cultural silos that limit the internal creation and development of competitive advantages within organizations.  Seamlessly integrating this technology into an organization has been a daunting challenge since the technology’s inception.  Because the designs of these systems have often lacked uniformity, information portals still lack any significant degree of standardization.  This lack of consistency, accompanied by a scarce supply of in-depth research in the field, has made utilizing enterprise information portals difficult and inefficient.  This study was conducted during a three-month research program at The University of Scranton.  The paper has two primary objectives.  The first is to facilitate a more fundamental understanding of enterprise information portal technology.  The other is to provide decision makers at firms and educational institutions with comprehensible and practical information concerning portal technology and its impact on a particular organization.  The research completed during this study includes conclusions drawn from the examination of a real-world case designed to clarify many vague subject areas addressed by this new technology.  In addition, the paper provides insight into the future progression of enterprise information portal technology as its popularity continues to increase.

 

Competitive Advantage Through Nonmarket Strategy:  Lessons from the Baywatch Experience

Marc T. Jones, Macquarie University, Australia

Patrick Kunz, University of Technology, Sydney, Australia

 

INTRODUCTION

This paper examines how international firms operate strategically in the nonmarket environment to secure advantages which improve their cost and/or revenue structures and, hence, their economic performance.  The paper centers on a detailed case study of the globally popular Baywatch television show’s efforts during 1999 to secure attractive locational subsidies by placing the states of New South Wales, Queensland (both in Australia) and Hawaii into competition with each other.  The paper is organized into three core sections: the first reviews the concept of nonmarket strategy; the second examines the Baywatch case in detail; the third concludes with some observations and recommendations for the various stakeholders affected in this and similar nonmarket contexts.  According to Baron (2000:3), the business environment consists of both market and nonmarket components.  The market component includes the interactions between firms and other parties that occur across markets or through private agreements such as contracts.  The nonmarket environment includes the social, political and legal arrangements that structure interactions outside of, but in conjunction with, markets and private agreements.  It encompasses those interactions between the firm and individuals, interest groups, government entities, and the public that are intermediated by public institutions rather than by markets or private agreements.  The distinguishing characteristics of public institutions include majority rule, due process, broad enfranchisement, collective action, and transparency.  Vitally, a firm can secure advantages in the nonmarket environment which serve to protect or enhance its position in the market environment.  Nonmarket strategy thus offers another route to competitive advantage and superior economic performance.

 

The Effects of Mood and Motivation on Attitude

Michael Ba Banutu-Gomez, Ph.D. and Amanda R. Wingate, Rowan University, Glassboro, New Jersey

 

ABSTRACT

This paper deals with the effects of mood and motivation on “attitude toward world view.” With increasing business globalization and the differing world views of work groups, maintaining motivation becomes a challenge. This meeting-room experiment attempted to measure the mood and motivation of participants asked to read either a positive story and headlines or a negative story and headlines and then complete a survey. We predicted that positive world view would be higher in participants who read the positive story and headlines than in those who read the negative story and headlines. Statistical analysis revealed no significant relationship between mood and motivation.  Motivation can be defined as the reasoning for people to do certain things.  A motive is a need within someone to attain something, which is a key element in achieving effective performance.  Motivating someone requires giving him or her an incentive, which, in some minds, can be viewed as an underlying form of manipulation.  Theories of motivation fall into two categories.  First, there is content theory, which concerns the needs an individual must satisfy to complete a task.  Second, there is process theory, which emphasizes how and by what goals an individual is motivated rather than the content of individual needs (Smith, 1993).  The greater the deprivation of a need, the higher its importance, strength, and desirability become.  For instance, the longer a person goes without food, the hungrier he or she gets, and, most importantly, the more valuable food becomes.
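For readers unfamiliar with the kind of comparison implied (two groups, positive versus negative story, tested for a difference in a world-view score), a minimal sketch is given below. The scores, group sizes, and the use of SciPy’s independent-samples t-test are assumptions for illustration only, not the authors’ data or analysis.

```python
from scipy import stats

# Hypothetical world-view scores (1-7 scale) for participants who read the
# positive story and headlines versus those who read the negative ones.
positive_group = [6.1, 5.8, 6.4, 5.5, 6.0, 5.9, 6.2, 5.7]
negative_group = [5.9, 6.0, 5.6, 6.1, 5.8, 5.7, 6.0, 5.9]

t_stat, p_value = stats.ttest_ind(positive_group, negative_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above the conventional 0.05 cutoff would correspond to the
# "no significant effect" result the abstract reports.
```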

 

Service Providers’ Communication Style and Customer Satisfaction

Cynthia Webster, Ph.D., Mississippi State University, MS

 

ABSTRACT

The research reported here examined the effects of two general communication styles, affiliation and dominance, on customer satisfaction in the service market. Findings reveal that highly affiliative providers produced more satisfaction among their customers and highly dominant providers produced less satisfaction.  However, it was also found that the relationship between a service provider’s communication style and customer satisfaction depends on service criticality and whether the service is experience or credence in nature.  In today’s fiercely competitive service environment, marketers are seeking to enhance customer satisfaction by establishing and improving relationships with new and existing customers (Singh and Sirdeshmukh 2000).  Of particular interest are the communication factors that contribute to the creation of a strong bond between the service provider and customer.  Among the communication factors, communication style is one of considerable importance because of its vital role in connecting employees and customers and in establishing customer trust and satisfaction (Ring and Van De Ven 1994).  Further, communication style has been found to affect a listener’s feeling of confidence, sense of control, sense of connectedness, and self-esteem (e.g., Albrecht and Adelman 1987).  Although a service employee’s or provider’s communication style is likely to affect the quality of the service encounter by influencing the customer’s impression of the provider and the service firm, there is a lack of research on the impact of communication style on service customers’ attitudes, despite the call for such research (see, for example, Czepiel 1990).
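One way to make the moderation claim concrete is a regression in which communication style interacts with service criticality. The model form, the variable names, and the simulated data below are illustrative assumptions only, not the study’s actual specification or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "affiliation": rng.uniform(1, 7, n),      # provider's affiliative style
    "dominance":   rng.uniform(1, 7, n),      # provider's dominant style
    "criticality": rng.integers(0, 2, n),     # 0 = low, 1 = high criticality
})
# Simulated satisfaction consistent with the reported direction of effects
df["satisfaction"] = (3.0 + 0.5 * df["affiliation"] - 0.4 * df["dominance"]
                      + 0.3 * df["affiliation"] * df["criticality"]
                      + rng.normal(0, 1, n))

# Interaction terms capture whether the style effects depend on criticality
model = smf.ols("satisfaction ~ affiliation * criticality + dominance * criticality",
                data=df).fit()
print(model.summary())
```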

 

How Can Marketing Tactics Build Behavioral Loyalty?

Dr. Yi Ming Tseng, Tamkang University, Taipei, Taiwan

 

ABSTRACT

This research explores the effects of relationship marketing (RM) tactics on enhancing relationship quality in the services industry. Using data from banking, airlines, and travel agencies, we discuss five types of relationship marketing tactics and how they influence customers’ perceptions of long-term relationships.  We also include customers’ inclination toward the relationship as a mediator in the model to make the framework more complete. Research findings support that tangible rewards, preferential treatment, and memberships are effective in developing customers’ long-term relationships, and that behavioral loyalty is also influenced by relationship quality.   Relationship marketing has conventionally been defined as "developing," "maintaining," and "enhancing" customer relationships (Berry and Parasuraman 1991).  What the effective methods for developing and keeping these relationships are, and how they work, may be complex questions. Relationship marketing tactics are methods that can actually be executed to implement relationship marketing in practice.  We propose and discuss five main kinds of relationship marketing tactics in the service industry and construct a model relating these tactics to other relationship marketing concepts. These efforts should be helpful in yielding insights into the field of relationship marketing.

 

A Comparative Essay on the Causes of Recent Financial Crises

Dr. Tarek H. Selim, The American University in Cairo, Egypt

 

ABSTRACT

The causes of recent financial crises are explored. The study points out the different reasons for the financial crises that have plagued different countries and takes a country-by-country analysis as its main approach. The essay covers short-term volatility, macroeconomic fundamentals, policy misalignments, banking performance, exchange rate management, and contagion. A comparative summary concludes the analysis. The countries studied are Mexico (1994), Thailand (1997), Korea (1997), Indonesia (1998), Malaysia (1998), Russia (1999), Brazil (1999), Turkey (2000), and Argentina (2001).  Globalization and uncontrolled speculation have been blamed as the leading reasons for the effects of recent financial crises on developing economies (Kraussl 2003). The size and speed of capital flow movements in international financial markets are argued to have created "devastating" consequences for those countries, as Malaysian Prime Minister Mahathir was quoted in the Jakarta Post on January 24, 1998:  "For forty years, all these (emerging) countries have been trying hard to build up their economies and a moron like George Soros comes with huge sums of money to speculate and destroys everything."   Different financial crises arise for different reasons. It can be argued that a specific financial crisis initially occurs due to specific market failures in specific sectors of the economy and may then spread to other countries through contagion and other regional spill-over effects. However, three general forms of financial instability have underlain most recent financial crises (based on Kaminsky and Reinhart 1996, Caprio 1998, Kraussl 2003, and White 2000): (1) short-term volatility, (2) medium-term misalignments, including excessive international capital flows, and (3) contagion.

 

Human Capital Convergence: A Cross Country Empirical Investigation

D. Stamatakis, Athens National and Kapodistrian University

Dr. P. E. Petrakis, Athens National and Kapodistrian University

 

ABSTRACT

This article conducts an empirical investigation comparing human capital convergence in three country groups belonging to significantly different development categories: the G7, developed, and developing countries. Human capital evaluation, in this context, goes beyond enrollment and/or attainment rates. Apart from enrollments and government spending, alternative factors determining human capital effectiveness, e.g. book availability, researchers per capita, and students per teacher, are included. Results indicate moderate evidence of convergence among the three country groups when “traditional” variables are included. The convergence “picture” that emerges when the additional variables are taken into account is quite different, implying the existence of a “convergence trap” manifesting in a scenario of worldwide polarization.  A key economic issue is whether poor countries tend to grow faster than rich ones and converge over time to some common level of per capita income. Instead of evaluating convergence on the basis of a specific growth model, the present article considers the factors of production. Specifically, it is an empirical attempt to evaluate and interpret human capital convergence and constitutes a supplement to former studies on education and growth (e.g. Petrakis and Stamatakis, 2002; McMahon, 1998; Barro and Lee, 1993). The conclusions are drawn from the intersection of former findings on enrollment rates and the empirical results for an augmented set of human capital proxies: stocks and flows.
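A worked equation may help fix what “convergence” means operationally here. A common cross-country test, shown below, regresses the growth of a human capital proxy h on its initial level; this particular specification is our illustrative assumption, not necessarily the authors’ own.

```latex
\frac{1}{T}\left(\ln h_{i,T} - \ln h_{i,0}\right) \;=\; \alpha + \beta \,\ln h_{i,0} + \varepsilon_i
```

A significantly negative \beta indicates that countries starting with less human capital accumulate it faster, i.e. they converge. Estimating the equation separately for the G7, developed, and developing groups, and separately for the traditional and augmented sets of proxies, would reproduce the kind of group-wise contrast the abstract describes.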

 

Financial Integration through Capital Mobility:  Does the GATS Approach Help or Hurt?

Karla Scappini, George Mason University, Fairfax, VA

 

ABSTRACT

At a time when international economic debate is dominated by the trend toward globalization and fueled by the financial crises of the late 1990s, the debate over the effectiveness of government capital controls in developing countries as a tool for macroeconomic stability has regained importance. Whether for or against capital controls, this debate takes place within the context of a much larger topic: the international financial architecture and the extent to which emerging market economies are integrated, or integrating, into the global system.  This paper considers the classical theories favoring free trade in and liberalization of financial services, particularly the arguments for free capital flows and their contribution to financial integration, and asks whether the General Agreement on Trade in Services (GATS) is an appropriate forum for facilitating the integration of emerging markets.  Comparing the arguments for and against capital controls with the free trade theories underlying the GATS yields insight into the potential role for the GATS in the integration process.  Part I of the paper reviews the literature on financial integration and its macroeconomic challenges.  Part II reviews the literature on capital controls and the policy dilemma regarding their use in macroeconomic policy.  Part III delves into the relationship between the free trade models and the gains from trade resulting from capital as a factor of production.  Part IV defines economic integration within the constructs of the WTO and the GATS.  Conclusions are presented in the final part of the paper.

 

Production and Macroeconomic Equilibrium

Dr. José Villacís González, San Pablo CEU University, Madrid, Spain

 

ABSTRACT

Companies have to finance themselves in two different ways: on the one hand, by using new monetary resources, and on the other hand, by using the system’s savings, which partly belong to the company and partly belong to others. Both new money created in the system and previously existing money (in the form of savings) make it possible for production to be generated and demanded in the system as a whole. New money will have to be converted into new production; we will prove this premise. We will also see that the system requires three groups of assets: the first two groups are the result of production, namely consumer goods and capital goods. The third group is made up of artificial and speculative assets. The main characteristics of macroeconomics change with these new assets. This article develops and expands Germán Bernácer’s macroeconomic theory; he was the first and last person to study these matters.  Every time new production is generated, production factors are employed and paid. This periodic payment, which is a flow, is called income. Income has a monetary equivalent in production, which is why there is an equation according to which net production is equal to domestic income. If we consider period zero to be the moment when production is initiated, and assume that there has been no previous production or income, then new production is only possible if there is new money that enables the payment of new income to the production factors. Once incomes and production are generated, two events take place: the first is the monetary circulation of the initial incomes, and the second is the creation of new money that will finance new production. These events expand dynamically, because production is stimulated and demanded at the same time. From this dynamic point of view, the equation S = I is altered by the inclusion of new money.
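The closing point about S = I can be written compactly. In the illustrative notation below (ours, not necessarily Bernácer’s or the author’s), \Delta M is the new money created in the period and available to finance production:

```latex
I \;=\; S + \Delta M
```

Investment in new production is then financed partly out of prior savings and partly out of newly created money, which is the alteration of the S = I condition the abstract announces; with \Delta M = 0 the standard equality is recovered.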

 

Selecting a Software Package: From Procurement System to E-Marketplace

Yumiko Isawa, Monash University South Africa, Johannesburg, South Africa

 

ABSTRACT

A leading media group partnered with a software vendor to build a trading e-hub for its group of companies. The project took a broader approach than merely buying the technology: it adopted a dual focus that included the acquisition of the packaged software as well as the implementation of business processes aimed at achieving the benefits associated with strategic sourcing.  Following a short procurement and package selection process, and a three-month pilot phase at two of the Group’s companies, the trading platform was being rolled out throughout the Group. The project champion expected savings of approximately R120 million per year on the Group’s R2 billion procurement budget. In addition to the cost savings came potential new sources of revenue via the trading exchange enabled by the new technology.  Organisations are increasingly moving to commercial off-the-shelf software for major business applications; as early as 1999, Butler predicted that organisations would increasingly adopt packaged software and that the ratio would go beyond 3:1 in favour of packaged applications (Butler, 1999).  The benefits of selecting packaged software rather than bespoke software development are numerous, and several researchers and practitioners have described the advantages of using packaged software (Lucas, 1988; Butler, 1999; Lassila & Brancheau, 1999).
