The American Academy of Business Journal
Vol. 7 * Num. 1 * September 2005
The Library of Congress, Washington, DC * ISSN: 1540-7780
WorldCat, the world's largest library catalog
Online Computer Library Center * OCLC: 805078765
National Library of Australia * NLA: 42709473
The Cambridge Social Science Citation Index, CSSCI,
Peer-reviewed Scholarly Journal
Refereed Academic Journal
All submissions are subject to a double-blind peer review process.
The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various fields, in a global realm, to publish their papers in one source. The Journal brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view others' work. All submissions are subject to a double-blind peer review process. The Journal is a refereed academic journal which publishes scientific research findings in its field under ISSN 1540-7780, issued by the Library of Congress, Washington, DC. The journal meets the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure that our publications provide our authors with publication venues that are recognized by their institutions for academic advancement and academically qualified status. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread/edited before submission. After the manuscript is edited, you must send us the certificate. You may use www.editavenue.com or another professional proofreading/editing service. The manuscript should be checked with plagiarism detection software (for example, iThenticate, Turnitin, Academic Paradigms LLC's Check for Plagiarism, or the Grammarly plagiarism checker), and the certificate should be sent with the complete report.
The Journal is published twice a year, in March and September. E-mail: firstname.lastname@example.org; Journal: AABJ. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via the e-mail address above. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above.
Copyright 2000-2020 AABJ. All Rights Reserved
Earnings Predictability: Do Analysts Make Coverage Choices Based on Ease of Forecasts?
Dr. Bruce Branson, NC State University, Raleigh, NC
Dr. Donald Pagach, NC State University, Raleigh, NC
This paper investigates determinants of security analyst following. It builds on prior research in this area by investigating two important earnings-related variables that should be of interest to the analyst community—earnings persistence and earnings predictability. We find that earnings persistence is significantly positively associated with the level of security analyst coverage, while the predictability of earnings is negatively associated with analyst following. We also control for firm size (market capitalization) and include a group of firms that has escaped research attention in prior studies—those with an analyst following of zero. The use of the tobit model allows for the inclusion of these firms. Bhushan (1989), O’Brien and Bhushan (1990), and Brennan and Hughes (1991) have identified specific firm characteristics that they find to be associated with the level of security analyst following, with year-to-year changes in analyst following, or with both. This paper extends this research by examining additional earnings-related firm-specific variables that are found to be significantly associated with aggregate analyst coverage decisions. Drawing upon findings in the so-called “earnings response coefficient” literature, we argue that analysts have an incentive to identify and follow firms whose time-series properties reveal historically persistent earnings innovations and less predictable earnings patterns. Controls for firm size are included based on prior research. Security analysts provide a variety of services to a broad base of clients, both individual and institutional investors. Specifically, the "sell-side" analysts investigated in this study are employed by brokerage/investment firms to provide recommendations to clients pertaining to the acquisition or liquidation of ownership positions in the companies they cover.
Prior research has provided evidence that the recommendations conveyed to the market by these analysts have information content (Stickel, 1991). In this paper we investigate whether firm-specific factors are associated with the extent of security analyst coverage. This study extends research by Bhushan (1989), O'Brien and Bhushan (1990), and Brennan and Hughes (1991) that provides evidence linking levels of and changes in analyst coverage to factors such as firm size, institutional ownership, market-adjusted returns, return variability, and industry affiliation. The extension relies heavily on 1) the "earnings response coefficient" literature, which describes the price-informativeness of earnings in terms of earnings persistence and earnings predictability proxies (Kormendi and Lipe, 1987; Easton and Zmijewski, 1989; Collins and Kothari, 1989; and Lipe, 1990), and 2) the time-series literature, which derives measures of persistence and predictability independent of the price-setting process. In this research, we develop a model expressing the extent of security analyst coverage of a given firm as a function of two earnings-related variables: persistence and predictability. These concepts, while related, are distinct. Persistence involves the degree to which an earnings innovation is permanently impounded in future income. Predictability, on the other hand, involves the variability of the earnings stream.
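Because firms with zero analyst following are retained, the dependent variable is left-censored at zero, which is exactly what a tobit specification accommodates. The sketch below is not the authors' actual estimation; it fits a left-censored tobit by maximum likelihood on simulated data, with regressors that merely stand in for persistence, predictability, and size:

```python
import numpy as np
from scipy import optimize, stats

def tobit_negloglik(params, X, y, censor_point=0.0):
    """Negative log-likelihood of a left-censored tobit model.
    params = [beta_0..beta_k, log_sigma]; y is censored below at censor_point."""
    beta, sigma = params[:-1], np.exp(params[-1])
    xb = X @ beta
    censored = y <= censor_point
    # Censored observations contribute P(y* <= c) = Phi((c - xb) / sigma)
    ll_cens = stats.norm.logcdf((censor_point - xb[censored]) / sigma)
    # Uncensored observations contribute the normal density of y
    ll_obs = stats.norm.logpdf(y[~censored], loc=xb[~censored], scale=sigma)
    return -(ll_cens.sum() + ll_obs.sum())

# Simulated data: latent "analyst following" driven by an intercept plus
# three illustrative regressors (persistence, predictability, log size).
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
true_beta = np.array([1.0, 2.0, -1.5, 3.0])
latent = X @ true_beta + rng.normal(scale=2.0, size=n)
y = np.maximum(latent, 0.0)  # zero-following firms are censored at zero

res = optimize.minimize(tobit_negloglik, x0=np.zeros(5), args=(X, y),
                        method="BFGS")
beta_hat = res.x[:-1]
```

On this simulated sample the maximum-likelihood estimates recover the true coefficients closely, which is the property that lets censored (zero-coverage) firms stay in the sample rather than being dropped.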
Nations’ Socio-Economic Potentials: Development, Growth, Public Policies, and Welfare
Dr. Ioannis N. Kallianiotis, University of Scranton, Scranton, PA
In this mostly general-equilibrium and philosophical work, the paper tries to point out some of the existing problems in our world today. It considers the social and economic potentials of our governments and our people, together with their conflicting objectives, and suggests a few long-term remedies that will contribute to nations’ development and growth and to persons’ well-being. The countries’ growth objective, their long-term development and growth, and society’s welfare and individuals’ utility functions are emphasized, subject to endowments of factors, technology, tastes, risk, and moral, ethical, and just social constraints. At the end, considerable discussion is devoted to governments, public policies, control and regulation, and the value system, and a few suggestions on new humanitarian, social, political, economic, financial, and philosophical frontiers are given. The problems of our world today are not strictly economic, as many try to present them, but are mostly social, political, philosophical, moral, and ethical ones. The magnitude of these problems is so enormous and their solution so hard because we do not depend on our leaders to solve them dynamically, but on some “invisible powers” (we do not even have a name for them) who ignore humans, nations, values, virtues, justice, and the purpose of our existence, and who, independently of these social constraints (values), try with all their inhumane means to maximize some “social values” and minimize some “social costs”.
Of course, together with the above problems come some narrower economic problems, such as low development and growth (and even lower net economic welfare); inequality in the distribution of earnings, income, and wealth; high uncertainty and risk; low confidence and expectations among individuals; high unemployment; high inflation; low money market rates for small investors (a negative real risk-free rate of interest); high borrowing and credit card rates (an unfairly high risk premium); high liquidity (huge growth of the money supply and money creation in the banking industry); high government taxes and even higher spending; low savings and low capital inflows, due to the low money market rates and the devalued dollar; huge imports and low exports; very high oil prices; too many refugees and illegal immigrants; and corruption everywhere. Of course, many factors have affected countries’ development and growth, and consequently the financial markets and social welfare. Some of them are: new inventions, which have created enormous profits and exaggerated hopes (such as radio, talking pictures, and passenger aircraft in the 1920s, and lately computers and the internet); and wars, since in particular the reconstruction of destroyed cities, infrastructure, plants, etc. increases production, employment, and profits and expands markets, as do hopes following a peace agreement.
Decolonization and International Trade: Cote d’Ivoire’s Case
Dr. Albert J. Milhomme, Texas State University– San Marcos, TX
Many countries, former colonies of colonial powers, acceded in the past century to political independence. What about their economic independence? A measure of this independence could be reflected in the evolution of their international trade, exports and imports. This study is centered on the evolution of the international trade of Cote d’Ivoire, a former colony of France. For roughly half a century, many countries, former colonies of colonial powers such as Great Britain or France, have acceded to political independence. What about their economic independence? A measure of this economic independence could be found in today’s pattern of their international trade, exports as well as imports. This study, centered on Cote d’Ivoire (also known unofficially as Ivory Coast to some English-speaking people), a former colony of France, may shed some light on the rate of this evolution and the achievement or non-achievement of economic independence. In 1960, as a colony of France, Cote d’Ivoire took 65% of its imports from France and sent 67% of its exports to France. France had at that time a dominant position, the result of a century of effort to create and protect trade. Cote d’Ivoire was a main customer of France in terms of imports and a main supplier of France in terms of exports. Has France kept a dominant position in Cote d’Ivoire today, 43 years after independence? This is the type of question some people have definitely answered with “yes.” French companies are still very active in many formerly colonized countries and do a majority of their “international business” in their old colonies. The reasons are basically to be found in the cultural ties and traditions established during colonial rule.
The colonial language used for business and daily life, the educational system of the country, the financial connections with the outside world, the newspapers read, and the numerous expatriates staying in the country after independence are all acculturation factors which contribute to a paradoxical degree of dependence upon the previous colonizers on the part of many newly independent countries. Other people have different feelings. Because of historical events preceding independence, they believe that many formerly colonized countries would spurn companies from the colonial powers. If dependence may have existed for a short while, it did not last, a former colonizer losing very quickly its historically acquired economic advantages. France, a former colonial power, and Cote d’Ivoire, a former colony, have been selected as an interesting pair of trade partners, the independence of Cote d’Ivoire having been granted through a surprising political decision by France in mid-1960, and thus without any apparent reason for resentment from the colony’s people. One cannot deny some changes in their relationship.
How Corporate Sport Sponsorship Impacts Consumer Behavior
Dr. Kevin Mason, Arkansas Tech University, Russellville, AR
Corporate sport sponsorship is one of the many tools marketers have at their disposal to reach consumers and influence them to buy their products, and yet it is one of the least discussed forms of marketing communications in the marketing literature. A key to effective sponsorship is an understanding of how consumer attitudes are formed and changed. The purpose of this conceptual piece is to examine the relationship between sponsorship and attitudes. Attitudes are comprised of enduring cognitive (beliefs), affective (evaluative emotional attachments), and behavioral tendencies toward an object. As such, attitudes have a strong impact on consumer behavior. Attitudes can be changed by altering one or more of the three components. Sponsorship seems to affect the affective component of an attitude by creating a positive association between the consumer’s sport team and the company’s product. However, sponsorship can also affect the cognitive component by altering brand beliefs/perceptions. It should be noted, though, that leveraging activities are helpful when dealing with cognitive changes. Regardless, the ultimate goal of corporate sponsorship is to change the entire attitude, resulting in positive behaviors (e.g., shopping and purchases). Marketers strive to make positive connections with consumers via numerous “tools” such as advertising, public relations, promotional tie-ins, and sponsorship. At present, corporate sport sponsorship is becoming a very prominent marketing vehicle. Sponsorship occurs when a corporation funds a program (e.g., television or radio) or event whereby the sponsoring corporation has promotional material included in the program or event. Originally, advertising for radio and TV programs occurred in the form of corporate sponsorship (Harvey, 2001). Over the years, corporate sponsorship has grown to become a huge promotional tool.
For example, in the United Kingdom, sponsorship expenditures increased from 4 million dollars in 1970 to 107.5 million dollars in 1997. Likewise, sponsorship expenditures in the United States increased from 850 million dollars in 1985 to 8.7 billion dollars in 2000. In 1994, 4,500 companies spent around 4.2 billion dollars on sponsorship rights in North America, and 67 percent of the rights purchased were sport-related (McDaniel, 1999). Anheuser-Busch and Philip Morris are among the more active companies involved in corporate sponsorship, each spending in excess of 135 million dollars on sponsorship in 1998. In particular, corporate sporting event sponsorship has become increasingly popular. For example, Coca-Cola spent at least 650 million dollars on the Atlanta Olympic Games. MasterCard spent around 100 million dollars on the World Cup. North American corporations in 1999 invested 7.6 billion dollars in sponsorship, with 67 percent of the money going to sports (Meenaghan, 2001; Madrigal, 2000).
An Artificial Intelligence Stock Classification Model
Dr. Probir Roy, University of Missouri, Kansas City, MO
Using MATLAB's Perceptron model, this paper presents an attempt to train a neural network to distinguish between acceptable and unacceptable purchases of publicly traded stock. In the past, Perceptron models have been used, quite successfully, in similar classification exercises. The input vectors used in training the network and in making the classifications in our model involve readily available financial data such as the current ratio, quick ratio, gross margin percentage, sales/asset turnover, and earnings per share. The initial results of our analysis were quite encouraging insofar as the model had ninety percent prediction accuracy on held-back test data. On the basis of our initial success, we are currently trying to extend this model to a "forward-looking" investment decision process model. Neural networks are constructed from a large number of simple processing units (analogous to neurons in the brain) that are interconnected at various levels. The behavior of these networks emerges, iteratively, through parallel processing. In some more complicated networks this development of “intelligence” may involve massive parallel processing. A single-input neuron is shown in Figure 1. The scalar input p is multiplied by the scalar weight w to form wp. This term is sent to the summer, Σ, where it is coupled with a bias input b. The output from the summer, n, is referred to as the net input into a transfer or activation function f, which produces a scalar output a. Construction of neural networks involves the following tasks (LiMin Fu, 1994): determine network properties by defining the topology; determine node properties; and determine system dynamics. Network properties: The topology of a neural network refers to its framework as well as its interconnection scheme. The framework is specified by the number of layers and the number of nodes per layer. Typically, neural networks consist of two to three layers. All neural networks must have an input and an output layer.
Some have an intermediate or hidden layer. The input layer consists of nodes called input units. These units encode very simple or basic attribute values, for example attributes like the P/E ratio or asset turnover of the various publicly traded stocks in our study. The output layer consists of nodes called output units that encode basic output values. For example, in our study the output units were encoded 0-1 to represent unacceptable vs. acceptable purchases. The hidden layer contains nodes called hidden units. These units are neither directly observable nor can they be described in meaningful behavioral terms.
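The single-neuron computation described above, a = f(wp + b), extends directly to the classic perceptron learning rule that such classification models rely on. A minimal sketch follows, in Python rather than MATLAB, with a hard-limit transfer function and made-up ratio values standing in for the study's actual financial inputs (class 1 = acceptable purchase, class 0 = unacceptable):

```python
import numpy as np

def hardlim(n):
    """Hard-limit transfer function: 1 if the net input n >= 0, else 0."""
    return (n >= 0).astype(float)

def train_perceptron(X, t, epochs=100):
    """Perceptron rule: w <- w + (t - a) p and b <- b + (t - a),
    applied sample by sample until the weights stop changing."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for p, target in zip(X, t):
            a = hardlim(w @ p + b)   # a = f(w.p + b), the neuron's output
            e = target - a           # classification error for this sample
            w += e * p
            b += e
    return w, b

# Toy, linearly separable data: two financial ratios per "stock".
X = np.array([[2.0, 1.5], [1.8, 1.2], [0.5, 0.3], [0.4, 0.6]])
t = np.array([1.0, 1.0, 0.0, 0.0])
w, b = train_perceptron(X, t)
preds = hardlim(X @ w + b)
```

Because the toy classes are linearly separable, the perceptron convergence theorem guarantees the rule finds a separating weight vector in a finite number of passes; real financial data need not be separable, which is one reason hidden layers are introduced.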
Managing Stakeholders Interests in Turbulent Times: A Study of Corporate Governance of Canadian Firms
Dr. Peter A. Stanwick, Auburn University, Auburn, AL
Dr. Sarah D. Stanwick, Auburn University, Auburn, AL
The focus of this paper is to examine whether Canadian firms are following strong corporate governance programs during these turbulent times. A sample of 32 firms was taken from the largest publicly traded firms in Canada. The corporate governance disclosures of these firms were compared with the 14 guidelines required by the Toronto Stock Exchange. The results showed that the vast majority of firms followed the 14 guidelines. However, some industry groupings had higher compliance rates than others. In addition, fewer firms, overall and within each industry, went beyond the required standard of corporate governance compliance. Due to the turbulent nature of the global economic marketplace, the role of corporate governance has changed significantly over the past two decades. Although originally established as a legal requirement for incorporation, corporate governance has become a valuable connection between firms and various stakeholders (Vinten, 1998). Corporate governance is required to guarantee that the interests of both public and private sector organizations with a vested interest in the firm are satisfied. Corporate governance helps enhance the confidence level of all relevant stakeholders, including stockholders, customers, suppliers, employees, and the government. The major focal point for corporate governance has been and will continue to be the Board of Directors. If a company has implemented a strong corporate governance framework, it is able to enhance its competitive advantage. In addition, it is also able to formulate and implement more effective strategic decisions based on accurate and objective corporate information. The execution of a comprehensive corporate governance framework also gives shareholders a higher level of confidence in their investment decisions.
A comprehensive corporate governance framework can also be a useful management tool to supervise the overall check-and-balance system used to evaluate the operations of the firm. The responsibilities and duties of the Board of Directors are directly linked to the long-term survival of the firm. The Board is responsible for ensuring that its decisions not only enhance the overall value of the firm but also properly serve the needs of interested stakeholders. Previous research on the effectiveness of the Board of Directors has yielded mixed results. Some studies have shown the Board to be ineffective, a mere “rubber stamp” for the self-interests of the firm’s managers (Vance, 1983; Wolfson, 1984). However, more recent studies, including Stanwick and Stanwick (2002), have shown a direct positive relationship between a strong Board of Directors and the financial performance of the firm.
The Self-Fulfilling Prophecy of the Tenure/Promotion Policies at Business Colleges and Schools
Dr. Nessim Hanna, Roosevelt University, Schaumburg, Illinois
Dr. Ralph Haug, Roosevelt University, Schaumburg, Illinois
Dr. Alan Krabbenhoft, Roosevelt University, Schaumburg, Illinois
This paper is an empirical study of tenure and promotion policies at institutions of higher education that offer postsecondary and graduate degrees in business. The sample was drawn from business professors attending the Midwest Business Administration Association (MBAA) conference in Chicago in 2003. The findings suggest that the presence or absence of faculty research support systems at business schools and colleges is the cornerstone in determining a school’s prominence. The research also suggests that the degree of satisfaction or dissatisfaction that a faculty member has toward the promotion/tenure policy at his/her institution, as well as toward the institution itself, is highly correlated with the extent of the research support systems present at a school. This paper investigates the role of the tenure/promotion philosophies maintained by administrators in institutions of higher education, and the long-term impact of such philosophies on a school’s reputation as well as on the satisfaction of its faculty members. This research was undertaken with two objectives in mind. The first was to empirically demonstrate that the presence or absence of the faculty research support systems that administrators may implement at an institution of higher education is the cornerstone in determining a school’s prominence. Such administrative actions, or the lack of them, create one of two scenarios: either a team of satisfied faculty members who put forward a respectable publication record, leading the school to be perceived as a “research” institution, or, conversely, a group of dissatisfied faculty members who feel helpless and blame the system for their lack of progress. In this latter case, the scarcity of published work results mainly in a “teaching” school status.
These outcomes seem to suggest that the principle of the self-fulfilling prophecy is at work, where a school’s distinction and reputation are based on its administrative philosophy regarding the extent of faculty support provided. The second objective of this paper was to investigate how dissatisfaction, as an emotion, can affect faculty members who happen to work under minimal or no support conditions. The expected negative emotions impact not only a school’s academic environment but often translate into a number of negative actions initiated by dissatisfied faculty. The criteria for tenure/promotion in most schools of higher education center around an evaluation of faculty members on the basis of performance in three key areas of activity: teaching effectiveness; scholarly activity; and service to the university, their academic discipline, or the broader community.
The Trade-Off Between R&D and Marketing Spending for High-Technology Companies
Dr. Kenneth Ko, Pepperdine University, CA
Significant work has been done to show the importance of R&D spending to sales. In this paper, I further this discussion by focusing on how the trade-off spending decision between R&D and marketing should be made. I focus on high-technology companies, where R&D is of critical importance. I introduce a simple mathematical model which analyzes the impact that R&D and marketing spending have on sales. I present the strategy that, relatively speaking, a company should spend more than its competitors on R&D (as opposed to marketing). The model and the effectiveness of the strategy are demonstrated and verified through three case studies involving six high-technology companies: 1. Intel and AMD; 2. Cisco Systems and Nortel Networks; 3. Xilinx and Altera. R&D managers can use the mathematical model, strategy, and case studies to show the need for, and thus motivate, increased R&D spending within their companies. Every year, companies need to make important budgeting decisions that will affect future sales, and thus the future success of the company. These decisions are never easy to make and always involve trade-offs, because a dollar spent somewhere is a dollar not spent somewhere else. For high-technology companies, which are highly dependent on R&D, perhaps the key budgeting trade-off is between R&D and marketing. Of course, another key question is how much a company should allocate in total to R&D and marketing. In this paper, I examine the trade-off spending decision between R&D and marketing. I focus on high-technology companies, where R&D is of critical importance. There has been significant research demonstrating the link between R&D and sales. Morbey (1) conducted a study showing that, in general, successful companies have a higher R&D intensity than their competitors.
(Bean and Russo (2) define R&D intensity as “the ratio of R&D expenditures to sales revenue over the same period expressed as a percent.”) Brenner and Rushton (3) showed a positive correlation between R&D spending and sales growth. Gilman and Miller (4) performed a research study which showed a positive correlation between R&D spending and a firm’s sales, and also between R&D spending and a firm’s price/earnings (P/E) ratio (which reflects the opinions of analysts on the future prospects of companies). Branch (5) discusses the importance of R&D in increasing total profits. Leonard (6) wrote about how R&D spending relates significantly to the growth rates of sales, assets, and net income. From the literature, it is clear that R&D has a positive influence on sales. Of course, marketing has a positive influence as well. So the question remains how high-technology companies should make the spending trade-off decision between R&D and marketing. To help answer this question, I have developed a simple mathematical model, the R&D-Marketing Spending Model.
The Grey Relational Evaluation of the Manufacturing Value Chain
Dr. Yuan-Jye Tseng, Yuan Ze University, Taiwan
Yu-Hua Lin, Yuan Ze University & Hsiuping Institute of Technology, Taiwan
As the market changes more rapidly, a corporation is required to speed the rollout of new products within the shortest time to capture market share. To meet the demand for diverse, small-quantity production, a corporation needs to spend much effort on creating a collaborative commerce model of the manufacturing value chain to cut product design time as well as to enhance production capability and cost competitiveness. This study, which is designed to address the decision-making problems of production chain value and the integrated deployment of manufacturing resources, can be divided into three stages. In the first stage, the corporation screens suppliers willing to join for a preliminary assessment of production capacity and technology. In the second, it has several samples test-produced and sets inspection items for the retrieved samples to acquire evaluation data. In the third stage, the corporation selects beneficial suppliers, based on the analysis of the evaluation data using grey relational analysis, to build an efficient manufacturing value chain. In the final section of the article, we present a case study to explain the operating procedures of the evaluation model. The study finds that the model is effective for supplier management economic analysis and is suitable for creating a collaborative commerce model based on the manufacturing value chain. Under the pressure of shortened product life cycles and a constantly changing market, a corporation has to speed the rollout of new products within limited time. For this reason, a corporation may take every possible measure to meet customer needs. In the past, the evolution of business competition was productivity-driven, underscoring process improvement and lower production costs; product competence was developed based on an operating philosophy of limited resources, and suppliers highlighted their own manufacturing resource management.
As times change, so does this concept, and a new concept of ‘self-advancement’ replaces the previous one. What the conception of ‘resource limits’ emphasizes is that R&D capacity, production capacity, and sales channels should be created on one’s own. This approach is supposed to be free from external control and constraint; however, it lacks flexibility. ‘Self-advancement’ puts great stress on market orientation, customer orientation, and value orientation. It prioritizes customer requirement characteristics and builds up productivity through the fulfillment of customers’ genuine value needs. In this case, vendors’ value management has become a primary focus of today’s production capability. As operating concepts and theories based on ‘supplier value’ are found everywhere, a number of scholars indicate that supply management will be a source of competitive advantage (Cavinato, 1992). In recent years, the production model has been transformed from standard mass production into diverse, small-quantity, customized production to meet various customer demands. To rectify the defects of conventional production resource deployment, many specialists and scholars have proposed the collaborative production chain.
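The grey relational analysis used in the third-stage supplier selection ranks alternatives by their closeness to an ideal reference series. A minimal sketch under common defaults (distinguishing coefficient ρ = 0.5, all criteria treated as larger-is-better); the supplier scores and criteria are invented for illustration, not taken from the paper's case study:

```python
import numpy as np

def grey_relational_grades(data, rho=0.5):
    """data: rows = alternatives (suppliers), columns = criteria.
    Returns one grey relational grade per alternative."""
    # Normalize each criterion to [0, 1], larger-is-better.
    x = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))
    ref = x.max(axis=0)                 # ideal reference series
    delta = np.abs(ref - x)             # deviation sequences
    dmin, dmax = delta.min(), delta.max()
    # Grey relational coefficient for every (alternative, criterion) pair.
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean(axis=1)           # grade = mean coefficient per row

# Illustrative inspection scores for three candidate suppliers on four
# criteria (e.g. precision, yield, delivery, cost score).
scores = np.array([[80.0, 90.0, 70.0, 85.0],
                   [75.0, 85.0, 90.0, 80.0],
                   [60.0, 70.0, 65.0, 70.0]])
grades = grey_relational_grades(scores)
best = int(np.argmax(grades))  # supplier closest to the ideal series
```

The supplier with the highest grade is the one whose inspection profile tracks the ideal reference most closely, which is the selection criterion the third stage applies before building the manufacturing value chain.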
Negotiation Process Improvement Between Two Parties: A Dynamic Conflict Analysis
Ching-Chow Yang, Chung Yuan Christian University, Taiwan, R.O.C.
Mo-Chung Tien, Chung Yuan Christian University, Taiwan, R.O.C.
The conflict process is a dynamic phenomenon. Negotiation is used to solve conflicts between parties; it is a “solution process,” and reaching an optimal agreement is the ultimate goal of negotiation between two parties. A rational conflict resolution can be achieved if the decision maker understands the probable evolution of the conflict situation under a given option. In this paper, a dynamic conflict analysis (DCA) approach between two parties was developed and used to analyze two cases. The analysis reveals that the players can grasp the trajectory of the option changes between the two parties by observing the negotiation process, and can quickly learn the possible outcome of different options before or during the negotiation, adjusting their options in time if an optimal agreement cannot be reached, because the process can be simulated repeatedly. Negotiation is used to solve disputes or conflicts between parties, and it is a solution process. Conflict takes many forms: armed conflict that may break out between two countries, political disputes between two parties, disputes between labor and capital, conflicts between natural resource exploitation and environmental protection, and business problems such as patent conflicts and trade agreements between two companies or two countries, which are also an important domain of conflict in the global competitive environment. Any sort of problem, such as competition, antagonism, or differing opinions, is likely to bring the crisis of conflict. Therefore, reconciling the conflict between the two parties is essential in order to achieve an agreement. Global interdependence has increased the number of necessary transactions between governments (as in the WTO), enterprises (as in international joint ventures), and international organizations. Negotiations form a substantial part of organizational activities.
Negotiation is a solution process between two parties that lacks fixed rules, conventions, or rational methods. Negotiation is a process, not an event. Conflicts are to be resisted and avoided if possible, because the negotiation process is itself a potential source of conflicts. Negotiation is used to resolve conflicts between parties in a dispute (Lewicki, 1985). Negotiation is an interactive, competitive decision-making process in which a balance must be struck between the two parties, whatever the type of negotiation. Although the parties are opposed to each other because each hopes to maximize its own benefit (whether the relationship is antagonistic or cooperative), their objective is to reach an agreement, a compromise, that will be implemented rather than aborted or avoided. Negotiation research has a long history of identifying factors that determine the negotiated outcome (Pruitt, 1993; Thompson, 1998).
Spirituality in the Workplace: Developing an Integral Model and a Comprehensive Definition
Dr. Joan Marques, Woodbury University, Burbank, CA
Dr. Satinder Dhiman, Woodbury University, Burbank, CA
Dr. Richard King, Woodbury University, Burbank, CA
A new awareness has been stirring in workers’ souls for at least 10 years now: a longing for a more humanistic work environment, increased simplicity, more meaning, and a connection to something higher. There are many reasons for this mounting call, varying from the escalating downsizing and layoffs, reengineering, and corporate greed of the 1980s to the enhanced curiosity about Eastern philosophies, the aging of the baby boomers, the greater influx of women in the workplace, and the shrinking global work village. Across the varying opinions about what spirituality at work really entails, there appears to be a set of common themes that almost all sources agree upon. This paper presents a list of these themes; a comprehensive definition and an integral model of spirituality in the workplace, for consideration by future researchers in this field; and some practical strategies for corporate leaders interested in nurturing the spiritual mindset. This paper presents a brief exploration of a new paradigm that is emerging in business: spirituality in the workplace. This new awareness has been stirring in workers’ souls for at least 10 years now: a longing for a more humanistic work environment, increased simplicity, more meaning, and a connection to something higher. Although opinions differ about what spirituality at work really entails, there appears to be a set of common themes that almost all sources agree upon. After surveying the current literature in search of these common themes, this paper presents a comprehensive definition and an integral model of spirituality in the workplace, as well as some practical strategies for corporate leaders interested in nurturing the spiritual mindset. Too many people feel unappreciated and insecure in their jobs. According to Morris (1997, p. 
7), “Overall job satisfaction and corporate morale in most places is at an all-time low.” Many re-engineering gurus have come to realize that, in their bid to make processes more efficient, they forgot the most essential element of the equation: the people. According to a recent survey of more than 800 mid-career executives, unhappiness and dissatisfaction with work is at a 40-year high. Four out of ten of those interviewed hated what they do, double the proportion surveyed four decades ago (cited in Barrett, 2004, p. 1). Indeed, the emerging paradigm called “spirituality in the workplace” is conveyed in multiple ways. While Schrage (2000) finds that “A fundamental tension between rational goals and spiritual fulfillment now haunts workplaces around the world” (p. 306), and that “Survey after management survey affirms that a majority want to find ‘meaning’ in their work” (p. 306), Oldenburg and Bandsuch (1997) state that for quite some time now, something has been stirring in people’s souls: a longing for deeper meaning, deeper connection, greater simplicity, a connection to something higher.
The Case Analysis on Failures of Enterprise Internal Control in Mainland China
Ta-Ming Liu, Hsing Wu College, Taiwan
The establishment of a well-designed internal control mechanism has become a legislative requirement for enterprises and has won widespread support and participation from enterprises around the world. A proper and complete internal control mechanism not only ensures that asset value is added and retained and boosts an enterprise’s economic efficiency, but also helps it achieve its strategic goals. In Mainland China, however, the reality of enterprise internal control does not measure up to this standard. Most enterprises in China have not yet built an effective internal control mechanism, and some do not have one at all. The lack of internal control eventually leads to the failure of some of these enterprises. Case analysis of enterprises in Mainland China reveals a worsening situation in which fiscal misconduct, fraudulent financial reporting, illegal activity, and law-breaking behavior are far too common in Mainland China’s enterprises and business sectors. Thus, how to establish and implement internal control in enterprises, and how to boost the efficiency and effectiveness of its operation, are subjects that require further, detailed study. This paper uses Asia Enterprise as an example to discuss the reasons for internal control failures and proposes suggestions for correcting them. The concept of enterprise internal control in Mainland China is still at an embryonic stage. Although the government has established regulations for enterprise internal control, feigned compliance is still all too common inside Chinese enterprises. Knowing how to carry out enterprise internal control therefore demands immediate attention, since establishing effective and exact standards of internal control improves the quality of economic activity and management efficiency. 
The task is particularly urgent as China faces the challenge of becoming a member of the World Trade Organization. The author of this paper has had the honor of being acquainted with Chairman Hu of the Finance and Account Information Technical Research Institute in Shanghai and some related colleagues. In addition to the author’s personal interest in the subject of enterprise internal control in Mainland China, support and encouragement from these Chinese experts and scholars have also contributed to this study. The objective of this research is to encourage the sharing of experiences on the subject of fiscal internal control from both sides of the Taiwan Strait. There are great differences in fiscal internal control and managerial requirements between the business sectors of Mainland China and Taiwan. As business collaboration across the strait grows ever closer, probing how to avoid conflicts over the concept of internal control presents an important facet for future studies analyzing enterprise internal control in both China and Taiwan.
Five Competitive Forces in China’s Automobile Industry
Zhao Min, University of Paris I Panthéon-Sorbonne, France
China’s automobile market has posted very rapid growth in recent years, and it was the third biggest automobile market in the world in 2003. Because China’s large market draws many foreign automobile players, how to succeed in the competition in China is an essential question for multinational enterprises (MNEs). This paper attempts to define the conditions of competition for MNEs in China through Porter’s industrial competitive framework, and to demonstrate how those conditions influence MNE strategy and competitive position. In particular, this paper provides a comparison of the competitive positions of American, European, and Japanese automobile multinationals in China. In the past ten years, the production of motor vehicles in China has seen an average annual growth rate of 15 percent, compared with a world average of 1.5 percent over the same period. China produced 4.4 million vehicles in 2003 (OIAC 2004), a growth of 35 percent from 2002, becoming the fourth biggest vehicle manufacturer in the world, just after the United States, Japan, and Germany. Rising consumer wealth has been a major contributor to the sudden explosion in the car market. According to the World Markets Research Centre, Chinese consumers’ purchasing power has risen to $5,500, which has historically been the level at which car consumption takes off in other markets. With 4.5 million vehicles sold in 2003, China is now the fourth biggest automobile market in the world (WMRC 2004). Several institutes argue that China’s vehicle market is set to almost double by 2008, challenging Japan for its position as the world’s second-largest auto market. China’s big market draws many foreign automobile actors. Almost all of the world’s top automobile assemblers and suppliers have invested in China, with Volkswagen, PSA, General Motors, Delphi, Visteon, Valeo, and MAN as early entrants, and Honda, Toyota, Nissan, Hyundai, and Denso coming in later. The competition is becoming increasingly fierce. 
With all the world’s leading global automakers ramping up production in a bid to dominate the local market, tensions have begun to mount among foreign automobile enterprises. In this context, it is important to understand the environment of the Chinese automobile industry with a view to establishing an appropriate strategy for automobile MNEs to achieve success in China. We analyze China’s automobile industry through Porter’s industrial competitive framework because it not only offers insights into the environment of the automobile industry, but also bears on MNE strategy and competitive position. We start by presenting this theoretical framework (section 2).
Trademark Value and Accounting Performance: Analysis from Corporate Life Cycle
Dr. C. L. Chin, National Chengchi University, Taiwan
Dr. S. M. Tsao, National Central University, Taiwan
H. Y. Chi, National Central University, Taiwan
The objective of this study is to examine whether the association between trademarks and a firm’s performance is a function of its lifecycle stage. Following Anthony and Ramesh (1992), this paper classifies firm-years into lifecycle portfolios using the dividend payout ratio, sales growth, capital expenditure, and firm age. Consistent with our prediction, the results document a monotonic decline in the response coefficients of trademarks from the growth to the stagnant stage. This paper also uses the Seethamraju (2000) model to estimate trademark value and finds that estimated trademark values likewise decrease monotonically from the early to the later lifecycle stages. Understanding the value of intangible assets has become increasingly important in the “new economy.” The accounting literature on intangible assets focuses primarily on the valuation, value relevance, and recognition of intangible assets in financial statements. Under generally accepted accounting principles (hereafter GAAP), most intangible assets, although substantial economic assets, are typically not recognized as accounting assets in the financial statements. However, a growing body of literature documents that non-financial information about intangible assets, such as R&D (Lev and Sougiannis, 1996; Chan and Lakonishok, 2001), advertising expenses (Landes and Rosenfield, 1994), patents (Griliches, Pakes and Hall, 1987), customer satisfaction (Ittner and Larcker, 1998), and brands (Barth et al., 1998; Kallapur and Kwan, 2004), plays a significant role in determining a firm’s performance or value. This paper examines a previously little-explored source of intangible assets: trademarks. To strengthen their competitive force and increase market share, enterprises make every effort to leave an irreplaceable image in the minds of consumers. Trademarks are therefore not only an important factor affecting corporate value, but also a key determinant of whether a business can succeed. 
This study examines whether the association between trademarks and firm performance is a function of corporate lifecycle stage. Specifically, we expect a monotonic decline in the effect of trademarks on a firm’s performance from the growth to the stagnant stage. Conceptually, a trademark is used to identify the source of a product (or service) and to distinguish that product (or service) from those deriving from other sources.
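The Anthony and Ramesh (1992)-style classification described above can be sketched as follows. This is a minimal illustration, not the paper’s exact procedure: the scoring rule, the cutoffs, and the median values are assumptions made for the example.

```python
# Classify firm-years into lifecycle stages by comparing four descriptors
# to sample medians, in the spirit of Anthony and Ramesh (1992).
# Scoring rules and cutoffs here are illustrative assumptions.

def lifecycle_score(payout, sales_growth, capex_ratio, age, medians):
    """Count how many descriptors point toward the growth stage."""
    score = 0
    score += payout < medians["payout"]              # growth firms pay out less
    score += sales_growth > medians["sales_growth"]  # grow sales faster
    score += capex_ratio > medians["capex_ratio"]    # invest more heavily
    score += age < medians["age"]                    # and are younger
    return score

def classify(score):
    # 3-4 growth signals -> growth; 2 -> mature; 0-1 -> stagnant
    if score >= 3:
        return "growth"
    if score == 2:
        return "mature"
    return "stagnant"

# Hypothetical sample medians and one hypothetical firm-year
medians = {"payout": 0.30, "sales_growth": 0.10, "capex_ratio": 0.08, "age": 20}
s = lifecycle_score(payout=0.05, sales_growth=0.35, capex_ratio=0.15, age=6,
                    medians=medians)
stage = classify(s)
```

In the study’s design, the trademark response coefficient would then be estimated separately within each stage portfolio and compared across stages.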
The Impact of Floating Thai Baht on Export Volumes: A Case Study of Major Industries in Thailand
Dr. Lugkana Worasinchai, Bangkok University, Thailand
The purpose of this study is to examine the impact of exchange rate movements on the export volumes of Thailand’s major industries, namely the jewelry, textile, automotive, food, and software industries, under the new exchange rate regime. The 1997 financial crisis in the Asian region and the near halt of the export sector were partially caused by the change in the exchange rate system, which adversely affected Thailand’s competitiveness during the transition period. An econometric time series model is used to analyze monthly time series data. The sample covers July 1997, the beginning of the managed float regime in Thailand, through April 2004. The results suggest that the exchange rate and the export volume of the software industry move in the same direction, implying that a depreciation (appreciation) of the Thai baht will increase (decrease) the export volume of the software industry. However, the results indicate that changes in the exchange rate do not affect the export volumes of the jewelry, textile, automotive, and food industries. The face of today’s international trade environment differs greatly from that of the recent past. Globalization has brought with it fierce competition from international rivals. The world’s aggregate export volume has grown enormously, from US$20,000 million in 1913 to US$154,000 million in 1963, US$6,473,000 million in 1997, and US$8,000,000 million in 2003. Comparing the world’s total trade volume to the growth of the world’s total productivity, total trade volume grew at 1.5 times the rate of total productivity. This figure indicates that the increased productivity of each country was used to serve increased market demand, which varied greatly from country to country, or even within the same country, in terms of the value attached to product quality and the production process. 
Consequently, another vital competitive factor determining the business sector’s success is the ability to respond to these varying consumer needs in each country (Arize, 1995). An emerging trend among the world’s major economic regions is the consolidation of regional economies. An example of such regional economic consolidation is the establishment of the European Union, which has evolved into a final stage of development by adopting a single currency, the euro.
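The direction-of-effect test described in the abstract above can be illustrated with a minimal ordinary least squares sketch. The series below are synthetic placeholders, not the study’s data, and this simple bivariate regression stands in for the paper’s full econometric time series model.

```python
# Regress the monthly change in log export volume on the change in the
# log THB/USD rate; a positive slope means depreciation raises exports.
def ols_slope(x, y):
    """Slope of y on x by ordinary least squares (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Synthetic monthly log changes: baht depreciation (positive d log rate)
# paired with rising software export volume, as the paper reports
dlog_fx     = [0.02, -0.01, 0.03, 0.00, -0.02, 0.04, 0.01, -0.03]
dlog_export = [0.05, -0.02, 0.06, 0.01, -0.03, 0.08, 0.02, -0.05]

beta = ols_slope(dlog_fx, dlog_export)
# beta > 0: depreciation is associated with higher software exports
```

For the industries where the paper finds no effect, the estimated slope would be statistically indistinguishable from zero rather than positive.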
The Dynamic Relationship and Pricing of Stocks and Exchange Rates: Empirical Evidence from Asian Emerging Markets
Dr. Shuh-Chyi Doong, National Chung Hsing University, Taiwan
Dr. Sheng-Yung Yang, National Chung Hsing University, Taiwan
Dr. Alan T. Wang, National Cheng Kung University, Taiwan
This paper examines the dynamic relationship and pricing between stocks and exchange rates for six Asian emerging financial markets. We find that stock prices and exchange rates are not cointegrated. Using Granger causality tests, bi-directional causality is detected in Indonesia, Korea, Malaysia, and Thailand. Except in Thailand, stock returns exhibit a significantly negative relation with the contemporaneous change in the exchange rate, implying that currency depreciation is accompanied by a fall in stock prices. The conditional variance-covariance process of changes in stock prices and exchange rates is time-varying. The results are crucial for international portfolio management in the Asian emerging markets and are also of particular importance for testing international pricing theories, as misspecifications could lead to false conclusions. Given the increasing trend toward globalization in financial markets, a substantial amount of research has been devoted to investigating the correlation of stock returns across international markets. Eun and Shim (1989), Hamao et al. (1990), and Bekaert and Harvey (1996), among others, investigate the dynamics of international stock movements and find significant cross-market interactions. These empirical findings are of interest for two reasons. First, portfolio theory suggests that if stock returns between markets are negatively correlated, investors should be able to reduce their risk through international diversification; if countries’ stock returns covary positively, it is possible to use the information in one market to predict the movement in the other. Second, the Asian emerging markets have undergone policy and regulation changes in recent years to facilitate cross-border investing. The expected return from investment in foreign stocks is determined by changes in the local stock price and the currency value. 
If the effect of exchange risk does not vanish in well-diversified portfolios, exposure to this risk should command a risk premium. Therefore, the interaction between currency value and stock price is an important determinant of global investment returns. This paper focuses on the dynamic relationship and pricing between stocks and exchange rates in Asian emerging markets. We first test for cointegration and causality between the two variables. We then apply a bivariate GARCH-M model to investigate these relationships with a time-varying covariance matrix.
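The Granger causality step described above can be sketched from first principles. This is a one-lag illustration with synthetic series, not the paper’s specification: the F-statistic compares the fit of a model using only a variable’s own lag against one that adds the other variable’s lag, and a large value means the second series helps predict the first.

```python
# Minimal one-lag Granger causality test: does adding the lagged
# exchange-rate change help explain the stock return beyond its own lag?
def ols_ssr(X, y):
    """Sum of squared residuals of y on the columns of X (normal equations)."""
    k, n = len(X[0]), len(y)
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    # Gaussian elimination with partial pivoting
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for q in range(c, k):
                A[r][q] -= f * A[c][q]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):
        beta[c] = (b[c] - sum(A[c][q] * beta[q] for q in range(c + 1, k))) / A[c][c]
    return sum((y[i] - sum(X[i][q] * beta[q] for q in range(k))) ** 2
               for i in range(n))

def granger_f(x, y):
    """F-statistic for 'y Granger-causes x' with one lag (1 restriction)."""
    n = len(x) - 1                                   # usable observations
    target = [x[t] for t in range(1, len(x))]
    Xr = [[1.0, x[t - 1]] for t in range(1, len(x))]             # restricted
    Xu = [[1.0, x[t - 1], y[t - 1]] for t in range(1, len(x))]   # unrestricted
    ssr_r, ssr_u = ols_ssr(Xr, target), ols_ssr(Xu, target)
    return (ssr_r - ssr_u) / (ssr_u / (n - 3))

# Synthetic series where x is driven by lagged y, so y Granger-causes x
y = [0.10, -0.20, 0.30, 0.00, 0.20, -0.10, 0.15, -0.05, 0.25, 0.10, -0.15, 0.05]
x = [0.0]
for t in range(1, len(y)):
    x.append(0.8 * y[t - 1] + 0.01 * (-1) ** t)

f_stat = granger_f(x, y)   # large F: lagged y clearly helps predict x
```

The paper’s bi-directional results correspond to large F-statistics in both directions; the GARCH-M stage then models the time-varying covariance that this static test ignores.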
The Effects of Repatriates’ Overseas Assignment Experiences on Turnover Intentions
Dr. Ching-Hsiang Liu, National Formosa University, Taiwan
In today’s global marketplace, it is critical for Taiwanese multinational corporations to remain competitive in the area of international human resource management. This study was undertaken to determine whether repatriates’ overseas assignment experiences affect Taiwanese repatriates’ intentions to leave their organizations. Overseas assignment experience comprises the number of a repatriate’s overseas assignments, the length of the most recent overseas assignment, the time since return from the overseas assignment, the host country of the most recent assignment, and whether family accompanied the repatriate during the overseas assignment. Building on repatriation adjustment and turnover theory and research, this study extends recent findings to Taiwanese repatriates. The results indicate that the number of overseas assignments, the time since return from the overseas assignment, and family accompaniment during the overseas assignment are significantly related to repatriates’ intent to leave the organization. The length and the host country of the most recent overseas assignment are not related to intent to leave. The implications of these results for international corporations are discussed in detail. This study may help multinational organizations in Taiwan to enhance the international assignment process for their employees and keep valuable human capital within the organization. Improvements in information and communications technology have facilitated the globalization of markets and industries. Moreover, regional free trade agreements among nations have contributed to an increase in foreign direct investment in member countries. In today’s global marketplace, it is critical for Taiwanese multinational corporations to remain competitive in the area of international human resource management. 
To ensure expatriates can support a company’s expansion through both technical expertise and cultural understanding, it is not surprising to find that many organizations attempt to provide expatriates with support, programs, and skills to help them be productive and effective in their foreign assignments (Joinson, 1998). While much attention is given to the process of expatriation, much less attention is given to repatriation, the final link to the completion of the international assignment (Bonache, Brewster, & Suutari, 2001; Brewster & Scullion, 1997; Riusala & Suutari, 2000).
Prepare for E-Generation: The Fundamental Computer Skills of Accountants in Taiwan
Dr. Yu-fen Chen, National Changhua University of Education, Changhua, Taiwan
The study explores the fundamental computer skills required of accountants in the E-generation in Taiwan and examines the proficiency levels in these skills that accountants currently possess, to serve as a reference and source of suggestions for education authorities, schools, faculties, and curriculum planners. Literature review, expert meetings, and questionnaire surveys were used to gather data on the fundamental computer skills of accountants and the proficiency levels that accountants in Taiwan possess. Data collected from the questionnaire surveys were analyzed with statistical methods including frequency distributions, t-tests, one-way ANOVA, and Scheffé’s method. The study developed a research framework of 3 major categories and 17 subcategories comprising the fundamental computer skills of accountants for the E-generation in Taiwan. The progress of information technology has not only changed modes of business management; new graduates entering the job market are now invariably required to possess rudimentary computer skills. The Ministry of Education has directed that training intermediate professional technicians should be a key objective of all vocational junior colleges. In 2000, the Chinese Computer Skills Foundation indicated that junior college graduates are required not only to be fluent in conventional accounting knowledge but also to be proficient in computer skills such as operating systems, system software, word processing software, spreadsheet software, packaged commercial accounting software, database software, graphics software, presentation software, multimedia software, Internet software, and so forth. 
A 2002 survey also found that the top four technology skills for new accounting hires to possess, in order of importance, were spreadsheet software (e.g., Excel), Windows, word-processing software (e.g., Word), and the World Wide Web. In an effort to strengthen the vocational education system, the education authorities have introduced a series of educational reforms, including upgrading excellent vocational junior colleges to the full-fledged status of institutes of technology and establishing and implementing a certification system.
The Comparative Study of Information Competencies Using Bloom’s Taxonomy
Jui-Hung Ven, China Institute of Technology, Taiwan, R.O.C.
Chien-Pen Chuang, National Taiwan Normal University, Taiwan, R.O.C.
After collecting the professional competency requirements for information occupations from America, Australia, and Taiwan, we manually extract from the competency statements the action verbs that describe the competencies. All extracted action verbs are then classified into six categories based on Bloom’s cognitive taxonomy. Next, we compare the competency requirements for information occupations across the three countries. We also create a classification lexicon for action verbs based on Bloom’s six cognitive categories. The competency requirements have similar distributions across the six cognitive categories, reflecting the fact that the same information competencies are required in all three countries. The most needed information competency belongs to the synthesis category, with an average percentage of 45%. The next most needed competencies are application and analysis, at around 20% each. The knowledge and comprehension levels together account for only about 5%. Competencies, synonymous with abilities, are the state or quality of being able to perform tasks. Competencies are therefore observable or measurable knowledge, skills, and attitudes (KSA) (Rychen & Salganik, 2003). Knowledge and skills give a person the ability to perform tasks, while attitudes give a person the desire to perform them. Many countries have developed occupational information systems so that people can understand the competency requirements of each occupation, such as O*NET OnLine (http://online.onetcenter.org/) developed by the U.S. Department of Labor, the National Training Information Systems (NTIS) (http://www.ntis.gov.au) developed by the Australian National Training Authority (ANTA), and the Occupation Information Systems (OIS) (http://www2.evta.gov.tw/odict/srch.htm) developed by the Bureau of Employment and Vocational Training of Taiwan. 
Since competencies are abilities to perform tasks, they can usually be grouped into two categories: (1) generic competencies and (2) professional competencies (European Training Foundation, 2000; Kearns, 2001). The former are general, domain-independent competencies such as listening, speaking, reading, writing, and problem solving, which are needed in every workplace. The latter are more specific in terms of knowledge and skills, and are domain dependent. Competency statements are used not only to describe competency requirements in skill standards systems (Aurora University, 2003; Mansfield & Mitchell, 1996), but also to describe basic academic skills, teaching objectives, assessment criteria, and learning outcomes in school systems (Ruhland & Brewer, 2001), as well as personal profiles, curricula vitae, career plans, and job recruitment advertisements in job market systems (Michelin Career Center, 2004).
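The verb-lexicon classification described in the preceding abstract can be sketched as follows. The tiny lexicon and the sample statements are illustrative assumptions; the study’s actual lexicon covers far more verbs and real competency statements from the three countries.

```python
# Map the leading action verb of each competency statement to one of
# Bloom's six cognitive categories, then tally the distribution.
from collections import Counter

# Illustrative lexicon: verb -> Bloom category (a real one is much larger)
BLOOM_LEXICON = {
    "define": "knowledge", "list": "knowledge",
    "explain": "comprehension", "describe": "comprehension",
    "apply": "application", "configure": "application",
    "analyze": "analysis", "troubleshoot": "analysis",
    "design": "synthesis", "develop": "synthesis", "build": "synthesis",
    "evaluate": "evaluation", "assess": "evaluation",
}

def classify_statements(statements):
    """Tag each statement by its leading verb and return percentages."""
    counts = Counter()
    for s in statements:
        verb = s.split()[0].lower()
        counts[BLOOM_LEXICON.get(verb, "unclassified")] += 1
    total = sum(counts.values())
    return {cat: 100.0 * n / total for cat, n in counts.items()}

# Hypothetical competency statements
statements = [
    "Design a relational database schema",
    "Develop a web application",
    "Troubleshoot network connectivity problems",
    "Explain the role of an operating system",
]
dist = classify_statements(statements)
```

Comparing such percentage distributions across the American, Australian, and Taiwanese requirement sets is what yields the paper’s finding that synthesis dominates at around 45%.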
Expected Default Probability, Credit Spreads and Distance-to-Default
Dr. Heng-Chih Chou, Ming-Chuan University, Taiwan
This article analyzes the information content of the distance-to-default regarding a firm’s default risk. Under Merton’s (1974) option pricing model, we examine both the relation between a firm’s expected default probability and its distance-to-default and the relation between credit spreads and distance-to-default. We demonstrate that both the expected default probability and the credit spread can be expressed as analytical functions of the distance-to-default. This means that one can easily infer a firm’s expected default probability and credit spread from the value of its distance-to-default. The distance-to-default metric, proposed by KMV Corporation, is relatively new but is one of the most promising tools for modeling default risk. The distance-to-default measures how many standard deviations a firm’s asset value is away from its debt obligations. A higher distance-to-default means that the firm’s asset value is further from its expected default point, and thus its expected default probability is lower; a lower distance-to-default implies that the firm’s asset value is close to its default point, and thus its expected default probability is higher. However, like a credit rating, the distance-to-default does not directly tell us what the expected default probability and credit spread are. To extend this risk metric to a cardinal or probability measure, one alternative is to use historical default experience to determine an expected default frequency as a non-linear function of the distance-to-default. According to KMV’s approach, based on its huge default database, a firm whose asset value is 7 standard deviations away from its debt obligation has a 0.05% chance of defaulting over the next year (Crosbie, 2003). Thus, the predictive power of the distance-to-default rests on the assumption that past default experience is a good predictor of the future default rate. 
It is clear, however, that the distance-to-default contains information about a firm’s expected default probability. Unlike KMV’s regression approach, and without its database of default experience, in this article we connect the distance-to-default with the firm’s expected default probability under Merton’s (1974) model. With the same approach we also derive the relation between the distance-to-default and the credit spread. The credit spread is the difference between the yield of a risky debt and that of a risk-free debt of similar maturity; it measures the risk premium of the risky debt and is thus also regarded as a credit risk metric. The default point depends on the chosen default-triggering condition; the simplest choice is the amount of debt. The distance-to-default is a normalized measure and may therefore be used to compare one company with another.
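The analytical links described above can be sketched directly under Merton’s (1974) model. This is an illustrative implementation under standard textbook assumptions (lognormal asset value, a single zero-coupon debt issue of face value F maturing at T, F taken as the default point), not the article’s exact derivation; all parameter values are hypothetical.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function (no external libraries)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def distance_to_default(V, F, mu, sigma, T=1.0):
    # Number of asset-volatility standard deviations by which the (log)
    # expected asset value exceeds the default point F at horizon T
    return (math.log(V / F) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))

def expected_default_probability(V, F, mu, sigma, T=1.0):
    # In Merton's model the firm defaults if assets end below F at T,
    # so the default probability is an analytical function of DD
    return norm_cdf(-distance_to_default(V, F, mu, sigma, T))

def merton_credit_spread(V, F, r, sigma, T=1.0):
    # Risky zero-coupon debt value: V*N(-d1) + F*e^{-rT}*N(d2);
    # the spread is the debt's implied yield minus the risk-free rate
    d1 = (math.log(V / F) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    debt_value = V * norm_cdf(-d1) + F * math.exp(-r * T) * norm_cdf(d2)
    return -math.log(debt_value / F) / T - r

# Hypothetical firm: assets 100, debt face value 70, asset drift 10%,
# asset volatility 30%, risk-free rate 5%, one-year horizon
dd = distance_to_default(V=100, F=70, mu=0.10, sigma=0.30)
pd_ = expected_default_probability(V=100, F=70, mu=0.10, sigma=0.30)
spread = merton_credit_spread(V=100, F=70, r=0.05, sigma=0.30)
```

Raising leverage (a larger F) lowers the distance-to-default and raises both the default probability and the spread, which is exactly the monotone relation the article exploits.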
Yearning for a More Spiritual Workplace
Dr. Joan Marques, Woodbury University, CA
Spirituality in the workplace is a term that, for some, has merely meant yet another buzzword in the business environment, but that, fortunately, for an increasing number of business executives and workers at various levels, is emerging as a serious trend that can no longer be pushed aside with an annoyed shrug or rejected with the cry that it is just another disguise for bringing religious practices into work environments. This paper reviews three main insights that have arisen since spirituality in the workplace became such an extensively discussed topic, and subsequently elaborates on some major advantages of applying this mindset versus some major disadvantages of refraining from doing so. The paper finally examines one of the main reasons that today’s corporate workplaces remain unspiritual. “Treat people as if they were what they ought to be and you help them to become what they are capable of being.” Johann Wolfgang von Goethe (1749-1832) Spirituality in the workplace is a term that, for some, has merely meant yet another buzzword in the business environment, but that, fortunately, for an increasing number of business executives and workers at various levels, is emerging as a serious trend that can no longer be pushed aside with an annoyed shrug or rejected with the cry that it is just another disguise for bringing religious practices into work environments. The many publications and presentations on this topic have, by now, educated the reading world enough about the difference between spirituality and religion, and at the same time have brought about some significant realizations within the minds of members of Corporate America. 
The first and least complicated insight that the American workforce has picked up on is the acknowledgement that something is wrong with the majority of our work environments: more and more people want to feel comfortable and important in their workplace, and do not want to be considered yet another name tag with yet another set of functions to fulfill. Workers want to be recognized for who they are: people, with families, ups and downs, skills and talents, and diverging, and oftentimes very useful, perspectives on matters. The second and slightly more comprehensive realization is that the implementation of spirituality in the workplace is not happening as smoothly and rapidly as may initially have been expected. This unfortunate setback has at its core a number of important and obstinate causes: cultural values and social trends that have been in place for almost a century and are therefore very hard to correct.
The International Harmonization of Accounting Standards: Making Progress in Accounting Practice or an Endless Struggle?
Kellie McCombie, University of Wollongong, Australia
Dr. Hemant Deo, University of Wollongong, Australia
This research paper uses a Foucauldian theoretical framework to explain Australia's attempts to develop a set of global accounting standards. The involvement of the current Australian Federal Government (AFG) in the standard-setting process is crucial to this understanding. The argument put forward by the AFG is seen as one constructed according to the "totalizing" discourse of globalization. The power and knowledge of the Australian Accounting Profession (AAP) and the AFG are highlighted using the Foucauldian framework, providing a means by which this process of harmonization can be appreciated. Australia's international harmonization (IH) project is explored, with particular emphasis on the AFG's involvement. The paper provides an understanding of the Foucauldian theoretical framework, highlighting the significance of discourse and of power and knowledge relationships studied in an archaeological and genealogical context. This allows a discussion and analysis of the IH project in Australia, with special attention to the roles of the AAP and the AFG in creating and sustaining a globalization discourse for the accounting and business community through the interplay of power and knowledge. The paper also reveals how proposed benefits of IH, shown to be based on the neo-liberal version of globalization, have been promulgated in order to promote the project. In a final section, the paper critiques the proposed benefits of the globalized accounting discourse that business in Australia now faces. It is concluded that the IH project in Australia is best described as an endless struggle to dominate accounting discourse rather than as progress in accounting practice. Accounting standards provide the accountant with guidelines for reporting an organization's economic transactions and events.
Accounting standards in Australia have legal backing through the Corporations Act (CLERP, 1997) and are binding on members of both professional bodies (ASCPA & ICAA, 2004). The accounting standards have also been described "as a piece of delegated legislation…parliament has given the power of making accounting standards to a body that has experts on it rather than developing the documents itself as a body of legislators" (Ravlic, 2003). The number of companies that must apply the standards in preparing financial reports is therefore quite significant; listed companies make up only part of it.
The Impact of Environmental Characteristics on Manufacturing Strategy under the Guidance of Cleaner Production Principles
Dr. Jui-Hsiang Chiang and Dr. Ming-Lang Tseng, Toko University, Taiwan
Intense competition and environmental protection pressures in the current Taiwan marketplace have forced firms to re-examine their current manufacturing strategies. All manufacturers struggle with market competition and with growing awareness of green production principles. This research builds on various previous studies to organize a questionnaire and develop a causal research design. The study surveys 25 ready-food manufacturing companies in the Taipei area and applies statistical tools such as the independent-sample t test, reliability analysis, and path analysis to determine the causal influences on manufacturing strategy. The results demonstrate the total direct and total indirect effects of the exogenous environment and cleaner production principles on manufacturing strategy, and indicate that both are vital to it. Intensified competition in a number of global manufacturing industries has drawn renewed attention to the manufacturing function and the contribution that manufacturing strategy can make to a company's competitiveness. The exogenous variables of the model are three dimensions of the environment: dynamism, munificence, and complexity (Fluent, 2004). These attributes represent the main characteristics of the exogenous environment, considering the resource dependence of manufacturing strategy under the guidance of cleaner production principles. Such citations appear frequently in the literature analyzing the effects of the environment on organizations (Scutcliffe, 1998; Boyd, 1993). Proper management of manufacturing requires that sufficient and well-formulated strategies govern its operations.
Such strategies should consist of coordinated objectives and a strategic plan whose purpose is to secure medium- and long-term sustainable competitive advantages over the firm's competitors (Tseng & Chiu, 2004). Sustainable development calls for the preservation of resources for future generations while the present generation continues its growth and development. The manufacturing industry must therefore not only integrate manufacturing strategy into its business system but also make environmental protection the guide for its future operations (Azzone, 1998; Ward, 1996).
The Relationships Between Explicit- and Tacit-Oriented KM Strategies and Firm Performance
Dr. Halit Keskin, Gebze Institute of Technology, Turkey
In this study knowledge is considered as explicit and tacit, and in line with this, knowledge management (KM) strategies are classified into two categories, explicit-oriented and tacit-oriented; the relationships between these variables and firm performance are investigated, with environmental factors used as moderators between KM strategies and firm performance. According to the regression analyses, explicit and tacit KM strategies have positive effects on firm performance, and the impact (magnitude) of the explicit-oriented KM strategy on firm performance is higher than that of the tacit-oriented one. It was also found that the greater the environmental hostility, the stronger the relationship between explicit- and tacit-oriented KM strategies and firm performance. Disappearing boundaries, globalizing competition, and rapidly changing technology and business life are leading the economy in a knowledge-based direction (Clarke, 2001). While factors of production such as labor and capital were the central assets of the traditional economic structure, knowledge has come on the scene as a factor in itself and has become the most important one (Cliffe, 1998; Hansen, Nohria & Tierney, 1999; Davenport, 1997). In this sense, firms have become much more interested in stimulating knowledge, which is considered their greatest asset for decision making and strategy formulation. For example, Drucker (1993) stated that "we are moving to a society in which the basic resource of economy is knowledge, instead of capital, labor and natural resources." It is therefore necessary to manage knowledge effectively in the new economy, because the achievement of a sustained competitive advantage depends on a firm's capacity to develop and deploy its knowledge-based resources (Perez & Pablos, 2003). To this end, firms must adopt a KM strategy appropriate to their knowledge base.
For instance, Choi and Lee (2002) indicate that applying tacit- and explicit-oriented strategies is imperative for the performance of large firms in western countries. However, research on the impact of tacit- and explicit-oriented KM strategies on firm performance in SMEs in developing countries is surprisingly scant.
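The moderation analysis the abstract describes, in which environmental hostility strengthens the KM-performance relationship, is conventionally estimated as a regression with an interaction term. The sketch below is illustrative only: the variables and coefficients are synthetic, not the study's data.

```python
import numpy as np

# Hypothetical moderated regression: firm performance regressed on a
# KM-strategy score, an environmental-hostility score, and their product.
km = np.tile(np.linspace(-1, 1, 10), 10)       # KM strategy scores
host = np.repeat(np.linspace(-1, 1, 10), 10)   # hostility scores
# Construct performance so the KM effect strengthens with hostility
# (interaction coefficient 0.5); data are noise-free for clarity.
perf = 1.0 + 0.8 * km + 0.2 * host + 0.5 * km * host

# OLS fit of perf = b0 + b1*km + b2*host + b3*km*host
X = np.column_stack([np.ones(km.size), km, host, km * host])
b0, b1, b2, b3 = np.linalg.lstsq(X, perf, rcond=None)[0]
# A positive b3 is the moderation effect: the greater the hostility,
# the stronger the KM-performance relationship.
```

A real analysis would of course use survey measures and report significance tests; the point here is only the shape of the model.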
The Cultural Dimension of Technology Readiness on Customer Value Chain in Technology-Based Service Encounters
Chien-Huang Lin, National Central University, Taiwan
Ching-Huai Peng, National Central University, Taiwan
Most of the extant literature discusses how technology can successfully be infused into newly emergent service settings. Only a few articles, positing that customers might be reluctant to interact with technology, take the customer's standpoint. Investigating how country-level variables, such as cultural dimensions, influence the chain is also important for international marketing strategy planning. A conceptual model of the customer technology-based service value chain that integrates micro- and macro-level perspectives is developed, in which possible influences from customer technology readiness and from national cultural dimensions are also discussed. As technology advances daily, many traditional human-to-human service encounters are being replaced by human-to-machine service encounters, in which technology plays an important role in delivering service to the customer. Hence, it is valuable to distinguish, within the conventional service value chain, how various customers' beliefs about technology affect their service perceptions and consequent behaviors. Furthermore, in a global environment, service firms compete with one another across country boundaries. This research reviews papers in related domains and constructs a conceptual framework for better explaining consumer behavior in technology-based service encounters from a cross-cultural point of view. The quality-value-loyalty chain (Parasuraman and Grewal 2000) is, from a marketing viewpoint, an important antecedent of firm profits: firms that perform better along the chain are thought to gain higher profits than firms that perform worse. Service quality, then, is the most important root of firm profits for transactions that are not purely physical products.
Measuring "service quality" was not easy, owing to its intangibility, inseparability, variability, and perishability, until the well-known SERVQUAL, a multiple-item scale, was developed to measure consumers' perceived service quality (Parasuraman, Zeithaml and Berry 1988; Parasuraman, Berry and Zeithaml 1991). SERVQUAL captures how customers perceive service quality along five distinct dimensions: reliability, responsiveness, assurance, empathy, and tangibles, where reliability is the most important factor to customers and tangibles the least (Berry, Parasuraman, Zeithaml and Adsit 1994). SERVQUAL has been cited and replicated widely in marketing studies, particularly for interpersonal service settings, and many researchers have attempted to test and adapt the SERVQUAL instrument in various settings (e.g., Fick and Ritchie 1991; Lewis 1991; Young, Cunningham and Lee 1994; Boshoff and Tait 1996;
Applying Knowledge Management System with Agent Technology to Support Decision Making in Collaborative Learning Environment
Rusli Abdullah, Universiti Putra Malaysia, Malaysia
Shamsul Sahibudin, Universiti Teknologi Malaysia, Malaysia
Rose Alinda Alias, Universiti Teknologi Malaysia, Malaysia
Mohd Hasan Selamat, Universiti Teknologi Malaysia, Malaysia
A knowledge management system (KMS) with agent technology plays a major role in managing knowledge to support decision making among communities of practice (CoP) in a collaborative learning environment. The service ensures that knowledge, as a corporate asset, can be acquired and disseminated at any time and from anywhere, in the context of reaching and sharing knowledge among CoP members. Agent technology has also been used to speed up and increase the quality of service in the KM process of a collaborative learning environment, in terms of creating, gathering, accessing, organizing, and disseminating knowledge. This paper describes the concept of agent technology in a KMS and its relationships, and demonstrates how it can support community members in making decisions for their learning purposes. In this case, the KMS is implemented using groupware software, the Lotus Notes product, as a case study. Emphasis is given to the algorithmic process specification of the agent that helps members make decisions and work collaboratively. The identification of critical success factors (CSF) of a KMS in a collaborative learning environment, to ensure the success of its initiatives, is also discussed. Knowledge is information that is contextual, relevant to an event, and actionable by an actor such as a human or an agent. Knowledge can also be conceived as information in action, as proposed by O'Dell (1998). The relationship between data, information and knowledge is shown in Figure 1. According to Nonaka and Takeuchi (1995), knowledge can be categorized into two types, explicit and tacit; the differences between the two are shown in Table 1. Knowledge is therefore an asset that should be managed well so that it becomes more valuable and more meaningful.
An understanding of knowledge is essential before defining what knowledge management really means. The knowledge management system (KMS) has become a common medium for acquiring and disseminating knowledge, using IT as an enabling tool that lets everyone reach, share, and use knowledge with other members from any workplace in the world at any time (Alavi and Leidner, 2001; Andriessen, 2002). To speed up and increase the quality of service for the communities of practice in an organization, agent technology (AT) can be used to help with search and retrieval within the KMS, as well as to assist in combining knowledge, thus leading to the creation of new knowledge (Barthes and Tacla, 2002; Barbuceanu and Fox, 1994).
Using Balanced Scorecard and Fuzzy Data Envelopment Analysis for Multinational R & D Project Performance Assessment
Dr. Kuang-Hua Hsu, Chaoyang University of Technology, Taiwan, R.O.C.
Performance indicators are important factors in business goal setting and management performance assessment. Traditionally, companies have tended to measure business performance in terms of financial data such as ROI and ROA. However, those indicators are not enough for management to evaluate business performance as a whole. The Balanced Scorecard overcomes this problem by integrating all aspects of business operation: the customer aspect, the organizational process aspect, the business creation and learning aspect, and the financial aspect. In proposing the Balanced Scorecard method, we must at the same time evaluate the performance of the organization. While business performance can sometimes be expressed in clear numbers or words, most of the time it can only be described in vague language. Thus, Data Envelopment Analysis (DEA) combined with fuzzy theory can be applied to generate objective performance indicators. This paper uses fuzzy DEA to evaluate the Balanced Scorecard performance of a multinational research and development project. A business organization's goal is to pursue operational performance, and assessment determines how well that goal is achieved. The management performance indicator is therefore an important factor in business goal setting and performance assessment. Operational performance must be clearly defined and explained with concrete events in order to be understood. Marsh and Mannari (1976) defined operational performance in two parts: top-goal and sub-goal performance. The former includes product and service output levels, sales volume, and profitability. The latter is set to help achieve the top goal; for instance, to reach the business's top goal the organization must reduce the work-absence rate, promote work morale, and implement a project-proposal system.
Hitt (1988) classified organizational performance into two categories. One is the executive policy research approach, usually used by company management and business policy researchers, in which financial data such as ROI and ROA serve as performance indicators. The other is the organization research approach, used by academic researchers, in which non-financial indicators such as total production output and work morale measure business operational performance. Venkatraman and Ramanujam (1986) considered operational performance only one part of organizational performance; they viewed performance assessment along three dimensions: financial, organizational, and the integration of financial and operational performance. Quinn and Rohrbaugh (1996) also proposed a three-dimensional analysis model.
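For readers unfamiliar with DEA, the classical (crisp) CCR ratio model that fuzzy DEA generalizes can be sketched as follows: each decision-making unit $o$, with inputs $x_{io}$ and outputs $y_{ro}$, chooses non-negative output weights $u_r$ and input weights $v_i$ to maximize its own efficiency ratio, subject to no unit's ratio exceeding one. This is the standard textbook formulation, not the authors' specific fuzzy model; fuzzy DEA variants replace the crisp $x_{ij}$ and $y_{rj}$ with fuzzy numbers, typically solved via α-cuts.

```latex
\max_{u,v}\; h_o \;=\; \frac{\sum_{r=1}^{s} u_r\, y_{ro}}{\sum_{i=1}^{m} v_i\, x_{io}}
\quad \text{s.t.} \quad
\frac{\sum_{r=1}^{s} u_r\, y_{rj}}{\sum_{i=1}^{m} v_i\, x_{ij}} \le 1,
\;\; j = 1,\dots,n, \qquad u_r \ge 0,\; v_i \ge 0 .
```

A unit with $h_o = 1$ lies on the efficient frontier; units with $h_o < 1$ are dominated by some combination of their peers.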
Contemporary Knowledge Management Platform - EPSS
Dr. H. W. Lee, National Chia-Yi University, Taiwan
The goal of an EPSS is to provide whatever is necessary to generate performance and learning at the moment of need. Gery (1991) states that people have been given some of the help needed to accomplish this goal through powerful tools such as job aids and CBT. However, these tools are not an EPSS by themselves, although they can be part of one. The common denominator that differentiates an electronic performance support system from other types of systems or interactive resources is the degree to which it integrates information, tools, and methodology for the user. This paper examines the definition of an EPSS, reviews EPSS components, examines some course management tools currently used in educational programs, and explores the use of EPSS technology in education. Definitions of electronic performance support systems (EPSSs) vary, but they are generally agreed to be software programs or components that directly support a worker's performance when, how, and where the support is needed. EPSSs are intended to improve workers' ability to perform tasks, usually ones performed on a computer. They are related to, but more than, task-oriented online help. In order to determine whether these systems can be used in education, a more complete definition is essential. An examination of the literature will also reveal the current state of affairs regarding fully formed EPSSs in education and the adaptation of such systems to educational needs. Finally, an effort will be made to draw conclusions that provide directions for educational settings. The integration of different tools to help the user perform a task is the key feature of an Electronic Performance Support System (EPSS). An EPSS is built to integrate resources and tools and to facilitate work on complex tasks (Laffey, 1995).
It is a computer-based system that improves worker productivity by providing on-the-job access to integrated information, advice, and learning experiences, and it is the electronic infrastructure that captures, stores, and distributes individual and corporate knowledge assets throughout an organization, enabling individuals to achieve required levels of performance in the fastest possible time and with a minimum of support from other people (Raybould, 1990). It is an integrated tool suite that supports the user of a complex system by providing embedded assistance within the system itself (McGraw, 1994). As a result, EPSSs are credited with reducing the time required to access information and with bringing workers to an entry level of job competency (Bastiaens, Nijhof, & Abma, 1996; Tait, 1995; Lamy, 1994; Bramer & Ghenno, 1993; McGraw, 1994; Geber, 1991).
An Empirical Investigation of Sexual Harassment Incidences in the Malaysian Workplace
Dr. Mohd Nazari Ismail, University of Malaya, Kuala Lumpur, Malaysia
Lee Kum Chee, University of Malaya, Kuala Lumpur, Malaysia
This paper presents the findings of a study investigating the factors that contribute to incidences of sexual harassment in the Malaysian workplace. A questionnaire survey, based in part on the Sexual Experience Questionnaire (SEQ) developed by Fitzgerald et al. (1988), was carried out with 656 respondents. The findings show that sexual harassment is rampant in Malaysian workplaces and that it is aggravated by several factors related both to the organization and to the individual worker. Specifically, a working environment characterized by a lack of professionalism and by sexist attitudes biased against women makes female employees more prone to being sexually harassed. When the various demographic characteristics were studied, the findings revealed that the women employees facing the greatest risk of sexual harassment tend to be unmarried, less educated, and Malay. As the level of competition increases at a rapid rate, workers remain an undeniably important factor in the competitiveness of modern organizations. However, as more women enter the labor force, sexual harassment is increasingly becoming a workplace problem, thereby affecting the competitiveness of the organizations involved. According to a major study conducted in the United States in 1981 by the U.S. Merit Systems Protection Board on 23,000 federal government employees, as many as 42 percent of the female employees had experienced sexual harassment in the two years prior to the survey (cited in Stockdale and Hope 1997). Sexual harassment also exists among women in 160 of the Fortune 500 companies surveyed (Frits, 1989): 15 percent of these workers had been sexually harassed within the past year, while 50 percent tried to ignore it. In addition, 24 percent of victims took leave from work to avoid it, and five percent resigned after experiencing such incidents.
Such incidents in turn result in resignations, absenteeism, and reduced productivity among victims. A similar trend has been emerging in Malaysia as women enter the workforce in increasing numbers. By the year 2000, almost half of Malaysian women were economically active. This, coupled with the simultaneous upward trend of women entering traditionally male-dominated occupations, has set the stage for the sexual harassment threat. As a consequence, sexual harassment has become a widespread problem in Malaysia as well, as shown by recent studies; in fact, the rates of occurrence do not differ much from those found in the United States. Between 35 percent and 53 percent of women have experienced sexual harassment at work (see, for example, Ng et al., 2003; Marican 1999; Muzaffar 1999).
Mortgage Decision: Lower Payment or Faster Payoff?
Dr. Ralph Gallay, Rider University, NJ
Homeowner financing decisions are complicated by the difficulty of comparing the benefits of long-term, low-monthly-payment mortgages with those of short-term, faster-payoff loans. The author presents a simplified return-on-investment perspective that allows an objective comparison of the two alternatives based on increased equity as well as reduced interest costs, relative to the extra payments made. This paper is positioned as both a borrower's and lender's tool for evaluating and explaining the relative merits of different mortgages and as a pedagogical instrument for educators in personal finance. With interest rates in the United States near their lowest in over forty years, an entire new generation of homeowners is faced with the beneficial opportunity of refinancing. The ensuing discussion and calculations apply to every state in the country and affect all of the approximately 74 million owner-occupied households therein (2. Danter, p.1), many of whose owners either are, or have recently been, engaged in seeking homeowner refinancing. In fact, "(2002) was surely one of the most memorable years ever experienced by the home mortgage market," to quote Federal Reserve Board Chairman Alan Greenspan (1. AFSA, p.1), and refinancings represent more than sixty percent of all loans (6. MSN Money, p.1). The decision many people make in refinancing inevitably hinges largely on how much monthly payments might be reduced, and most promotional efforts seem directed at emphasizing this point above all others. Many lending institutions do little to clarify the situation further; in fact, some seem intentionally to confuse and confound the consumer with their messages (5. Mokhiber, 2002). A second, less stressed concern is how quickly the loan is paid off, thereby reducing interest payments, which do relatively little to benefit the borrower.
While on a United States itemized tax return a homeowner may qualify to deduct mortgage interest payments, usually one of the taxpayer's largest single deductions, for most this yields only about thirty cents of tax savings for every dollar of interest paid. Clearly, the best scenario is to avoid interest payments altogether unless one has a higher priority or better reason to borrow, such as college, medical, or home improvement costs. Several studies have sought to explain the confusing alternatives homeowners face in selecting among various mortgage offerings.
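The trade-off the author formalizes can be sketched with the standard fixed-rate amortization formula: the faster-payoff loan costs more per month but saves substantially on total interest. The principal and rates below are purely illustrative, not figures from the paper.

```python
# Compare a long-term, low-payment mortgage with a short-term,
# faster-payoff loan using the standard amortization formula.

def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment for a fully amortizing loan."""
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

principal = 200_000                 # hypothetical loan amount
pay30 = monthly_payment(principal, 0.06, 30)    # long-term, low payment
pay15 = monthly_payment(principal, 0.055, 15)   # short-term, faster payoff

interest30 = pay30 * 360 - principal   # total interest over 30 years
interest15 = pay15 * 180 - principal   # total interest over 15 years

extra_per_month = pay15 - pay30        # the cost of the faster payoff
interest_saved = interest30 - interest15
```

The return-on-investment comparison the paper proposes then weighs `interest_saved` (plus faster equity buildup) against the stream of `extra_per_month` outlays.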
Taiwan Multinational Companies and the Effects of Fit Between Subsidiary Strategic Roles and Organizational Configuration on Business Performance: The Moderating Role of Cultural Differences
Dr. Ming-Chu Yu, National Kaohsiung University of Applied Sciences, Kaohsiung City, Taiwan
The purpose of this study is to examine the relationships among Taiwan's overseas subsidiaries' strategic roles (including degree of integration and degree of localization), organizational configurations (including degree of resource dependence and degree of delegation), and business performance. These relationships also depend on the subsidiaries' cultural differences. Using regression analysis, we show that type of industry, stage of internationalization, degree of integration, degree of localization, and degree of resource dependence are the most important factors in the subsidiaries' perceived activity satisfaction. The results indicate that the sample of Taiwanese MNC affiliates falls into three subgroups (Autonomous Strategy, Receptive Strategy, and Active Strategy) depending on their global strategies. Active subsidiaries are highly integrated and have high local responsiveness; autonomous subsidiaries have high local responsiveness but low integration; receptive subsidiaries have low local responsiveness but are highly integrated. Corporate internationalization and globalization are focal points of the twenty-first century, and the internationalization and liberalization of business activities are two important elements of success for contemporary enterprises. Recently, enterprises in Taiwan have faced surging wages, rigorous requirements on worker benefits, and a changing business climate. In addition, the establishment of regional economic cooperatives has further accelerated the pace of overseas corporate investment in pursuit of competitive advantages and market niches. Most of these investments are concentrated in Mainland China and Southeast Asia. There has been a profound impact on multinational corporations (MNCs) in Taiwan over the past decade; most MNCs operate in markets where competition is intense and the number of competitors is large.
An Analysis of Global Retail Strategies: A Case of U.S. Based Retailers
Soo-Young Moon, The University of Wisconsin Oshkosh, Oshkosh, WI
Since international marketing channels play an important role in distributing products from international marketers to consumers around the world, and the impact of some global channel members has gone far beyond the traditional functions of retailers, this study reviews the global strategic issues of U.S.-based retailers. Specifically, it analyzes the global strategies of Wal-Mart, Home Depot, Kroger, Target, and Sears based on the concept of generic strategic options. Overall, the study finds that globalization is not the answer for all retailers. If a retailer finds that globalization is consistent with its strategic advantages, it is critical to analyze this option along with others; otherwise, the retailer should seek other alternatives. Nevertheless, all major retailers may have to expand their operations into the global market because of its growth potential and the heavy competition in the U.S. market. Since Levitt (1983) advocated the importance of globalization in international business, much research (e.g., Bakhaus, Muhlfeld, and Van Doorn, 2001; Czinkota and Ronkainen, 2003; Hong, Pecotich, and Schultz, 2002; Laroche, Kirpalani, Pons, and Zhou, 2001; Shoham, 2002; Zou and Cavusgil, 2002) has been conducted, empirically or theoretically, to identify which strategy, standardization or customization, is more appropriate for global expansion. However, most of the focus has been on how to market products in the global market using a standardization or customization approach. The literature review in this field indicates that although marketing channels are an indispensable part of global marketing, not enough research has addressed major channel issues. International marketing channels have played an important role in distributing products from international marketers to consumers in the world.
Furthermore, some global channel members, such as Wal-Mart, Carrefour, and Metro AG, have become global retailers whose impact goes far beyond the traditional functions of retailers. Thus, the objective of this study is to review the global strategic issues of U.S.-based retailers; specifically, it analyzes the global strategies of Wal-Mart, Home Depot, Kroger, Target, and Sears based on the concept of generic strategic options. All firms have four major strategic options: market penetration, product expansion, market expansion, and diversification (Aaker, 1995). Though these options originated with "goods" producers, they can easily be translated into strategic directions for retailers. Market penetration means that a retailer expands its business with its existing retail format (e.g., department store or convenience store) in its existing market.
The Impact of Consumer Product Knowledge on the Effect of Terminology in Advertising
Shin-Chieh Chuang, Chao-yang University of Technology, Taiwan
Chia-Ching Tsai, Da-yeh University, Taiwan
The use of terminology in advertising is popular and commonplace. Previous research suggested that using terminology in ads was intended to create a stronger vividness effect on the audience, who may adopt the "central route" of the Elaboration Likelihood Model (ELM) and be convinced by the terminology in the advertising message. However, we find that the stronger vividness effect occurs when subjects possess low product knowledge; conversely, a weaker vividness effect is present when subjects possess high product knowledge. In recent years terminology has been used in large quantities, especially in ads. Shibata (1983) pointed out that there was an increase in the use of monolingual messages in Japanese society and that the English language was of greater and greater importance and was used more frequently. Mueller (1992) studied the use of Western languages in Japanese ads in 1978 and 1989; the results showed that the percentage of English used in Japanese ads was increasing, with an upward trend in the use of English that is not translated into Japanese. The main reason for adopting terminology is that when audiences encounter it, a vividness effect occurs that captures their attention, and audiences may process ads containing terminology via the "central route" of the ELM, making the ads persuasive. Moreover, the use of terminology may appeal to professional recognition: it is associated with technology, and a professional image is formed that influences consumers' purchasing behavior. Previous studies suggest that ads containing terminology can increase advertising effectiveness (Hong, 2002). In further studies, Hong examines whether different effects occur for different product categories in which terminology is used.
The results of his study show that when target products in ads are less innovative, such as daily necessities, subjects adopt a low-involvement information-search model and do not need to collect much information; therefore, no obvious vividness effect occurs in response to terminology, and the persuasive effect of ads containing terminology drops dramatically. In contrast, when target products in ads are more innovative, a high degree of persuasion occurs: because the products require high involvement, subjects employ a more thorough information-search model, and ads containing terminology attract more recognition, attention, and interpretation. As a result, such ads generate a better persuasive effect. This study focuses on whether ads containing terminology create different persuasive effects in consumers who possess high versus low levels of product knowledge.
Analyzing Functional Performance of Hong Kong Firms: Planning, Budgeting, Forecasting, and Automation
Dr. Steven P. Landry, Monterey Institute of International Studies
Dr. Terrance Jalbert, University of Hawaii at Hilo
Dr. Canri Chan, Monterey Institute of International Studies
This paper presents a case study concerning benchmarking one particular set of performance attributes of firms. Specifically, the study addresses issues of planning, budgeting, forecasting and automation of the accounting and finance functions of Hong Kong firms. Students are required to compile raw data gathered from a survey into a format that can be utilized for benchmarking. Further, students are asked to address inherent data limitations associated with using low return-rate, small sample size, survey data for benchmarking. Students are also required to develop various tables and discuss how the data and tables might best be used. The case is appropriate for sophomore and junior undergraduate students. Students should be able to complete the case in 1 hour of preparation outside the classroom. In order to survive in a competitive economy, firms must continuously improve and enhance their products, services and operations. Benchmarking is the primary method by which firms assess their performance relative to their peers. Benchmarking involves comparing an attribute of one firm to the same attribute in a group of comparison firms. To make the comparison fair, the comparison firms should be similar to the firm being examined. Thus, comparison firms are generally selected from firms in the same industry that are similar in size and operate in a similar geographic area. By comparing a firm to its peers, managers can identify areas where they have a competitive advantage, areas where they have a competitive disadvantage and areas for improvement. In so doing, firms can best position themselves in the market to better deliver their value proposition and therefore maximize shareholder wealth. In many instances the data for benchmarking can be obtained from data collection and reporting firms, such as Robert Morris and Associates, which specialize in collecting benchmarking data. These data are generally related to the overall financial operations of firms. 
In some instances, the data available from reporting firms may not be sufficient. This might be the case when the firm operates in a highly specialized industry or a unique geographical area, or when more specialized data are needed than reporting firms provide. When faced with such data challenges, firms and/or associations must collect their own industry data to complete a benchmarking analysis. This study examines one particular set of performance attributes of the accounting and finance functions of publicly traded firms in Hong Kong. Because data regarding the accounting and finance functions of Hong Kong firms were not available from other sources, a survey of firms was conducted under the sponsorship of the Financial Management Committee (FMC) of the Hong Kong Society of Accountants (HKSA).
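The peer comparison at the heart of benchmarking, as described in this abstract, amounts to simple arithmetic. The sketch below is purely illustrative: the firm figures and the attribute (accounting/finance cost as a percentage of revenue) are invented, not drawn from the Hong Kong survey.

```python
# Hypothetical benchmarking comparison: rank one firm's attribute
# against the same attribute in a group of comparison firms.

def percentile_rank(value, peer_values):
    """Share of peer observations at or below `value`, on a 0-100 scale."""
    at_or_below = sum(1 for v in peer_values if v <= value)
    return 100.0 * at_or_below / len(peer_values)

# Invented peer data: accounting/finance cost as a % of revenue.
peers = [1.8, 2.1, 2.4, 2.6, 3.0, 3.3, 3.9, 4.2]
our_firm = 2.5

rank = percentile_rank(our_firm, peers)
print(f"The firm sits at about the {rank:.0f}th percentile of its peer group.")
```

A lower percentile on a cost attribute would indicate a competitive advantage; a higher one flags an area for improvement, which is exactly the managerial use the abstract describes.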
Risk Perception, Risk Propensity and Entrepreneurial Behaviour: The Greek Case
Dr. P. E. Petrakis, Athens National and Kapodistrian University, Greece
The analysis clearly projects two main points: a) the way that risk is perceived by the entrepreneur is a primary process that determines other important aspects of entrepreneurial behaviour and performance; b) there are no determining factors of risk perception, although risk perception influences the way that cultural idiosyncrasies, knowledge and flexibility characteristics develop, and it affects the firm’s performance. Risk propensity, by contrast, is determined within the entrepreneurial behaviour framework and takes a mediating role, transforming influences, mainly from the external macroeconomic environment, into important personal traits. It also mediates the entrepreneur’s independence, his need for achievement and his risk perception. Research on the factors affecting entrepreneurial activity has a long history and extends into the fields of economics (Schumpeter, 1934), sociology (Weber, 1930), and psychology (McClelland, 1961). Entrepreneurial activation is the combined result of macro-level environmental conditions (Aldrich, 2000), which have economic or social origins, the characteristics of entrepreneurial opportunities (Christiansen, 1997), and aspects of human behaviour related to entrepreneurial motives (Shane, Locke and Collins, 2003) and cognitions (Mitchell, Smith, Morse, Seawright, Peredo and McKenzie, 2002). The issue of risk is central to the study of entrepreneurial behaviour and performance.
Different points of view are employed in the entrepreneurial risk research agenda (Norton and Moore, 2002): opportunity recognition (Hills, Shrader and Lumpin, 1999; Rice and Kelley, 1997), opportunity evaluation (Hean Tat Keh, Maw Der Foo and Boon Chong Ling, 2002), decision making and problem framing (Kahneman and Tversky, 1979; Tversky and Kahneman, 1981; Schneider, 1992), risk propensity and cultural approval of risk (Wallach and Kogan, 1964; Brockhaus, 1980; Gomez-Mejia and Balkan, 1989; Rowe, 1997) and cognitive approaches to entrepreneurship (Palich and Bagby, 1995). Finally, the issue of entrepreneurial alertness in relation to risk has been restated (Norton and Moore, 2002) in the light of the Bayesian model. The present article focuses on the relationship between three different aspects of entrepreneurial risk: entrepreneurs’ risk propensity as a personal trait, entrepreneurs’ risk perception and, finally, the risk actually undertaken by the firm as observed in the firm’s data. These three aspects of risk are related to the main sub-frames that shape entrepreneurial behaviour: macro-environmental factors, cultural values, entrepreneurial motives and traits, cognitive variables, personal characteristics and, finally, the microeconomics of the project. Thus the paper tries to contribute to the study of ex ante and ex post entrepreneurial attitudes towards risk. Section 2 focuses on the evaluation of opportunities under risk, while section 3 examines entrepreneurial risk attitudes.
Growth, Entrepreneurship, Structural Change, Time and Risk
Dr. P. E. Petrakis, Athens National and Kapodistrian University, Greece
This article is about the role of the entrepreneurial perception of time and risk vis-à-vis structural change and growth. Entrepreneurship is a basic constituent element of social capital, which in turn is a productive lubricant of the growth process. Different structural entrepreneurial prototypes with respect to time and risk have different structural-change effects. Those structural changes (and any structural changes) are not neutral as far as the implications of growth-rate changes are concerned. Therefore the time and risk characteristics of active entrepreneurship are reflected in the growth process either in the form of structural change and/or in the form of growth-rate change. The paper is developed as follows: section 2 focuses on the relationship between social capital, entrepreneurship and growth; sections 3 and 4 relate to the analytics of risk and time; section 5 clarifies the time and risk dimensions of entrepreneurship; section 6 analyses the effects of entrepreneurial time and risk on structural change. Finally, conclusions are drawn. The concept of social capital was put forward alongside the traditional concepts of financial, real and human capital during the 1990s (Portes and Landolt, 1996) and has recently been related to entrepreneurship (Westlund and Bolton, 2003). According to Bourdieu and Wacquant (1992), social capital is an individual or group-related resource that accrues from possessing a durable network of more or less institutionalised relationships. According to Coleman (1988, 1990), it is to be found in the relations between individuals and includes obligations, expectations, information channels and social norms (Piazza-Georgi, 2002), such as high-trust and low-trust attitudes (Fukuyama, 1995) or family-based vs community-based social trust (Fukuyama, 1995). Social capital should be regarded as the most diversified of capital forms.
The extent of this diversification will largely depend on how its basic nature is analysed: Coleman’s (1990) view of an endogenous phenomenon of social relations vs Fukuyama’s (1995) view that it is the result of society’s trust and cooperation. Woolcock (1998) and Fedderke et al. (1999) proposed that the concept of social capital comprises two interacting dimensions: ‘transparency’ (the transaction-cost-lowering functions of social capital) and the rationalisation potential of maintaining increasing returns to scale, i.e. delaying the onset of diminishing returns. Piazza-Georgi added two further notes: (a) social capital operates to a significant extent through human skills capital and entrepreneurial skills, by lowering their creation costs; (b) there may be a significant substitution effect between human and social capital (towards human capital) through the increased cost of human time. If we then accept that investing in human capital is more efficient than investing in social capital, we have another reason for delaying the onset of diminishing returns.
The Effect of Genetically Modified Seed Technology on the Direct and Fixed Costs of Producing Cotton
Dr. D. W. Parvin, Mississippi State University, MS
The introduction of genetically modified seed technology dramatically changed cotton production practices. Production systems based on reduced tillage and varieties containing genetically modified genes improved net returns by $47.35 per acre (53%) compared with systems based on conventional tillage and non-transgenic varieties. Changing the planting pattern to a 30” 2x1 full-skip pattern and reducing the seeding rate to 3 seeds per foot of row increased net returns by 78% compared with a 38” solid planting pattern and 4 seeds per foot of row. Emerging harvesting technology, a cotton picker with an onboard module builder, which eliminates boll buggies and module builders (and the tractors they require), will reduce harvest cost by 32%. Monsanto introduced its genetically modified seed (GMS) technology in 1996 and dramatically changed the way cotton is grown. In 1995 most of the cotton grown in the Mid-South was based on conventional varieties and employed conventional tillage practices. In 2004 approximately 95% of the Mid-South cotton acreage was planted with genetically modified varieties (Mississippi Agricultural Statistics Service) and was based on conservation tillage or no-till production practices. The new technology has reduced the number of trips over the field, reduced labor and equipment requirements per acre of cotton, and stimulated the development of other new technology. Basically, the new systems of production employ fewer trips over the field with wider equipment. The Department of Agricultural Economics, Mississippi State University, annual cost of production estimates (available online at http://www.agecon.msstate.edu/Research/budgets.php) indicate that since the introduction of Monsanto’s genetically modified seed technology (1996-2004), tractor hours per acre of cotton have been reduced by 49% and labor hours have been cut by 43%. Harvest is the most costly component of cotton production.
Cotton harvesting systems require that the cotton picker be supported by a boll buggy (BB) and a module builder (MB), each of which requires a tractor. Currently, on most cotton farms, more tractors are required during the harvest season than in any other period of the production cycle. New harvesting technology, which eliminates boll buggies and module builders and the tractors that support them, is expected to increase the reduction in tractor hours to 74% and in labor hours to 64% relative to 1995 levels. Data on the cost per unit of production inputs such as labor, fuel, fertilizer, herbicides, insecticides, etc. are 2004 estimates (Cotton 2004 Planning Budgets). Data associated with power units (tractors and cotton pickers) and towed equipment include 2004 estimates of price, length of life, annual hours of use, performance rate (hours per acre), repairs, salvage value, etc. (Cotton 2004 Planning Budgets). The cost to producers of technology not yet marketed was estimated by contacting knowledgeable individuals in the cotton industry.
E-Government – A Proactive Participant for E-Learning in Higher Education
Sangeetha Sridhar, Majan University College, Sultanate of Oman
The advent of Information and Communication Technology (ICT) has empowered both learners and teachers with capabilities to reach and draw on resources beyond physical borders. The information age is changing the way people work, learn, spend their free time and interact with one another. In the knowledge era, communication technology has made possible a global university campus in the true sense: a collaborative community, multinational staffing, distributed campus resources and multimedia technology. ICTs are driving down costs, improving efficiency and creating a climate of innovation, with competitiveness moving from the national to the global level. They are challenging existing methods of governance, commerce, education, communication and entertainment. This paper presents the findings of research into the role of e-governance in higher education, especially where ICT is incorporated. Recent trends in higher education demand knowledge creation, capture, dissemination and application for crafting sustainable development of the entire economy. The paper looks into the Vision 2020 statement for strategic goals, while analyzing Census 2003 data for trends and the direction of higher education in the local region. The research begins with an appraisal of the advent of higher education in Oman and its initial centres of excellence and disciplines. It draws statistics from the Census 2003 to identify further requirements and potential for future direction and initiatives. A set of strategic issues in the practical implementation of e-learning in higher education through e-governance is presented, along with coverage of the ICT policy framework. The focus areas are the capabilities of the e-learning mode enabled by ICT and its implications in the region, Oman market trends, current levels of literacy, the legal framework, a nation-wide digital library, public services online and citizen awareness.
The paper concludes with strong recommendations at both the national and institutional levels, intended to serve as guidelines for future directions. E-government is the process of offering better government service to the public at a lower cost (1). The key balance to be achieved in implementing e-government with ICT is between the services offered electronically and the public’s ability to reach and access electronic information.
The Role of Affect and Cognition in the Perception of Outcome Acceptability Under Different Justice Conditions
Dr. Douglas Flint, University of New Brunswick, Canada
Dr. Pablo Hernandez-Marrero, University of Toronto, Canada
Dr. Martin Wielemaker, University of New Brunswick, Canada
Prior research has focused on negative affective responses to distributive justice. This study extends that work to consider both affective and cognitive responses to various combinations of procedural and distributive justice. The effects of these responses on perceptions of outcome acceptability are then determined. Structural equation modeling is used to measure the interrelationships of the affective and cognitive effects. In their conceptualization of equity theory, Adams (1965) and Walster, Walster and Berscheid (1978) postulated that perceptions of injustice would lead to negative emotional states that would then motivate a search to redress the inequity. Since that time a limited amount of research has confirmed the production of negative affective responses to injustice (Clayton, 1992, study 2; Hegtvedt, 1990; Mikula, 1986; Sprecher, 1992). The most comprehensive study to date, by Mikula, Scherer and Athenstaedt (1998), involved 2,921 students who reported situations in which they had experienced positive and negative affective reactions. Situations perceived as unjust elicited feelings that were longer in duration and more intense. The studies to date have focused on affective responses to distributive justice; to the best of our knowledge, no studies have examined cognitive responses to justice. Therefore, this study extends the literature by considering the effects of both procedural and distributive justice on cognitive and affective responses. Further, we examine the effects of cognitive and affective responses on the formation of perceptions of the acceptability of an outcome. We begin with a description of procedural and distributive justice. This is followed by a discussion of affect, cognition and their interrelationships. Research hypotheses are formulated about the effect of different interrelationships on the formation of perceptions of outcome acceptability.
Organizational justice is concerned with the fair treatment of employees in organizations. Organizational justice is conceptualized as two factors: procedural and distributive justice. This study examines the role of cognition and affect on the formation of perception of the acceptability of decisions made under different conditions of procedural and distributive justice. Distributive justice asks: How fair is an outcome? For example, previous research has examined the fairness of pay (Folger & Konovsky, 1989) and performance evaluation (Greenberg, 1986) outcomes.
Developing Measurements of Digital Capital in Employment Websites by Analytic Hierarchy Process
Dr. Chung-Chu Liu, National Taipei University
Dr. Shiou-Yu Chen, Chung-Yu Institute of Technology
The Internet has provided a place for businesses to come together with a speed and a range of communication possibilities never available before today’s information age. Digital capital, the currency of the future, refers to intangible assets gained through knowledge and relationships. This research developed 17 indicators to assess the digital capital of employment websites. The researchers used in-depth interviews to collect data, followed by content analysis and the Analytic Hierarchy Process (AHP) for analysis. According to the analytical results, the study identified four dimensions of digital capital: customer capital, innovation capital, service capital, and relational capital. These results provide a reference point to assist web-based organizations in determining the key digital capital of their employment websites. The emergence of Internet technology and the World-Wide Web as an electronic medium of commerce has brought tremendous change to the way businesses compete. Companies that fail to make use of Internet technology are regarded as not delivering value-added services to their customers and are consequently at a competitive disadvantage. Internet technologies provide businesses with tools to adapt to changing customer needs and can be used for economic, strategic and competitive advantage (Hamid & Kassim, 2004). Recruitment has emerged as a critical human resource management function for organizations, particularly in an environment of competitive labor markets and mobile employees. Despite changes in the nature of work and the adoption of new technologies, organizational effectiveness is still largely dependent on the competency and motivation of individual employees (Allen, Scotter & Otondo, 2004). Recently, employment websites in Taiwan have played an important role as a cost- and time-effective means of helping job hunters find new employment.
According to a 1999 survey of HR professionals by the Society for Human Resource Management, while nearly two-thirds of human resources professionals placed classified ads in Sunday newspapers, almost 40% also relied on Internet job postings. It was estimated that 32% of all recruitment-advertising budgets in the year 2000 would be spent on the Internet, while the share going to newspapers would fall from 70% to 52% (Mondy, Noe & Premeaux, 2002). The significance of intellectual capital has risen proportionately with the boom of the information age and the virtual economy (Litan & Wallison, 2000; Blair & Wallman, 2000). As many authors point out, a major proportion of growth companies are valued beyond book value.
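For readers unfamiliar with AHP, the priority weights it produces can be sketched in a few lines. The comparison matrix below is invented for illustration and is not taken from the study; the row geometric-mean method shown is one standard way of extracting AHP priority weights from a reciprocal pairwise comparison matrix.

```python
import math

# Hypothetical reciprocal comparison matrix over the four digital-capital
# dimensions the study identified (a[i][j] = judged importance of i over j).
DIMENSIONS = ["customer", "innovation", "service", "relational"]
A = [
    [1,   2,   3,   4],
    [1/2, 1,   2,   3],
    [1/3, 1/2, 1,   2],
    [1/4, 1/3, 1/2, 1],
]

def ahp_weights(matrix):
    """Priority vector via the row geometric-mean method, normalized to sum to 1."""
    gm = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

for name, w in zip(DIMENSIONS, ahp_weights(A)):
    print(f"{name:10s} {w:.3f}")
```

In practice an AHP study would also check the consistency ratio of each judgment matrix before accepting the weights; that step is omitted here for brevity.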
Socializing a Capitalistic World: Redefining the Bottom Line
Dr. Joan Marques, Woodbury University, CA
As the trend toward global integration intensifies, awareness increases that no single ideology has proven flawless. Although this development is still very much in an evaluative stage, and unfortunately not yet accepted to the same degree by all participants in the global playing field, significant progress can be ascertained in determining what the new, workable trend will be. Interestingly, the development happens synchronously at multiple levels: within the global environment and within the corporate world, for there, too, it has been proven that no single overbearing style produces lasting success. An integrative, local approach from all angles seems to be the way forward. This paper reviews some of the current trends and criticisms, and presents a model for the new bottom line in the new world. While countries in today’s ever-intertwining world are gradually detaching themselves from the utilization of explicit ideologies and embracing a more moderate and meaningful approach, companies are mirroring this very trend in a more compact format. What, exactly, are we talking about? Well, just observe the global trends for a moment: countries that were known for decades for their strict capitalistic, socialistic, or communistic systems are now, owing to their increased exposure to other cultures, mindsets and procedures, moving toward what is popularly referred to as mixed economies. How did this come about? Through the dazzling events and dynamic developments of the past century, of course! Cars, airplanes, and the Internet have enabled people from different cities, countries, and continents to observe, mingle with, and learn from one another in no time.
This development gradually yet massively elicited the insight that no ideology works satisfactorily if carried to the extreme: capitalism enhances the opportunity for enterprise and wealth creation, but it also increases the gap between rich and poor, and puts a firm price label on even the most elementary commodities, such as medical care, education, and transportation. Rogoff (2005), for instance, foresees huge problems arising on the capitalistic medical horizon, explaining that “as rich countries grow richer, and as healthcare technology continues to improve, people will spend ever growing shares of their income on living longer and healthier lives” (p. 74).
A Study of Factors Affecting Effective Production and Workforce Planning
Dr. Halim Kazan, Gebze Institute of Technology, Gebze-Kocaeli, Turkey
Four hundred small and medium-sized manufacturing companies, operating in the iron-steel, construction materials, and food industries and registered with the Chambers of Commerce and Industry in Konya, Kayseri, and Kocaeli, were selected, and their managers interviewed, to identify those operational practices that have the greatest influence upon workforce planning and effective production. Internal factors explain 71.59% of the variance in effective production and workforce planning, whereas external factors, which are not under the firm’s control, explain the remaining 28.41%. Nowadays, global developments have a profound effect on local and regional industries, necessitating frequent and ongoing reevaluation and reorganization of business practices and priorities, particularly in the areas of workforce allocation and production planning. In this study, four hundred small and medium-sized manufacturing concerns in the iron-steel, construction materials, and food industries, registered with the Chambers of Commerce and Industry in Konya, Kayseri and Kocaeli, have been examined with the object of determining those factors that most crucially affect their allocation of human and material resources. A number of issues related to research on effective production and workforce planning have been addressed in studies over the last couple of decades. They have considered workforce planning and effective production research from a general perspective (Alfares 2000); proposed three- and four-day work weeks (Browne and Nada; Burns et al.; Hung 1993, 1994; Hung and Emmos; Steward and Larsen); and classified workforce planning problems into three categories (Baker 1976). Eylem Tekin, Wallace J. Hopp, and Mark P. Van Oyen (2004) have examined simple models of serial production systems with flexible servers operating under a constant work-in-process release policy; Esma S. Gel, Wallace J. Hopp, and Mark P.
Van Oyen (2002) have considered how to optimize work sharing between two adjacent workers, each of whom performs a fixed task in addition to their shared task(s); and Wallace J. Hopp and Mark P. Van Oyen (2004) have investigated how to assess and classify manufacturing and service operations to determine their suitability for the use of cross-trained (flexible) workers. Hooks and Cheramy (1994) and Wooton and Spruill (1994) have examined the increasing proportion of women among new public accounting recruits, who, according to a recent survey (AICPA 1999), comprise approximately one-half of all new hires. Kalyan Singhal (1992) has explored a noniterative algorithm for multiproduct production and workforce planning; and Joseph M. Milner and Edieal J. Pinker (2001) have developed mathematical models to describe the interaction between manufacturing firms and labor supply agencies when demand and supply are uncertain.
The Response of Various Real Estate Assets to Devaluation in Argentina
Dr. Ricardo Ulivi, California State University, Dominguez Hills, CA
Jose Rozados, Arch. Reporte Inmobiliario, Argentina
Germán Gomez Picasso, Arch. Reporte Inmobiliario, Argentina
This study reviews the effects that Argentina’s currency devaluation had on the prices of residential, industrial and office real estate in that country. The findings indicate that residential properties in the best locations kept their prices better, in dollar terms, than the other sub-markets, and that prices in the industrial and office markets were initially greatly affected by the devaluation, but their subsequent performance was mainly determined by what happened to economic growth as a result of the devaluation. In Argentina’s case, the devaluation led to strong economic growth—two years of over 8% growth—which in turn helped the industrial and office markets recover their market prices. Real estate prices, like those of any other assets, are affected by the socioeconomic environment in which they are set. Generally speaking, market prices are set by the interaction of supply and demand factors. But what happens to real estate prices when a sudden devaluation occurs? Some lessons can be learned from the Argentine experience of recent years. By the end of 2001 and the beginning of 2002, the Republic of Argentina was shaken by a very profound financial crisis that drastically impacted the overall financial system and resulted in a currency devaluation of nearly 70%. How did this crisis and devaluation affect the prices of real estate? Was the reaction of all real estate submarkets similar, or can we establish differential behavior characteristics particular to each type of real estate submarket? For example, was there any difference between the residential, industrial, and office markets? In an earlier paper, the authors concluded that the Argentine devaluation affected residential real estate prices as a function of location: the better the location, the smaller the fall in prices as a result of the devaluation. This paper analyzes residential, office and industrial values before the devaluation and their price behavior after the crisis.
We will try to explain how the market prices of each real estate submarket (“residential”, “office” and “industrial”) behaved as a result of the devaluation. During the 1990s the Republic of Argentina had a currency parity regime established by law. The so-called “Convertibility Law”, issued in 1991, established a fixed conversion of one peso into one dollar. This law was abolished in early 2002, and in its place a free exchange market was established. Even without the convertibility law, it was a well-established fact that, in the real estate arena, most real estate assets were historically quoted in US dollars.
Managing Marketing Standardization in a Global Context
Dr. Boris Sustar, University of Ljubljana, Slovenia
Rozana Sustar, GEA College, Slovenia
The article discusses the possibilities for standardizing the marketing programs of Slovenian firms. The study, using factor analysis, surveyed 298 exporting firms in Slovenia. The survey found that environmental factors, such as political and economic stability, significantly affected the possibilities for standardization, enabling firms to improve sales margins. The strategic control over distribution and promotion exercised by Slovenian managers was identified as a constraint on standardization. The study also found many country-specific variables constraining the degree to which standardization and its benefits could be pursued by Slovenian managers. The evolving world of international business is witnessing the emergence of additional players, including firms from the former Eastern bloc. These firms are playing a game of catch-up as they attempt to learn the intricacies of doing business in today’s global economy. The speed at which this process is occurring varies across nations. Firms in Slovenia, the Czech Republic and Hungary, for example, are rapidly acquiring the skills necessary to compete on the world stage. These firms have adopted both general approaches to marketing and targeted actions, which have been influenced by the local environment. This article discusses the possibility of standardizing marketing programs and the factors influencing the process of cost reduction, as they apply to the case of Slovenian firms. The literature in this area broadly examines the numerous variables that affect standardization. Both internal and external components impinge upon the decision to standardize the marketing program of product, price, distribution and promotion (Kreutzer, 1988). The magnitude of differences in local physical, economic, social, political and cultural environments is being eroded by the globalization of markets. As a result, there may be no differences between domestic and international marketing (Perry, 1990).
However, a standardized marketing program cannot be set once and for all. Matching firms’ resources with environmental requirements, anticipating changes in consumers’ needs, and forecasting competitors’ behaviour (Easton, 1988; Kogut, 1988) are critical business activities for developing effective standardized export marketing initiatives (Akhter and Laczniak, 1989). The literature concludes that the economic environment (Hooley et al., 1993; Huszagh et al., 1992; Sullivan and Bauerschmidt, 1988), the political environment (Kobrin, 1988) and the cultural environment (Jain, 1989) affect the standardization process. An objective of this study is to test the correlation between environment and standardization for emerging economies, in order to guide the management of Slovenian firms in pursuing efficient standardization. Based on the literature, we study how environmental factors result in standardization. Organizational characteristics, such as firm size, global marketing experience and the marketing strategies of management in exporting firms (Koh, 1991), compose the internal factors that create the conditions for global standardization.
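The factor-extraction step behind a survey study of this kind can be sketched in miniature. The questionnaire items, factor labels, and loadings below are invented for illustration and are not the study's actual instrument; the sketch simply shows factors recovered from a correlation matrix using the Kaiser (eigenvalue > 1) criterion.

```python
import numpy as np

# Hypothetical illustration: 298 firms answer 6 survey items driven by two
# latent environmental factors (say, "political stability" and "economic
# stability").  Factors are extracted from the items' correlation matrix.
rng = np.random.default_rng(1)
n_firms, n_items = 298, 6

factors = rng.normal(size=(n_firms, 2))            # latent factor scores
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])
items = factors @ loadings.T + 0.4 * rng.normal(size=(n_firms, n_items))

corr = np.corrcoef(items, rowvar=False)            # 6 x 6 item correlations
eigvals, eigvecs = np.linalg.eigh(corr)
retained = int((eigvals > 1.0).sum())              # Kaiser criterion
print(retained)  # recovers the two simulated factors
```

With loadings this clean and a sample of 298, the two simulated factors dominate the correlation structure, so the eigenvalue-greater-than-one rule retains exactly two factors.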
The Association Between Firm-Specific Characteristics and Disclosure: The Case of Saudi Arabia
Khalid Alsaeed, CPA, CMA, CFM, CI, Institute of Public Administration, Riyadh, Saudi Arabia
The main thrust of this paper is to examine the effect of firm-specific characteristics on the extent of voluntary disclosure by a sample of non-financial Saudi firms listed on the Saudi Stock Exchange for the 2002-2003 period. The variables investigated were as follows: firm size; debt; ownership dispersion; age; profit margin; return on equity; liquidity; audit firm size; and industry type. The association between the level of disclosure and the various firm characteristics was examined using multiple linear regression analysis. It was hypothesized that, for the sample firms, the level of disclosure would be positively associated with these firm characteristics. It was found that firm size was significantly positively associated with the level of disclosure. The remaining variables, however, were found to be insignificant in explaining disclosure. The purpose of this paper is twofold. The first is to report on the extent of voluntary financial and non-financial disclosure by a set of non-financial Saudi publicly held corporations, and the second is to empirically investigate the hypothesized influence of several firm characteristics on the extent of disclosure. The paper contributes to the growing literature on the determinants of corporate disclosure level. Two reasons underscore the importance of this study. First, voluntary disclosure, information in excess of mandatory disclosure, has received growing attention in recent accounting studies. Because compulsory information is inadequate, voluntary disclosure provides investors with the information necessary to make more informed decisions. Thus, this study attempts to assess the quality of voluntary disclosure reported by non-financial firms listed on the SSM, especially in annual reports, which are the chief vehicle firms use to convey information to investors.
Second, this study provides insight into how the effects of certain firm-specific characteristics, namely structural, performance, and market variables, may hold up in other international financial reporting and regulatory jurisdictions. The results of the analysis are expected to help explain the variation in current and prospective disclosure extent in light of the aforementioned firm-specific characteristics.
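The regression design described above can be sketched as follows. The data are simulated, not the Saudi sample, and the variable names and coefficients are assumptions for illustration only; the simulation is rigged so that, as in the paper's finding, only firm size truly drives disclosure.

```python
import numpy as np

# Hypothetical sketch of the paper's design: regress a voluntary disclosure
# index on firm-specific characteristics by ordinary least squares.
rng = np.random.default_rng(0)
n = 100

firm_size = rng.normal(10, 2, n)         # e.g. log of total assets (assumed)
debt = rng.uniform(0, 1, n)              # leverage ratio (assumed)
profit_margin = rng.normal(0.1, 0.05, n)

# Simulated disclosure index that truly depends on firm size only.
disclosure = 0.03 * firm_size + rng.normal(0, 0.05, n)

# OLS via least squares: column of ones gives the intercept.
X = np.column_stack([np.ones(n), firm_size, debt, profit_margin])
beta, *_ = np.linalg.lstsq(X, disclosure, rcond=None)
print(beta)  # beta[1] should be close to the true size coefficient, 0.03
```

In a full replication one would also compute standard errors and t-statistics for each coefficient to judge significance, which is what distinguishes firm size from the insignificant regressors in the paper.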
A Scorecard on Intellectual Capital Performance in the Economy
Vernon P. Dorweiler, Michigan Technological University, Houghton, MI
Mehenna Yakhou, Georgia College & State University, Milledgeville, GA
The future of commerce is forecast to rely heavily on advances in Intellectual Capital (Caddy, 2000). This research seeks to establish (1) the accepted concepts of Intellectual Capital, (2) its strategic uses, and (3) its acceptance on a world basis. If the forecast is valid, then Intellectual Capital can be integrated into traditional accounting. The expectation is that once Intellectual Capital’s measurement issues have been resolved, its use will greatly expand (Roslender, 2004). This research examines the issues involved. In addition to individual businesses, a national program of China is also presented; an outline of that program is found in Exhibit 1. Intellectual Capital has come under scrutiny for its uses. A scorecard reports an excess of benefits over costs. Once an accounting viewpoint is established, Intellectual Capital will have an impact on the Income Statement and the Balance Sheet (Caddy, 2000). The Organization for Economic Cooperation and Development (OECD) conducted a survey of 1800 companies on their uses of Intellectual Capital (Shalkh, 2004): in organization (structure), in business relations (to customers and to stakeholders), and with employees (competence). Results of the survey showed (i) the extent to which companies have adopted Intellectual Capital, and (ii) how many companies have exerted effort to fit Intellectual Capital within traditional accounting and management reporting. The survey defined how Intellectual Capital should be measured, including Human Assets and the use of a Balanced Scorecard. The survey asked specifically whether respondents exhibited an entrepreneurial spirit; clearly this was to determine whether companies with an interest in Intellectual Capital would also have an open management style. Results showed that the European and Australian countries lead in positive response to the development and use of Intellectual Capital (Shalkh, 2004).
A remaining issue is whether Intellectual Capital can be measured (Abeysekera, 2003). Measurement of Intellectual Capital is defined as: the difference between market value and net book value; calculated intangible value; and valuation of knowledge capital. Abeysekera recognizes that these measures are not predictive, but recommends that ratios can be used for that purpose. Key ratios play a comprehensive role in valuing Intellectual Capital, and are presented below. Why are forecasts of significance?
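The simplest of the three measures cited above, Intellectual Capital proxied by the gap between market value and net book value, can be illustrated with hypothetical figures; the numbers below are invented, not drawn from the survey.

```python
# Hypothetical firm: Intellectual Capital as the market-to-book gap.
market_value = 500.0    # market capitalization (assumed figure)
net_book_value = 320.0  # balance-sheet equity (assumed figure)

intellectual_capital = market_value - net_book_value  # unrecorded intangibles
market_to_book = market_value / net_book_value        # companion ratio

print(intellectual_capital, market_to_book)  # 180.0 1.5625
```

The same gap underlies the ratio form: a market-to-book ratio well above 1 is read, under this measure, as evidence of intangible value the balance sheet does not record.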
Enhancing Education Through Technology?
Dr. H. W. Lee, National Chia-Yi University, Taiwan
The purpose of this paper is to synthesize empirical research articles in which video and other multimedia tools are used in the subject areas of science and mathematics in teacher education. First, I will introduce what research in the field of instructional technology is and what kinds of research issues have been most frequently addressed in this field. Second, two quantitative and two qualitative research articles related to the topic will be examined and discussed in terms of their reliability and validity, using the standards of research proposed by Reeve (1995), including whether the research is “pseudoscience” (Reeve, 1995), whether it keeps its norms, and whether it upholds its social responsibility. In addition, I will discuss how and what video and other multimedia tools have been used in each of these research articles. Furthermore, some implications will be proposed for the future use of video and other multimedia tools in the subject areas of science and mathematics in teacher education. Video is one of the most popular instructional media in public schools, and in combination with other multimedia tools it has become increasingly prevalent in our schools. However, the number of videos and other multimedia tools in schools does not by itself promise success; how effectively teachers use these tools indicates the quality and success of our education. Some research (Ferguson, Holcombe, Scrivner, Splawn, & Blake, 1997; Holmes & Wenrich, 1997; Kumar & Sherwood, 1997; Viglietta, 1992; Whitworth, 1999) explores different issues and aspects of video use in teacher education for teachers of the specific subjects of science and mathematics in public schools. These empirical studies also present the possibilities of combining other technologies with the use of video in the classroom, and address some difficulties and limitations of video use for both teacher education and classroom settings in the public schools.
However, some of these studies need to be further investigated in terms of their validity and reliability before the suggestions elicited from them are adopted in real classroom settings. Thus, I will use the criteria proposed by Reeve (1995) to examine whether some of these studies are “pseudoscience”, meaning that they lack norms, standards, and social responsibility. Furthermore, I will discuss the suggestions made in these studies and draw some implications for the use of video in both teacher education and public school classrooms. After reading research articles in the field of instructional technology, I found that some studies do not meet the norms and standards that Reeve (1995) has proposed, and also may not be socially responsible. Thus, before discussing the content of these empirical studies, I will discuss the methods of research and some mistakes made in these research designs. First, I will categorize the main research methods in the field of instructional technology.
An Examination of Dynamic Panel Models Using Monte Carlo Method
Dr. Junesuh Yi, Information and Communications University (ICU), Daejeon, Korea
This paper investigates the most appropriate parameter estimator among ten recent prominent estimators for dynamic panel data by using Monte Carlo experiments. To identify the most appropriate estimator, the bias and the RMSE (root mean square error) are calculated under various parameter values. This study finds that Alonso-Borrego and Arellano’s systematically normalized generalized method of moments (SNM) and limited information maximum likelihood (LIML) estimators show the least bias and dispersion. They turn out to outperform all other estimators, so SNM and LIML are judged the most appropriate estimators. Panel data sets, which are cross-sections of samples observed at several moments in time, have recently become common in econometrics and finance. Most empirical data used to analyze phenomena over time consist of cross-sectional time series, such as gross national product (GNP) per capita for several countries over a certain period, or the performance of technology stocks during the turbulent markets of the 1990s. Panel data sets therefore allow a researcher to analyze a number of important economic or finance questions that cannot be addressed using conventional cross-sectional or time-series data sets. Initiated by Anderson and Hsiao (1981), studies of dynamic panel data analysis, which assume the lagged value of the dependent variable is one of the explanatory variables, have formed the mainstream in this area. Papers on dynamic panel data have proceeded by changing assumptions with respect to the additional moment conditions in order to find more efficient and consistent estimators. However, they have not reached agreement on the most appropriate additional moment conditions to impose in a generalized method of moments (GMM) framework.
In this paper, I examine which estimator is the most appropriate among ten prominent models by using Monte Carlo experiments, which provide a standardized method for applying all estimators. Most of the methods for estimation and hypothesis testing discussed in econometrics, including those for dynamic panel data, are typically based on large-sample asymptotics. Therefore, if such dynamic models are estimated from small samples, the standard asymptotic approximations may be very poor. Unfortunately, since the time dimension (T) is often very short in practice, it is relatively difficult to interpret the results with statistical confidence on asymptotic grounds. One method of addressing this problem is the Monte Carlo experiment. The Monte Carlo method is used in many disciplines to refer to procedures in which quantities of interest are approximated by generating many random realizations of some stochastic process and averaging them in some way.
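A minimal Monte Carlo experiment in the spirit described above can be sketched as follows. The design (an AR(1) process with coefficient 0.5, T = 20, 2000 replications, OLS estimation) is an assumed illustration of the bias/RMSE bookkeeping, not the paper's ten-estimator study.

```python
import numpy as np

# Simulate y_t = rho*y_{t-1} + e_t with a short sample, estimate rho by OLS
# in each replication, then summarize the estimator by bias and RMSE.
rng = np.random.default_rng(42)
rho, T, reps = 0.5, 20, 2000

estimates = np.empty(reps)
for r in range(reps):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.normal()
    # OLS of y_t on y_{t-1} (no intercept)
    estimates[r] = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

bias = estimates.mean() - rho                       # average estimation error
rmse = np.sqrt(np.mean((estimates - rho) ** 2))     # bias and dispersion combined
print(bias, rmse)  # OLS is biased downward in short samples
```

Running the same loop for each candidate estimator and comparing the resulting bias and RMSE values is exactly the comparison framework the paper applies to its ten dynamic panel estimators.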
Rational Conduct, Fairness, and Reciprocity in Economic Transaction Processes
Prof. Dr. Josef Neuert, University of Applied Sciences Fulda, Germany
Jeanne Butin, University of Applied Sciences Fulda, Germany
Anne-Sopie Farfelan, University of Applied Sciences Fulda, Germany
Petr Kolar, University of Applied Sciences Fulda, Germany
Thilo Redlich, University of Applied Sciences Fulda, Germany
How would the ordinary “man on the street” describe the famous “Homo Oeconomicus”? Finding a truly right and all-encompassing answer is probably almost impossible. Nonetheless, this concept has been of great interest for years, decades or even centuries, since the beginnings of modern economic theory with Adam Smith in the 18th century. The term “Homo Oeconomicus” mirrors an interdisciplinary idea, having tremendous influence on social, political as well as economic science. The following project is intended to identify and evaluate the significance of the concept in reality – in other words, to examine whether humans are really that rationally oriented when making decisions. Moreover, the claim that individuals are solely focused on profit maximization provides the second focus researched in this paper. In the end, the final conclusion aims at either supporting or rejecting the Homo Oeconomicus as a valid theory. In the latter case, the more or less opposing approach to human behaviour – the theory of reciprocity – is deemed more useful. Of course, the evaluation of a theory requires thorough research. Therefore, the theoretical basis of both models is given in the first part of the paper. Afterwards, the results of the conducted survey are presented. The final conclusion is then drawn by relating the theoretical fundamentals to the outcomes of the practical study performed. Generally, the project and its findings are intended to support the further development of the model of individual behaviour in its entire scope, as it is one of the most important concepts in social sciences, providing the ground for major advances in this field.
Having clarified the goals and intentions of this project, this chapter introduces the Homo Oeconomicus as a model of individual behaviour that, on the one hand, creates the basis for modern economic theory and, on the other, is also applied in other social sciences that perceive human behaviour as a rational choice among available alternatives. Within this context, the single human being is placed at the center of the analysis, facing a situation of scarcity – not all preferences or needs can be satisfied at once – and thus requiring a decision for one out of several available alternatives. Consequently, an interesting question arises: Is the individual socially (altruistically) or egoistically motivated when making decisions within this framework? The description is primarily based on the book “Homo Oeconomicus” by Kirchgässner, since it offers a comprehensive overview of the whole topic.
Copyright: All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, including photocopying and recording, or by any information storage and retrieval system, without the written permission of the journal. You are hereby notified that any disclosure, copying, distribution or use of any information (text, pictures, tables, etc.) from this web site or any other linked web pages is strictly prohibited. Request permission / Purchase article(s): email@example.com
Copyright 2000-2020. AABJ. All Rights Reserved