Copyright 2000-2017. All Rights Reserved
Uncorrelated Emerging Equity Markets
Dr. Tulin Sener, State University of New York-New Paltz, NY
Most emerging markets have recently exhibited high returns with high volatility and low correlations with the World Index. They have high potential to improve the return and risk performance of global equity portfolios. In global investing, geographical factors remain dominant for emerging markets. This study shows that, at the outset of regional analysis, it is a good approach to find uncorrelated emerging market classes. Consistently, the uncorrelated emerging market classes justify the regional effects. Further, this approach contributes significantly to the regional analysis in several ways. The purpose of this study is to develop a strategy that might help global portfolio managers identify uncorrelated country or region classes, as well as to evaluate the contributions of emerging market diversification to global equity investing. It is widely known that the diversification benefits of global equity investing vary over time and across countries and are declining through time. For the global equity investors and managers of the new century, the controversial question is whether diversification benefits will continue. What is the relative importance of emerging markets for global diversification? Are geographical effects still important for global investing? Global diversification benefits are closely associated with the degree of world market integration and efficiency. The more segmented the markets are, the better the risk reduction and return enhancement benefits of global equity diversification will be. In a fully efficient and well-integrated global financial market, investing in the world market portfolio (i.e., the Global Equity Index) would be the optimal passive strategy for a global investor. However, recent literature indicates that global equity markets are neither entirely synchronized nor fully efficient, but are integrating at a gradual pace, mostly in developed markets.
Hence, the benefits of active and efficient global equity diversification continue (Shao et al. 2002, Cooley et al. 2003, and Stulz 2005). Geographical effects, compared to industry and style effects, are still important when selecting global investment strategies (Lin et al. 2004, Puchkov 2005). In a comprehensive study, Campa and Fernandes (2006) find that geographical factors have remained stable over time. Industry factors increased significantly during the last decade, but have dropped again since 2000. Further, Vardharaj and Fabozzi (2007) state that allocation policy among regions explains about one-fourth of monthly or quarterly return variation for international funds. Similarly, Sener and Cinar (2002), Dijk and Keizer (2004), and Brooks and Del Negro (2005) emphasize the significance of regional effects. Here, I raise a further question. Do uncorrelated country classes represent regional blocks, or combinations of countries that may come from entirely different regions? This issue is the focus of my study. The study is limited to emerging markets, which have time-variant popularity for global investing. Investing in emerging markets has made a significant revival recently, and the relative importance of these markets for global equity portfolio diversification has been re-emphasized. Goetzmann et al. (2005) indicate that approximately half of the benefits of global diversification come from the increased number of world markets and the other half stems from lower correlations among the available markets. They conclude that globalization expands investment opportunities and that diversification benefits rely more on investing in emerging markets. Indeed, stocks from these emerging markets returned an enormous 55 percent in 2003 and 25 percent in 2004, and returns continue to be very high.
In this paper, as a complementary alternative to regional analysis, I attempt to create emerging market classes that are uncorrelated with each other and can be combined with developed markets. These classes can give international investors and portfolio managers a better basis for designing global diversification strategies. The organization of the paper is as follows: Following the introduction, Section 2 explains the pros and cons of global diversification with emerging markets. Section 3 describes the methodology used, while Section 4 reports the sources of data. Section 5 explains the empirical evidence for the uncorrelated emerging markets. Finally, Section 6 summarizes the findings and draws conclusions. Low international correlations across countries are the cornerstone of efficient global diversification. However, international correlations have recently trended upward, particularly among developed markets. The correlations of developed markets with the World Index have risen to the 60-95 percent range over the past decade. For investors who include emerging markets in global portfolios, the lure of emerging markets has always been the promise of higher returns and lower correlations with developed markets. These correlations tend to increase in periods of crises, and emerging markets are subject to cyclical fluctuations. On the other hand, an emerging market boom or crisis does not necessarily affect the economic cycle of developed markets, indicating the rather low correlation between emerging and developed markets (0 to 60 percent for the past decade). Emerging markets have not been viewed as efficient and have been found somewhat segmented from the major international markets.
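As a rough illustration of why the low correlations discussed above matter, the standard two-asset volatility formula shows how portfolio risk falls as the correlation between a developed-market index and an emerging-market index declines. The weights and volatilities below are purely hypothetical figures chosen for illustration, not data from this study:

```python
import math

def portfolio_volatility(w1, sigma1, sigma2, rho):
    """Volatility of a two-asset portfolio with weights w1 and (1 - w1)."""
    w2 = 1.0 - w1
    var = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 \
        + 2 * w1 * w2 * sigma1 * sigma2 * rho
    return math.sqrt(var)

# Hypothetical figures: a developed-market index with 15% volatility and an
# emerging-market index with 30% volatility, mixed 80/20.
for rho in (0.9, 0.6, 0.0):
    vol = portfolio_volatility(0.8, 0.15, 0.30, rho)
    print(f"correlation {rho:.1f} -> portfolio volatility {vol:.1%}")
```

With zero correlation, the 80/20 mix is actually less volatile than the developed-market index held alone, which is the diversification effect the paper exploits.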
Lean Operations Management: Identifying and Bridging the Gap between Theory and Practice
Dr. Daniel L. Tracy, University of South Dakota, Vermillion, SD
Dr. John E. Knight, University of Tennessee at Martin, Martin, TN
The lean management philosophy is at the forefront of advances in the practice of operations management today. Lean principles, meanwhile, are built upon solid academic operations management theory. Both academicians and practitioners can benefit more fully from understanding both the practice of lean management and the theory of operations management by combining them into an integrated management philosophy. This paper examines the gap between operations academics and managers in the marketplace, the problems caused by that gap, and the potential benefits that could be realized by bridging it. In addition, effective avenues for bridging the gap are suggested for marketplace managers, faculty, authors, and publishers. Lean management is philosophically grounded in traditional operations management theory and models, and it has subsequently been applied in today's marketplace by non-academic practitioners in virtually every industry. Although much of the progress in lean management has come from the manufacturing sector, explosive growth in lean management is now occurring in the service sector as well. The driving force behind lean management is continuous process improvement through the elimination or reduction of non-value-added waste. "The driving force for waste elimination is improved customer value and increased profitability in the products and services offered by an organization." (Burton, 2003, p. 99-100) Many different types of waste either originate or accumulate in any system, with some waste more obvious than others. Table 1 illustrates eight different categories of non-value-added waste. (Burton, 2003, p. 99-100) The culture of firms employing the lean management philosophy is built on five key principles identified by Womack and Jones in their benchmark book, Lean Thinking: Banish Waste and Create Wealth in Your Corporation. The five principles are found in Table 2.
(Womack, 1996) These five strategic principles are pursued through the effective use of the tools, techniques, and methods applied in lean operations. “After a decade of conversions by all types of industries to lean manufacturing, the benefits being realized by these companies are indisputable.” (Hobbs, 2004, p. 43) Typical lean implementation results are summarized in Burton (2004, p.104). If the application of lean strategy, tools, and methodologies results in these kinds of bottom line benefits (and others), and lean management is based on traditional operations management theory, it is curious to think that academics and marketplace managers are not working in concert with one another on a regular basis. Unfortunately, they are not working together with any kind of regularity and the result is educational waste! The study of operations management, in most academic settings, is centered on identifying and understanding processes. A great deal of theory and modeling has developed to provide mechanisms for analysis and decisions regarding processes and their management. From an academic perspective, these are the basic building blocks for process analysis and improvement. Most traditional operations management course textbooks and syllabi contain fundamental topical coverage including, but not limited to the items in Table 3. Lean management is heralded by marketplace leaders in virtually every industry today. The lean management philosophy is centered on the elimination of various forms of process waste. While some characterize lean as a management fad that will pass, marketplace managers and academics who are using/teaching lean management realize that it is a sustaining and refining philosophy built upon traditional industrial engineering and operations management concepts that can be traced back to Taylor and Gilbreth. The philosophy will continue to be an integrated element of operations management even as new and additional concepts or philosophies emerge. 
Eliminating waste is synonymous with productivity gains, which are often at the heart of operations management theory and practice. The 'bottom line' successes of lean firms are frequently and convincingly documented in professional publications and the media. With its roots in JIT and TQM, the lean management philosophy has developed through the integration of many concepts and techniques from the past and present. Basic philosophical process principles guide lean efforts, while the details include tools and concepts similar to those in Table 4. (A to Z, 2005) These tools/concepts are essential in today's competitive marketplace. Firms that embrace lean management are typically leaders in their industries. Failure to recognize the importance of lean management will result in lower profitability, lower market share, and diminished competitiveness. "Comfort, loyalty, and reliance on the old models of production to meet the increasing needs of the Internet-age customer will separate the marginal companies from the stars of their industry." (Hobbs, 2004, p. 43) Success in operations management is limited when managers are not exposed to both skill/knowledge sets: lean management and operations theory. According to an interview (The Lion, 2005) with James Womack, a lean leader and author, "we begin with the premise that all value in life is the result of a process." In business, whether manufacturing or service oriented, the goal is to make money. In today's marketplace, when customers demand only what they are willing to pay for, making money requires a greater understanding of processes and the waste within them. Managers who do not possess knowledge of operations management theory intertwined with a lean philosophy may be less compelling in the eyes of the customer in a fiercely competitive marketplace. Thus, the gap between theory and practice may be a critical factor governing the success of firms in every industry.
Banking Industry in Spain: Trends in Securitization of Loan Portfolios
Dr. Phani Tej Adidam, University of Nebraska at Omaha, Omaha, NE
In the current booming Spanish economy, banks are enjoying tremendous growth and profitability. Not only is the housing market growing exponentially, but small and medium businesses are also seeking larger loans for their working capital needs. With such ever-increasing demand for loans, banks are looking for innovative methods of generating liquidity, not only to offer loans to their current customers but also to expand into more lucrative markets along the Mediterranean coast. Increasingly, banks in Spain are securitizing their loan portfolios. This paper investigates these trends and compares them to the banking industry in the European Union. Spain boasts one of the fastest growing economies in the European Union, and economic growth has remained steadily above the Eurozone average over the past decade. A construction and housing boom contributed to a decline in the unemployment rate and a rise in household wealth, creating overall positive economic conditions and tremendous growth opportunities for financial institutions. Demographic statistics indicate that 40.4% of the 44.7 million inhabitants in Spain are between the ages of 20 and 44. With a large young-adult population and a growing immigrant population, banks have a sizeable target audience for loan origination. These groups are prime candidates for loans as they enter the workforce, establish consumer credit, purchase vehicles, and take out mortgages. The savings/investment rate in Spain is 10.4%, which has led to a decline in deposit funding and an increased need for capital markets funding by financial institutions. While there is considerable debate over a projected recession versus a mild growth slowdown, most in-country experts tend toward the latter. They cite the cultural norm of pride of homeownership, immigration growth, and vacation homes along the Mediterranean coast as stabilizers for the mortgage industry.
The banking industry in Spain has experienced a period of extreme growth over the past decade, especially when compared to Europe as a whole. Between 2004 and 2006, Spain experienced a 16% growth rate in loans and a 13% growth rate in deposits. Compared with Europe's 8% growth rate in both categories, it is clear that Spain is the healthier market. Not only has the Spanish banking industry demonstrated healthy growth, but Spain is also one of the leaders in Europe in terms of profitability, and it has one of the lowest loan default rates. Given such robustness in the banking industry in Spain, how can banks maintain an optimal level of liquidity that would satisfy the demands of the European Central Bank as well as the Banco de Espana? Several banks are increasingly turning to the secondary loan market in order to securitize and/or sell their loan portfolios. Therefore, it is worth investigating the trends visible in Spain's secondary loan market. First, we provide a brief overview of the banking industry situation. There are four different types of financial institutions in Spain: Banks, Savings Banks (Cajas or Caixas), Credit Cooperatives, and Financial Credit Establishments. There are a few main differences between banks and savings banks in Spain. One of the major differences is the savings banks' requirement to invest a certain amount of net income into social programs or organizations (CECA, 2007). The percentage of income devoted to social works projects typically ranges from 15% to 35%. For this reason, savings banks are technically considered foundations. Because of their social orientation, ordinary capital increases for savings banks are not possible. Another difference between savings banks and banks in Spain is their ties to the local community.
Savings banks were originally created to help members of the community save or invest money while at the same time improving communities through social works. These savings banks originated in smaller towns and rural provinces to counteract limited access to the financial sector. Although the banking industry has changed dramatically, the same mission applies to savings banks today. Therefore, most savings banks are regional banks that do not have a country-wide presence in Spain. In addition, it is against regulation for savings banks to expand internationally (CECA, 2007). Credit Cooperatives and Financial Credit Establishments together make up a small percentage of the banking industry. Typically these are "quick credit" outlets and do not provide full banking services. These financial institutions do not play a big enough role in the Spanish banking industry to warrant further investigation. Therefore, the research included in this paper focuses on banks and savings banks. We can now turn to key target markets and product offerings that present banks with immense growth opportunities. In the past ten years, immigration (both documented and undocumented) has had a huge impact on Spain's economy. Naturally, immigration rates have a large impact on the banking industry as well. One of the major trends in banking right now is the focus on the immigrant population. Financial institutions in Spain are targeting immigrants by offering more extensive and less expensive money transfer services, special mortgages in the immigrant's home country, and even reassurance that a documented identity is not necessary for banking services. "Banks are competing fiercely for immigrants' custom, tailoring branches to cater to them or setting up special affiliates and staffing them with Polish, Romanian, Chinese, or Latin American employees.
They offer a slew of specialist financial products, such as instant credit and special mortgages as well as non-financial products like repatriation insurance, legal help-lines, travel agents and courier services” (Burnett, 2006). Immigration is not the only consumer trend targeted by banks.
Investment Decision Making in an Entrepreneurial Firm: An Application
Dr. Prakash L Dheeriya, California State University, Dominguez Hills, Carson, CA
This paper is a case study of a capital budgeting decision made in a small manufacturing plant in Torrance, California. The firm manufactures a generic type of laser cartridge for use in monochromatic laser printers and has a distribution network all over the United States. The firm is considering the purchase of automation equipment that will increase its production three-fold; it will use the excess production to serve the Latin American markets, including Mexico. It has been in operation for over four years and has experienced substantial growth as prices of laser printers have fallen. The data presented in this case study are from the 2006-2007 fiscal year and are used to show the benefits of implementing good capital budgeting techniques in a small, rapidly growing entrepreneurial firm. This paper discusses the current capital budgeting process, problems, and proposed solutions in the framework of entrepreneurial finance. Small entrepreneurial firms are typically run by single owners who may lack the financial expertise to evaluate investment proposals. They may rely on their personal accountants, tax advisors, and bankers to provide key input in the capital budgeting process. Many studies have evaluated the capital budgeting methods used in small firms. Pereiro (2002) looked at Argentine firms and found that (a) discounted cash flow techniques like NPV, IRR, and payback are very popular among corporations and financial advisors; (b) the CAPM is the most popular asset pricing model; (c) cross-border adjustments to U.S. multiples and betas are rarely used by corporations; and (d) corporations tend to disregard firm-related unsystematic risks, like small size and illiquidity. Results of mainstream capital budgeting research typically apply to large firms. The smallness of entrepreneurial firms creates unique problems in the application of traditional capital budgeting principles.
For instance, Banz (1981), Chan, Chen & Hsieh (1985), and Fama & French (1992) have documented firm-related unsystematic risk to be a key issue for very small firms. Small firms are also more prone to violent changes in profitability when faced with economic downturns than large firms, which tend to be diversified and to have better access to financial resources. This 'small size' effect culminates in a higher discount rate when evaluating capital investment proposals. Some studies have found this effect to be substantial (e.g., Pratt (2001)), whereas others have not found it to be significant (Amihud, Christensen and Mendelson, and Black, cited in Jagannathan & McGrattan, 1995). Pereiro (2002) finds that none of the firms in his sample made any adjustment for their small size in the discount rate used in capital budgeting. Hall & Weiss (1967) noted that large firms have all the options of a small firm and can also undertake large-scale projects from which small firms are excluded. In a study of firms using sophisticated capital budgeting practices, Verbeeten (2006) found that large firms tend to use them, as opposed to small firms. Earlier research by Williams and Seaman (2001) and Rogers (1995) had reported similar findings. Large firms seem to have more at stake and are more likely to have the resources available to use sophisticated capital budgeting practices (Chenhall and Langfield-Smith, 1998; Ho and Pike, 1998). Other empirical research has also indicated that size has an impact on capital budgeting practices (Farragher et al., 2001; Ho and Pike, 1996; Klammer et al., 1991). Stein (2002) found that the decentralized approach to capital budgeting often found in smaller firms works best when information about a project is "soft" and cannot be credibly transmitted within the organization.
A behavioral study by Boot et al. (2005) finds that a junior analyst's behavior in capital budgeting is more a reflection of his assessment of how his boss views him than an analysis of the project itself. In other words, the analyst tends to agree with his boss's prior beliefs about the acceptability of a project rather than produce a fully objective analysis. This problem is particularly acute in small, entrepreneurial firms, where the chain of command between the decision-making authority and the analyst is very short. Another study found that smaller firms tend to have less complex, more interpersonal evaluation systems. Marino and Matsusaka (2002) state that the choice of a particular decision-making process in a firm is a function of the agency problem, the quality of information, and project risk. Holmen & Pramborg (2006) found that larger firms are more likely than smaller firms to use the NPV method or the IRR method when evaluating foreign direct investments. They also found that larger firms and public firms are more likely to use real option methods when evaluating FDIs. They defined a firm as small if its sales were less than $100 million. This paper is organized as follows. The following section discusses prior research on the investment decision process. This is followed by a case study of an entrepreneurial firm making a capital equipment decision. Results of the numerical analysis are provided thereafter, and the paper concludes with general comments. Palliam (2005) and Reeb et al. (1998) argue that total risk is more important than unsystematic risk in the computation of a small business's cost of equity. In a survey of small and large firms in the U.S. forest products industry, Bailes et al. (1979) note that smaller firms are less likely to use any risk measurement techniques.
In a later study of the same industry, Hogaboom and Shook (2004) find that discounted cash flow techniques are the most preferred capital budgeting decision criteria used in the industry. Some larger firms have started using more sophisticated evaluation methods, such as economic value analysis. These results have been corroborated by Keasey and Watson (1993) and McMahon et al. (1993). It has been widely established in the academic literature that small firms are often remiss in using appropriate techniques when evaluating projects.
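The discounted cash flow criteria cited throughout this literature are straightforward to sketch. The cash flows below are hypothetical illustration figures, not the case firm's actual data; the sketch simply shows how NPV and IRR would be computed for a project like the automation purchase discussed in the case:

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial (usually negative) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal rate of return by bisection.

    Assumes a conventional project (one sign change), so NPV is
    decreasing in the discount rate over the bracket.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # NPV still positive: the root lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical automation project: $300,000 outlay, five annual inflows.
flows = [-300_000, 90_000, 95_000, 100_000, 100_000, 80_000]
print(f"NPV at 12%: {npv(0.12, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
```

The decision rule is the textbook one: accept when NPV is positive at the firm's discount rate, or equivalently when the IRR exceeds that rate; the small-size effect discussed above argues for raising the discount rate used in the comparison.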
“The Relationship Between a College’s Success in Sports to Applications, Enrollments and SAT Scores”
Dr. Chaim Ehrman, Loyola University of Chicago, IL
Dr. Allen Marber, Fordham University, NY
The relationship between a college’s success in sports and its applications, enrollment, and SAT scores can be viewed as a research study seeking to identify several trends. If a college is successful in sports, especially football and basketball, is that success reflected in more students applying to that particular institution? In addition, what is the impact of success on the number of applications? How will this increase in applications affect enrollments? Will the college’s policy toward enrollments change? Finally, how will a successful sports program, which generates a larger applicant pool, affect the average SAT scores of the incoming freshman class? There is a strong, enthusiastic feeling among college students and alumni regarding varsity sports. Consider a quote from Bobby Knight, a famous and well-known coach. He said that if the dean were to ask him to raise $50,000 for the university library, it would take him at least three months to raise the money, and he was not sure he would meet his quota. If, however, the dean were to ask him to raise $50,000 for a new sports arena for the varsity team, he would have the funds ready by the next day! A few examples highlight this relationship between success in sports and applications and enrollment. Doug Flutie was a football “superstar” on the field, and his performance had a direct impact on applications and enrollment at Boston College. Similarly, Pat Ewing was a basketball superstar, and his performance affected applications to his college, Georgetown University. If the number of enrollments at the college is fixed, the quality of those enrolled can increase, i.e., a higher average SAT score for incoming students. There is also a financial bonus for the college. When the team does well, it is known that the coach donates substantially to the college.
Sports as the driving force that elevated a college to new heights can be observed before World War II as well as in the post-war years of the late 1940s and 1950s, which pre-date the start of the research study. Two schools are prime examples of this phenomenon: Notre Dame in the pre-war years and Michigan State in the post-war years. If not for its success in football, Notre Dame might still be a small, relatively unknown, parochial college. Two elements are responsible for the astonishing growth of M.S.U. in the post-war years, which elevated that institution into the Big Ten conference and into a university of first-rank status: John Hannah's leadership and his use of sports as the vehicle to attain those goals. What do we mean by success? What researchers generally define as “success in sports” is participating in and winning major events in collegiate football and/or basketball, such as bowl games, NCAA and NIT tournaments, reaching the regional final or the final four of the NCAA, and placing in the final football and basketball polls conducted by the various national press services. The study finds that success in sports, for the most part, generates an increase in applications activity right away. Basketball can have an impact on applications for the coming academic year, since it is played in the winter and affects applications for the fall of the same year. However, since some schools cut off applications prior to the NCAA tournament in March, their success can have an effect on applications even a year and a half later. Football, being played in the fall, will have an impact on applications for the following school year. So we see that success in either sport will result in an immediate increase in applications activity for admission to the fall semester, and very possibly for the next fall as well. Actual enrollment tends to lag applications by a year or two.
In smaller, private colleges, there is less of a lag because they resemble an entrepreneurial business and can increase admissions along with an increase in applications. In other words, as demand increases as a result of success in sports, the smaller or private colleges can increase supply (enrollment) almost instantaneously. This is not the case with a large state university, whose enrollment is usually set by the state legislature. Such an institution will not increase enrollment, even though application activity has increased, until the state legislature raises enrollment on a per capita basis. This process generally takes time, so only continued success will generate an increase in enrollment activity. When a school was successful in sports (football and basketball), applications activity increased almost without exception, and in most cases the increase was considerable. As to the question of enrollment, basically the same point can be made: when a school flourished in sports, enrollment increased in a great majority of the cases. Regarding the third major component of the study, SAT scores, do they likewise increase with collegiate success in sports? The answer is yes in the majority of the colleges studied. There is no doubt that success in sports can be used as a catalyst to improve the academic quality of the student body. With a larger pool of students to draw from, admissions personnel can select from the higher-SAT applicants while still staying within the projected enrollment targets. Average SAT scores for the incoming freshman class will be affected positively by continued success in sports, which will generate an even larger applicant pool from which the college can select its next freshman class. As a result, increased applications can be likened to an increase in demand for a firm's product.
Now the administration has a much larger applicant pool from which to draw prospective students and thus can enroll the same number of students while selecting those with higher SAT scores. This is achieved by an administration policy that increases enrollment by only a small percentage of the increase in the applicant pool. Continued success in sports will generate a substantial increase in the scores of the incoming freshman class. On the other hand, if administration policy is to increase enrollment activity, then enrollment will rise considerably, while SAT scores will only trend upward with minimal increase. In addition to the major study seeking a relationship between a college's success in sports and its applications, enrollment, and SAT scores, two other areas were of interest to the researchers: one regarding a newly formed basketball conference, and the other involving four small Georgia colleges. The Big East Basketball Conference was formed in 1980 and was made up of nine eastern colleges. What made that conference unique and worth studying was that it received television coverage from the outset. The researchers investigated whether applications, enrollment, and SAT scores benefited from the nine schools' involvement in the Big East, and further, whether these measures reached an even higher plateau as a result of each school's individual successes, such as winning a bowl game or doing well in the NCAA tournament. The results of our research on the Big East again confirm the larger study. Applications, enrollment, and SAT scores increased among the nine colleges when they joined the conference. Further analysis showed that when a member school received national publicity for its athletic achievements, applications, enrollments, and SAT scores reached an even higher plateau, one never reached before, breaking all records up to that time.
The statistical analysis (Table 2-B) shows, at a confidence level in excess of 99.9%, that applications, enrollments, and SAT scores are indeed associated with the number of successes of the varsity team. In the second side study, four small Georgia colleges were analyzed. All four began football at the same time in the early 1980s.
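A significance claim of this kind typically rests on a correlation test. As a minimal sketch (using invented figures, not the data behind Table 2-B), the Pearson correlation between team success and applications can be computed and converted to a t statistic, which is then compared against the critical value for the desired confidence level:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t statistic for testing H0: no association (n - 2 degrees of freedom)."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# Hypothetical data: varsity-team wins vs. applications received (thousands).
wins = [2, 4, 5, 7, 9, 10]
apps = [10.2, 10.9, 11.1, 12.4, 13.0, 13.8]
r = pearson_r(wins, apps)
print(f"r = {r:.3f}, t = {t_statistic(r, len(wins)):.2f}")
```

A t statistic exceeding the 99.9% critical value for n - 2 degrees of freedom would support an association claim like the one reported for Table 2-B.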
Comparison of Existing and New Tax Burden and Tax Effort Measures: Evidence from the Southeastern United States
Dr. Sanela Porča, University of South Carolina Aiken
Dr. David S. Harrison, University of South Carolina Aiken
Economists and policy analysts frequently use the concept of tax burden to compare the effect of state and local tax policies on residents’ economic well-being over time or between states. There are a number of methods to measure tax burden, and none can be considered perfect. Which measure to use depends on the question a researcher is trying to answer. One has to keep in mind that different measures of tax burden have strengths and weaknesses; therefore, the results should be carefully reported and interpreted for the readers. The current study compares the existing tax burden and tax effort measures and introduces a new, simplified Representative Tax System measure. Tax burden represents a loss of economic well-being that arises from state and local government taxation. Knowing how a proposed tax change can affect the state and local tax burden is essential for assessing changes in adequacy, efficiency, and equity. A satisfactory measure of tax burden would take into account all taxes imposed by state and local governments and would indicate how tax changes alter the economic well-being of residents and the profitability of business firms. When tax burdens are measured as a percentage of tax capacity, the resulting number is known as tax effort. How can a jurisdiction’s tax burden and tax effort be measured? How high are tax burdens in one jurisdiction compared to others? How big a tax burden does an average taxpayer bear? Answers come from one or more of the various methods of measuring tax burden.
Historically, tax burden measures have ranged from simple measures such as tax revenue per capita and the ratio of taxes to personal income, to more complex and “scientific” measures such as the Representative Tax System. The current study compares the existing tax burden and tax effort measures and introduces a new, simplified Representative Tax System measure. These measures are compared and contrasted using data from twelve Southeastern states. One of the most frequently used measures of tax burden is the tax revenue per capita measure. Figure 1 shows the levels and rankings of state and local tax revenue per capita for all 50 states. According to the tax revenue per capita measure, the state of New York has the highest state and local tax burden in the nation ($4,684) while the state of Alabama has the lowest ($2,185). In order to give a consistent comparison of different tax burden measures, the current study utilizes data from twelve Southeastern states, with a special focus on tax burden and tax effort in South Carolina. South Carolina’s total tax and non-tax state and local revenue for fiscal year 2002 was $25 billion, of which $5.3 billion was intergovernmental revenue, $15.9 billion was general revenue from own sources, and $3.7 billion was non-general revenue. Total state and local taxes in South Carolina for the same fiscal year were $9.8 billion, while charges and miscellaneous general revenue were $6.2 billion. Table 1 presents the tax revenue per capita measure for South Carolina’s major state and local tax revenue sources. Measured by tax revenue per capita, South Carolina’s tax burden is below the national average for most taxes. If South Carolina’s state and local taxes per capita are combined, the state ranks 45th in the nation.
South Carolina’s tax burden remains low even when the per capita measure is calculated for the five major state and/or local tax sources: property tax (35th), general retail sales tax (39th), selective sales taxes (47th), personal income tax (33rd), and corporate income tax (44th). One area in which South Carolina ranks higher than the national average is in intergovernmental transfers from the federal government to the state and local level. In intergovernmental aid, the state ranks 20th in the nation with revenue per capita of $1,329. Based on per capita state and local revenue from own sources, South Carolina ranks 39th in the nation. The conclusions drawn from the tax revenue per capita measure are often misleading, because the same tax revenue structure will result in a lower tax burden if a jurisdiction is densely populated. Therefore, the most frequently available and reported estimate of tax burden, tax revenue per capita, should be carefully analyzed given its strong correlation to population. The tax revenue per capita measure of the tax burden does not tell the whole story. Considering the tax burden relative to the size of the economy (rather than the size of the population) might be a more appropriate measure of the combined state and local tax burden. The most direct and most widely used measure of tax burden is a ratio of taxes to personal income. This measure implicitly adjusts for differences in per capita income from state to state. In this case any specific dollar amount of taxes is a smaller burden to someone who makes more money than to someone with a lower income. Figure 2 shows levels and rankings of state and local tax burdens measured as a percentage of personal income for all 50 states. When the tax burden is measured as a percentage of personal income, New York has the highest tax burden (13.0 percent) while Tennessee has the lowest (8.3 percent). 
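The three measures discussed here reduce to simple ratios. A minimal sketch in Python, assuming illustrative inputs: the $9.8 billion tax figure is the South Carolina total cited above, but the population, personal income, and tax capacity values below are hypothetical placeholders, not figures from the study.

```python
def tax_per_capita(tax_revenue, population):
    """Tax burden as revenue per resident."""
    return tax_revenue / population

def tax_to_income(tax_revenue, personal_income):
    """Tax burden as a percentage of personal income."""
    return 100.0 * tax_revenue / personal_income

def tax_effort(tax_revenue, tax_capacity):
    """Tax effort: actual collections as a percentage of tax capacity
    (capacity would come from a Representative Tax System estimate)."""
    return 100.0 * tax_revenue / tax_capacity

taxes = 9.8e9               # SC state and local taxes, FY2002 (from the text)
population = 4.0e6          # illustrative only
personal_income = 100.0e9   # illustrative only
capacity = 11.0e9           # illustrative only

per_capita = tax_per_capita(taxes, population)      # dollars per resident
pct_income = tax_to_income(taxes, personal_income)  # percent of income
effort = tax_effort(taxes, capacity)                # percent of capacity
```

The per capita and percentage-of-income numbers can rank the same state very differently, which is the point the comparison above makes.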
Once again, the current study’s focus will be on South Carolina’s tax burden for fiscal year 2002. Table 2 presents the components of South Carolina’s tax burden when measured as a percentage of personal income.
CEO Annual Bonus Plans: Do Performance Standards Influence the Association between Pay and Performance?
Aamer Sheikh, Ph.D., Quinnipiac University, Hamden, CT
This paper provides evidence on the effect of internal versus external performance standards on the association between CEO annual bonuses and firm performance. It is the first paper to provide large-sample evidence on whether the type of performance standard influences the association between CEO annual bonus awards and firm performance. Using a sample of 754 firms for the years 1992-2005, I find that the bonuses of CEOs at internal-standards firms have a lower sensitivity to accounting earnings than those of CEOs at external-standards firms. Bonus sensitivity to firm stock returns does not differ between internal- and external-standards firms. These results are robust to the inclusion of various control variables and to alternative specifications of the regression model. Virtually every for-profit corporation uses annual bonuses to reward its Chief Executive Officer (CEO). In light of the recent corporate scandals (Enron, WorldCom), which many attribute to the proliferation of stock options in CEO pay in the nineties, bonuses are likely to become an even more important component of CEO compensation (Leonhardt, 2002; The Conference Board, 2002). Tony Lee, editor-in-chief of CareerJournal.com, the Wall Street Journal’s executive career website, notes that “we’re seeing a rise in incentive plans that pay bonuses in place of stock options” (ERI/CareerJournal.com Executive Compensation Index, 2002). Murphy (2001) shows that the standards against which performance is measured play an important role in determining the amount of bonuses awarded to CEOs, in addition to the performance measures used by compensation committees in awarding those bonuses. These performance standards are especially important whenever bonus plan participants can influence the standard-setting process.
Murphy (2001) classifies standards as “internally determined” when standards are directly affected by managers’ actions in the current or prior year and as “externally determined” when they are less easily affected by the actions of managers. This paper examines whether the types of performance standards used by compensation committees influence the association between CEO bonus and firm performance. Using a sample of 754 firms for the years 1992-2005, I document that 86 percent of firms disclose that they use internal standards in their proxy statements. The remainder use external standards in their annual bonus plans for the CEO. This is consistent with Murphy (2001) who reports that 77 percent of his proprietary sample of 177 firms use internal standards. Next, I examine the association between CEO bonuses and firm performance given the presence of internal and external standards in CEO annual bonus plans. I find that CEOs of internal standards firms have a lower sensitivity to accounting earnings than CEOs of external standards firms. The sensitivity to firm stock returns does not differ between internal and external standards firms. In order to investigate whether different types of performance standards influence the association between a CEO’s bonus and firm performance, I examine proxy statements of all firms in the 1999 release of Standard and Poor’s ExecuComp database. Out of the S&P 1500 firms covered by ExecuComp, 937 firms disclose the type of bonus plan performance standard in use in their proxy statement (the remaining 563 firms (or 37.5 percent) do not disclose the performance standards used to award their CEO an annual bonus). I then classify these 937 firms according to categories that best describe the performance standard determination process in that company (following Murphy, 2001). 
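The sensitivity comparison described here is, in essence, a regression of CEO bonus on firm performance with the earnings term interacted with an internal-standards indicator. A minimal sketch on synthetic data (the variable names, coefficients, and simple specification below are illustrative assumptions, not the paper’s actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

earnings = rng.normal(size=n)      # accounting earnings (standardized)
returns = rng.normal(size=n)       # stock returns (standardized)
internal = rng.integers(0, 2, n)   # 1 = internal-standards firm

# Simulate bonuses so that internal-standards firms have a LOWER
# earnings sensitivity (1.0 - 0.4) but the same return sensitivity,
# mirroring the qualitative finding reported in the text.
bonus = (0.5 + 1.0 * earnings + 0.6 * returns
         - 0.4 * internal * earnings
         + rng.normal(scale=0.1, size=n))

# Design matrix: intercept, earnings, returns, internal indicator,
# and the internal x earnings interaction.
X = np.column_stack([np.ones(n), earnings, returns, internal,
                     internal * earnings])
coef, *_ = np.linalg.lstsq(X, bonus, rcond=None)

# coef[4] estimates the differential earnings sensitivity; a negative
# value means internal-standards firms are less earnings-sensitive.
```

A negative, statistically significant interaction coefficient is the regression pattern consistent with the finding that internal-standards CEOs have lower earnings sensitivity.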
These categories include Prior-Year standards, Budget standards, and Discretionary standards (called “internally determined” standards), as well as Peer Group standards, Timeless standards, and Cost of Capital standards (called “externally determined” standards). Prior-Year standards include bonus plans which are based on year-to-year growth or improvement (such as growth in sales or earnings per share (EPS), or improvement in operating profits). Budget standards include bonus plans based on performance measured against the company’s annual budget goals (such as a budgeted net income objective). Discretionary standards include “bonus plans where the performance targets are set subjectively by the board of directors following a review of the company’s business plan, prior-year performance, budgeted performance, and a subjective evaluation of the difficulty in achieving budgeted performance” (Murphy, 2001). Murphy (2001) classifies these three types of standards as “internally determined.” Peer Group standards include plans based on performance measured relative to other companies in the industry or market. Timeless standards include plans measuring performance relative to a fixed standard (such as a pre-specified return on assets). Cost of Capital standards refer to performance standards based on the company’s cost of capital (such as plans based on economic value added, EVA®). Murphy (2001) classifies these three types of standards as “externally determined.” Table 1 describes the classification of the 937 firms. In order to enhance the power of my tests, I eliminate 151 firms which use a combination of internally determined standards and externally determined standards. Another 32 firms do not have a bonus plan for the CEO. Thus, my final sample is 754 firms. I classify 545 firms (72 percent) as using budget based standards, 69 firms (9 percent) as using prior-year standards, and 33 firms (4 percent) as using discretionary standards. 
Thus, a total of 646 firms (86 percent) use internally determined performance standards. I classify 65 firms (9 percent) as using cost-of-capital-based performance standards, 28 firms (4 percent) as using peer-based standards, and 15 firms (2 percent) as using timeless (fixed) standards. Thus, a total of 108 firms (14 percent) use externally determined performance standards.
A Means-End Approach to the Analysis of Visitors’ Perceived Values of Leisure Farms in Taiwan
Yueh-Yun Wang, Far East University, Taiwan
The domestic agricultural industry has faced the challenge of international competition since Taiwan joined the World Trade Organization (WTO). In order to help farmers, the Council of Agriculture of Taiwan has started to promote agricultural tourism. This study conducted an analysis of visitors’ perceived values toward leisure farms from the perspective of their experiences. Using a Means-End Chain (MEC) approach, the results of this study revealed that the concept of experiential marketing is applicable to the leisure industry. The experiential modules FEEL and SENSE were considered the most important from visitors’ viewpoints. The perspective of FEEL was formed by the results of relaxing, scenery, the atmosphere of rural living, and artificial landscape. Based on these findings, we conclude that the attributes of scenery, artificial landscape and the atmosphere of rural living are essential for leisure farms to attract visitors. SENSE was derived from the feeling of appreciating natural beauty; the attribute contributing to the formation of appreciating natural beauty is scenery. As leisure farms are a newly developed industry in Taiwan, there has been little research in this field. This study could serve as a practical reference for marketing leisure farms and as an example for research on related industries. The importance of customers’ perceived value has been evidenced in the practice of marketers and in the research of scholars for decades. As pointed out by Woodruff (1997), customer value has become one of the major sources of competitive advantage. In order to satisfy customers’ needs, research projects should focus on exploring customers’ expectations of the products and services provided (Albrecht, 1994). Customers have come to be regarded as an important asset to a company, and how to identify key customers and maintain long-term relationships with them has become an essential issue for the operation of a business (Blattberg, 1998).
With regard to customer value, many scholars have presented arguments from different perspectives. These viewpoints include: 1) Transaction-Specific: a sense of saving from the transaction (Berkowitz and Walton, 1980; Urbany et al., 1988); 2) Price-Adjusted Quality: a trade-off between quality and price (Monroe, 1990; Dodds, Monroe, and Grewal, 1991); 3) Utility-Oriented: the utilities of transaction and acquisition (Zeithaml, 1988; Krishnamurthi et al., 1992); 4) Experiential: a subjective conception derived from the consuming experience (Woodruff, 1997; Butz and Goodstein, 1996). Among these four viewpoints, the “Experiential” one has gradually gained the attention of leading businesses in recent years. Pine II & Gilmore (1998) argued that the economy of the world has moved from an agricultural, industrial and service-based economy to an experiential economy. The characteristics of an experiential economy include consumers’ perceived values about their consumption experiences and a new consuming environment in which consumers act rationally or emotionally under different consuming situations (Schmitt, 1999). According to Schmitt (1999), Experiential Marketing consists of five strategic experiential modules: sensory experiences (SENSE), affective experiences (FEEL), creative cognitive experiences (THINK), physical experiences, behaviors and lifestyles (ACT), and relational experiences (RELATE). These experiences are implemented through so-called experience providers (ExPros) such as communications, visual and verbal identity, product presence, and electronic media. With the above concepts in mind, we consider that the agricultural industry in Taiwan has been facing a challenge to its survival, especially after Taiwan joined the World Trade Organization (WTO). In order to help farmers, the Council of Agriculture of Taiwan has started to promote agricultural tourism.
Consequently, the agricultural industry, which used to be dominated by farmers, has shifted into the hands of consumers. In order to attract visitors, the concept of Experiential Marketing can be considered one of the most efficient ways of providing visitors with better services in relation to their needs. Therefore, the purposes of this study were: (1) to conduct an analysis of the relationships among the attributes, consequences and values perceived by visitors to leisure farms, utilizing the MEC approach; and (2) to provide leisure farms with suggestions for upgrading the quality of their services. As pointed out by Csikszentmihalyi (1975), the flow experience denotes a state of mind in which one is completely absorbed in a specific activity, with no interference from the outside world. The effects of “flow” produce the most enjoyable feelings, through which one feels that life is meaningful. Compared with the notion of the flow experience, Schmitt (1999) emphasizes the experiences themselves. He regards experience as a response to outside stimuli. Experiences are derived from direct observation or personal participation in events, and are usually induced rather than occurring spontaneously. As mentioned earlier, the economy has progressed from the “commodity” stage to the experiential stage. Pine II & Gilmore (1998) and Schmitt (1999) argued that the trends of marketing include: (1) prevailing information technology, (2) the priority of brands, and (3) an integration of communication and entertainment. Therefore, the competitive advantage of a business is to be built on the basis of the consumer’s “experience.” Pine II & Gilmore (1998) explained that the stages in the progression of the economy are Commodities, Goods, Services and finally Experiences. At this last stage, remembrance, rather than the product itself, is the selling point.
From an experiential marketing viewpoint, consumers’ values lie in the consequences of interaction and in their subjective judgments of the experience. Customer value tends to be highly influenced by consumption situations (Sinha & DeSarbo, 1998). Holbrook (1994) defined customer value as a preference that comes from comparisons of past experiences. The highlights of Holbrook’s arguments include: (1) customer value tends to be influenced by a consumer’s personal interests and preferences; (2) customer value is derived from interactions among products, services and salespersons; (3) customer value is a relative, rather than absolute, result of a consumer’s assessment; and (4) customer value is formed after, rather than before, the use of products or services. In short, the formation of value does not rely solely on the benefits of the product itself. Based on the above discussion, it is essential that leisure farms focus on the “experience” in order to meet the demands of the new experiential economy.
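The MEC approach links concrete attributes to consequences and then to end values, and a common analysis step is tallying an implication matrix from ladder interviews. A minimal sketch of that tally, using hypothetical ladders assembled from elements this study reports (scenery, relaxing, appreciating natural beauty, FEEL, SENSE); the ladder data below are invented for illustration, not the study’s actual responses:

```python
from collections import Counter

# Hypothetical attribute -> consequence -> value ladders.
ladders = [
    ("scenery", "appreciating natural beauty", "SENSE"),
    ("scenery", "relaxing", "FEEL"),
    ("atmosphere of rural living", "relaxing", "FEEL"),
    ("artificial landscape", "relaxing", "FEEL"),
    ("scenery", "appreciating natural beauty", "SENSE"),
]

# Implication matrix: counts of direct links between adjacent levels.
links = Counter()
for attribute, consequence, value in ladders:
    links[(attribute, consequence)] += 1
    links[(consequence, value)] += 1

# The most frequent links form the dominant chains in the
# hierarchical value map.
strongest = links.most_common(1)[0]
```

Chains with the highest link counts are the ones drawn in the hierarchical value map, which is how a study of this kind identifies FEEL and SENSE as the dominant end modules.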
Mergers & Acquisitions (M&As) as the Micro Effect of Globalisation and the Turkish Insurance Sector
Suna Oksay, Ph.D., Marmara University, and Director, Turkish Insurance Institute, Istanbul
Since the 1980s, the influence of globalisation has been felt all over the world. With globalisation, countries in different parts of the world, each with its own rules and regulations, eliminate their existing rules and liberalise their markets (de-regulation), then reorganise them under rules similar to those of other markets (re-regulation). Consequently, as financial markets around the world come to be regulated by similar rules, the players (buyers and sellers) of different markets can easily operate in all markets. Since the 2000s, the effects of globalisation have been visible in financial and insurance markets, and it is clear that they appear intensively in Europe as well. As the macro effect of globalisation, European countries have come together, established the European Union and begun to use a single currency. In this context, it was also decided to create a single insurance market. On the other hand, as the micro effect of globalisation, the number of mergers and acquisitions (M&As) in Europe has increased significantly since the 1990s as firms seek greater competitive power. This paper elaborates on mergers and acquisitions in Europe as the micro effect of financial globalisation, and on the results of those mergers and acquisitions. Financial globalisation can be defined as “financial markets in different countries having similar rules and regulations and acting as a single market.” With globalisation, the borders between markets are also eliminated. Globalisation, which increases integration, has both micro and macro effects.
While the macro effect appears as a result of the integration of countries seeking to increase their competitive power, the micro effect appears as a result of the integration of companies by way of mergers and acquisitions. In short, the macro effect of globalisation can be called regionalism. Multilateral production and increased commercial and financial relations not only accelerate globalisation but also push countries in the same geographical area, with similar characteristics, to build intensive regional relations in order to combine their power. In other words, globalisation and increased competition engender regional movements. Because of increased competition, countries in the same geographical area which are weakened, unable to compete and/or willing to increase their competitive power come together and form economic zones with different levels of integration. As emphasized by Oksay, according to the theory of integration, integration has four different phases (Oksay, M.S. 2005a:45). In a free trade agreement, the integrated countries apply free trade policies to the trade of goods and services between themselves, but remain free in the policies they apply to third countries. A customs union represents a higher level of integration than a free trade agreement because it necessitates a common customs tariff and common trade rules against third countries. A common market means not only free movement of goods and services but also free movement of the factors of production (capital, labor and technology) between member countries. An economic union, on the other hand, necessitates the integration of all economic policies of the member countries: not only free movement of goods, services and factors but also full consistency of financial and fiscal policies. The European Union (EU) is a good example of highly integrated markets. The EU is an integration of 25 member countries, regulated with similar rules.
With the opening of the financial markets of the member states, 25 financial markets now operate as a single financial market. Thus member countries, by unifying, aim to increase their competitive power against countries outside the bloc, and one can claim that they have been successful in doing so. The micro effect of globalisation appears not at the level of countries but at the level of companies. To protect themselves from the increased competition brought by globalisation, companies take over other companies or merge with them; this is the phenomenon of mergers and acquisitions (M&As). Globalisation and increased competition give rise to mergers and acquisitions. Companies, operating with cost, revenue and profit functions tied to the production function, try to decrease their cost of production, and so they prefer to come together to increase their competitive power. There are two ways to do this: merge with a company or acquire one. By buying other companies or merging with them, companies increase their volume of production and diversify their product lines. This reduces average costs through economies of scale, and consequently increases the companies’ competitive power. Likewise, joint use of the factors of production lowers costs through economies of scope, which increases the companies’ efficiency. Mergers and acquisitions lead to improvements in technical, allocative and revenue efficiency by replacing inefficient managers with efficient ones, introducing more efficient administrative systems, improving the technology of the acquired firm, eliminating duplicate operations (such as duplicate insurance agencies, claims settlement offices and data processing operations, especially in within-market M&As) and exploiting efficient back-office operations by expanding the amount of processing to be done (in both within-market and cross-border transactions).
Oksay shows that mergers of insurers with other financial institutions give rise to big financial conglomerates. These conglomerates can be divided into two types: bank-dominant groups such as Citigroup, Credit Suisse and Deutsche Bank, and insurer-dominant groups such as ING and the Axa Group (Oksay, S. 2004:11). On the other hand, M&As also create difficulties for companies. For instance, rating agencies lowered the credit ratings of some insurer-dominant financial groups because of the difficulties of integrating the banking arms into the group. When we analyse the world insurance sector, we see an acceleration of mergers and acquisitions and the emergence of new financial conglomerates, especially since the 1990s. Analysed in terms of production theory, mergers and acquisitions can be evaluated from two points of view: economies of scale and economies of scope. According to the theory of economies of scale, an increase in a company’s volume of production leads to a decrease in fixed costs per unit. The greater a company’s volume of production, the greater its efficiency and, as a result, the lower its costs.
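The scale argument is simple arithmetic: fixed costs are spread over more units as volume grows, so average cost per unit falls. A minimal sketch, with illustrative numbers (the cost figures and volumes below are hypothetical, not taken from the paper):

```python
def average_cost(fixed_cost, variable_cost_per_unit, volume):
    """Average cost per unit: fixed costs spread across volume,
    plus the constant variable cost per unit."""
    return fixed_cost / volume + variable_cost_per_unit

# Hypothetical insurer back office: $10M fixed cost, $50 variable
# cost per policy processed.
fixed, variable = 10_000_000, 50.0

small = average_cost(fixed, variable, 100_000)   # standalone insurer
merged = average_cost(fixed, variable, 400_000)  # post-merger volume
```

Quadrupling volume over the same fixed base cuts the average cost per policy in half here, which is exactly the economies-of-scale motive for within-market M&As described above.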
Strategic (Big Picture) Technology
Dr. D. Keith Denton, Missouri State University
The problem for an organization, or even a group, is that this individual thinking and behavior are frequently at odds with other well-meaning and purposeful behavior. When we do have a moment to lean back and ponder, there is no system to help us understand where we are headed, where we are, or how we are doing. Creating an effective work system entails linking strategy to operational objectives. A good system of performance measurement and feedback should facilitate a broader understanding of how the pieces fit together. This performance measurement and feedback system should address three critical concerns. There is help today, and it is coming from a most unlikely source. Intranet technology can help groups improve their teamwork and performance by cutting through the details to reveal the connections and relationships at work. Intranets can now be used to track and display the relationships between key outcomes and the processes that lead to those outcomes. The intranet and its supporting software have the capability to integrate outcome issues, like profitability or service concerns, with the processes that lead to them. Intranets can be used to integrate disconnected metrics to create a more realistic picture of what is really going on within the organization. Key objective measures of performance, like financial performance, could be compared to important subjective concerns, such as the degree of teamwork or perceived group effort. The everyday work world is often filled with endless details and low-level decision making. All of this, of course, is usually out of the sight and minds of senior-level executives. The work world, for many, is a maze of disjointed efforts. The problem for an organization, or even a group, is that this individual thinking and behavior are frequently at odds with other well-meaning and purposeful behavior.
As group members, we try to do a good job and digest as much information as possible, but we often receive mixed signals. Senior executives always seem to be making references to vague platitudes like customer service, quality, or teamwork. There are procedures to adhere to and rules to follow, but many of us are left with a sense of vagueness. Some people may even like it this way, because it lets them interpret things to suit their own needs. One point most of us can agree upon is that work contains a great deal of distorted and detached information. Everyone is moving, but often not at the same speed, nor in the same direction. One minute we are supposed to be controlling overhead, and the next we are concentrating on customer service. Ask yourself: how many employees, departments, or groups in my company specifically understand the implications of their mission or the true purpose of what they are doing? If you believe you know the implications of your group’s mission or purpose, how many others in your group would agree with those assumptions? We are not talking about being able to recite a mission statement. The reality is that for many of us there is little awareness of our organization’s true mission or higher purpose. We become lost in endless details. We are so busy figuring out how to do something that we forget to ask why we are doing it and whether it is getting us anywhere. And even when we do have a moment to lean back and ponder, there is no system to help us understand where we are headed, where we are, or how we are doing. Creating an effective work system entails linking strategy to operational objectives. It means tying objectives to results, then measuring performance and making corrective changes. A coordinated performance measurement system is an essential ingredient of being able to see the big picture rather than just a jumble of pieces.
Many individuals within groups and organizations see only the pieces of their jobs and often fail to comprehend the connections between what they do and the facts, figures, ratios, goals, and objectives flowing through the organization. A good system of performance measurement and feedback should facilitate a broader understanding of how the pieces fit together. It should be a system designed to deal with reality and truth, not ideas, goals, objectives, dreams, and aspirations. Such a system does not use information that is proactive or detached, but rather information that is cross-referenced, reactive, and real. It is not about posting absolute numbers or ratios; rather, it displays the interplay of dynamic changes occurring within and outside the group.
Fostering and Funding Entrepreneurship (Innovation and Risk-Taking) in the Biotechnology Industry
Dr. Sumaria Mohan-Neill, Roosevelt University, Chicago, IL
Michael Scholle, formerly, Argonne National Laboratory, IL and currently, Ammunix Corporation, CA
Entrepreneurship, through innovation and risk-taking, has driven the recent phenomenal growth of the biotechnology industry. However, because of the very sophisticated technologies and processes required to accomplish innovations in biotechnology, it is a very capital-intensive industry. Sources of both human and financial capital are critical components for the survival and eventual success of biotech companies. This paper explores the sources of funding in the biotechnology industry. It argues that biotechnology companies are more entrepreneurial, productive and innovative than larger pharmaceutical companies, and it predicts that mergers and acquisitions (M&A activity) will continue to accelerate as larger and less innovative firms use more entrepreneurial firms to supplement their technology and product pipelines. The challenge, however, is how to maintain innovation and foster the entrepreneurial spirit of a smaller biotech company after it has been digested by a larger entity. There are certain types of industries in which entrepreneurship and innovation do not require significant investments of financial and human capital; biotechnology is not one of them. The knowledge base and skill set required to innovate demand a very sophisticated understanding of science and technology. Furthermore, in addition to the investment in human capital, financial capital is required to provide the resources which facilitate research and development by scientists. Ultimately, both financial and human capital are required to compete in this very sophisticated industry and marketplace. They form the foundation which helps to facilitate entrepreneurial activities such as risk-taking and radical innovation, which are necessary conditions for success in this industry. They do not guarantee success, but success is highly unlikely without them.
This paper explores the sources of funding used by biotech companies in their search for financial capital to fund their scientific innovations and entrepreneurial activities as they race to compete in a highly sophisticated, technology-driven industry. The importance of biotechnology to the great strides in medical care and the health of the population cannot be overstated. It has been an area of phenomenal achievement and of great potential for the future. Generally, publicly held companies undergo more rigorous scrutiny by investors, and more scrutiny and regulation by governments, than privately held companies. So the expectation is that, on average, companies that go public have stronger financial positions and potential than private companies in the same industry. It is therefore reasonable to use the percentage of public companies in a given industry as a proxy index of relative financial strength (in terms of ability to obtain funding) across countries or economies. In the U.S., approximately twenty-three percent of biotech firms are public, which gives them access to public capital markets as an important source of funding (Figure 2). Asia-Pacific ranks second, with approximately nineteen percent of its biotech companies public; Canada is third with about seventeen percent; and Europe is lowest, with only about five percent of its companies public. By this proxy index, the U.S. is the strongest and Europe the weakest of the four major global economic regions with respect to the biotechnology industry (Figure 2). The recent fervor and pace of funding by "major private equity firms" (e.g. Blackstone) in certain industries may cast some doubt on the validity of using the percentage of publicly held companies as a proxy for financial strength.
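The proxy-index comparison just described can be sketched as a small calculation. The regional percentages come from the passage (Figure 2); the counts and the helper function are illustrative assumptions, not the study's data:

```python
# Hypothetical sketch of the proxy index described in the text: the share of
# public biotech companies per region. The counts below are invented so the
# shares match the approximate percentages cited in the passage (Figure 2).

def percent_public(n_public: int, n_total: int) -> float:
    """Return the share of public companies as a percentage."""
    return 100.0 * n_public / n_total

# (public companies, total companies) per region; illustrative counts only.
regions = {
    "U.S.":         (23, 100),   # ~23% public
    "Asia-Pacific": (19, 100),   # ~19% public
    "Canada":       (17, 100),   # ~17% public
    "Europe":       (5, 100),    # ~5% public
}

# Rank regions from strongest to weakest by the proxy index.
ranking = sorted(regions, key=lambda r: percent_public(*regions[r]), reverse=True)
print(ranking[0], ranking[-1])  # strongest and weakest region by the proxy
```

Under these figures the ranking reproduces the passage's conclusion: the U.S. is strongest and Europe weakest on this proxy.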
However, it is important to note that major private equity players have shown less interest in the biotechnology sector than in other, more cash-rich industries. Also, the index is a relative measure within this specific sector (biotechnology). The revenues of the global biotechnology industry continued to grow, showing a 14 percent increase during 2005. The US continued to dominate the industry, with over 78 percent of total revenues. Some would argue that better access to public capital markets in the U.S. makes it more competitive relative to Europe and other global economies. The global industry raised a total of $21.2 billion in capital during that year; in the US alone, a total of almost $17 billion was raised (Table II; Figure 3). Total biotechnology financing has increased for the last four years, with 2004 funding eclipsed only by the bull market of 2000 (Ernst and Young 2005). Niosi (2003) surveyed 60 Canadian biotechnology firms and found that "access to capital" was the number one perceived obstacle to growth. Coombs and Deeds (2000) indicate the importance of the capital markets and venture capital, but focus on alliances and direct investment from large pharmaceutical companies (big pharma), both domestically and internationally. Duca and Yucel (2002) report on the importance of venture capital to the biotech industry, but also indicate the important role of public funding in this area. Dibner and Howell (2002) looked at the various forms of funding biotechnology firms exploit during their growth and the importance of venture capital firms versus initial public offerings. Powell et al. (2002) looked at how close physical access to funding sources drives the geographical location of biotechnology firms. Friedman and Seline (2005) discuss how private and public funding sources limit collaboration between biotechnology firms. Much earlier, Paugh and Lafrance (1977) discussed sources of funding for biotechnology companies.
Included in their list were public equity offerings, partnerships with other companies (both big pharma and biotechnology) and venture capital. The US Department of Commerce (2003), in its survey of barriers to competitiveness in the biotech industry, found that access to capital ranked third, behind the regulatory approval process and R&D costs.
The Antecedents and Outcomes of Union Commitment in Turkey
Dr. Tunc Demirbilek, Dokuz Eylul University, Izmir, Turkey
Dr. Ozlem Cakır, Dokuz Eylul University, Izmir, Turkey
The concept of union commitment has become increasingly important because of the declining number of union members in recent years. This paper presents the findings of a study examining the antecedents and consequences of union commitment among Turkish workers. The research was carried out among members of a union organized in oil and chemical plants located in the Aegean Region of Turkey; the sample consists of 461 respondents from 8 private-sector firms. Hypotheses relating union commitment to the normative approach, the instrumental approach, attitudes toward industrial relations, attitudes toward unionism, and socio-psychological support are confirmed. The findings also support the hypothesized relationships between union satisfaction, union participation and union commitment. Since the 1980s, behavioral research has focused on the attitudes of both current and potential union members, examining a wide range of members' attitudes. The attitudes in question include: general attitudes toward unions; union instrumentality perceptions (i.e. members' degree of belief that the union can deliver on important matters such as salary and job security); satisfaction with union performance and union-member relations; members' feelings about their initial experiences in the union (member socialization); views on the use and effectiveness of grievance procedures; and opinions about the leadership skills of union stewards and union officials. Behavioral union research has grouped these different attitudes and opinions under the heading "union commitment". In this study, the concept of union commitment, its dimensions and types, specific and general attitudes toward unions, participation in the union, and union satisfaction are examined in turn.
Finally, the findings of a study of union members' commitment and the variables affecting it, conducted in a unionized workplace in the oil and chemical industry, are presented. Although researchers had been interested in commitment as a way to understand union psychology since the 1950s, the field was consolidated in 1980, when Gordon et al. redefined union commitment on the model of organizational commitment. Topics such as allegiance and loyalty to the union had been the main focus in the 1950s. Gordon et al. (1980) took the first step toward conceptualizing union commitment and determining its characteristics by drawing on organizational commitment research. The thought underlying their conceptual approach explains commitment as an individual's attachment to an organization. The four dimensions of union commitment can be divided into two separate structures. One is loyalty to the union and belief in its goals, which is a matter of "attitude"; the other is the worker's responsible behavior toward the union and willingness to work for it, which is defined as mostly "behavioral" (Paquet and Bergeron, 1996: 4). Union loyalty has three components: it reflects a sense of pride in the union, expresses the exchange relationship with the union, and indicates the wish to maintain union membership. Union responsibility focuses on the member's willingness to fulfill obligations to protect the interests of the union (Shore et al., 1994: 972). Behavioral signs of commitment in union activities can be predicted from workers' union responsibility and willingness to work for the union. The fourth dimension, "belief in unionism", focuses on members' ideological beliefs in unionism; it is, in general, a measure of members' deep-rooted attitudes toward the union (Clark, 2000: 23).
An opinion about a member's degree of commitment can be formed by determining where the member stands on the four dimensions in question. For example, a member who scores high overall can be considered to have a high degree of commitment to the union. On the other hand, a member who is loyal to and believes in the union, but feels less responsible and is unwilling to work for it, can be considered to have a lower degree of commitment (Clark, 2000: 23). The conceptual model of union commitment developed by Newton and Shore (1992) has two divisions: normative commitment (ideological, value-based or moral commitment) and instrumental commitment (interest- or cost-benefit-based commitment). Normative commitment is a value-based attachment in which the union member internalizes organizational goals and beliefs; such members are committed to the union through shared beliefs and values, not for monetary gains. Instrumental commitment, by contrast, is based on the earnings and social assistance that members gain from the union; the member's commitment rests not on shared values and beliefs but on a rational assessment of the benefits of union representation. Indeed, Klandermans' (1989) findings show that union commitment and perceived instrumentality can indicate whether a member has a tendency to quit voluntarily.
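The four-dimension scoring logic described above can be sketched as a simple aggregation. The dimension names follow Gordon et al. (1980) as summarized in the text; the 1-to-5 scale, the equal weighting, and the example scores are illustrative assumptions, not the study's instrument:

```python
# Hypothetical sketch of aggregating the four union-commitment dimensions.
# The dimensions follow the text; the scale and weighting are assumptions.

def overall_commitment(scores: dict) -> float:
    """Average a member's scores (assumed 1-5) across the four dimensions."""
    dims = ("loyalty", "responsibility",
            "willingness_to_work", "belief_in_unionism")
    return sum(scores[d] for d in dims) / len(dims)

# A member who is loyal and believes in unionism but feels little
# responsibility and is unwilling to work for the union scores lower overall,
# as the passage notes.
member = {"loyalty": 5, "belief_in_unionism": 5,
          "responsibility": 2, "willingness_to_work": 1}
print(overall_commitment(member))  # 3.25
```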
Intellectual Capital and the Perceived Relevance of the Balance Sheet as a Value Measure for Corporations
Dr. Shaniz Khan, Al Hosn University, Abu Dhabi, United Arab Emirates
This paper investigates the impact of Intellectual Capital and its components on the perceived relevance of the balance sheet as an indicator of corporate value. It represents an attempt to integrate the previously fragmented literature on Intellectual Capital into a single framework while providing an empirical examination of the effects of Intellectual Capital components. Results from the study demonstrate that the balance sheet was indeed perceived to be irrelevant as an indicator of corporate value, and that this perception was significantly associated with the level of investment in Intellectual Capital. The association was largely attributable to the Human Capital component. Thus, organizations that fail to disclose Intellectual Capital will find their balance sheets no longer perceived as relevant indicators of their corporate value. Further, in disclosing Intellectual Capital in the balance sheet, more attention needs to be given to the Human Capital component. Assets are probable future economic benefits obtained or controlled by a particular entity as a result of past transactions or events (Statement of Financial Accounting Concepts No. 3, 1980). These future economic benefits give every asset a value (Bernstein, 1993). Therefore, the assets owned or controlled by a corporation collectively determine the entire value of the organization (Edvinsson & Malone, 1997). Since the balance sheet is the financial statement that captures organizational assets, it becomes an important tool used by companies to measure and communicate their value (Edvinsson & Malone, 1997; Hopwood, 1976; Cheney, 1995; Jones, 1995). In light of this, many corporations work hard to maintain a good balance sheet, as it influences their perceived value (Wipperfurth, 2002; Black & White, 2003).
Besides this, corporations also monitor changes in their balance sheet closely, as any change in value will influence the decisions of their various stakeholders (Bernstein, 1993; Norris, 2001; Basu, 2003; Shepherd, 2003). In today's 'Information Age', knowledge has become the pre-eminent economic resource, as it forms the basis of competitive advantage (Lev et al., 2000; Edvinsson et al., 1998). It is fast replacing traditional factors of production like capital and land (Khaw and Leong, 2001), as the industries of today no longer compete on the basis of natural resources but rather on the basis of brainpower or knowledge (Drucker, 1993). Thus, intangible assets like knowledge and innovation are more important for business success than tangibles like mass, size or physical assets (Rastogi, 2000), and knowledge may well be the most meaningful resource of today (Drucker, 1993). This knowledge embedded in individuals and organizations is known as Intellectual Capital (Taylor, 2001; Koenig, 2000) and is a key strategic factor in gaining organizational competencies (Petty and Guthrie, 2000). This is obvious, as knowledge drives ideas and innovations, which have increasing value (Stark, 2001; Karlgaard, 1993). When markets shift, technologies proliferate, competitors multiply, and products become obsolete quickly, successful companies consistently create new knowledge, disseminate it widely within the organization, embody it rapidly in new products and services, and innovate continuously (Rastogi, 2000). It is therefore not surprising that these knowledge-embedded resources, or Intellectual Capital (IC), are now any organization's most valuable asset (Karlgaard, 1993). However, IC does not appear as a positive value in the traditional balance sheet of companies (Taylor, 2001; Koenig, 2000). The balance sheet does not measure, recognize and report the organization's most valuable asset, that is, its IC (Cheney, 2001).
Despite these clear indicators, it is obvious that the accounting profession is still not equipped to embrace the challenges of the information era (Guthrie, 2001). Traditional reporting still does not cater for intangible assets such as IC, because their values do not satisfy the measurement and objectivity concepts applied in preparing the balance sheet (Gan, 2002). Managers also find themselves caught in a situation where the balance sheet inherited from the industrial era provides nothing more than a snapshot of where the company has been (Edvinsson and Malone, 1998). It provides no information about the company's knowledge, ideas and innovative capabilities, which have wealth-generating potential (Guthrie and Petty, 2001). Investors, too, find the current balance sheet format redundant, perhaps even wasteful, in the new information era (Bontis, 1998), as it fails to capture the true value of a business. It is therefore not surprising that advocates of IC speculate that the balance sheet is losing its relevance as a key financial statement in the Information Age (Batchelor, 2000; Lev et al., 2000; Edvinsson et al., 1998). This is probably because large corporate investments in IC, which have significant value, have gone undetected by the current balance sheet format (Lebbon, 2002). As a result, this study aims to investigate whether the balance sheet has indeed lost its perceived relevance, specifically as an indicator of value among corporations, and if so, whether IC or its components are significantly associated with the perceived irrelevance of the balance sheet as an indicator of corporate value today. Earlier research on the impact of IC on various organizational components was very much influenced by the researchers' definitions of IC, and there appear to be many views and definitions of IC.
Many studies have defined IC as human capital and have tended to emphasize investigating only human capital components alongside other variables. Besides this, IC has also been defined as knowledge capital, knowledge management, and organizational learning in past studies.
The Evolution of Sea Navigation between the Two Sides of the Taiwan Strait
Ya-Fu Chang, Chang Jung Christian University, Taiwan, ROC
Dr. Chuen-Yih Chen, Chang Jung Christian University, Taiwan, ROC
Direct navigation between the two sides of the Taiwan Strait has been disrupted since 1949 as a result of political confrontation between Taiwan and Mainland China. However, with increasing civil exchanges and trade activities and the rapid growth of foreign trade volumes at the harbors of Mainland China in recent years, the policies and approaches to direct navigation adopted by the governments on both sides of the Taiwan Strait have become a concern for international shipping companies, whose vital business interests are closely connected with East Asian shipping lines. This paper analyzes the evolution of the policies on direct navigation and the shipping modes adopted by the governments on both sides, particularly the content of and differences in restrictions on containership shipping lines. Recommendations for improving these policies are also provided. Since 1949, as a result of the civil war in China, the governments on the two sides of the Taiwan Strait have prohibited cross-strait maritime exchanges and trade, despite increasing demands for an end to restrictions on such activities. Accordingly, seaborne cargo had to be unloaded and transshipped through third places, such as Hong Kong. Taiwan began to allow residents to visit relatives on the Chinese mainland in 1987, and Hong Kong was returned to Chinese sovereignty in 1997. Anticipating the loss of Hong Kong's role as a third place, Taiwan introduced an Offshore Shipping Center strategy in order to maintain its "no contact, no negotiations" policy toward Mainland China. With the increase of indirect trade and shifts in the international division of labor, however, both sides faced demands from the international community and private enterprises for direct navigation, to reduce the cost and time incurred by transshipping cargo and personnel through third places.
Since that time, the problems and policies involving direct navigation between the two sides have received more attention. The differences between Taiwan's and Mainland China's policies on direct navigation, as opposed to the regulations governing other international shipping lines, stem mostly from the different political definitions adopted by the two governments. Mainland China considers the line a "domestic" one, and claims that cabotage of cargo delivery and the cargo transported between the two sides should not be open to foreign shipping companies. Taiwan considers the line a "special" case, whereby shipping companies of both sides, as well as those registered in foreign countries, may obtain a permit to transport cargo from Mainland China to third places through an "Offshore Shipping Center" operated by harbor authorities in Taiwan (Mainland Affairs Council, 2003). Owing to the differences in shipping policies between the two sides from 1949 to 2004 (including bans on direct navigation, the necessity of transshipment through third places, and the Offshore Shipping Center strategy, which imposed disparate restrictions based on nation of registration), shipping companies had to adopt special measures to meet the requirements of these policies and to incorporate cross-strait lines into their networks of oceanic lines in order to compete effectively in the rapidly growing sea transportation markets. This paper explores the measures that shipping companies could adopt in light of the restrictions imposed on shipping activities by the governments of Taiwan and Mainland China. In addition, recommendations for the conduct of future direct navigation between the two sides are provided for the reference of both governments. East Asia is currently one of the world's busiest container transportation regions. In a broad geographical sense, the so-called East Asia region includes Japan, Korea, Mainland China, Taiwan, Hong Kong, and the nations of Southeast Asia.
But each country or region has its own viewpoint on the development of container transportation. In particular, Taiwan lifted restrictions on importing from regions in which many local manufacturers had business ventures, including South Asia and Mainland China, and thus drastically changed the direction of investment flows and consumer markets (Comtois, 1994). East Asia has become a leading center for international trade, boosting the development of container transportation at the harbors of Mainland China and attracting intense competition among the world's largest shipping companies. Table 1 illustrates the shift in the volume of international container transportation from North America and Europe in 1979, to North America and East Asia in 1989, and finally to Mainland China in 2005. Since 1990, Japan and the newly developed countries have been investing in Southeast Asia and Mainland China, where they have taken advantage of low-cost raw materials and labor, transforming the region into one of the world's most important manufacturing centers. This development spurred the rapid growth of container transportation on intra-Asia, Europe-bound and trans-Pacific lines (Robinson, 1998). The rapid development of container transportation illustrated in Table 2 is having an impact on the network of hub/feeder lines in the region. However, the different maritime trade policies of the governments on the two sides of the Taiwan Strait have complicated the shipping companies' planning for container transportation and fleet size.
Mergers and Operational Efficiency in Taiwan’s Audit Firms
Tsung-Yi Tsai, Shin Chien University Kaohsiung Campus, Taiwan
Dr. Chung-Cheng Yang, National Yunlin University, Taiwan
This paper uses Data Envelopment Analysis (DEA) to assess the cost efficiency, technical efficiency, and allocative efficiency of audit firms, and applies a Tobit censored regression model to examine the relationship between operational efficiency and audit firms' mergers. Estimation of the model uses balanced panel data on 120 audit firms in Taiwan for the period 1997-2001. The empirical results indicate that the scale-economy benefits of mergers are larger than their cost effects, justifying recent merger activity in the public accounting industry. In the knowledge-economy age, audit firms' professional services play a key role in the capital market. Various types of fraud within the audit profession have increased, and many countries have tightened legal restrictions on audit firms, including prohibiting them from providing some non-audit services. These difficulties have been compounded by market competition and other pressures related to the development of audit firms, so the question of how audit firms can survive in this changing environment is worth analyzing. For a long time, merger has been an important strategy adopted by audit firms to facilitate growth. At the end of the 1980s, the Big-8 audit firms began to merge, and the Enron scandal led to the dissolution of Arthur Andersen in 2002. These events turned the Big-8 into today's Big-4 audit firms. In the past, research on mergers often concentrated on auditor concentration, market power, market share, audit quality, profit margin and audit fees (Lee, 2005; Minyard and Tabor, 1991; Owen, 2003; Payne and Stocks, 1998; Tonge and Wootton, 1991; Wootton et al., 1990, 1994). However, merger is not a solution to all problems.
The mergers of audit firms may solve some problems for a firm while bringing about others, such as overlapping services that result in layoffs, clashes between different corporate cultures, and other conflicts arising from human relationships, casting doubt on whether consolidation really can improve audit firms' operational efficiency. This paper examines whether the benefits produced by mergers are greater than the harm they can inflict on audit firms' operational costs. In this paper, operational efficiency is measured by productivity and efficiency in the process of converting inputs to outputs: the higher the ratio of output to input, the greater the productivity and efficiency. Overall, research on the productivity of audit firms shows that efficiency and productivity gradually improve and, thus, marginal profits increase. Moreover, cost inefficiency, technical inefficiency, and allocative inefficiency also exist in audit firms, and positive scale economies are found in large audit firms; therefore, mergers could enhance technical efficiency and scale economies (Banker et al., 1999; Banker et al., 2003; Banker et al., 2005; Cheng et al., 2000; Dopuch et al., 2003; Samujh and McDonald, 1999). Thus, although prior studies generally acknowledge the benefits of scale economies and the positive influence on operational efficiency, little research has been devoted to other consequences of mergers. The aforementioned studies on mergers have not clearly explained whether scale economies outweigh the negative effects that might arise from the costs of internal organization. By adopting DEA and Tobit censored regression analyses, this paper investigates the benefits and costs of mergers to determine whether the scale-economy benefits of mergers are larger than their costs.
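The notion of technical efficiency underlying DEA can be illustrated in the simplest possible case: one input, one output, constant returns to scale, where a firm's efficiency reduces to its output-to-input ratio relative to the best ratio in the sample. This is a hedged sketch with invented firm data; the paper's actual model uses multiple inputs and outputs and requires linear programming:

```python
# Minimal DEA illustration (single input, single output, constant returns to
# scale): technical efficiency is each firm's output/input ratio divided by
# the best ratio observed. The firm figures below are invented for
# illustration and are not the paper's data.

def technical_efficiency(inputs, outputs):
    """Return CRS technical-efficiency scores in [0, 1], one per firm."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

labor_costs = [10.0, 20.0, 15.0]     # input, e.g. labor expense per firm
audit_revenue = [30.0, 40.0, 45.0]   # output, e.g. audit revenue per firm
scores = technical_efficiency(labor_costs, audit_revenue)
print(scores)  # frontier firms score 1.0; the others score below 1.0
```

Firms 1 and 3 share the best ratio (3.0) and define the frontier; firm 2's score of two-thirds says it could, in principle, produce its output with two-thirds of its input.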
The empirical results show that the scale-economy benefits of mergers are larger than their cost effects, justifying recent M&A activity in the public accounting industry. The remainder of the paper is organized as follows. Section 2 develops our hypotheses. Section 3 constructs the empirical model of data envelopment analysis (DEA) to evaluate each audit firm's cost efficiency, technical efficiency, and allocative efficiency; a censored regression model is then established to explore the relationship between scale-economy effects and costs. Section 4 presents our empirical results. Finally, Section 5 concludes the paper. The United States General Accounting Office (GAO) surveyed the reasons for mergers in the public accounting industry in 2003 and found that, through the reorganization and redistribution of resources, audit firms could achieve scale economies and, thus, increase market share and maintain their standing in the audit market. The two main reasons for merger are, first, to increase operational profits and lower total costs and, second, to lower risks. After increasing their operational scale, audit firms can reduce average cost (or increase average revenue) by providing a broader range of professional services and increasing the number of branches, leading to greater scale economies (Banker et al., 2003; Darrough and Heineke, 1978). Through merger activities, audit firms can increase their scope of operation, financial efficiency and management efficiency, factors which contribute to firms' operating efficiency (Banker et al., 2003, 2005). However, even though the merger of two audit firms can enlarge the scope of operation and enhance efficiency, the synergy value of a merger does not simply double. Because of differences in organizational culture, audit logic, and management schemes, it often takes the firms a few years to adjust, a process which can be costly and can decrease efficiency.
For a long time, the audit market has operated within a dual market structure. Big-4 audit firms differ significantly from non-Big-4 firms in terms of audit revenue, number of partners, employees' experience, audit methodology, management strategies, and target competitors. Compared to non-Big-4 firms, Big-4 audit firms form more international alliances and achieve more global knowledge integration.
Users’ Perception of Entities’ Performance: Risks Arising from Implementing the Revised Version of IAS 1
Dr. Alessandro Mechelli, University of Tuscia, Viterbo
This paper deals with the issue of performance reporting by entities, an old issue that has interested academics and users for a long time. A recent standard issued by the International Accounting Standards Board, the revised version of IAS 1, tries to improve this reporting by bringing IAS/IFRS standards into line with Statement of Financial Accounting Standards No. 130, Reporting Comprehensive Income. This standard, being only one segment, called segment A, of the whole project, does not define the concepts underlying each kind of result; it limits itself to requiring that all non-owner changes in equity recognized in accordance with current standards be presented in one or two statements. We will show that these innovations put financial reports on the right track, but do not help users to understand the precise meaning of the different kinds of results recognized in financial statements. The revised version of IAS 1 could thus affect users' perception of entities' performance without defining the concept of performance that entities should show, a concept that, perhaps, will be explained in segment B. This approach could cloud the interpretation of an entity's performance and, consequently, its use in making economic decisions. The measurement of an entity's performance is an old issue that has interested academics and users for a long time. In the past 30 years, the debate about the best way to report an entity's performance has been enriched by various proposals that stem from the same starting point: the failure of the net income recognized in financial statements to show an entity's performance (Bacidore, Boquist, Milbourn and Thakor, 1997; Guatri, 1998; Madden, 1999; Ottosson and Weissenrieder, 1996; Rappaport, 1998; Stewart III, 1991; Weissenrieder, 1997).
This paper enters this debate by analyzing the changes contained in the revised version of IAS 1, Presentation of Financial Statements (hereinafter IAS 1), published on 6 September 2007 by the International Accounting Standards Board (IASB). IAS 1 is the result of a project started in September 2001, when the Board added the performance reporting project to its agenda. The declared objective of the project was to enhance the usefulness of the information reported in the income statement. In April 2004 the IASB and the FASB decided to continue this work together as a joint project. The IASB also decided that the project should deal not only with performance measurement but with the complete set of financial statements. The IASB is undertaking this project in two segments, and IAS 1 is the result of the first segment, which proposes «… to bring IAS 1 largely into line with the US-Statement of Financial Accounting Standards N° 130 Reporting Comprehensive Income» (Exposure Draft, IN par. 4). Even though the IASB will complete the performance project with the second segment, it is evident, in our opinion, that the changes required by IAS 1, that is, the first segment, will affect users' perception of entities' performance. To develop our analysis, we start from a concept of performance as the difference between the entity's capital at the end and at the beginning of a given year (par. 2), and go on to discuss the different changes in capital that characterize the various kinds of income recognized in accordance with GAAP (par. 3). Afterwards, we analyze the main features of IAS 1, in particular those connected with reporting financial performance (par. 4). Furthermore, to improve the usefulness of financial statements, we illustrate some of our proposals (par. 5) to clarify the meaning of the different kinds of results that may be recognized and called, in accordance with IAS 1, "profit for the year" and "comprehensive income".
This paper ends with some short conclusions (par. 6). As anticipated in our preface, IAS 1 proceeds from the need to improve financial statement reporting, introducing important changes, especially for the statements in which an entity's results are reported. To better understand the changes required by IAS 1 in measuring and reporting an entity's performance under International Accounting Standards, we start from the concept of capital maintenance. The idea of using capital maintenance to measure an entity's performance is not new. In this regard, Hicks defines a man's income as the maximum value that he can consume during a week and still expect to be as well off at the end of the week as he was at the beginning (Hicks, 1946). Applying this concept to an entity, we may define the entity's performance in a given year as the amount the entity could distribute to its owners while remaining as well off at the end of the year as at the beginning (Alexander, 1962; Barton, 1975). In other words, we may define the performance of an entity as the difference between its capital at the end and at the beginning of a given year, excluding transactions with owners, such as investments received and distributions made. Point a), the differences stemming from results associated with cycles completed during a period, refers to revenues or gains, and correlated expenses and losses, that are realized and earned. Revenues and gains are realized when products are exchanged for cash or claims to cash; the same revenues or gains are earned when an entity delivers or produces goods, renders services or, more generally, carries out the other activities that constitute its central operations.
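The capital-maintenance definition of performance above lends itself to a small numeric sketch: income for the year is the change in the entity's capital net of transactions with owners. The figures below are invented for illustration:

```python
# Numeric sketch of the Hicksian/capital-maintenance definition in the text:
# performance = change in capital, excluding owner transactions
# (investments received are subtracted, distributions paid are added back).
# All figures are invented for illustration.

def comprehensive_income(capital_begin: float, capital_end: float,
                         owner_investments: float, distributions: float) -> float:
    """Change in capital net of owner transactions."""
    return (capital_end - capital_begin) - owner_investments + distributions

# Capital grew from 1,000 to 1,300; owners injected 100 and received 50,
# so only 250 of the 300 increase is attributable to performance.
ci = comprehensive_income(1000.0, 1300.0,
                          owner_investments=100.0, distributions=50.0)
print(ci)  # 250.0
```

The adjustment for owner transactions is exactly what separates "non-owner changes in equity", the object of the revised IAS 1, from the raw change in equity.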
Health Insurance: Do You Know What’s in Your Policy?
Marsha R. Lawrence, Albany State University, Albany, GA
According to the U.S. General Accounting Office (as cited in Thomasson, 2003), 95 percent of working Americans under age 65 receive health insurance through their employers. Health insurance through employment is the number one method of paying for healthcare-related expenses, but do people really understand health insurance and how it works? According to Employee Benefit Research Institute (EBRI) President Dallas Salisbury, “ . . . many Americans are not able to identify their health care plan, making a difficult issue more complicated by this lack of knowledge” (Latest survey, 1999). This quantitative study seeks to examine whether patients understand health insurance and its processes, including why some services are paid and some are not. The findings of this study contradict the EBRI statement, in that the participants were able to identify their health care plan. Other significant findings include the fact that participants were satisfied with their health plan but were not sure what was in it. Lastly, they read their Explanation of Benefits (EOB) but were not sure why services are often denied. It is crucial that the public know not only what type of policy they have, but also the terms in their policy.
According to a 1999 Health Confidence Survey conducted by the EBRI, almost two-thirds of the 87 percent of workers who are covered by managed care think they have never been in a managed care plan (Ginter, Swayne, & Duncan, 2002). This level of uncertainty serves as the foundation for this study. The purpose is to increase awareness of health insurance among laypersons who do not understand their policies. The results from the survey will serve to educate patients on interpreting their health insurance policies. Many patients have insurance but do not know what is included in their plan and therefore do not use their plan to the fullest extent. The purpose of this study is to increase knowledge of health insurance terminology in the hope that patients will understand their health insurance policy and will be able to communicate better with their insurance company’s representatives, health care providers, and staff. Do laypersons know enough about health care policy terminology to understand and utilize their policy effectively? According to Shortell and Kaluzny (2000), the external environment is defined as “all of the political, economic, social and regulatory forces that exert influence on the organization” (p. 14). External environments are constantly changing and may differ from one health care organization to another. Health insurance is part of the external environment of the health care delivery system; it is an extension of the economic forces that help drive health care costs up. The Health Insurance Institute (HII) (as cited in Raffel, M., Raffel, N., and Barsukiewicz, 2002, p. 25) determined that the first health insurance company began in 1847. Health insurance has its origin in providing for accidents and for disability caused by accidents or sickness.
Health insurance took off in the early thirties, when the American Hospital Association (AHA) asked Justin Ford Kimball to discuss the health plan he had created for school teachers (Raffel et al., 2002). Other hospital plans were developed in Illinois, Iowa, and Vermont, but none compared to Kimball’s. After Kimball’s model caught on, more hospital insurance plans developed, marking the beginning of Blue Cross (the hospital part of Blue Cross/Blue Shield). Enrollment in Blue Cross plans grew significantly over the following decade: 6 million by 1940, 19 million by 1945, and 40 million by the early 1950s (Raffel et al., 2002). After the success of hospital plans, plans for physician services were soon developed. “Health insurance is a contract between a policy holder and an insurance carrier or government programs to reimburse the policy holder for all or a portion of the cost of medically necessary treatment or preventive care rendered by health care professionals” (Rowell and Green, 2002, p. 12). Health insurance covers all types of health care expenses. Depending on the insurance company, coverage may include preventive care, maternity care, laboratory services, outpatient care, inpatient care, emergency room care, and prescriptions. Health insurance is, at bottom, a way of pooling risk. Most people get insurance because they do not want to go broke due to some unforeseen disease or accident; they are willing to pay a small amount today to avoid paying a large amount in the future. “The costs resulting from adverse changes in health can be so great that very few people could afford to pay for them themselves. Therefore, almost everybody needs health insurance” (HIAA, 2000, p. 7). In the United States, there are two major types of publicly funded insurance: Medicare and Medicaid.
According to Rowell and Green (2002), Medicare is “a federal health insurance program for people 65 years of age or over and retired on Social Security, Railroad Retirement, or federal government retirement programs, individuals who have been legally disabled for more than two years, and persons with end-stage renal disease” (p. 605). Medicaid is “a combined federal/state program designed to help people on welfare or medically indigent persons with medical expenses” (p. 604). According to HIAA (2000), health insurance is provided mostly by commercial insurance companies, which include approximately 800 life insurance companies. Companies like State Farm that sell life insurance may also sell individual health insurance policies.
The Role of Trust in Joint Venture Control: A Theoretical Framework
Samson Ekanayke, Deakin University, Victoria, Australia
This paper develops a theoretical framework and a number of propositions for systematically studying the role of trust in the control and performance of joint ventures, a prominent form of inter-firm alliance. The proposed framework is more complete than those available in the extant literature because it incorporates both the transaction-related risks and the partner-related risks that are likely to affect reliance on particular control patterns. Partner-related risks in joint ventures are represented by the level of inter-partner trust, while transaction-related risks are represented by the Transaction Cost Economics (TCE) variables of asset specificity, task complexity, performance measurability, and environmental uncertainty. The framework also links one of the established management control typologies (i.e., behaviour, outcome, and social) to two of the alliance control patterns (the bureaucratic-based pattern and the trust-based pattern) identified in the literature on alliance control. Inter-firm alliances, such as joint ventures, are based on mutually dependent cooperative relationships between two or more firms. However, alliances often combine elements of cooperation and competition (Parkhe, 1993; Child, 1998). As Child (1998, p. 242) observes, “Mutual reliance and competition or conflict between partners can set up a game theoretic dynamic that adds to the risk and precariousness of the cooperation”. Cooperation and trust, as well as competition and distrust, exist within alliances (Child, 1998). While many studies acknowledge inter-partner trust as an important factor to be taken into consideration in designing controls for achieving shared performance objectives in alliances, the theory and empirical evidence on the role of trust in controlling alliances are still limited and largely inconsistent.
Researchers often comment on the considerable ambiguity that exists in the literature on the role of trust in controlling inter-firm alliances (e.g., Zaheer, McEvily & Perrone, 1998; McEvily, Perrone & Zaheer, 2003; Fryxell et al., 2002; Lui & Ngo, 2004). A joint venture (JV) is a separate entity formed by two or more partner firms to do business together. Within the broad category of inter-firm alliances, joint ventures occupy a prominent place because they are a popular form of cooperative agreement between equity partners (Groot & Merchant, 2000). Firms team up with other firms by forming JVs in order to gain competitive advantage, diversify risk, and gain access to markets and technologies (Ghoshal, 1987; Vryza, 1997). Scholars consider JVs a form of strategic alliance (e.g., Barney & Hansen, 1994; Das & Teng, 2001a; Langfield-Smith & Smith, 2004). Joint ventures can be identified as domestic or international. In domestic JVs all partners are located in one country. In international joint ventures (IJVs) at least one of the parent firms is headquartered outside the joint venture’s country of operation (Geringer, 1988; Geringer & Herbert, 1989). IJVs offer opportunities for firms to cope with increasing uncertainty in global markets (Mjoen and Tallman, 1997). From a control viewpoint, what makes a joint venture distinct from other forms of relationship is its shared ownership and control. Cultural dissimilarity and transboundary complexities further characterize IJVs (Fryxell et al., 2002). Although JVs offer many opportunities and flexibilities for coping with technological and market uncertainties, they make partners vulnerable to the uncertainties or risks pertaining to the relationship. These risks include the difficulty of gaining the cooperation of partners and the potential for opportunistic exploitation of the relationship by the partners (Das & Teng, 2001a, 2001b).
The failure rates of JVs are significantly higher than those of single firms (Bleeke & Ernst, 1991; Das & Teng, 2000). It has been conservatively estimated that 50% of all JVs fail (Young, 1994). According to some estimates, up to 70% of all JVs fail within two years of their formation (Geringer & Herbert, 1991; Parkhe, 1993). Partner opportunism has been recognised as one of the main causes of JV failure (Das & Teng, 2001a). Accordingly, the question of how partner firms control opportunism among JV partners and achieve shared performance objectives has become a main focus of the alliance control literature. A role for both trust and control in curbing opportunism and achieving JV performance has been widely acknowledged and discussed in the literature (Granovetter, 1985; Parkhe, 1993; Madhok, 1995b; Ring & van de Ven, 1994; Nooteboom, 1996; Uzzi, 1997; Das & Teng, 1998, 2001; Boersma, Buckley & Ghauri, 2003). Despite the contributions of these conceptual and empirical studies, whether inter-partner trust influences the control and performance of JVs (and of other forms of alliance) remains a theoretically ambiguous and empirically under-investigated issue. To shed light on this issue, the broad phenomenon of trust, control, and performance in the context of alliances needs to be systematically examined. To facilitate this examination, this paper develops a theoretical framework of joint venture control linking transaction hazards, inter-partner trust, management control patterns, and performance. The rest of the paper is organized as follows. The theoretical framework is developed in the next section. This is followed by the research propositions derived from the theoretical framework. The total risk of an alliance (alliance hazards) emanates from two sources: the nature of the transaction (transaction hazards), and the nature of the partner (relational hazards).
The nature of the business or task for which the joint venture has been formed (i.e., the transaction) determines the severity of transaction hazards in a JV. Task complexity, asset specificity, difficulty in measuring partners’ contributions, and environmental uncertainty are the main components of transaction hazards.
How can Village Banks Maximise their Strategic Role of Promoting and Developing Small Businesses: An Overview of Developing Nations
Kisembo K. Deogratius, Breyer State University, London Centre, UK
Village banking, also referred to by many authors as private or local banking, has been a fast-growing business in many more-developed countries, and it is now moving into less-developed regions such as Sub-Saharan Africa as well. Village banks are undoubtedly one of the major sources of funding for small businesses in many developing countries. The winds of competition are blowing through the world's banking market, particularly at the top end. Local banks in many areas have seen their wealthiest clients take their funds to international private banks. Now some are responding by developing their own services. "The opportunities for domestic village banks appear to be very great indeed," says David Gibson-Moore, executive manager of private banking and share trading at the Al-Rajhi Banking and Investment Company of Riyadh. Gibson-Moore echoes the sentiments of many individuals throughout the world who see the growth and development of village banking in many different countries. The concept of village banking will be discussed more fully throughout the paper. It is important to define and analyze it, because many people are unfamiliar with the terms and ideas used in the village banking tradition and may become confused if these are not explained fully. Village banking will also be discussed in the literature review, as will the banking concept in general, small businesses, and developing countries, as it is important to see how all of these issues come together. The study will highlight examples from the Islamic banking industry, because it embodies many village banking ideals and these are spreading into both developed and developing countries. The paper will also raise issues of personnel management and the influence of diversity and conflict that are important for the promotion and development of small businesses.
Finally, the discussion will also focus on the impact of other sources of funding, such as FDI, on the development of small businesses in developing nations. Village banking is very similar to Islamic banking, a banking activity based on Syariah (Islamic law) principles. It does not allow the paying and receiving of interest and promotes profit sharing in the conduct of banking business. Islamic banking has the same purpose as conventional banking except that it operates in accordance with the rules of Syariah, known as Fiqh al-Muamalat (Islamic rules on transactions). The basic principle of Islamic banking is the sharing of profit and loss and the prohibition of riba' (interest). Among the common concepts used in Islamic banking are profit sharing (Mudharabah), safekeeping (Wadiah), joint venture (Musyarakah), cost plus (Murabahah), and leasing (Ijarah). There are many studies and articles on village banking. However, this body of work is not enough, largely because the majority of the research focuses only on a specific bank and does not address what village banking will look like in developing countries in the near future. The information available is very limited. One area of the world where village banking is already somewhat popular is Sub-Saharan Africa. This is because many traditional banks are inadequate in responding to the problems of small businesses, and the people in that region are often more comfortable with the village banking system. This does not mean that all of the individuals who live there are pleased with this arrangement, however, or that no larger banks exist; the difference lies in where the majority of the banking takes place. The problem to be addressed in this study is the success rate of village banking in developing countries.
The researcher will give important information on the organizational culture of the respondents, covering village banking, non-village banking, and the people in developing countries. All of these cultures and their differences will be analyzed and compared to determine the success of this system in developing countries. Such a study would be significant to the people in developing countries by making them aware of what the future of the village banking industry will look like. It will also seek to determine whether this banking industry has the potential to sell its services and create demand for its products in developing countries, compete with non-village banking systems, and encourage developing countries' populations to use its services. It would also benefit professionals involved with finance as well as those who are new to the market: treasurers, financiers, bankers (including those operating under Islamic windows), end investors, corporate financiers, investment bankers, corporate and commercial bankers, private bankers, analysts, portfolio managers, consultants, auditors, project financiers, lawyers, investment advisors, regulators, government representatives, insurance specialists, and many others. The literature shows that village banking, as some instances in post-communist Russia have demonstrated, can provide business solutions even in situations that conventional banking techniques fail to accommodate. Conventional banking is mainly a business of financial intermediation between savers and entrepreneurs: it earns its profit by borrowing at one rate of interest from those who have a surplus and lending at a higher rate to those who can use the funds profitably.
Village banking and finance is a growing phenomenon which came into existence to satisfy the financial needs of individuals who did not want to deal with the big banks or who found that they could not get what they needed from them. Village banking is growing at a rapid pace of 12-15% per year and is estimated to be managing funds approximating US$300 billion. The industry has not been confined to developing countries but has spread to important finance centres in Europe, the United States of America, Africa, and the Far East. It differs from other banking systems in that it has its own rules, which are mostly followed by the community of small business owners and by developing nations.
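As a purely illustrative check of the growth figures quoted above (taking the US$300 billion base as given and assuming the midpoint of the 12-15% range), funds under management would roughly double in about six years:

```python
base = 300.0   # US$ billions under management, as quoted in the text
rate = 0.135   # midpoint of the 12-15% annual growth range (an assumption)

# Count the years of compound growth needed for funds to double.
funds, years = base, 0
while funds < 2 * base:
    funds *= 1 + rate
    years += 1
print(years)  # 6 years at 13.5% compound growth
```

This is back-of-envelope arithmetic only; actual industry growth would vary year to year and by region.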
Networked Organization and the Owner/Supplier Relationship
Dr. Deborah Hardy Bednar, Chevron, Houston, Texas
Dr. Lynn Godkin, Lamar University, Beaumont, Texas
This case study details the criteria for establishing a networked company. The criteria are then applied to the development and management of a networked company involving an environmental remediation project. The participants include an international manufacturer, an environmental engineering firm, and a privately held construction company. Conclusions related to trust relationships in the face of project complexity are reported. The notion that individuals can come together in virtual organizations and networked organizations is being recognized in the literature (Hedberg & Holmqvist, 2003, p. 735). Networked companies are typically associated with advanced information technology (Grenier & Metes, 1995), with partnerships (Davidow & Malone, 1992), and with temporary groupings (Goldman, Nagel, & Preiss, 1995). However, Hedberg uses the term “imaginary organization” to discount the influence of information technology (Hedberg & Holmqvist, 2003). He uses “virtual organization” to refer to organizations that are temporary and “imaginary organization” for those with similar characteristics but of longer duration (Hedberg, 2002, p. 11). Otherwise, networked companies coordinate their activities through the development of joint mission and vision statements (Hedberg & Holmqvist, 2003). “Although they typically consist of a number of semi-independent legal units, they behave as one organization, and they exist and are manifested in the imagination of their leadership” (p. 734). Businesses are seeking competitive advantage by working closely with suppliers (Galbraith, 1998), relying on virtual and networked organizations to fulfill that purpose (Duarte & Snyder, 2001). Many have explored the challenges of forming self-directed work teams across geographic boundaries (e.g., Duarte & Snyder, 2001; Galbraith, 1998; Galbraith, 2000; Galbraith, Lawler & Associates, 1993).
Organization design experts, including Drucker, Miles, Naisbitt, and Savage, anticipate that networked companies will be the wave of the future (Cohen, 1993). The purpose of this paper is to present a case study illustrating how one networked company was formed, consisting of a major international manufacturer and two substantial, but smaller, firms. The networked organization is basically an arm's-length, contracted relationship appearing between two extremes. At one extreme is the Shared Responsibility or Collaborative Team relationship, in which long-term plans are shared between partners. Sourcing agreements, alliances (Galbraith, 1998), and self-directed work teams (Senge, 1990) are all labels given by various industries to this organizational form. At the other extreme, the Operator or Joint Venture relationship includes the exchange of equity. Here the responsibility for success or failure falls upon all partners in proportion to their investment in the enterprise. Equity exchanges, the most common organizational form in the oil industry (Lipnack & Stamps, 2000), are generally harder to dissolve than alliances and usually signal a long-term commitment (Galbraith, 1998). Beyond the Operator or Joint Venture is the Wholly Owned Subsidiary relationship, representing 100% equity control. The networked organization falls in the center of this continuum. The advantage of the networked organization approach is access to the size, competencies, and resources of each network partner to better meet customer needs; the disadvantage is a loss of control over both personnel decisions and proprietary knowledge. A network partner plays two basic roles, network integrator and network specialist (Galbraith, 1998). As integrator, the partner coordinates the activities performed by the many firms involved. As network specialist, it provides technical expertise, execution know-how, and scaleable implementation.
The network specialist performs one or more activities, such as product design or manufacturing, that provide a relevant service to the networked organization (Lawler, 2000). In a networked organization, the integrator benefits from patents, licenses, or intellectual property that the specialist brings; the specialist benefits from an opportunity to demonstrate and field-test those same intellectual assets in the marketplace (Galbraith, Lawler & Associates, 1993). Two basic factors distinguish the networked company from other organization models. First, there is the degree of internal alignment between the integrator's company goals and the project objectives, which is determined through value chain and core business criteria analyses (Galbraith, 1998). Where internal alignment is low, the integrator is more likely to initiate a networked company. Second, the integrator's parent company has a need to build relationships and coordinate work with network specialists or suppliers at the project level. To meet the project objectives, the network integrator must leverage the technical expertise and execution know-how that the network specialists bring to the organization. The integrator resorts to the networked organization out of a need to leverage outside expertise and experience to meet organizational goals (Galbraith, Lawler & Associates, 1993). In February 1995, Chevron strategically decided to exit the southeast Texas refining market by selling the Port Arthur plant to Premcor. However, the sales agreement called for Chevron to characterize the environmental impact and perform any remediation required by the United States Environmental Protection Agency (U.S. EPA) or the Texas Natural Resource Conservation Commission (TNRCC) after the sale was complete. The Port Arthur refinery was built in 1901, following the discovery of oil at Spindletop (Gulf Oil History, 2003), and had remained in constant operation since that date.
Subsequently, Chevron was required to “make good” on the agreement and remediate much of a 4,000-acre refinery site that had been in constant use for ninety-four years. In this section we describe the networked organization that was ultimately formed to perform the work and that met the letter and intent of the agreement. Chevron is a well-established, integrated oil company focusing on capital asset project development.
Personal Traits and Leadership Styles of Taiwan’s Higher Educational Institutions in Innovative Operations
Dr. Jui-Kuei Chen, Tamkang University, Taiwan
I-Shuo Chen, Graduate Student, National Dong Hwa University, Taiwan
With increasing numbers of higher educational institutions in Taiwan, how to become more efficient through innovative operation has become a critical issue. This paper studies the “Big Five” personal traits, leadership styles, and their relationship to innovative operation. Conducted with a sample of universities in Taiwan, the study analyzes 194 professors and lecturers from three universities by means of a questionnaire. The dimensions are divided into three parts: personal traits, leadership styles, and innovative operations. The study utilizes factor analysis, variance analysis, and correlation analysis. The two main findings are, first, that the traits of extraversion and agreeableness have a positive relationship to higher perception of innovative operation in the university; second, that transformational leadership should be combined with transactional leadership without passive management-by-exception for more efficient innovative operation. A discussion of the key research findings and some suggested directions for future research are provided. Because of Taiwan's accession to the WTO and an increasing number of universities, innovative operation has become a crucial issue for survival in a competitive higher-education market. Extant research has indicated that organizational operations involve primarily top managers and their subordinates (Beng & Robert, 2004), although some studies have shown that institutions of higher education often fail to implement innovative operation (Glower & Hagon, 1998; Cuban, 1999) because of a lack of participation by teachers (McLaughlin, cited in Rudduck, 1991). Given this finding, understanding the personal traits of teachers and the leadership styles of managers will be crucial for universities which seek innovative operation.
The literature has defined leadership style in numerous ways, but currently leadership style is described as the process that managers use to influence subordinates to work toward organizational goals. Wu (2006) indicated that the top manager is the helmsperson of the organization, but some top managers do not understand how to lead their teams to efficient and innovative operation (Shalley & Gilson, 2004). Therefore, understanding the best leadership style for innovation will help top managers lead the organization to innovative operation more easily and successfully. Personal traits are enduring patterns of thought, emotion, and behavior that are stable over time and across different situations (Funder, 2001). Thus, understanding subordinates' traits will be an important factor in gaining the participation in innovation from teachers that universities in Taiwan need today. The literature has identified and summarized many kinds of leadership styles (e.g., Davis, 2003; Spears & Lawrence, 2003; House et al., 2004; Hirtz, Murray, & Riordan, 2007). Transformational leadership is the most frequently researched (Judge & Bono, 2001) for increasing motivation (Charbonneau et al., 2001) and operational performance (Barling et al., 2002). Transformational leadership is marked by sympathy and creativity (Popper, Mayseless & Castelnovo, 2000) and can be divided into four dimensions: idealized influence, inspirational motivation, individualized consideration, and intellectual stimulation. Transformational leaders aim to respect subordinates' abilities and need for rewards (Scott, 2003) and emphasize improving subordinates' view of the value of their work in order to motivate them to accomplish higher performance (Sivanathan & Fekken, 2002; Miia, Nicole, Karlos, Jaakko, & Ali, 2006; Bass & Riggio, 2006). However, leadership in universities in Taiwan is still poorly understood.
Some studies of educational institutions have shown that leadership styles have a significant relationship to operational performance (Nidiffer, 2001; Davis, 2003). More than 30 percent of research studies on the relationship between transformational leadership and operational performance, using different analysis criteria and variables (e.g., Dvir, Eden, Avolio, & Shamir, 2002; Shin & Zhou, 2003; Pillai & Williams, 2004), have found that transformational leadership results in high motivation, identification, high innovation, and high performance (Scott, 2003; Bass et al., 2003). Transactional leadership in organizations plays an exchange role between managers and subordinates (Jung, 2001). The transactional leader first confirms the relationship between performance and reward, and then uses that exchange to encourage subordinates to improve performance (Scott, 2003). In this way, transactional leadership emphasizes reinforcement and exchange (Jung & Sosik, 2002; Gregory, 2006). Transactional leadership is divided into three dimensions: contingent reward, active management-by-exception, and passive management-by-exception. Some research has indicated that transactional leadership will not make subordinates work beyond established standards or toward innovation (Scott, 2003), although leaders can encourage innovation and high performance through reinforcement or reward (Jung & Sosik, 2002; Gregory, 2006). For the purposes of this study, the transactional and transformational leadership styles are measured with the Multifactor Leadership Questionnaire. There is a good deal of literature on the relationship between personal traits and performance (e.g., Barrick, Mitchell, & Stewart, 2003; Hough, 2003; Judge & Kristof-Brown, 2003).
Evidence is accumulating which suggests that virtually all personality measures can be reduced to, or categorized under the umbrella of, a five-factor model of personality, which has been labeled the “Big Five” (Timothy, Chad, Carl, & Murry, 1999). The Big Five model is divided into the five dimensions of extraversion, conscientiousness, agreeableness, neuroticism, and openness to experience, which are related to an organization's operational performance (Judge, Heller, & Mount, 2002). The neurotic trait is marked by vulnerability to pressure and illness (Suls, Green, & Hills, 1998; Shifren, Furnham, & Bauserman, 2003) and is related to negative work performance (Barrick et al., 2001; Hogan & Holland, 2003).
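The correlation analysis the study describes can be sketched as follows. The data and variable names below are invented for illustration only and are not taken from the study's questionnaire:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Likert-scale (1-5) scores for seven respondents:
extraversion          = [4, 3, 5, 2, 4, 5, 3]
innovation_perception = [4, 3, 5, 2, 5, 4, 3]

r = pearson_r(extraversion, innovation_perception)
# A positive r between a trait score and perceived innovative operation
# would be consistent with the study's first finding.
```

In the actual study the same computation would be run for each Big Five dimension against the innovative-operation scale, with significance tests on each coefficient.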
Constructing Financial Ratios to Evaluate Technical Corporations
Chun-Huang Liao, National Chiao Tung University of Management Science, Hsinchu & Lecturer, Chungchou Institute of Technology, Yuanlin Changhwa, Taiwan, R.O.C.
This article constructs cross-sectional data from financial ratios to evaluate technical corporations in Taiwan. Via the establishment of financial constructs, 560 listed companies were segmented into four groups, each with its own financial situation. Financial ratios—including pre-tax EPS, net worth per share, cash flow per share, after-tax ROE, revenue per share, quick ratio, current ratio, and net worth ratio—are found to be significantly related to the factor of corporate market value. Reducing the eight variables to two financial constructs—Operating Efficiency (OE) and Physique Structure (PS)—provides an overall financial picture and reveals differences among listed companies. The findings should prove valuable in portfolio management and corporate financial decisions. Revealing the financial quality of tech corporations, with financial constructs as a filter, can help to reduce risk and raise the performance of investment capital. Although the scientific and technological industry in Taiwan and many advanced countries is flourishing, not all tech corporations are at the same stage of development: some are at the growth stage of the industry, some are in the plateau period, some are in the decline phase, and a few face bankruptcy. Using financial constructs as a filter helps to divide firms into different groups, to the benefit of both investors and management. This paper presents four types of financial situations. First, firms with relative advantages in both Operating Efficiency (OE) and Physique Structure (PS) deserve better ratings, and investments in these firms should be maintained because their credit risk is relatively low. Second, firms with a relative advantage in OE and a relative disadvantage in PS can expect their future financial situations to improve, and investments in these firms should also be maintained for potential future capital gains. The third type is firms with a relative disadvantage in OE and a relative advantage in PS.
These firms’ financial situation can be expected to worsen in the future, and their operating risk to increase. Management of these firms should work hard to improve the situation, and investments in these firms should be maintained with caution. The fourth type is firms with relative disadvantages in both OE and PS, which shows that the enterprise’s financial situation is bad, the investors’ risk is increasing, and the company’s credit risk is relatively high. Management of these firms should work extremely hard to improve the financial conditions of their companies, and investors should beware. Many scholars have studied the related topics of financial ratios, stock prices, and returns. For example, Nissim and Penman (2003) presented a financial statement analysis that distinguished leverage arising from financing activities from leverage arising from operations. Their financial statement analysis explained cross-sectional differences in current and future rates of return as well as in price-to-book ratios. They concluded that balance sheet line items for operating liabilities are priced differently than financing liabilities. Fama and French (1992) documented a significant relationship between firm size, book-to-market ratios, and security returns for non-financial firms, while Barber and Lyon (1997) documented that the relationships between firm size, book-to-market ratios, and security returns are similar for financial and non-financial firms. Ho et al. (2000) studied the Hong Kong stock market and found that beta, book leverage, the earnings-price ratio, and dividend yield are not priced, whereas significant book-to-market equity, market leverage (absorbed by book-to-market equity), size, and share price effects are.
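The four-group screening described above can be illustrated with a small sketch. The paper's exact factor-extraction procedure is not reproduced here; this is a minimal stand-in that applies PCA to standardized synthetic data for the eight ratios and labels each firm by whether its two component scores (stand-ins for OE and PS) lie above or below the cross-sectional median. All data and thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the eight financial ratios of 200 firms
# (pre-tax EPS, net worth per share, cash flow per share, after-tax ROE,
#  revenue per share, quick ratio, current ratio, net worth ratio).
n_firms = 200
X = rng.normal(size=(n_firms, 8))

# Standardize each ratio, then reduce to two principal components
# (a stand-in for the paper's two constructs, OE and PS).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
top2 = eigvecs[:, np.argsort(eigvals)[::-1][:2]]
scores = Z @ top2                       # column 0 ~ "OE", column 1 ~ "PS"

# Classify each firm into one of the four types described in the text:
# "relative advantage" = score above the cross-sectional median.
oe_adv = scores[:, 0] > np.median(scores[:, 0])
ps_adv = scores[:, 1] > np.median(scores[:, 1])
group = np.where(oe_adv & ps_adv, 1,
         np.where(oe_adv & ~ps_adv, 2,
          np.where(~oe_adv & ps_adv, 3, 4)))
```

Group 1 corresponds to advantages in both constructs, group 4 to disadvantages in both, mirroring the four financial situations above.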
Danielson and Press (2003) derived a new model of the ARR-IRR relationship and demonstrated that economic returns can be estimated from accounting numbers for many firms, thus contributing to an understanding of why accounting information is value-relevant. Bali (2003) documented significant drift in stock returns following announcements of changes in cash dividends; the drift was robust and not explained by beta changes, and while prices react to further announcements, dividend decreases exhibit weak autocorrelation. Lewellen (2004) predicted returns with financial ratios, showing that dividend yield predicted market returns during the period 1946-2000, and that book-to-market and the earnings-price ratio predicted returns during the period 1963-2000. Bagella et al. (2005) built a discounted cash flow (DCF) model and tested it on a sample of high-tech stocks to determine whether the strong and weak versions of the model were supported by data from the U.S. and European stock markets. Empirical results showed that fundamental earnings-price ratios explained a significant share of the cross-sectional variation of observed E/P ratios. Robertson and Wright (2006) provided new evidence of the power of dividend yields for U.S. aggregate stock returns by showing that cash-flow yield has strong and stable predictive power for returns. Hwang et al. (2006) studied the dividend pricing model and presented powerful panel tests of the dividend-pricing relationship using a unique data set, finding broad support for the model during periods both before and after the Asian Financial Crisis of 1997-1998. Sadka (2006) studied the role of earnings, using accounting earnings instead of dividends as a measure of cash flows and showing that as much as 70% of the variation in the dividend-price ratio could be explained by changes in expected earnings.
Jiang and Lee (2006) developed a loglinear cointegration model that explained future profitability and excess stock returns in terms of a linear combination of the log book-to-market ratio and the log dividend yield. The loglinear cointegration model performed better than either the log dividend yield model or the log book-to-market model in terms of cross-equation restriction tests and forecasting performance comparisons. Finally, Clubb and Naffi (2007) suggested that book-to-market would be positively related to returns if the market value of equity equaled future expected cash flows discounted at the expected return and book value proxied for future cash flows. Other studies of asset prices include Caginalp et al. (1998), who researched the initial cash/asset ratio and asset prices and suggested that fundamental value is approached belatedly, offering some consolation to rational expectations theory. Plerou et al. (2002) used cross-correlations between price fluctuations of different stocks, applying methods of random matrix theory (RMT), and found that the largest eigenvalue and its corresponding eigenvector represent the influence of the entire market on all stocks. Park (2005) studied stock return predictability and dispersion in earnings forecasts, using monthly data on market analysts’ earnings forecasts to show that the dispersion in forecasts had particularly strong predictive power for future aggregate stock returns at intermediate horizons. He also suggested that the dispersion in analysts’ forecasts could be interpreted as a measure of differences in investors’ expectations rather than of risk. The existing research on financial statement ratios is significant in its prediction of stock returns: regardless of the research methods adopted or the data periods used, all studies confirmed the usefulness and validity of financial ratios.
Skill Requirements for Software Developers: Comparisons between U. S. and Taiwan
Jui-Hung Ven, China Institute of Technology, Taiwan, R.O.C.
Chien-Pen Chuang, National Taiwan Normal University, Taiwan, R.O.C.
Job advertisements for software developers were collected from web recruiting services in the U.S. and Taiwan. Skill requirements were gathered via a semi-supervised program based on the information competency ontology created by the authors. All skills were classified into six categories: operating system skills, programming language skills, markup language skills, database skills, distributed technology skills, and other skills. The proportion of skill requirements for each individual skill and matched pairs in every skill category were calculated. The most commonly needed skills were reported and compared. The results show that Windows, Java, SQL Server, and .NET were the most commonly needed skills in the four main categories. The average numbers of skill requirements were 9.16 and 7.69 in the U.S. and Taiwan, respectively. A software developer should have many facets of competency in order to complete the design, development, installation, and implementation of information systems. Competency is a set of KSA, an acronym for knowledge, skills, and abilities. From an employer's point of view, a software developer should possess knowledge related to software development, such as programming techniques, data structures, relational database concepts, object-oriented concepts, and software development life cycles. A software developer should also have technical or hard skills such as Windows, Java, VB (Visual Basic), and SQL Server. Abilities, or so-called soft skills, such as listening, speaking, reading, writing, and information gathering, are needed in all workplaces. However, skills are specific to different job areas (LaDuca, 2006). Hence, of the three areas of competence--knowledge, skills, and abilities--skills are recognized as the most important for software developers (Bailey & Stefanik, 2001). In the 1970s, a software developer with skills in one operating system and one programming language was sufficiently qualified to seek employment.
However, the average number of skill requirements increased from 2.2 in 1970 to 4.3 in 1990 (Todd, McKeen & Gallupe, 1995), almost a two-fold increase in 20 years. The trend continued and became even more rapid afterwards. An updated report (Surakka, 2005) indicated that the average number increased from 3.57 in 1990 to 7.66 in 2004, more than doubling in 14 years. Given these rapid changes, educators and trainers should understand very well the skills needed by various enterprises in order to keep their curricula up to date. Students and job seekers must also be aware of these trends to better prepare themselves for employment. IT workers in software development should likewise recognize the changing trends if they want to keep their jobs (Emigh, 2001; Lethbridge, 1999; Trower, 1995). Hence, the present study collected job advertisements posted on human resource web pages in both the U.S. and Taiwan; the U.S. has long been an information technology powerhouse and can serve as a basis for comparison. The selected job titles include software developers and all types of programmers. We then wrote an information-gathering program that uses a semi-supervised method to extract the needed skills based on the information competency ontology created previously by the present authors (Ven & Chuang, 2007). The ontology classifies skills into six categories: operating system skills, programming language skills, markup language skills, database skills, distributed technology skills, and other skills. With the help of the information competency ontology, we wrote another program to calculate the proportions of skill requirements for every skill and matched pairs in each category. The structure of the paper is as follows. Section 2 reviews related work. Section 3 describes the research methods. Section 4 gives the results and discussion. Section 5 presents the conclusions.
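As a toy illustration of the proportion calculation described above: the ads and skill names below are invented, and the actual study matched skills semi-automatically against the ontology rather than against exact strings.

```python
from collections import Counter

# Hypothetical mini-corpus: each ad is the set of skills it requires.
ads = [
    {"Windows", "Java", "SQL Server", ".NET", "HTML"},
    {"Linux", "Java", "Oracle", "XML"},
    {"Windows", "C#", "SQL Server", ".NET"},
    {"Windows", "Java", "MySQL", "HTML", "XML"},
]

# Proportion of ads requiring each skill.
counts = Counter(skill for ad in ads for skill in ad)
proportions = {s: c / len(ads) for s, c in counts.items()}

# Average number of skill requirements per ad (the study reports
# 9.16 for the U.S. and 7.69 for Taiwan on its real corpora).
avg_skills = sum(len(ad) for ad in ads) / len(ads)
```

On this toy corpus, Windows and Java each appear in 3 of 4 ads (proportion 0.75), and the average number of skill requirements per ad is 4.5.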
There are many related studies identifying the skills needed by students, enterprises, and employers. The research methods used can be roughly divided into three types: content analysis, surveys, and a combination of both. The data sources can be current IT workers, managers, consultants, employers, or job advertisements posted in newspapers, professional magazines, or on web pages. The type of analysis can be either a one-year cross-sectional analysis to understand the current status, or a multi-year longitudinal analysis to understand trends of change. This section reviews the literature according to the type of analysis. Beise, Padget, and Canoe (1991) surveyed 924 software developers concerning the skills required for job success. Approximately 68% of the respondents who had worked as programmers for one to three years after graduation responded that programming skill is the most important because it is basic to all types of IT jobs. Another study (Haywood & Madden, 2000) surveyed 22 software developers; about 64% of the respondents indicated that programming skill is the most important. A more recent study (Bailey & Stefanik, 2001) surveyed 227 programmers in the different areas of knowledge, abilities, and skills. The majority replied that the most important skill is being able to read, understand, and modify programs written by others, as well as to write new programs. Among the least important skills were RPG and Novell NetWare. All of the above-mentioned studies (Bailey, 2001; Beise, 1991; Haywood, 2000) emphasized programming language skills, because any software development requires such skills. From the viewpoint of programming, it is easy for programmers to write a new program using their own thinking and logic. However, it is more difficult to modify someone else's program, because one first has to understand the other person's thinking and logic.
As for RPG and Novell NetWare, a traditional programming language and a file-sharing local area network, respectively, they can be replaced by more powerful programming languages, databases, and local area networks. A survey of 45 companies regarding the programming languages used was conducted by Knapp (1993). In rank order, the languages were as follows: COBOL (84%), Assembler (31%), C (20%), Fortran (13%), RPG (7%), and Pascal (2%).
Accounting for Impairment Test of Investments in Subsidiaries and Associates
Mauro Romano, Ph.D., University of Foggia, Italy
This study examines the two principal items that need to be treated in the impairment accounting of investments in subsidiaries and associates: goodwill and minority interests. The study concentrates in particular on the determination of the recoverable amount, defined as the higher of the asset's fair value less costs to sell and its value in use. The impairment test of investments in subsidiaries and associates may be approached in different ways in separate and consolidated statements. In the separate statement, the carrying amount of an investment in subsidiaries or associates is compared with the recoverable amount to determine the impairment loss; in the consolidated statement, the impairment test regards the identifiable assets and liabilities of the subsidiary or associate and then, as a residual value, the “goodwill on consolidation” resulting from the consolidation techniques. The proposed amendments to IFRS 3 (June 2005) introduce the ‘full goodwill’ method in the consolidated statement, with interesting implications for impairment test accounting. The standard applies to an entity that makes an explicit and unreserved statement that its general purpose financial statements comply with IFRS. The standard sets out the procedures that an entity must follow when adopting IFRS for the first time. Its objective is to ensure that financial statements contain high quality information that is transparent, comparable, and can be generated at a cost that does not exceed the benefits to users. The general principle is that first-time financial statements should be prepared on the basis that the entity had always applied IFRS. This is the so-called “retrospective application.” Since it may be difficult, expensive, or even impossible to rigidly apply this general principle, the standard provides some important exceptions and exemptions to the basic measurement principles of IFRS.
The IASB’s International Financial Reporting Standards introduce significant innovations in the recognition, measurement, presentation, and disclosure of financial investments and in the impairment test approach, in accordance with IAS 36 “Impairment of Assets.” The version of IAS 27 “Consolidated and Separate Financial Statements,” effective for an annual period ending 31 December 2005, was revised most recently in December 2003 and amended in March 2004 by IFRS 3 “Business Combinations” and IFRS 5 “Non-current Assets Held for Sale and Discontinued Operations.” According to this standard, consolidation is based on control, which is the power to govern the financial and operating policies of an entity so as to obtain benefits from its activities. Consolidated financial statements should include all subsidiaries of the parent, without exception. The logic of impairment tests under international accounting standards does not differ significantly from the precepts of art. 2426 of the Italian Civil Code. The substantial difference is the level of detail in IAS 36, which aims to indicate possible methods of determining the corporate values to be considered in the process of measurement and disclosure of impairment losses. There are several steps in the process of transition to IFRS. We emphasize that in the phase of reclassification of assets and liabilities, certain intangible assets recognised in a business combination would need to be reclassified as goodwill if their recognition does not meet the IAS 38 Intangible Assets criteria; the reverse could also occur. Previous business combinations occurring before the opening balance sheet date do not have to be restated to comply with IFRS. Mergers do not have to be re-accounted for as acquisitions, previously written-off goodwill does not have to be reinstated, and the fair values of assets and liabilities may be retained.
However, an impairment test for any goodwill remaining after reclassifying any necessary intangibles must be made in the opening balance sheet. IAS 36 covers the impairment of all non-financial assets except for investment property, inventories, biological assets, deferred tax assets, assets arising from construction contracts, assets arising from employee benefits, non-current assets (or disposal groups) classified as held for sale, deferred acquisition costs, and intangible assets arising from an insurer’s contractual rights under insurance contracts. It does not cover financial assets, but it does apply to investments in subsidiaries, associates, and joint ventures. The impairment test on the book value of investments in subsidiaries, associates, and joint ventures is comparable to the impairment of a cash-generating unit (CGU). It may not be possible to assess a single asset for impairment, because the asset generates cash flows only in combination with other assets. Therefore, assets are grouped together into the smallest group of assets that generates cash inflows from continuing use that are largely independent of the cash inflows of other assets or groups of assets (e.g., a plant or division). The identification of a CGU requires judgement, and this is one of the most difficult areas of impairment accounting. In identifying whether cash inflows from assets or CGUs are independent, it is necessary to take into account a number of factors, including the manner in which management monitors operations and makes decisions about continuing or disposing of assets and/or operations. However, the identification of independent cash inflows is the key consideration. Two items must be addressed in the impairment accounting of investments in subsidiaries, associates, and joint ventures: goodwill and minority interests. Goodwill does not generate cash inflows independently of other assets or groups of assets and is not tested for impairment independently.
Instead, it should be allocated to the acquirer’s CGUs that are expected to benefit from the synergies of the business combination, irrespective of whether other assets or liabilities of the acquiree are assigned to those units.
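A minimal numerical sketch of this kind of CGU-level test may help. The figures and asset names are hypothetical; the mechanics follow IAS 36: the recoverable amount is the higher of fair value less costs to sell and value in use, and any impairment loss is allocated first to the goodwill carried by the unit, then pro rata over its other assets.

```python
def impairment_test(carrying_assets, goodwill,
                    fair_value_less_costs_to_sell, value_in_use):
    # Carrying amount of the CGU, including allocated goodwill.
    carrying = sum(carrying_assets.values()) + goodwill
    # Recoverable amount: higher of the two bases.
    recoverable = max(fair_value_less_costs_to_sell, value_in_use)
    loss = max(0.0, carrying - recoverable)

    # Allocate the loss first against goodwill ...
    goodwill_write_off = min(loss, goodwill)
    remaining = loss - goodwill_write_off

    # ... then pro rata across the CGU's other assets.
    total = sum(carrying_assets.values())
    asset_write_offs = {a: remaining * v / total
                        for a, v in carrying_assets.items()}
    return loss, goodwill_write_off, asset_write_offs

loss, gw_off, per_asset = impairment_test(
    carrying_assets={"plant": 600.0, "licences": 200.0},
    goodwill=100.0,
    fair_value_less_costs_to_sell=700.0,
    value_in_use=760.0,
)
# Carrying 900 vs. recoverable 760 gives a loss of 140: goodwill
# absorbs 100, and the remaining 40 is spread 30/10 over the assets.
```

This sketch omits the floor IAS 36 places on per-asset write-downs (an asset is not reduced below its own recoverable amount), which a full implementation would add.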
The Choice of Housing Location
Hsiu-Yun Chang, Ph.D. Candidate, National Taiwan University, Taiwan
& Lecturer, Chungchou Institute of Technology, Yuanlin Changhwa, Taiwan
This study attempts to explore the influences on housing demanders' choice of housing location. Neighborhood effects have been investigated for a long time, and they are very important factors in the real estate market. However, more and more recent literature has focused on asymmetric information and has indirectly found that a local home bias phenomenon exists in the choice of housing location. For example, Lerner (1995), Coval and Moskowitz (1999, 2001), and Garmaise and Moskowitz (2004) contribute to the literature by investigating the relationship between distance and the degree of information asymmetry. Their investigations indirectly inject the concept of the local home bias phenomenon into the real estate market. This study models the spatial home bias effect and spatial neighborhood effects that arise from distinct community components and explores whether contextual interactions within the community influence household members' choice of housing location. Finally, the results show that both effects influence the location choice of housing buyers. For further investigation, this study suggests that researchers can collect real estate data to test the magnitude of both effects within each jurisdiction of a country, and thus forecast migration phenomena. Housing purchase behavior involves dual motives: consumption and investment (Henderson and Ioannides, 1987; Arrondel, 2001). On the consumption side, houses are durable consumption goods and yield tangible services each period. The intensity of consumption services at any point in time is hard to measure but is strongly influenced by the housing environment. On the investment side, the housing market has many distinctive features. First, the value of a house, unlike the value of stocks, accounts for a very large portion of a household's wealth; thus, it is difficult to trade houses frequently. Second, housing is immovable.
Therefore, whether the motivation for a housing purchase is consumption or investment, people will choose a house cautiously, especially regarding location. A phenomenon can be observed: if communities differ in socioeconomic growth rates or living standards, discrepancies in employment opportunities and individual outcomes will arise among communities through neighborhood effects. This creates an implicit pressure for people in low-growth communities to move to high-growth communities. The regional neighborhood effects, together with the forces of urban growth and decay, facilitate the spatial arbitrage that is crucial for the valuation of residential capital (Ioannides, 2002; Dietz, 2002; Manski, 1993, 2000). In reality, the housing market is full of information asymmetries. Positive neighborhood effects can raise the valuation of a property. However, in environments with highly asymmetric information, a property will be overpriced. Therefore, price responses are not directly proportional to neighborhood effects. Lerner (1995), Coval and Moskowitz (1999, 2001), and Garmaise and Moskowitz (2004) investigated the relationship between distance and the degree of information asymmetry. Their investigations indirectly inject the concept of the local home bias phenomenon into the real estate market. Neighborhood effects, understood as community influences on individual and household socioeconomic outcomes, are an interdisciplinary concept. Many sociologists and economists have discussed this topic in theoretical and empirical papers, such as Manski (1993, 2000), Dietz (2002), and Haurin, Dietz and Weinberg (2003). Empirical research conceptualizes social interaction processes only in broad terms that lack the clarity of markets and games.
Recognizing this, Manski proposes three hypotheses that empirical researchers have sought to distinguish: endogenous interactions, contextual interactions, and correlated effects. Here, we use the housing demand behavior of households to explain these three types of neighborhood effects. An endogenous interaction is present within a neighborhood if, all other things being equal, a household's housing purchase behavior tends to vary directly with the average housing purchase behavior of other households in the community. A contextual interaction is present within a neighborhood if a household buys another house in the local community because of exogenous characteristics of the household's neighbors or the socioeconomic composition of the community, such as education, the crime rate, or the community's living standard. A correlated effect is present if households in the same neighborhood tend to behave similarly because they have similar individual characteristics or face the same exogenous characteristics of the neighborhood. These three social interactions all lie within neighborhood effects. Moreover, Dietz (2002) and Haurin, Dietz and Weinberg (2003) add to neighborhood effects the factor of external neighborhood effects, that is, the neighborhood possesses spillover characteristics. An external neighborhood effect is present if a characteristic of a neighborhood affects households in other neighborhoods. Many papers have captured characteristic factors of the socioeconomic composition of each community to investigate external neighborhood effects. They have investigated the socially positive behaviors that promote urbanization and growth in the community and encourage households to migrate into the region.
Long-Run Share Prices and Operating Performance Following Share Repurchase Announcements
Dr. Chaiporn Vithessonthi, University of the Thai Chamber of Commerce, Bangkok, Thailand
This study uses a new data set to assess whether our understanding of share repurchases is portable across countries with different institutional settings. This paper presents empirical results regarding long-run stock price and operating performance following share repurchase program announcements of listed firms in Thailand between 2001 and 2005. The results show some evidence of long-run abnormal stock returns following announcements of a share repurchase program, implying that the market fails to incorporate the valuation effect of share repurchase information within a short period. The findings provide no evidence of long-run operating performance improvement following share repurchase program announcements. It has long been documented that the initial stock price reaction to share repurchase announcements is positive (Comment and Jarrell, 1991; Dann, 1981; Peyer and Vermaelen, 2005; Lakonishok and Vermaelen, 1990; Rau and Vermaelen, 2002; Vithessonthi, 2007). However, several studies have questioned whether the market fully incorporates the valuation effect of share repurchase information in a short period. Some conclude that stock prices underreact to share repurchase announcements, and that investors fully incorporate the effect of a share repurchase announcement only in subsequent periods. The correction of such underreactions may explain the long-run abnormal returns that share repurchase announcing firms experience. Empirical research based on U.S. data provides evidence of a positive stock price reaction to the announcement of a share repurchase offer (Comment and Jarrell, 1991; Dann, 1981; Lakonishok and Vermaelen, 1990; Peyer and Vermaelen, 2005; Stephens and Weisbach, 1998; Vermaelen, 1981). Empirical studies based on non-U.S.
data also find a positive stock price reaction to share repurchase announcements (e.g., Ikenberry et al., 2000; Jung, Lee and Thornton, 2005; Rau and Vermaelen, 2002; Vithessonthi, 2007; Zhang, 2002), confirming that investors of a firm that announces a share repurchase program earn, on average, abnormal stock returns around the announcement date. To the extent that investors do not fully incorporate the effect of share repurchase announcements in a short period, an investigation of stock price performance over a long period should reveal the true effect of share repurchase announcements on stock prices. Recently, Zhang (2005) finds no evidence of long-run abnormal stock returns for Hong Kong firms following actual share repurchases. However, Zhang (2005) reports that the three-year buy-and-hold abnormal returns for high-book-to-market-value firms that make actual share repurchases are positive, and that the three-year buy-and-hold abnormal returns for low-book-to-market-value firms that make actual share repurchases are negative. I examine long-run stock returns following share repurchase announcements to determine whether managers’ ability to exploit undervaluation opportunities is a market phenomenon. According to the signaling hypotheses, share repurchase program announcements may reveal favorable information about the firm’s future value and performance to investors (Dann, 1981). Thus, managers can deliberately attempt to convey new information about future earnings improvement to the market and exploit undervaluation when repurchasing their shares (Hertzel, 1991; Peyer and Vermaelen, 2005). Overall, there is little evidence in the share repurchase literature that share repurchases benefit long-term investors. If share repurchase program announcements are used as a signal of a firm’s future value, firms that announce a share repurchase program will have long-run post-announcement operating performance improvements.
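The buy-and-hold abnormal return (BHAR) measure behind results like Zhang's can be sketched as follows: compound the firm's periodic returns over the horizon and subtract the compounded benchmark return. The return series here are invented for illustration and do not come from any of the cited studies.

```python
import numpy as np

def bhar(firm_returns, benchmark_returns):
    """Buy-and-hold abnormal return over one event horizon."""
    firm = np.prod(1.0 + np.asarray(firm_returns)) - 1.0
    bench = np.prod(1.0 + np.asarray(benchmark_returns)) - 1.0
    return firm - bench

# Three months of hypothetical post-announcement returns:
# the firm compounds to about 4.01%, the benchmark to 2.01%.
abnormal = bhar([0.02, -0.01, 0.03], [0.01, 0.00, 0.01])
```

In an event study, this statistic would be computed per announcing firm over the full (e.g., three-year) horizon and then averaged or tested across the sample.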
Lie (2005) reports evidence that U.S. firms announcing open-market share repurchases tend to exhibit operating performance improvements over the two subsequent quarters. Nonetheless, Grullon and Michaely (2004) report no evidence that U.S. firms that announce open-market share repurchases exhibit an operating performance improvement following the announcement year. Thus, I examine the long-run operating performance of firms announcing a share repurchase program by employing the methodologies adopted by Grullon and Michaely (2004), Kadapakkam, Sun and Tang (2004), and Lie (2005) to determine whether a share repurchase program announcement reveals a firm’s future operating performance improvement. The current study examines share repurchase program announcements and differs from other studies in several respects. First, I provide new evidence and an analysis of long-run common stock returns following share repurchase program announcements. Second, I examine long-run operating performance following announcements of share repurchase programs in Thailand. Since my focus is on an emerging market economy, my sample is independent of the samples studied by Grullon and Michaely (2004), Lie (2005), and others. Moreover, evidence on long-run performance following share repurchase announcements in Thailand seems to be nonexistent. The main contribution of the present empirical study is that it provides complementary international evidence regarding the implications of share repurchases in a different regulatory environment. More specifically, it suggests whether there are long-run abnormal stock returns and operating performance improvements following share repurchase program announcements in Thailand. Thailand, like many other countries in Asia, has recently allowed firms to make tender offers for their own shares. Since December 3, 2001, the Board of Governors of the Stock Exchange of Thailand (SET) has allowed listed firms to undertake share repurchase programs.
Under the current regulation, the firm’s board of directors can approve a share repurchase program whose amount is less than 10 percent of its shares outstanding, and the offer price must not exceed the average closing price over the five trading days prior to the repurchase plus 15 percent of that average price. The notion that share repurchases convey information about a firm’s future value was discussed by Easterbrook (1984), Miller and Rock (1985) and Jensen (1986). This framework assumes that there is asymmetric information between managers and investors: that is, managers are better informed about the firm’s true value than outside investors and have an incentive to convey favorable information to the market. Easterbrook (1984) and Jensen (1986) suggest that when firms have excess cash, it is essential for them to return free cash flow to shareholders and thus lower the agency costs associated with negative net present value investments. Thus, firms facing limited investment opportunities should return excess cash to shareholders by paying dividends or repurchasing shares, so that it can be reinvested in other assets (Baker, Powell and Veit, 2003; Grullon and Ikenberry, 2000; Lie, 2000). Empirical studies provide evidence supporting this argument. For instance, Grullon and Ikenberry (2000) find that the market reacts positively to share repurchase programs announced by firms with declining investment opportunities. In addition, Lie (2000) finds that the amount of excess cash held by repurchase-announcing firms is positively related to the stock price reaction.
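The repurchase limits just described can be illustrated numerically. This is only a sketch of the two stated constraints (program size below 10 percent of shares outstanding; offer price at most the five-trading-day average close plus 15 percent of that average), not a rendering of the full SET regulation:

```python
def set_repurchase_limits(closes_last_5_days, shares_outstanding):
    """Illustrate the two limits described in the text.
    - Maximum offer price: 5-day average close + 15% of that average,
      i.e. 1.15 * average.
    - Maximum program size: a strict upper bound of 10% of shares
      outstanding (the regulation requires the amount to be below it)."""
    avg = sum(closes_last_5_days) / len(closes_last_5_days)
    max_price = avg * 1.15
    max_shares = 0.10 * shares_outstanding
    return max_price, max_shares
```

For a stock that closed at 10 baht on each of the last five trading days, the cap on the offer price would be 11.50 baht; for 1,000,000 shares outstanding, the program must stay below 100,000 shares.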
International Dual Listing: An Analytical Framework Based on Corporate Governance Theory
Dr. Yun Chen, Huazhong University of Science and Technology, Hubei University of Economics,
Recent studies have found that corporate governance better explains the international dual-listing phenomenon. After reviewing the conventional theory of international dual listing and its limitations, this paper explains international dual listing theoretically from a corporate governance perspective. At the same time, it develops a basic analytical framework of the relationships among international dual listing and external and internal corporate governance. The internal corporate governance mechanism includes ownership structure, the shareholder base and the board of directors. The external corporate governance mechanism includes the legal system, disclosure requirements, monitoring by reputation intermediaries and the market for corporate control. International dual listing changes external governance and in turn stimulates the company to upgrade its internal governance mechanism. The globalization of equity markets has accelerated stock trading around the world. Up to now, tremendous competition has arisen among major stock exchanges around the world to attract listings and trading volume and to stoke capital-raising activity by overseas companies in their markets. According to the 2005 report of the World Federation of Exchanges, the number of foreign companies with shares dual-listed and trading on major exchanges outside their home markets reached 2,300, including companies not only from developed countries but also from emerging countries opening up their stock markets to foreign investors for the first time. In the traditional view, global equity markets are segmented because of the barriers presented by regulatory restrictions, costs and informational problems (Stulz, 1981). When companies reside in a closed equity market with high investment barriers, the high price of market risk translates into a high cost of capital. This provides a strong incentive for companies to mitigate investment barriers, e.g. by international dual listings. 
Empirical studies indicate that international dual listings result in a lower cost of capital and higher market value (Karolyi, 1998). In 1997, the number of international dual listings reached as high as 4,700. Since then, however, the number has been decreasing at a considerable rate. In Europe, until the 1980s, European exchanges were the main destinations of both intra-European and intercontinental dual listings; since then, the major dual-listing flows have followed the route from Europe to the U.S. market (Pagano et al., 2002). This new trend in practice has led to dozens of new academic studies of international dual listing. Many recent studies suggest that listing shares in an equity market with fuller information disclosure and stringent investor protection laws will bond the behavior of dominant shareholders and management and lead to more effective corporate governance (Karolyi, 2006). Dual listing refers to the situation in which a firm has its stock listed on more than one exchange. Dual listing may occur within a country (intranational dual listing), but more frequently it occurs across national borders (international dual listing) (McConnell et al., 1996). An example of the latter is a firm that opts to list its shares on both the New York Stock Exchange and the London Stock Exchange. There are two types of international dual listing. One is for a firm to list its stock directly on an exchange of another country; the other is to make indirect use of a Depositary Receipt (DR). After World War II, many countries imposed strict restrictions on capital flows. For example, after the war, almost all countries had strict controls on currency exchange, which meant that outside investors could invest in foreign markets only if they could obtain scarce foreign currencies. Furthermore, most countries had explicit restrictions on foreign investment. Some countries prohibited their own citizens from buying foreign shares, and foreign investors were forbidden to buy local shares. 
Even in countries where foreign investors were allowed to buy local shares, the shares often carried lower voting rights, and there were typically limits on the percentage of a firm’s shares that could be owned by foreigners. Thus, there were many investment and ownership barriers to shareholding across countries. In addition to these legal and regulatory constraints, international investment also faces information asymmetry, differences in investor preferences and discrepancies in liquidity, so global equity markets are indeed segmented, which in turn increases risk premiums (Stulz, 1999). In the view of the traditional market segmentation hypothesis, companies in a segmented market will list their stock on more developed exchanges to increase liquidity and investor recognition and so overcome the barriers of market segmentation (Karolyi, 1998). Over the past few years, however, there has been a significant slowdown in the pace of new international dual listings; the world of international dual listings has changed and become more complex, and a number of challenges to the market segmentation hypothesis have arisen. First, there are methodological limitations to the market segmentation hypothesis: almost all of the empirical support for this hypothesis relies on event-study tests of the equity market reactions to listings and listing announcements.
Performance of Suppliers’ Logistics in the Toyota Production System in Taiwan
Nelson N. H. Liao, Chihlee Institute of Technology, Taipei, Taiwan
The present paper draws on the model of Dong, Carter and Dresner (2001) and samples the supervisors of Toyota automobile suppliers in Taiwan to examine whether supply chain integration, just-in-time (JIT) purchasing and JIT manufacturing can benefit the logistics performance of suppliers. The results indicated that supply chain integration, JIT purchasing and JIT manufacturing had direct and significant benefits for logistics performance. The implementation of just-in-time (JIT) purchasing systems can result in reduced inventory costs, shorter lead times, and improved productivity for buying organizations (Shingo, 1981; Schonberger, 1982; Hall, 1983; Ansari and Modarress, 1987; Tracey, Tan, Vonderembse, and Bardi, 1995). A buyer’s inventory costs may be reduced because costs are transferred to suppliers after implementation of JIT (Romero, 1991; Fandel and Reese, 1991; Zipkin, 1991), so suppliers’ inventory costs are less likely to decrease (Dong, 1998). Dong, Carter and Dresner (2001) developed and tested a practical model to determine whether the use of supply chain integration, JIT purchasing and JIT manufacturing could reduce logistics costs for both suppliers and buyers. The result of their test for suppliers is shown in Figure 1; although the extent of supply chain integration and JIT purchasing have no direct benefits to logistics costs, these two dimensions have indirect benefits to logistics costs through JIT manufacturing. This paper refers to the theoretical model of Dong et al. (2001) and samples the supervisors of Toyota production system suppliers in Taiwan to examine whether the extent of supply chain integration, JIT purchasing and JIT manufacturing can directly benefit suppliers’ logistics performance. Dong et al. chose a sample of 131 suppliers in the field of electronics and other electrical equipment industries (SIC 316) in the U.S. 
Most of the samples in the present study, on the other hand, are from Toyota automotive suppliers in Taiwan in the automotive components industry, who have been strongly influenced by, and well trained in, Japanese Toyota production system practices. Thus, it is possible that the results of an empirical study using this sample may differ from those of Dong et al. The purpose of this paper is to: (1) explore whether there is significant and direct influence among supply chain integration, JIT purchasing and JIT manufacturing; (2) investigate whether supply chain integration, JIT purchasing and JIT manufacturing have significant and direct benefits for suppliers’ logistics performance; and (3) examine the explanatory powers and influences of supply chain integration, in comparison with JIT purchasing and JIT manufacturing, on suppliers’ logistics performance. Supply chain integration includes using electronic data interchange (EDI), integrating management teams in product design, information-sharing, and working with suppliers to improve the management performance of their (second-tier) suppliers (Ellram and Cooper, 1990; Scott and Westbrook, 1990; O’Neal, 1992; Dong et al., 2001). The purpose of integrating all functional areas is to improve communication and cooperation in JIT purchasing. Gunasekaran (1999) proposed a framework of supply chain integration to improve the overall effectiveness of the JIT purchasing function and the whole organization. Suggestions for integrating various functional areas in manufacturing organizations have included improving communications and, hence, improving the material flow. A number of other aspects can also improve functional area integration, such as (1) support from top management, (2) proper organizational structure, (3) appropriate management control systems, (4) effective incentive and merit systems, (5) encouragement by all management levels, and (6) leadership by all management levels (Pegels, 1991). 
Areas for integration include improving partner relationships, sharing information and crossing organizational activities (Wang et al., 2004). JIT purchasing goes against most of the traditional ideas held by manufacturing, purchasing, and materials management. The fundamental aim of JIT purchasing is to ensure that production is as close as possible to a continuous process, from receipt of raw materials/components through the shipment of finished goods (Gunasekaran, 1999). The characteristics of JIT purchasing are few suppliers, nearby suppliers, frequent deliveries in small lot quantities, long-term contract agreements, and close relationships between buyers and suppliers (Schonberger and Gilbert, 1983). In addition to these, Tracey et al. (1995) added perfect quality, and effective, efficient transportation and material handling systems. The benefits of JIT purchasing for buyers include reduction of the costs for carrying parts inventory, transport, and rework, as well as ease of expediting, fewer suppliers to contract, fast detection of defects, less need for inspection (of lots), quick response to engineering changes, and so on (Schonberger and Ansari, 1984). A JIT purchasing strategy is aimed at a synchronized and timely product flow from supplier to buyer. Therefore, the basic elements of a JIT purchasing strategy include: (1) reduction in order sizes; (2) reduction in order lead times; (3) quality control measures, including supplier quality certification and preventive maintenance programs; and (4) supplier selection and evaluation (Dong et al., 2001). In this connection, Martel (1993) stressed the role of purchasing in a world-class manufacturing firm and the importance of developing a reliable supplier base through supplier partnering and certification. 
Suppliers have traditionally been selected based on the best bid, with unit cost as the primary criterion (Tracey et al., 1995), but partnerships are critical to the success of the JIT purchasing on which competitive ability so often depends (Herbig and O’Hara, 1994).
The Dynamic and Full Significance of Macroeconomics’ Main Equation
Jose Villacís, Ph.D., Universidad San Pablo-CEU, Madrid, Spain
Germán Bernácer, of Alicante, laid the foundations of macroeconomics between 1916 and 1926. In his first book, “Society and Happiness. An Essay on Social Mechanics” (Sociedad y Felicidad. Un Ensayo de Mecánica Social, 1916), Bernácer explains a macroeconomic production design, the origin of income and interest, and the monetary market. In 1922, he publishes an article under the title “The Theory of Liquid Assets” (La Teoría de las Disponibilidades), wherein he expounds the questions of money demand and the origin of interest. Liquid assets and production funding are two issues stressed throughout Bernácer’s whole work that the author believes are yet to be discovered by macroeconomics; these two concerns are the starting point for this paper. Savings originate from income and revert into the production circuit in the form of demand for capital equipment; we call this operation “investment.” In fact, there is a portion of savings, the liquid assets, that is neither capitalized nor amassed but rather circulates and is used to repurchase secondary financial assets. This means, in the first place, that liquid assets entail a lack of demand; secondly, that there is a part of the financial market that diverges, and therefore is not neutral, nor does it function as a simple bridge between savings and investment. In addition, where there are liquid assets, the portion of income that is not consumed should be divided into two fractions: the portion of savings to be capitalized on the one hand, and liquid assets on the other. Thus, macroeconomics’ main equation should at least be completed with regard to its financing side, one side of the identity. Another particular aspect of the theories of Germán Bernácer, who taught physics, is the way in which he deals with working capital, divided into two parts: one is the sum of added values, or domestic product; the other is the total working capital. 
The basic idea is that a dynamic, temporal, developing economy needs new money in order to grow; in other words, savings are not sufficient to finance the increase of production. The main equation should consider funding sources (savings, new money and liquid assets) in relation to their allocation (investment, working capital and financial assets). An integral version of reality is thus suggested in this paper, wherein liquid assets and working capital form not simply an identity but a functional and financial equation. Chapter VI of the book “Society and Happiness. An Essay on Social Mechanics,” titled The Statics of Wealth, describes several economic operations that consist of allocating individual assets, in kind, in cash or in credit, depending on their use. Those operations are: squandering, investing in land and financial assets, lending, and capitalization. With regard to the latter, Chapter VI specifies that individual assets, that is, the income that has not been spent, will be higher than their use: investment. This is Bernácer’s core idea, which he will not give up for the rest of his life, and which leads to his theory of liquid assets. In his typology of economic operations he rigorously explains the concepts of capital and wealth, as well as the significance of economic operations. Bernácer builds his theory very carefully, and so he reaches its central concept: what he calls liquid assets. The following chapter of Bernácer’s work is a continuation of the previous one, in both function and time. It is titled The Dynamics of Wealth. A dynamic economic operation is one that generates demand for production, for income, for employment, and consequently an operation that stimulates work. There are other uses of individual assets that entail infertile operations because they generate neither production nor income, and therefore do not raise employment. 
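Read this way, the completed main equation can be written as a single identity. The notation is an assumption of this summary, not Bernácer's own: let $S$ denote savings, $\Delta M$ new money, $D$ liquid assets, $I$ investment, $W$ working capital and $F$ purchases of financial assets. Then

```latex
S + \Delta M + D = I + W + F
```

with the funding sources on the left-hand side and their allocations on the right-hand side, as the text describes.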
This means that, in the economic system, there will be a portion of individual assets that originates from production and does not return to that production as demand. Such are the speculative operations, which are financed with a residual monetary fund. As Bernácer states in his book “The Functional Doctrine of Money” (La Doctrina Funcional del Dinero, 1945), there are generally two types of operations: active operations and passive operations. The former make the generation of income and production possible; the latter are infertile operations from a macroeconomic point of view that entail only a change of hands. The terms “individual assets,” “use,” etc., as they were used in 1916, have undergone a semantic evolution (except for the term “capital”), a natural circumstance if we take into account the fact that macroeconomics was at an early stage of development at the time. In October 1922, in Alicante, Bernácer finishes his article “The Theory of Liquid Assets as an Interpretation of Economic Crises and the Social Problem” (La Teoría de las Disponibilidades como Interpretación de las Crisis Económicas y del Problema Social). This work is basically a modern macroeconomics work, even today, wherein he explains the circular flow of income, its origination and its allocation. He departs from Say’s Law and its concept of a full economy permanently insured against depression. In particular, he explains a fraction of income, liquid assets, which plays an important role within his macroeconomic design. In Bernácer’s article, liquid assets appear as the cause that determines crises, and therefore his theory develops as a theory of crisis. In order to produce, production factors must cooperate. In return, production factors receive money revenues called income.
Promptness: A Teaching and Evaluation Model
David L. Russell, Ph.D., Western New England College
Promptness is a desirable behavior in students and an expected behavior in professionals. Promptness can also be viewed as a surrogate variable for the larger concept of engagement with the course. Here, a model for integrating the evaluation of promptness into the usual pattern of teaching a college-level course is presented. The life cycle of a specific assignment is presented and broken down into measurable intervals. Classroom management software provides the key tool to perform the analysis. Other forms of evaluation of promptness are presented. Conclusions are drawn, focusing on increasing learning by maximizing the time students have available between the opening of a posted assignment and its submission. Professors of Information Systems (“IS”) are expected to teach our classes in an interesting and informative manner, using technologies selected for their currency or for which there is demand in the labor market. Professors are also expected to keep themselves reasonably well informed about developments in the field and to conduct research in ways that meld with the mission statement of the professor’s institution. In a more diffuse manner, professors are expected to serve as role models, thus helping students to evolve into working professionals. Some of the skills needed in this evolution can be taught explicitly: communication skills in Communications courses and team interaction skills in Organizational Behavior classes are examples. Other skills can be taught by correction: for example, the student who uses coarse language in a class presentation can be counseled about the inappropriateness of that behavior. In addition, most professors correct, or at least point out, errors in written English. Many other desirable skills and behaviors, however, are not taught explicitly. Students are expected to learn these prior to their college career or, lacking that, to learn them by observation during their college career. 
Examples include the proper use of titles when addressing superiors and the proper handling of oneself at a business-related social function. Even something as simple as proper table manners is often lacking in college students. A prime source of such learning is observation of the desirable behaviors in others, particularly instructors. One thing is certain, however: if students graduate lacking these skills and behaviors, they will be at a distinct disadvantage in their professional careers and will reflect poorly on their college or university. One such behavior is promptness, which will serve as an example in this paper. Here the term “promptness” is used in a specific manner: it expresses the alacrity with which students respond to assignments by accessing the assignment and submitting the completed assignment. It should not be mistaken for “punctuality,” that is, the practice of arriving at class on time, or for the simple submission of required assignments prior to a deadline. Although in the past it might have been argued that computer use would generate anxiety which in turn would inhibit promptness in student response (Marcoulides, 1988), it would be hard to see that as a factor in the early years of the twenty-first century. Instructors must be cautious, however, because short-term anxiety should be expected as students come down the learning curve on a new piece of software, specifically including class management software. Therefore, the type of analysis suggested here should not be applied to assignments very early in the semester. Promptness is one of several critical skills for success in college and, more importantly, for subsequent success in the student’s career. An employee who submits required material before it is required, or who quickly alerts his or her superiors to an undesirable situation, or who expeditiously executes a complex task is a valued employee indeed. 
Sociologists, among others, are well aware of this: for example, “The attributes…used to identify as the hallmarks of a professional, such as education, vocation, esoteric knowledge, self regulation and civility, have been replaced, or at least augmented, by an interpretation that stresses punctuality, style, dynamism, financial success and entrepreneurialism" (Cooper et al., 1996, p. 631; emphasis mine). Nowhere is this more true than in the Business professions, where a significant part of one’s work is done on one’s own, mostly in one’s mind, and in a manner not amenable to direct supervision. For a Business professional to attend to tasks and requirements promptly generates a positive perception of the employee, undoubtedly leads to career advancement, and reflects positively on the institution from which he or she graduated. Promptness also aids the learning process itself. The many examinations, quizzes and other graded work found in Business School courses exist for one fundamental purpose: for the student to learn by feedback and, where necessary, self-correction. This is an internal process driven by the student’s motivation to learn. Instructors, at best, can only motivate this self-direction.
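The measurable intervals in an assignment's life cycle can be sketched in code. The timestamp names and the choice of three intervals here are illustrative assumptions, not the paper's exact model; any classroom management system that logs posting, first-access and submission times could supply the inputs:

```python
from datetime import datetime

def promptness_intervals(posted, first_access, submitted, due):
    """Break an assignment's life cycle into measurable intervals:
    how quickly the student opened it, how long the working window was,
    and how far ahead of the deadline it was submitted. These three
    intervals are an assumed decomposition for illustration."""
    return {
        "hours_to_first_access": (first_access - posted).total_seconds() / 3600,
        "hours_working_window": (submitted - first_access).total_seconds() / 3600,
        "hours_before_deadline": (due - submitted).total_seconds() / 3600,
    }
```

A short first interval and a long third interval would both count as prompt behavior in the sense used above, while a submission just before the deadline would satisfy punctuality but not promptness.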
The Evaluation of Advertising Costs Within the Accounting Recording System
Dr. Fatma Ulucan Ozkul, Bahcesehir University, Besiktas, Istanbul, Turkey
Advertising basically aims to leave an impression on the mass of consumers and to direct them to buy, by affecting their ideas and habits and thereby increasing companies’ profits. The aim of this study is not to stress that advertising is important and necessary for companies and consumers, but rather to propose necessary improvements, on the grounds that, under today’s conditions, companies that reserve large advertising budgets misinform and misdirect the users of financial statements when they reflect advertising expenses as nothing more than a means of increasing sales. This study emphasizes treating advertising expenses as expenditures that should be traced within the framework of the matching principle, because they affect companies’ profits not only in the current year but also in the following years. In this light, advertising should be evaluated as a long-term investment in addition to a short-term sales-increasing activity. The study also argues that advertising expenses, which confirm the image of the business, increase its standing and make it a brand, should not simply be charged against current-year profits, but should be capitalized and amortized over the period in which their benefits are received. As a result of rapidly changing market conditions and competition that increases day by day, marketing interaction, and its effective and rational use by the people who work in these areas, have become important to the success of the many companies that produce similar commodities. Marketing, one of the basic functions of business, is a process that provides communication between the company and the consumer and benefits the business by creating value for consumers and directing consumer relations. Advertising is a communication medium that gives great benefits to companies and consumers. 
For companies, advertising is a medium that supports finding effective markets and leads them to invest their capital in effective areas. For consumers, it is a medium that serves to identify the most suitable product among the similar ones in the market that address their demands, directs them toward a rational choice, and, by introducing various commodities and services, defines where, how and at what price to obtain them and how to use them, thus providing benefits of both place and time. Advertising, the basic component of integrated marketing communication, makes the company a brand by differentiating its commodities and services from its competitors’, becomes a tool that reaches a wide number of targeted people, and comes to occupy a large part of the business budget. In our accounting system, advertising expenditures are written into the accounting records as current expenses and carried to profit/loss at the end of the year. Because, by their nature, these expenses yield benefits in the following years as well (in other words, they affect the profits of the following years), they should be capitalized before being taken to the income statement. The ambiguity of how much benefit such an expense will bring the business in the future, and of the period of that benefit, makes capitalization and the calculation of amortization charges difficult. This difficulty leads companies to record advertising costs as expenses of a single period. Although a big advertising campaign that a company undertakes affects the profits of the following years, deducting all of the expense from the revenue of the current period distorts the results of that period to a great extent. 
The reflection of advertising costs in the financial statements as expenses of a single period has received little criticism in national publications, and no investigation in this direction has been made. However, it is a fact that advertising, which has a great effect on national income today, affects firms financially in addition to being a marketing activity. In international publications, investigations have been conducted on this issue in past years and satisfactory articles have been written. Mark Hirschey assessed advertising as intangible capital. According to Hirschey, consumers tend to forget brands over time, so companies should advertise with the aim of maintaining a stable selling rate. He therefore advocated the idea that advertising costs are capital that amortizes over time and requires repair and maintenance. Economists have investigated this issue of intangible capital by means of a market-value assessment model. This model revealed that advertising has important effects on the future market value of the firm (Hirschey, 1982). Assessing advertising cost as an intangible asset raises a question: what should the amortization charges for advertising costs be? The period of benefit from advertising differs over time and across sectors. For example, using the same rates in the cigarette sector and the automotive sector may lead to wrong results. According to Telser, the amortization rate should be 15-20 percent in the cigarette sector. According to Palda, with a different perspective, advertising is an asset subject to amortization, and 95 percent of advertising costs should be amortized over 7 years (Peles, 1971).
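The two amortization views cited above can be contrasted with a short sketch. The declining-balance function follows Telser's 15-20 percent-per-year idea; the second function applies Palda's figures under an additional straight-line assumption of ours (Palda's own decay pattern may differ):

```python
def declining_balance_schedule(cost, rate, years):
    """Amortize an advertising outlay at a fixed declining-balance rate,
    e.g. Telser's 15-20 percent per year for the cigarette sector.
    Returns the annual charges, rounded to 2 decimals."""
    schedule = []
    balance = cost
    for _ in range(years):
        charge = balance * rate
        schedule.append(round(charge, 2))
        balance -= charge
    return schedule

def palda_schedule(cost, years=7, amortized_share=0.95):
    """Following Palda's figures: amortize 95 percent of the cost over
    7 years and expense the rest immediately. Straight-line spreading of
    the 95 percent is our simplifying assumption, not Palda's method."""
    immediate = round(cost * (1 - amortized_share), 2)
    annual = round(cost * amortized_share / years, 2)
    return immediate, [annual] * years
```

For a 1,000-unit outlay, the declining-balance schedule at 20 percent charges 200, 160 and 128 in the first three years, leaving the remainder capitalized on the balance sheet rather than deducted from a single period's revenue.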
A Typology of Co-branding Strategy: Position and Classification
Wei-Lun Chang, Tamkang University, Taiwan
As many companies seek growth through the development of new products, co-branding strategy provides a way to develop new products. However, combining two brands may cause brand meaning to transfer in ways that were never intended. The present paper advances research on co-branding strategies by proposing a conceptual framework of co-branding through a typology with three concepts: co-branding aim, category, and effect. The typology framework not only provides a roadmap of co-branding strategies but also illuminates issues related to co-branding for related research. As many companies seek growth through the development of new products, co-branding strategy provides a way to develop new products, as successful brands provide signals of quality and image. Co-branding involves combining two or more well-known brands into a single product. A successful co-brand has the potential to achieve excellent synergy that capitalizes on the unique strengths of each contributing brand. In the last decade, co-branding and other cooperative brand activities have seen 40% annual growth (Spethmann and Benezra, 1994). Companies form co-branding alliances to fulfill several goals, including: (1) expanding their customer base, (2) achieving financial benefits, (3) responding to the expressed and latent needs of customers, (4) strengthening competitive position, (5) introducing a new product with a strong image, (6) creating new customer-perceived value, and (7) gaining operational benefits. One industry in which co-branding is frequently practised is the fashion and apparel industry (Doshi, 2007). The basic principle behind co-branding strategies is that the constituent brands assist each other to achieve their objectives. Utilizing two or more brand names in the process of introducing new products offers competitive advantages. 
The purpose of the double appeal is to capitalize on the reputation of the partner brands in an attempt to achieve immediate recognition and a positive evaluation from potential buyers. The presence of a second brand on a product reinforces the perception of high product quality, leading to higher product evaluations and greater market share. Co-branding may also affect the partner brands negatively. James (2005) showed that combining two brands may cause brand meaning to transfer in ways that were never intended. Thus, the potential benefits and risks associated with co-branding strategies must be explored and carefully examined. However, little research has addressed co-branding strategies, examined the factors that determine a successful strategy, or assessed the impact of two or more merged brands. A conceptual framework of co-branding is still lacking. Such a framework would offer researchers the freedom to study co-branding phenomena from various perspectives and provide guidelines that help highlight similarities and differences among various co-branding strategies. The present paper advances research by proposing a typology using co-branding aim, category, and effect to research co-branding strategies. The typology indicates the importance of co-branding strategy and furnishes a starting point for future research. Following this introduction, Section 2 surveys the literature on co-branding research, Section 3 provides a typology for co-branding strategies in three sub-sections, Section 4 analyzes and evaluates existing co-branding cases, Section 5 discusses the managerial implications for future co-branding, and Section 6 furnishes a conclusion. Co-branding is a strategy of brand alliance. In the marketing literature, co-branding has been used interchangeably with labels such as brand alliance and composite branding. 
Grossman (1997) broadly defined co-branding as “any pairing of two brands in a marketing context, such as advertisements, products, product placements, and distribution outlets”. More narrowly defined, co-branding stands for the combination of two brands to create a single, unique product (Levin et al. 1996, Park et al. 1996, Washburn et al. 2000). Co-branding is a special case of brand extension in which two brands are extended to a new product. In a co-branding alliance, the participating companies should have a relationship that has the potential to be commercially beneficial to both parties. Various theories have been used to explain how consumers reconcile their attitudes towards co-branded products. For example, cognitive consistency suggests that consumers will seek to maintain consistency and internal harmony among their attitudes (Anderson 1981, Simonin et al. 1998). Similarly, the theory of information integration suggests that, as new information is received, it is processed and integrated into existing beliefs and attitudes (Schewe 1973). Empirical research on co-branding is limited to relatively few studies that have usually examined product concepts or fictitious products rather than real instances of co-branding. Park et al. (1996) examined the effects of product complementarity on the evaluation of co-branded products. The results revealed that product complementarity is the key appeal in co-branding because it allows the co-brand to inherit the desirable qualities of each brand. The pairing of high-quality or high-image brands is another area that has received attention in the co-branding literature (Washburn et al. 2000, McCarthy et al. 1999, Rao et al. 1999). Brand alliance is a branding strategy used in a business alliance. Brand alliance, which has become increasingly prevalent, is defined as a partnership or long-term relationship that permits partners to meet their goals (Cravens, 1994).
Leadership, Knowledge Sharing, and Organizational Benefits Within the UAE
Dr. Mohamed H. Behery, University of Dubai
This study examines the relationships among transformational and transactional leadership, knowledge sharing, and organizational benefits in Dubai. Leadership behaviors, knowledge management, and organizational effectiveness are considered major business topics today. There has been no previous direct empirical evidence examining the relationships among transformational and transactional leadership, knowledge sharing, and organizational effectiveness in Dubai. To fill this research gap, this study focused on examining these relationships, with an additional emphasis on professional service firms. Using a sample of 560 employees from different business service sectors, the significant findings of this study are: (1) transactional and transformational leadership were positively related to knowledge sharing in these organizational settings; (2) knowledge sharing was a valid predictor of the organization’s benefits and effectiveness; (3) transactional and transformational leadership were positively related to the organization’s benefits and effectiveness; and (4) an unexpected neutral effect of demographic variables, such as gender and citizenship, on the study’s variables was detected. Limitations of this study and recommendations for future research are also provided. Although leadership has been considered an important factor in the success of knowledge-based organizations (Hefner, 1994), prior related research has studied: (1) the relationships between leadership behaviors and knowledge management (Politis, 2001, 2002; Ribiere and Sitar, 2003), (2) the knowledge-based approach in strategic alliance settings (Dyer and Nobeoka, 2000; Parise and Henderson, 2001), and (3) the relationships between leadership behaviors and organizational benefits (Rodsutti and Swierczek, 2002; Avery, 2001; Pounder, 2001). 
Research on the relationship between transformational leadership behavior, knowledge sharing, and organizational benefits in the UAE, specifically Dubai, is almost non-existent. Volumes of literature exist on the topic of leadership; researchers have found that leadership behaviors are an important determinant of business success (Burke and Day, 1986; Bass, 1990; Ulrich, Zenger, and Smallwood, 1999). Rost (1991) found that most leadership literature focused on leaders’ abilities, traits or behaviors. Additionally, Yukl (1989) defined leadership in terms of traits, behavior, influence, role relationships, interaction patterns, and occupation of an administrative position. Leadership has been studied in different ways, depending upon the researchers’ methodological preferences and definitions of leadership. According to Burns (1978, cited by Bass, 1995), the leadership process can occur in one of two ways, either transformational or transactional. The transformational leadership concept was originally proposed by Burns (1978, cited by Bass, 1995) from descriptive research on political leaders, and then expanded by Bass (1985; 1990). Bass (1985) was the first to apply transformational leadership theory to business organizations. The theory of transformational leadership simultaneously involves leader traits, power, behavior, and situational variables (Yukl, 1989). Thus, transformational leadership theory is viewed as a hybrid approach, as it gathers elements from these major approaches (Yukl, 1998). Transformational leadership is defined in terms of the leader’s effect on followers: followers feel trust, admiration, loyalty, and respect toward the leader, and they are motivated to do more than they originally expected to do (Yukl, 1998). Thus, transformational leaders “set more challenging expectations and typically achieve higher performances” (Bass and Avolio, 1994, p. 3). 
Today’s researchers have recognized the importance of transformational leadership. For example, Cascio (1995) stated that “today’s networked, interdependent, culturally diverse organizations require transformational leadership” (p. 930). Additionally, Tichy and Devanna (1998) believed that the power of transformational leadership lies in the visualization of the organization. Kuhnert and Lewis (1987) stated that transformational leadership “originates in the personal values and beliefs of leaders, not in an exchange of commodities between leaders and subordinates” (p. 649). Followers trust transformational leaders because such leaders always show concern for the organization and their followers (Podsakoff et al., 1990). Such leaders encourage followers to seek new ways to approach their jobs through inspirational motivation and intellectual stimulation (Bass, 1985). Thus, such leaders are able to generate greater creativity, productivity, and effort, exceeding expectations. The transformational leader “provides followers with a cause around which they can rally” (Bass, 1995, p. 467). Transformational leaders change organizational culture and focus more on long-term than short-term goals (Avolio and Bass, 1988). They can transform the organization by defining the need for change, creating visions, and mobilizing commitment to these visions (Tichy and Devanna, 1990).
Social Participation and Life Satisfaction: From Youth’s Social Capital Perspective
Jui-Kun Kuo, Ph.D., National Sun Yat-sen University, Taiwan
Cheng-Neng Lai, Ph.D., Shih Hsin University, Taiwan
Chun-Shen Wang, Doctoral Student, National Sun Yat-sen University, Taiwan
Through participation in an empowerment mechanism, community youths are guided to develop abilities such as autonomy, independence, and self-discipline. With these abilities, youths can utilize social capital to impel community growth and group cooperation. This research adopts social capital theory to explore the cause-and-effect relationships among trust, network interaction and social participation, together with family interaction and life satisfaction. The subjects of the study are high school students. Of 2,880 questionnaires distributed, 2,757 were valid, an overall response rate of 95.73%. Data were analyzed with a Structural Equation Model (SEM). Using regression-based multivariate analysis combined with path analysis, the research develops a model of youths’ social participation and life satisfaction. Finally, suggestions are provided for strategy and for further research directions. The results are summarized as follows: good family interaction results in high community trust; high community trust leads to a good community network; high community trust also elevates the level of social participation; a good community network raises the willingness for social participation; good family interaction increases life satisfaction; a good community network boosts life satisfaction; high social participation encourages life satisfaction; community trust has no influence on life satisfaction. The purpose of community building projects is to identify the features of each community and distinguish its best elements for development. Through the integration of community organizations, residents are encouraged to care about and participate in public affairs. Through the integration of the public’s ideas, a common vision is outlined that displays diverse values. 
The current participants in community public affairs in Taiwan are mostly retired persons, especially in rural areas. The major difficulty in carrying out community building projects is the low participation of the youth. The inheritance of development experience is thus impeded, which results in the gradual regression of community activities. Hence, adopting a community empowerment mechanism to attract the youth to join in and express their power and creativity would help steadily improve the community’s quality of life and sustain its development. Putnam (1993, 2000) considers that a society with abundant social capital, such as trust and volunteer group networks, enjoys better local government operation, greater willingness of citizens to participate in public affairs, and more mature development of civil politics. In the process of applying the social capital perspective to community building, five interactive relationships are revealed: integration of concepts, compromise through negotiation, conflict settlement, rational dialogue, and convincing communication. These relationships foster actual support and real devotion. There have been many studies of community building and development in recent years, mostly focused on sense of community, environment and education. This study, taking community youth as its subject, explores the relationships between family interaction, community participation and life satisfaction from a social capital perspective. Because this kind of research is scarce, the present study is necessary. The interaction between parents and children is the first interpersonal relationship that a person develops in his life, and it is the foundation of all other relationships. According to Attachment Theory and Object Relations Theory, in the early stage of a person’s life, his relationship with his caretaker is transformed into a cognition of emotional schemas. 
The content and structure of this cognition of emotional schemas influence an individual’s anticipation and feelings, together with his behavioral mode of interpersonal relationships (Levy, Blatt & Shaver, 1998). The quality of parent-child interaction can be observed from the way parents teach their children. It can also be identified through their modes of getting together, negotiating, contact, and intimacy. Further, the quality can be explained from physical-interaction and psychological perspectives (Gongla & Thompson, 1987). Physical interaction refers to the way parents communicate with children, while the psychological perspective refers to the content of interaction, such as the status of the parent or the intimacy and identity between parents and children. Parent-child interaction is part of interpersonal interaction. Reich and Zautra (1981) confirm that interaction can bring happiness; unhappiness mostly results from insufficient human contact. Argyle (1987) also mentions the impact of social interaction and networks on happiness, finding that the greatest happiness usually comes from the strongest relations: a person’s interactions with his spouse and family have the greatest impact on his emotions. Further, Bar-Tur & Levy-Shiff (1988) discover that the stronger the connection between an individual and those he considers emotionally important, the happier he may be. Therefore, the hypotheses are as follows: Hypothesis 1: the more family interaction the youth in the community have, the higher their life satisfaction. Hypothesis 2: the more family interaction the youth in the community have, the higher their trust in the community. The topic of social capital has drawn wide discussion recently in studies of social development, mostly exploring the impact of elements such as human networks, social norms, community participation and cognition on a community’s living standard or health.
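The hypothesized causal chain (family interaction → community trust → community network → social participation → life satisfaction) can be illustrated with a small regression-based path-analysis sketch on simulated data. Everything in the snippet except the sample size and the response-rate arithmetic is an invented assumption; this is not the study's questionnaire data, and a full SEM would additionally estimate latent variables and fit indices.

```python
import numpy as np

# Illustrative, regression-based path analysis for the hypothesized chain:
# family interaction -> community trust -> community network
#   -> social participation -> life satisfaction.
# All variable names, effect sizes, and data are invented for this sketch.
rng = np.random.default_rng(0)
n = 2757  # number of valid questionnaires reported in the study

# Response-rate arithmetic from the text: 2,757 valid out of 2,880 distributed.
response_rate = 2757 / 2880 * 100  # 95.73% after rounding

def path_weight(x, y):
    """Standardized simple-regression coefficient of y on x
    (for a single predictor this equals the Pearson correlation)."""
    return float(np.corrcoef(x, y)[0, 1])

# Simulated structural relations (positive paths, as the study reports).
family = rng.normal(size=n)
trust = 0.5 * family + rng.normal(size=n)
network = 0.6 * trust + rng.normal(size=n)
participation = 0.4 * trust + 0.5 * network + rng.normal(size=n)
satisfaction = (0.3 * family + 0.3 * network
                + 0.4 * participation + rng.normal(size=n))

paths = {
    "family->trust": path_weight(family, trust),
    "trust->network": path_weight(trust, network),
    "network->participation": path_weight(network, participation),
    "participation->satisfaction": path_weight(participation, satisfaction),
}
# With n = 2,757 every estimated path weight comes out clearly positive.
```

Each path weight here is a standardized simple-regression coefficient; the actual study fits all paths simultaneously within an SEM rather than one regression at a time.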
Governance Effect of Capital Structure: An analysis of Chinese Listed Companies
Dr. Zhang Zhaoguo, Huazhong University of Science & Technology, Wuhan, PRC
He Weifeng, Huazhong University of Science & Technology, Wuhan, PRC
Liu Xiaoxia, Huazhong University of Science & Technology, Wuhan, PRC
Capital structure’s effect on corporate governance is ultimately expressed as changes in corporate performance. Based on an analysis of data from Chinese listed companies (1992-2004), we find that the state-owned shareholding ratio has less and less effect on corporate performance; the corporate shareholding ratio and the ratio of debt financing are weakly positively correlated with corporate performance; ownership concentration and the managerial shareholding ratio have a positive relationship with corporate performance; and there is a significant negative correlation between retained earnings and corporate performance. These conclusions indicate that perfecting capital structure is one important approach to optimizing corporate governance and improving corporate performance. The governance effect of capital structure refers to capital structure’s effects on corporate governance. Economists and financial researchers have focused on this field since the 1970s. The basic theory of the governance effect of capital structure is the capital structure contract theory first established by Jensen and Meckling (1976), which consists of three parts: the incentive-based models, such as those of Jensen and Meckling (1976), Grossman-Hart (1982), Harris-Raviv (1990) and Stulz (1988), which argue that capital structure influences the effort level and behavior choices of managers; the signaling models, such as those of Ross (1977), Leland-Pyle (1977) and Myers-Majluf (1984), which argue that capital structure can transfer some interior firm information to the market and thus influence investors; and the corporate control-based models, such as those of Grossman-Hart (1986), Harris-Raviv (1988), Stulz (1988), Hart-Moore (1990) and Aghion-Bolton (1992), which argue that capital structure determines not only the distribution of residual claims but also the distribution of control. 
All three models connect capital structure to corporate governance and analyze how capital structure influences corporate value and, in turn, corporate governance. It is obvious that capital structure is an important part of corporate governance; the efficiency of corporate governance depends, to a certain degree, on the rationalization of capital structure. No matter how capital structure influences corporate governance, the result will be reflected in changes in corporate performance. Thus, the empirical study of the governance effect of capital structure amounts to studying how capital structure influences corporate performance. Previous studies focused on three aspects. The first is the accounting profit ratio: Demsetz and Lehn (1985), Holderness and Sheehan (1988) and others discovered through empirical study that there is no significant relation between ownership concentration and the accounting profit ratio, but Thomsen and Pedersen (2000) discovered a nonlinear relation between them. The second is agency costs: Agrawal and Mandelker (1987) and Jensen (1993) from the perspective of management shareholding, Jensen (1986), Stulz (1990), Harris and Raviv (1990) and Phillips (1995) from the perspective of debt financing, and Pedersen and Thomsen (2001) from the perspective of ownership concentration respectively demonstrated that increasing management ownership, debt financing and ownership concentration helps reduce the agency costs of equity. Smith and Warner (1979), Mikkelson (1981) and Green (1984) from the perspective of option debt (such as convertible debt and preemptive-right debt), and Chang (1992), Barclay and Smith (1995) and Guedes and Opler (1996) from the perspective of liability term, respectively demonstrated that issuing option debt and choosing a reasonable liability term help reduce the agency costs of debt. 
The third aspect focuses on company value: Morck (1988) argued that there is a significant monotonic relationship between management ownership and company value, while McConnell and Servaes (1990) argued through empirical study that there is a curvilinear relationship between insider shareholding and Tobin’s q. In China, because of the lack of data and the difficulty of calculating agency costs and corporate value, most scholars focus their empirical studies of the governance effect of capital structure on the accounting profit ratio, and only a few scholars’ studies turn to agency costs or corporate value. Through empirical study, Zhou Yean (1999) argued that state ownership and the corporate ownership ratio are significantly positively related to return on equity, while Wei Gang (2000) discovered that there is no “interval effect” between management shareholding and return on equity. Chen Xiaoyue and Xu Xiaodong (2001) discovered that the relationship between state ownership and the ratio of operating income to assets has become less significant than before; the positive effect of the corporate shareholding ratio on the operating-income-to-assets ratio is still insignificant, but the tendency is growing more significant; and the relationship between the ratio of tradable shares and the operating-income-to-assets ratio has changed from significantly negative to insignificant.
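Studies of this kind typically regress a performance measure on ownership and capital-structure variables and read off the coefficient signs. The sketch below does this on synthetic data whose generating process is an assumption chosen only to mirror the reported signs (ownership concentration positive, managerial shareholding positive, retained earnings negative); it is not the authors' Chinese listed-company data.

```python
import numpy as np

# Sketch of a cross-sectional test of capital-structure/performance links.
# The data-generating process is an assumption for demonstration only.
rng = np.random.default_rng(42)
n = 1000  # arbitrary synthetic sample size

concentration = rng.uniform(0.0, 1.0, n)  # ownership concentration
managerial = rng.uniform(0.0, 0.3, n)     # managerial shareholding ratio
retained = rng.uniform(0.0, 1.0, n)       # retained-earnings ratio

# Assumed true effects: concentration (+), managerial (+), retained (-).
performance = (0.8 * concentration + 0.5 * managerial
               - 0.6 * retained + rng.normal(scale=0.2, size=n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), concentration, managerial, retained])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
# beta[1] and beta[2] estimate positive effects; beta[3] a negative one.
```

A panel study like the paper's would additionally control for firm and year effects; this sketch only shows the sign-reading logic of a single cross-sectional regression.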
Global Fattening: Designing Effective Approaches to Reducing Obesity
Dr. Michelle Neyman Morris, California State University, Chico
Dr. Shekhar Misra, California State University, Chico
Dr. Scott Sibary, California State University, Chico
The obesity epidemic is drawing increasing attention in the professional and academic press. Most of the literature describes the trend and suggests policies or actions to reverse it. This article focuses on criteria that should be used not only to evaluate, but also to generate, policies to address obesity. The criteria are explained and discussed in light of factors that have led to obesity and those that limit or influence the practical choices of remedies. Because we believe that those marketing food are responding to a prevalent change in attitudes regarding diet, activity and weight, we do not believe a complete reversal of the epidemic is likely. Rather, our work is premised on the belief that it is critical for those wanting to mitigate the public health issue of obesity to focus efforts on approaches that are most likely to be effective within limited financial and political resources. Since 1980, obesity rates among U.S. adults have doubled, while the number of overweight adolescents has tripled. Currently, two-thirds of American adults are overweight or obese, leaving just one-third at a healthy weight, far below the Healthy People 2010 objective of 60% (Hedley 2004; www.surgeongeneral.gov). Analyses from recent population surveys suggest that these trends continue despite numerous public health efforts to reverse them. Recent analyses suggest that the methodology previously used to estimate the number of deaths attributable to overweight and obesity may have overstated the problem (Flegal et al. 2005; Mokdad et al. 2004; Mokdad et al. 2005), yet it remains clear that extreme obesity is associated with an increased risk of coronary heart disease, stroke, hypertension, type 2 diabetes and certain cancers (Kuchler and Bellenger 2002; Pi-Sunyer 1993). In addition, direct medical and lost productivity costs attributable to obesity were estimated at $117 billion in 2000. Obesity-related chronic disease is no longer an adult-only public health concern. 
Twenty years ago, 5% of U.S. children were overweight. Today, 31% are overweight or at risk of becoming overweight, and their rates of developing type 2 diabetes continue to rise, placing them at risk of becoming the first generation to have a shorter life expectancy than their parents (Ebbeling, Pawlack, and Ludwig 2002; Lemonick 2004). We propose here several specific criteria that should be used in the design and evaluation of policies to remedy obesity. The increase in obesity results from a variety of causes, and the mix of causes varies among individuals and groups of individuals. It is therefore essential that the design of programs to address obesity take into account the causes relevant to the target group. Similarly, the situations or environments in which obese individuals operate will differ, and considering these factors (if they are not already considered as causes) is also critical to program design. Additional design considerations must include pragmatic constraints and, ideally, current research relevant to the target group or type of program. Given the complex nature of the problem, innovative changes involving multiple stakeholders from the individual to the community level would be required to address obesity effectively (Koplan, Liverman, and Kraak 2004). We propose that one should first consider the causes of obesity when designing new programs. The following causal factors and pragmatic considerations do not constitute an exhaustive list, but are the ones we believe to be the most prominent and relevant. As explained below, ignoring any of these would likely result in missed opportunities, an ineffective program and a waste of resources. American consumer food choices have been found to be driven primarily by taste, cost and convenience (Drewnowski and Darmon 2005). 
Data from the National Health and Nutrition Examination Survey (NHANES) indicate that energy-dense yet nutrient-poor food choices high in added fat and sugars are prevalent in the American diet. On average, sweets, desserts, soft drinks and alcoholic beverages account for 25% of all calories consumed (Block 2004). To some extent, Americans are responding to what may be natural taste preferences, now more readily available than before. To counter this emotional preference, a policy should offer some alternative emotional satisfaction. It is obvious that increased efficiency in production can result in decreasing costs to the consumer. But another major player in food consumption has been the government. A century ago, the problem of starvation or disease from malnutrition was far more prevalent than today (Schaffer 2002). A simple market mechanism for dealing with inadequate supply of a commodity is to subsidize its production, so that it can be sold at a lower cost than otherwise and thereby be affordable to a larger number of people. At the beginning of the twentieth century, agricultural subsidies helped to create an increasing bounty of food in the United States (Schaffer 2002). The political beauty of such an approach is that it satisfies two different constituent interests: producers who enjoy the subsidies (and, theoretically, greater total profit) and consumers who enjoy the resulting lower prices. If a little is good, perhaps more is better, and lobbying from agricultural interests for continuing, or increasing, subsidies is a natural political consequence, even though it may create surpluses. But surpluses indicate that the subsidies are excessive, that is, greater than necessary to meet demand at the target price level. Subsidies also have a biasing effect on the quality of food produced, since small farms (into which category organic farms predominantly fall) tend not to qualify for them. 
Although one approach to the artificially low cost of some foods is to reduce the subsidies, such a strategy can be politically difficult. Taxpayers might benefit slightly, but strong opposition could be expected from those who would lose the subsidies. A recent study conducted by Cogent Research on behalf of Nickelodeon, a children’s network, indicated that most parents are working longer hours than ever. They are often left feeling overscheduled and overworked, with less time to spend with their children, much less prepare healthy meals (Smalls 2005). In order of preference, food-related decisions were found to be based on the following options: what makes their life easier, what makes their kids happy, what raises their kids to be “good” people, and what stays within their financial means.
Impact of Student Attendance on Course Grades
Dr. Peter J. Billington, Colorado State University, Pueblo, Colorado
The undergraduate required operations management course is mathematically challenging for many students, and attendance in class may help students master the material. In recent years we have seen a trend of more students missing classes for a number of reasons, some legitimate, some not. Attendance was recorded for six class sections over a 15-week semester, with three one-hour sessions per week. Course grades were then correlated with absences, GPA, and several other factors. High-GPA students’ grades were not affected by attendance, but lower-GPA students’ grades did fall as class absences increased. The undergraduate required operations management course is mathematically challenging for many students, even after completion of the required prerequisites of college algebra and business statistics. Many students manage to “get by” in those courses and still have weak mathematical capabilities. The operations management course requires analytical skills in many topics. The best approach to mastering these skills is to attend class, do homework problems, and then compare solution methods in subsequent classes. Many faculty believe that attendance in class is important for learning the material. While some students may be good at reading the text, getting lecture notes from friends, or otherwise learning the material without attending class, other students do not fare as well if they miss class. The field of operations management is changing rapidly, with the introduction of practitioner-driven topics such as Six Sigma and Lean. Often, textbooks are several years behind, and the instructor uses other sources to supplement the text material on these topics. A student who does not attend the sessions discussing and reviewing these topics may not be able to pick up the material merely by reading the text. In this class, attendance is not mandatory but highly encouraged. 
Since some topics have no corresponding text material, missing those classes would be problematic in that the student would not be able to find material in the text to study for the exam. Students miss class for a variety of reasons. Often these reasons are legitimate: it is not unusual, for example, to have specific university-level policies in place that allow student-athletes to miss class for legitimate sport activities. Many students have medical or family emergency reasons to miss class. On the other hand, many students are just not motivated to attend class, figuring that they can pick up the knowledge by copying a friend’s notes and trying homework problems outside of class time. Without specific attendance motivators built into the grading scheme, students have become more lax in the last few years regarding attendance. This research looks at attendance and course grades for a junior-level required operations management class in a business management program. This paper provides the results of a study of 175 students to determine whether class attendance is a factor in the final course grade. At the time of this research, the university enrolled approximately 4,000 undergraduate students, with about 700 in the School of Business. The School is accredited by the AACSB at both the undergraduate and MBA levels. One early work on class attendance was by Van Blerkom (1992), who indicated that even then students were missing classes and faculty were complaining about it. This is still a faculty complaint today. Van Blerkom reports that class attendance decreased during the semester, and attendance displayed moderate correlations with course grades. Economics faculty Durden and Ellis (1995) collected data from students in Principles of Economics courses over several years. One weakness of their study is that they surveyed the students at the end of the semester, asking how many classes they had missed. 
By relying on self-reports rather than taking attendance and collecting the data themselves, the authors may have introduced some bias. Regardless, an interesting result was that grades were not affected if students missed up to about four classes. Five or more missed classes, however, had a significant impact on the course grade. Devadoss and Foltz (1996) studied attendance in agricultural economics courses and found that encouraging students to attend class resulted in fewer absences. They found that motivation had a positive influence on course grades, as did attendance. The authors also discovered that students who worked at jobs had lower average scores. Durden and Ellis (2003) extended their research by introducing motivational aspects of class attendance. They conclude that motivation is an independent factor with regard to average course scores. Lai and Chan (2000) take the research on attendance and grades further and discuss whether attendance should be mandatory. They collected data from two sections of the same course, one in which attendance was mandatory and one in which it was not. They found that the mandatory-attendance class had a higher average score. Cohn and Johnson (2006) study attendance in principles of economics classes, but extend the research to determine whether low test scores influence later attendance. They conclude that low test scores did not result in more absences, but did show that class attendance has a positive impact on grades. In the present study, attendance records were kept by the professor for each class session, assuring the accuracy of the attendance data. Some students may have had legitimate reasons to miss a class.
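The core analysis in studies like these — correlating grades with absences within GPA groups — can be sketched as follows. Only the sample size (175) comes from the paper; the distributions and effect sizes are invented assumptions chosen to reproduce the reported pattern that absences hurt lower-GPA students more.

```python
import numpy as np

# Simulated version of the attendance/grade analysis. Sample size is from
# the text; everything else (distributions, effect sizes) is an assumption.
rng = np.random.default_rng(7)
n = 175

gpa = rng.uniform(2.0, 4.0, n)
absences = rng.poisson(4, n).astype(float)
# Assumed pattern: absences hurt low-GPA students much more than high-GPA ones.
penalty = np.where(gpa < 3.0, 2.5, 0.2)
grade = (60 + 10 * (gpa - 2.0) - penalty * absences
         + rng.normal(scale=3.0, size=n))

low = gpa < 3.0
r_low = float(np.corrcoef(absences[low], grade[low])[0, 1])
r_high = float(np.corrcoef(absences[~low], grade[~low])[0, 1])
# r_low comes out strongly negative; r_high stays much closer to zero.
```

A fuller replication would regress grade on absences, GPA, and their interaction rather than splitting the sample, but the split makes the group contrast easy to see.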
Effects of Corporate Governance on Indirect Costs of Financial Distress in China’s Distressed Companies
Huang Hui, Huazhong University of Science and Technology, Wuhan, China
Zhao Jing-jing, Huazhong University of Science and Technology, Wuhan, China
The environment, both inside and outside, is very different for companies experiencing financial distress. In addition to affecting the company’s value, financial distress leads to a change in the influence of corporate governance, an indirect cost of financial distress. Based on panel data of 193 financially distressed companies in China from 2000 to 2006, this paper examines the empirical relationships between corporate governance characteristics and the indirect costs of financial distress. We find that ownership balancing has a positive influence on the indirect costs of financial distress, while the proportion of state-owned shares, the proportion of independent directors and the percentage of overhead costs all have negative relationships with the indirect costs of financial distress. Our findings can help distressed companies in China and elsewhere improve their corporate governance and become financially healthy. Even companies that generally run smoothly commonly experience periods of financial distress. Financial distress is usually regarded as the situation of being unable to pay maturing debts or expenses because of liquidity problems, insufficient equity, defaults on debts and a lack of current assets. The costs of financial distress are both direct and indirect. Direct costs include diminishing assets caused by conflict between the owners and the creditors, legal costs and other administrative costs. Indirect costs are losses resulting from the potential for bankruptcy, which include a decreasing client base; decreasing company value caused by short-sighted, self-protective actions; increased cost of credit; and lost opportunities. Some scholars (e.g., Branch, 2002) have also regarded the losses of creditors and stakeholders as indirect costs. 
Recent research on corporate financial distress and its indirect costs has largely been confined to the prediction of distress and the calculation of indirect costs, so there is little analysis of the factors and mechanisms behind indirect costs or of the influence of corporate governance characteristics on them. Theoretically, good corporate governance entails a strategic financial structure that plays a part in preventing financial distress and avoiding bankruptcy. Therefore, it is reasonable to think that there is a relationship between the probability of and the indirect costs of financial distress, on one hand, and the characteristics of corporate governance on the other. However, the environment inside and outside the company may change with the advent of financial distress, so some corporate governance characteristics that have a positive effect on firm value when the firm is healthy can have the opposite effect when the firm is in distress. Particularly in China, characteristics like unbalanced shareholder structure, governmental interference and insider control can have a significantly negative influence on a company in financial distress. This paper analyzes the relationship between corporate governance characteristics and the indirect costs of financial distress using 832 firm-year panel observations on 193 listed companies from 2000 to 2006, which can be helpful for both domestic and foreign companies experiencing financial distress. In studying the costs of financial distress, a definition of the condition of financial distress would be useful. However, the extant literature does not hold a unified definition. Beaver (1966) defined bankruptcy, default on preferred dividends or default on debts as financial distress, while Deakin (1972) contended that companies with financial distress should include only those which are already bankrupt or debt-insolvent, or which have had to liquidate to pay creditors. 
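The kind of panel estimation described above can be sketched in a minimal form as a least-squares regression of indirect costs on governance characteristics. The variable names, simulated data and coefficient magnitudes below are purely illustrative (the paper's actual model specification is not reproduced here); only the reported signs of the relationships are taken from the abstract.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: solve min ||X b - y|| for b."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 832  # firm-year observations, as in the study

# Hypothetical regressors (names are illustrative, not the authors'):
ownership_balance = rng.uniform(0, 1, n)    # balance among large shareholders
state_share       = rng.uniform(0, 1, n)    # proportion of state-owned shares
indep_directors   = rng.uniform(0, 0.5, n)  # proportion of independent directors
overhead_pct      = rng.uniform(0, 0.3, n)  # overhead costs as share of sales

X = np.column_stack([np.ones(n), ownership_balance, state_share,
                     indep_directors, overhead_pct])

# Simulate the dependent variable with the signs the paper reports:
# positive for ownership balancing, negative for the other three.
true_beta = np.array([0.05, 0.08, -0.06, -0.10, -0.12])
indirect_cost = X @ true_beta + rng.normal(0, 0.01, n)

beta_hat = ols(X, indirect_cost)  # recovers the sign pattern above
```

With enough observations and modest noise, the estimated coefficients reproduce the sign pattern the study reports; a real replication would of course use the authors' data and a proper panel estimator with firm fixed effects.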
Some empirical studies in China regarded only ST-listed companies as financially distressed, which is not very appropriate, and some scholars, like Lu Chang-jiang (2004), have made a clear demarcation on this topic. While the definition of financial distress remains in dispute, it has been common to measure the costs of financial distress from two perspectives: operating performance and market value. The former measures the company’s operating loss from the firm’s performance, and the latter measures the loss of the company’s market value from the investors’ standpoint. Altman (1984) applied the operating performance model to estimate the indirect costs based on the loss of profits for the 3 years prior to bankruptcy and found indirect costs of 4.5% for retail companies and 10.5% for industrial firms. However, this approach failed to distinguish the influence of adverse economic shocks; it is difficult to tell whether a loss in profits is caused by financial distress or whether financial distress is caused by a loss in profits resulting from some other cause. Andrade and Kaplan (1998) applied both the model of operating performance and that of equity value to measure the indirect costs in terms of the percentage change in operating margins, capital expenditure margins and net cash flow margins. They examined the impact of financial distress on operating income for 31 highly leveraged transactions and found that the indirect costs of financial distress may be in the range of 10–17%, although they argued that these numbers could be biased upward. For their part, Opler and Titman (1994) reported that highly leveraged firms in financial distress tend to lose substantial market share, and Chen and Merville (1999) found that firms going from healthy to distressed experienced an average annual market value decline of 8.3% of their total assets. 
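Altman's loss-of-profits approach described above can be made concrete with a small sketch: compare expected profits (e.g., projected from pre-distress performance) with actual profits over the three years before bankruptcy, and express the shortfall relative to sales. All the numbers and the choice of sales as the scaling base are illustrative assumptions, not figures from Altman's study.

```python
def indirect_cost_estimate(expected_profits, actual_profits, sales):
    """Altman-style indirect cost: cumulative shortfall of actual vs.
    expected profits over the 3 years before bankruptcy, scaled by sales.
    Each argument is a 3-element sequence for years t-3, t-2, t-1."""
    shortfall = sum(e - a for e, a in zip(expected_profits, actual_profits))
    return shortfall / sum(sales)

# Illustrative numbers only (not from the cited studies):
est = indirect_cost_estimate(
    expected_profits=[120.0, 125.0, 130.0],
    actual_profits=[110.0, 100.0, 80.0],
    sales=[1000.0, 950.0, 900.0],
)
# shortfall = 10 + 25 + 50 = 85; 85 / 2850 ≈ 0.03, i.e. roughly 3% of sales
```

As the abstract notes, this measure cannot distinguish whether the profit shortfall is a consequence of distress or its cause, which is the main criticism of the operating-performance approach.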
Thus, despite some differences in specific findings, the existing literature has suggested that the average indirect costs of financial distress are substantial. Many scholars have already made empirical analyses of the correlation between characteristics of corporate governance and the probability of financial distress. Chaganti (1985) found a low probability of bankruptcy among those companies with a large board of directors. Daily and Dalton (1994) found that firms with CEO duality (CEOs who also chair the board) and a lower proportion of independent directors are more likely to go bankrupt. Elloumi and Gueyie (2001) reported that firms with one or more block-holders and a larger proportion of outside directors are less likely to enter into conditions of financial distress.
Promoting Development Potentials with Web Applications: An E-Marketplace for Horticulture Businesses in a Developing Country
Sajjad Zahir, Ph.D., University of Lethbridge, Alberta, Canada
We propose a promotional framework for e-commerce in developing countries and illustrate the concept with a prototype e-marketplace for horticultural trades in light of new realities. Trading information was gathered from a key horticultural trading centre during a field study. Various socio-economic-technical issues were considered in the design as per the framework concepts. E-commerce (Turban, King, Lee, & Viehland, 2003) and other uses of the Internet have been promoted for social uplift in developing countries by various international agencies (UNCTAD, 2001). Ngai and Wat (2002) identified 275 articles on e-commerce in leading information systems journals. These articles deal with myriad e-commerce related topics including conceptual frameworks (Wigand, 1997; Zwass, 1996), e-commerce practice issues (Vadapalli & Ramamurty, 1998) and e-commerce strategies (Javalgi & Ramsey, 2001). Zwass (1996) presents a framework consisting of seven layers tailored mostly to developed countries; the seventh layer of this hierarchy refers to electronic marketplaces. The Global Diffusion of the Internet (GDI) Project investigated the global spread of the Internet extensively in terms of a six-dimensional framework (Wolcott et al., 2001). Travica (2002) discussed a framework for e-commerce in developing countries and listed various infrastructural conditions for e-commerce success. Okoli and Mbarika (2003) developed an integrated framework for assessing e-commerce in Sub-Saharan Africa. While these frameworks are mostly conceptual in nature, very few articles address design and development issues for e-commerce applications. In this paper we discuss the actual design and development of an e-commerce application (specifically an e-marketplace) within the context of the conceptual frameworks. 
Although there is a diverse range of definitions (Malone, Yates, & Benjamin, 1987; Bakos, 1998; FTC, 2000) for the e-marketplace, we define it as a B2B information system that uses the Internet for communications and trade, allowing the participating buyers and sellers to exchange information about prices and product offerings and to facilitate transactions. A recently published report on “E-Commerce Trade & B2B Exchanges” mentions that worldwide B2B e-commerce was expected to reach as high as $1.4 trillion in 2003 (Global Information, 2003), and growth in trade through e-marketplaces has been a major contributor to B2B e-commerce (Stockdale & Standing, 2002). At the height of the dot-com euphoria, some reports estimated the number of globally operating e-marketplaces to be around one thousand or more (Hurwitz, 2000; Karpinski, 2000; Tadesci, 2001). Such expectations, however, collided with reality during the dot-com downturn, and both buyers and sellers expressed reluctance to participate in e-marketplaces due to the complexities and novelties of electronic trading (Deeter-Schmelz, Bizzari, Graham, & Howdyshell, 2001; Wise & Morrison, 2000). Recently, however, some segments of the e-marketplace sector have made a comeback: Quadrem doubled its revenue from industrial products for the metals and mining industries in early 2004 (Schwartz, 2004). In spite of this volatility, e-marketplaces continue to hold promise for future global transactions, affecting not only the developed world but also developing countries. The e-marketplace proposed here must take into consideration the limitations and other realities facing e-marketplaces in developing countries (Humphrey, Mansell, Pare, & Schwartz, 2004). How these realities can be incorporated into the design of an e-marketplace, and how the system can be made as functional as possible by circumventing those limitations, is the focus of this paper. 
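The definition above can be illustrated with a toy sketch of the core e-marketplace mechanism: sellers post offerings, and buyers are matched to the cheapest available listing. The class names, matching rule and sample trades are purely hypothetical, not the prototype's actual design.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    """A product offering posted by a seller (all fields are illustrative)."""
    seller: str
    product: str
    unit_price: float
    quantity: int

class Marketplace:
    """Minimal B2B exchange: buyers take the cheapest matching offer."""
    def __init__(self):
        self.listings = []

    def post(self, listing):
        self.listings.append(listing)

    def best_offer(self, product):
        offers = [l for l in self.listings
                  if l.product == product and l.quantity > 0]
        return min(offers, key=lambda l: l.unit_price, default=None)

    def buy(self, buyer, product, qty):
        offer = self.best_offer(product)
        if offer is None or offer.quantity < qty:
            return None  # no match: the trade fails
        offer.quantity -= qty
        return (buyer, offer.seller, product, qty, offer.unit_price)

# Hypothetical horticultural trades:
m = Marketplace()
m.post(Listing("Rahim Nursery", "rose sapling", 0.50, 200))
m.post(Listing("Karim Gardens", "rose sapling", 0.45, 100))
deal = m.buy("City Florist", "rose sapling", 50)  # matched at the lower price
```

A production system would of course add authentication, persistence, payment and logistics integration; the point of the sketch is only the price-discovery role that the definition assigns to an e-marketplace.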
We further demonstrate the ideas by developing a prototype system for the sake of illustration and for initiating discussions among academics and information and communication technology (ICT) practitioners. This study will help us understand the issues and challenges while enabling us to explore and promote the prospects for e-commerce diffusion in developing countries. In addition, the paper leads to an innovative design approach that best suits the incremental development requirements of a system satisfying the socio-economic realities of developing countries such as Bangladesh. Volumes of publications promoting the Internet for developing countries can be traced to organizations such as UNCTAD, OECD, WTO and the World Bank, among others, and can be obtained from their respective Web sites. Similar studies are also reported by academics, other development partners and organizations, and freelance reporters. We can classify them into three broad categories: (a) promotional, (b) perception of realities and (c) conceptual. Most of the studies from development agencies under the United Nations and the World Bank promote e-commerce for development while addressing various issues that confront its implementation in developing countries. Goldstein and O’Connor (2000) analyzed several potential benefits of e-commerce for developing countries. They promoted the idea, and at the same time identified several issues for consideration and means to overcome barriers. Mann (2000) noted e-commerce as an increasingly important economic activity for development as it merges domestic and international marketplaces. Mann also outlined suggestions on how developing nations should approach negotiations in the World Trade Organization (WTO). Concerning perception of realities, some studies shed an optimistic light and others cast a shadow of skepticism. 
For example, Odedra-Straub (2003) questions the optimism of UNCTAD’s promotional reports by emphasizing that most developing countries still lack e-readiness due to poor infrastructure, lack of education and a weak legal framework. Sulaiman (2000) investigated the status of e-commerce applications in Malaysia and reported that communication via e-mail was the most widely used application (70%). Goldstein and O’Connor (2000) also noted that 82% of usage in Bangladesh was related to e-mail. On the other hand, applications such as those for coordinating procurement, monitoring trade, and tracking shipment of goods were not widely used; these applications required a substantial financial investment that most organizations could not afford. These findings also indicated that security concerns were the main barrier to e-commerce implementation. But these findings refer to the year 2000, and things have changed tremendously since then. Pitfalls in the early stages of adoption of any technology are not surprising, and with time, more favorable pictures emerge. In a recent publication, Meera, Jhamtani and Rao (2004) studied three projects related to ICT in agricultural development in India and came to the following conclusions: (1) efforts should be made to incorporate ICT in all endeavours related to agricultural development, and (2) organisations and departments concerned with agricultural development need to realise the potential of ICT for the speedy dissemination of information to farmers. Recently, McMaster and Nowak (2006) examined and compared the advancement and evolution of trade facilitation and promotion via trade portals in the Pacific Island Countries (PICs) and noted numerous benefits from ICT-enabled applications. They recommended the establishment of regionally integrated single-window portals for maximum benefit.