The Business Review, Cambridge
Vol. 8 * Number 2 * December 2007
The Library of Congress, Washington, DC * ISSN 1553 - 5827
Most Trusted. Most Cited. Most Read.
All submissions are subject to a double blind review process
The primary goal of the journal is to provide opportunities for business-related academicians and professionals from various fields, in a global realm, to publish their work in one source. The Business Review, Cambridge brings together academicians and professionals from all business-related fields to interact with members inside and outside their own disciplines. The journal provides opportunities for researchers to publish their papers as well as to view others' work. All submissions are subject to a double-blind peer review process. The Business Review, Cambridge is a refereed academic journal which publishes scientific research findings in its field under ISSN 1553-5827, issued by the Library of Congress, Washington, DC. No manuscript will be accepted without the required format. All manuscripts should be professionally proofread before submission. You can use www.editavenue.com for professional proofreading and editing. The journal will meet the quality and integrity requirements of applicable accreditation agencies (AACSB, regional) and journal evaluation organizations to ensure our publications provide our authors publication venues that are recognized by their institutions for academic advancement and academically qualified status.
The Business Review, Cambridge is published twice a year, in December and Summer. E-mail: firstname.lastname@example.org; Website: BRC. Requests for subscriptions, back issues, and changes of address, as well as advertising inquiries, can be made via our e-mail address. Manuscripts and other materials of an editorial nature should be directed to the Journal's e-mail address above. Address advertising inquiries to the Advertising Manager.
Copyright 2000-2018. All Rights Reserved
Degree of Complementarity Among Off Balance Sheet Items: The Empirical Evidence
Dr. Vassilios N. Gargalas, Herbert H. Lehman College, Bronx, NY
This paper tests the theory that loan sales without recourse and standby letters of credit are activities undertaken jointly by commercial banks in order to avoid the "regulatory taxes" imposed on loan sales with recourse. The implication of the theory is that the production structure of the two instruments is determined mainly by the supply side, in such a way that, on average, for each loan sold without recourse, a standby letter of credit guaranteeing an amount equal to the loan sold is issued. Therefore, the hypothesis tested is that loan sales without recourse and standby letters of credit are complements. The alternative hypothesis is that the production of the two instruments is mainly driven by the demand side, making the two activities unrelated. This could also be viewed as a test of whether banks have been transformed into something new or maintain their traditional function of credit analysis and loan origination, as well as risk bearing. Data from ninety banks are used to perform a number of empirical tests. A series of tests that involve loan sales without recourse and standby letters of credit is conducted. Other variables that are "naturally" involved in these tests are the three "regulatory taxes": reserve requirements, FDIC premiums, and capital requirements. The tests also consider a number of variables identified as having a potential impact on loan sales without recourse activity and standby letters of credit issuance. Finally, we test whether the hypothesized relationship stands in the presence of non-stationarity, by testing for co-integration of the time series of the two off balance sheet instruments. Loan sales with recourse can easily be seen as instruments of traditional banking, since banks both perform the credit analysis and undertake the risk of lending. A close inspection of the data, however, shows that the vast majority of loans sold are sold without recourse.
In doing so, banks appear to have become simple loan brokers and thus deviate from their traditional role. However, an even closer look shows that standby letters of credit (SLCs) have undergone a parallel increase. The explanation put forward and tested in this paper is that banks maintain their traditional role, but in order to avoid the "regulatory taxes" incurred, they "repackage" their products by substituting portfolios of loan sales without recourse combined with standby letters of credit for loans sold with recourse. Indeed, according to Regulation D, when banks sell loans with recourse they have to maintain additional reserves with the Federal Reserve, face increased capital requirements, and also pay a higher FDIC premium, because the proceeds of the sale are treated as deposits. We call these three types of cost "regulatory taxes." These "regulatory taxes" can be avoided if banks, rather than selling loans with recourse, replicate the cash flows that those sales would have generated by selling loans without recourse and simultaneously issuing standby letters of credit. In this context, banks are expected to sell portfolios of loans without recourse and at the same time issue portfolios of SLCs, so that on average the volume of loans sold and of loans guaranteed are the same. Since both are off balance sheet instruments, the "regulatory taxes" are circumvented. In the process, however, loan sales without recourse and standby letters of credit become, from the banks' perspective, complementary activities. Goldberg and Lloyd-Davies (1985) address the empirical issue of whether the increased activities of commercial banks in the issuance of standby letters of credit have an impact on their overall perceived risk, where the interest rate differential between large CDs and the risk-free rate serves as a measure of risk.
The results indicate that changes in loan volume have dominated changes in standby letters of credit in explaining the premium in excess of the risk-free rate. In addition, a higher capital ratio appears to accompany a higher level of standby letters of credit only in the case of banks with total assets, plus standby letters of credit, of less than $100 million. Goldberg and Lloyd-Davies conclude that standby letters of credit have no impact on overall bank riskiness. Pavel (1988) investigates loan sales without recourse and concludes that they have no impact on overall bank riskiness. Benveniste and Berger (1987) compare securitized assets that pay off the securitized lender first with multi-class securities with sequential claims issued against the same collateral pool (a practice not permitted to commercial banks). The model concludes that the payoffs, as well as the risk sharing achieved by securitization, are similar to those achieved by sequential claims. Pennacchi (1987) concludes that it is profitable for banks to sell loans because the credit analysis ability of banks is superior to that of the public, and because banks have a higher cost of capital than other non-regulated institutions. Pavel (1988) suggests that banks sell loans for three reasons. The first is funding; that is, some banks may not want to keep a loan on their books. For example, the loan may be less risky than the bank itself, but the bank may still want to originate it in order to maintain good relations with the client. Empirical results indicate that there is a statistically significant difference between the ratio of loan sales to assets for the thirty riskiest banks when compared to the analogous ratio of the thirty least risky banks for 1985. The difference in the change of the risk of the two groups between two consecutive years is not statistically significant, indicating that funding is a reason for selling loans.
The strategy of using loan sales as a funding device seems to have little impact on bank risk. Loan sales have also been identified as a means to alter the diversification of the bank's loan portfolio. Pavel concludes that the banks that were least diversified in 1984 sold more than twice as many loans (as a percentage of assets) in 1985 as bank holding companies that were the most diversified in 1984. The difference was statistically significant. As before, however, the change in the riskiness of the two groups was not statistically significant. Capital constraints are identified as another reason for loan sales. Firms that increased their primary capital ratio over the 1984-85 period were compared to bank holding companies that decreased their capital ratio. The difference between the two groups in loans sold was not statistically significant. Loan sales do not seem to be used by banks in order to increase their primary capital ratio. Even if one is willing to assume that loan sales are used by banks in order to increase their primary ratios, according to empirical tests such banks do not alter their riskiness any more than bank holding companies that increase their primary capital ratios through some other means.
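The co-integration test described in the abstract can be sketched as a two-step Engle-Granger procedure: regress one off-balance-sheet series on the other, then test the residuals for a unit root. The series below are simulated to illustrate the complementarity hypothesis, not drawn from the authors' ninety-bank sample, and the critical value mentioned in the comment is only indicative.

```python
import numpy as np

def engle_granger_tstat(y, x):
    """Two-step Engle-Granger co-integration statistic.

    Step 1: OLS of y on [1, x]. Step 2: Dickey-Fuller regression of the
    differenced residual on the lagged residual. A large negative
    t-statistic suggests the residual is stationary, i.e. the two
    series are co-integrated.
    """
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta                      # co-integrating residual
    du, ulag = np.diff(u), u[:-1]
    rho = (ulag @ du) / (ulag @ ulag)     # Dickey-Fuller slope estimate
    resid = du - rho * ulag
    se = np.sqrt((resid @ resid) / (len(du) - 1) / (ulag @ ulag))
    return rho / se

# Simulated example: loan sales without recourse (x) follow a random walk,
# SLC volume (y) moves one-for-one with sales plus stationary noise --
# the complementarity hypothesis in stylized form.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))
y = 1.0 * x + rng.normal(size=500)
t = engle_granger_tstat(y, x)   # well below the roughly -3.4 critical value
```

A t-statistic far below the Engle-Granger critical value rejects "no co-integration," consistent with the two instruments being produced jointly.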
An Economic Analysis of Tax Reform in Texas
Dr. Michael F. Williams, Prairie View A&M University, Prairie View, TX
This paper examines the economic impacts of changes to the Texas tax system enacted in 2006. We contend that property tax relief, combined with an increase in tobacco taxes and changes to the business franchise tax, enhance the productivity of the Texas economy while reducing the progressivity of the tax burden. Reform of the tax system in Texas—a topic which had considerable urgency due to an adverse ruling by a Texas district judge—culminated with the signing of House Bills 1-5 by Texas Governor Rick Perry on May 18, 2006. Support is strong among Texans for this tax reform that not only reduces local property taxes but also increases total education spending. Achieving both of these objectives required an increase in the state cigarette tax and an extension and modification of the Texas business franchise fee. These sweeping tax changes may have substantial economic effects in Texas; economic analysis suggests that the changes, by shifting the tax burden away from capital owners and toward laborers, consumers, and smokers, will increase the productivity of the Texas economy while reducing the progressivity of the tax burden among Texans. Aggregate expenditures on K-12 public education in Texas were $45 billion for school year 2005-2006 (Perry, 2006). Approximately $4.5 billion of this total was funded with federal dollars. The remaining portion was funded through a system commonly known as “Robin Hood,” first implemented in 1993. Prior to the 2006 tax reform, approximately half of Robin Hood funding was generated by property taxes levied by the 1,037 school districts in Texas. These districts faced a state-mandated tax rate limitation of $1.50 per $100 of assessed property value. (This cap relates to “Maintenance and Operations” expenses. Property taxes levied for capital expenditures were not subject to this cap.) The remainder of public school expenditures were funded from state revenues, including state sales taxes and business franchise taxes. 
There were three state funding systems for public schools—a “foundation” system, a “guaranteed revenue” system, and a “recapture” system. The Robin Hood name derives from the recapture system, under which property rich districts—13% of all districts in school year 2004-05—must surrender a portion of their property tax revenue to the state government, whereupon it is redistributed to property-poor districts. (The wealth of each district is determined by its assessed property value per “weighted average daily attendance pupil”—roughly speaking, by the total value of taxable property in the district, divided by the number of students that the district serves. If the district’s property value per student is above the state-determined “recapture threshold” then it must surrender some of its property tax revenue to the state.) In September 2004, state district judge John Dietz ruled in favor of the plaintiff school districts in their lawsuit filed against the state, and declared the Robin Hood system of funding unconstitutional (Elliot, 2004a). He cited two features of the current system that violate the Texas constitution. First, he declared that the average level of expenditures per public school student was insufficient to ensure students their constitutional guarantee of an “adequate suitable” education (Texas Constitution, Article VII, Section 1). Second, he declared that the $1.50 cap on local property tax rates was tantamount to a constitutionally-prohibited statewide property tax (Id., Article VIII, Section 1e), since 98% of school districts were applying tax rates at or near the $1.50 limit. (Interestingly, the recapture system itself was not deemed unconstitutional.) Judge Dietz strongly suggested that the state abandon the recapture system, effectively compelling legislators and Governor Perry to devise a new system to fund public schools. 
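The recapture mechanics can be illustrated with a stylized calculation. The $305,000-per-WADA threshold and the simple proportional surrender rule below are illustrative assumptions for this sketch; the actual Texas recapture formula is more intricate.

```python
# Stylized "Robin Hood" recapture: a district whose assessed property value
# per weighted student (WADA) exceeds the state threshold surrenders the
# share of its M&O property-tax revenue attributable to the excess wealth.
# Threshold and proportional rule are illustrative assumptions only.

THRESHOLD = 305_000  # assumed recapture threshold, dollars per WADA

def recaptured_revenue(property_value, wada, mo_tax_revenue,
                       threshold=THRESHOLD):
    wealth_per_student = property_value / wada
    if wealth_per_student <= threshold:
        return 0.0                        # property-poor: keeps everything
    excess_share = 1.0 - threshold / wealth_per_student
    return mo_tax_revenue * excess_share  # revenue raised on excess wealth

# A property-rich district at double the threshold ($610,000 per WADA)
# surrenders half of its M&O revenue under this stylized rule; a
# property-poor district surrenders nothing.
rich = recaptured_revenue(6_100_000_000, 10_000, 90_000_000)
poor = recaptured_revenue(2_000_000_000, 10_000, 30_000_000)
```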
Subsequent to Judge Dietz's decision, political leaders in Austin spent many months crafting a series of reforms to the current tax and expenditure system, culminating with House Bills 1, 2, 3, 4, and 5, all signed into law by Governor Perry in May 2006. The following reforms are included in House Bills 1-5: an increase in expenditures per pupil above 2006 levels; a reduced reliance upon the recapture system, with fewer property-rich districts reallocating fewer property tax dollars to property-poor districts; substantial reductions in property tax rates; an increase in tobacco taxes, including an increase in the cigarette tax of $1 per pack; and a change in both the tax rates and the tax base associated with the business franchise tax, with more types of business incurring tax liability but at rates reduced from their pre-reform levels. Let us examine each of the above reforms in greater detail and consider the economic consequences of each. House Bill 3 mandates a minimum $2,000 annual salary increase per school teacher and additional funding for 3% annual increases in K-12 public education expenditures in each school district. Each district also has discretion to increase spending above the 3% minimum as long as the district adheres to limits on property tax rates (Texas Tax Reform Commission, 2006). It is clear that the intent of an increase in public school expenditures is to improve the education garnered by students. Among other things, an improved level of education would increase each student's endowment of "human capital," increasing her worth to employers (her "marginal revenue product," in economics parlance); this improved quality of labor would drive up wages and labor incomes of Texans over the long term. As Lufkin State Representative Jim McReynolds stated, "Education is the very best economic development tool that a society can have" (Bass, 2004). Indeed, there is overwhelming evidence linking education levels with earnings levels.
(For an excellent survey of much of this evidence, see Harmon et al., 2003.) The evidence linking education expenditures and educational attainment, however, is quite mixed. Those who argue that higher expenditures lead to improved outcomes include: Krueger (1999), who found that increased expenditures in Tennessee, enabling smaller public school class sizes, resulted in increased student test scores; Deke (2003), who estimated that the twenty percent increase in Kansas' public school expenditures during the 1990s resulted in a five percent increase in the likelihood that a student would attend college; and Hedges et al. (1994), who in a meta-analysis find a positive relationship between spending per pupil and student performance. Those who find no link between increased education spending and performance are led by Hanushek (1989, 1997, 1998). (Apart from the controversy surrounding a link between public education spending and student performance, there is also a more bureaucracy-centric literature which tries to measure the level of public education spending that is deemed "adequate" for satisfactory student performance. See Baker et al., 2004.) Given the lack of consensus surrounding this issue, it cannot be claimed with certainty that the increase in Texas public education expenditures will have any positive influence on students' performance (or, in their post-education years, on their earnings).
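The education-earnings link discussed above is usually quantified with a Mincer-style wage regression. The sketch below recovers the schooling coefficient from simulated workers; every parameter value (including the 8% assumed return to a year of schooling) is chosen purely for illustration, not taken from the surveyed studies.

```python
import numpy as np

# Mincer-style regression: log(wage) = a + b*schooling + c*exper + d*exper^2.
# Simulated data; b = 0.08 (an 8% return per year of schooling) is an
# assumed value in the range typically reported by survey articles.
rng = np.random.default_rng(1)
n = 5_000
school = rng.integers(8, 21, size=n).astype(float)   # years of schooling
exper = rng.uniform(0, 30, size=n)                   # years of experience
logw = (1.0 + 0.08 * school + 0.04 * exper - 0.0008 * exper**2
        + rng.normal(scale=0.3, size=n))

# OLS via least squares on the design matrix [1, school, exper, exper^2].
X = np.column_stack([np.ones(n), school, exper, exper**2])
coef, *_ = np.linalg.lstsq(X, logw, rcond=None)
b_school = coef[1]   # estimated return to a year of schooling, near 0.08
```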
Advertising Strategy and Returns on Advertising: A Market Value Approach
Dr. Peggy Choong, Niagara University
Dr. Greg Filbeck, Schweser Study Program and University of Wisconsin, La Crosse
Dr. Daniel L. Tompkins, Niagara University
Marketing managers today are increasingly pressured to provide evidence of the returns to their advertising strategies. Targeting specific premier television programs has become one such expensive strategy. There is some anecdotal evidence of the successful outcomes of this strategy in the form of improved sales, phone inquiries, or hits on web sites. However, no research has provided credible evidence of how the market views this strategy. Using event study methodology, this paper evaluates the returns on advertising in major event television programs. Three types of television programs are investigated: the first is the one-shot final episode of popular sitcoms; the second and third are recurring annual television programs, namely the Academy Awards and the Super Bowl. The results indicate that the effectiveness of the advertising strategy depends on the specific program in which the firm chooses to purchase advertising slots. Reaching a broad audience has always been a difficult problem for advertising executives. It doesn't get easier with new technologies that enable television viewers to avoid these advertisements altogether. Throw into the mix the many new cable channels and other micro-targeting media, and a very fragmented audience topography results. Members of the American Advertising Federation have identified audience fragmentation as one of the biggest issues the advertising industry faces and will continue to grapple with for the "next five years" (American Advertising Federation, 2003). And, as if reaching the largest proportion of prospective buyers weren't problem enough, advertising executives are also plagued with the reality of audience indifference and inattention. In response to these issues, marketers have felt compelled over the last two decades to concentrate on major event television programs. These are the events that consumers highly anticipate, are enthusiastic about the outcomes of, and are most likely to watch live.
One of the highest rated and most anticipated events on television each year is the Super Bowl. In 2007, it drew an average of 90 million viewers. Audience receptivity during the program is also reported to be significantly higher than during normal programming. Unlike regular programming, where commercials are often viewed as intrusions, Super Bowl audiences actually pay attention to the commercials. A study conducted by SAA/Research reported that 48 percent of respondents listed seeing the commercials as a reason they watch (Elliott, 1999). Commercial watching has become so embedded in the experience that reviewing the commercials the day after has become an event in itself, ranging from informal office conversations to the annual consumer survey known as the Super Bowl Ad Meter conducted by USA Today. The Academy Awards is another highly anticipated program that audiences are more likely to watch live. It is one of the most prestigious television events, and in 2007 the 79th annual Academy Awards drew an audience of more than 40 million (http://www.latimes.com/entertainment/news). The Academy Awards has also managed to draw one of the largest percentages of women viewers annually and has earned the designation of "the Super Bowl for women." Finally, the cliffhanger final episodes of favorite sitcoms have also brought in the broad audience viewership that marketers target. There has been extensive research in marketing about advertising and its effects. Market response models form a significant segment of these studies on how advertising works (see Demetrios et al., 1999 for a discussion of the taxonomy of advertising effects). Generally, this genre of studies examines the relationship between advertising and some measure of behavioral response. Aggregate level studies typically use sales or market shares as proxies for the market response (Bass and Clarke, 1972; Little, 1979; Blattberg and Jeuland, 1981; Rao, 1986; Hanssens, Parsons and Schultz, 1990; Duffy, 2003).
However, measuring the overall effect of advertising expenditure on sales and profits is fraught with problems. One significant problem is that the duration of advertising effects uncovered in many studies clearly shows that the effects of advertising accrue over time, making current profits and sales less useful measures of advertising effectiveness (Winer, 1979, 1980; Dekimpe and Hanssens, 1995; Leone, 1995; Lodish, 1995; Mela, Gupta and Lehmann, 1997). These problems are exacerbated when attempting to measure the effects of a single advertising strategy. Thus, to circumvent these problems and to measure the effects of a single advertising strategy, another method of measurement is required. The event study methodology is a useful method that has often been used to measure the direct effects of a strategy. This methodology captures the market's valuation of a management strategy by measuring the abnormal returns associated with the announcement of that strategy. It has been used in the field of marketing to examine strategies such as product innovation, change in a company's name, bad publicity associated with a product introduction and recall, announcements of green activities, advertising agency terminations, and the introduction of e-commerce (Horsky and Swyngedouw, 1987; Mathur and Mathur, 2000; Subramani and Walden, 2001). When applied to the measurement of advertising effectiveness, this method is able to capture the abnormal returns of advertising in specific programs such as the Super Bowl, the Academy Awards, or the final episodes of favorite sitcoms. Miyazaki and Morgan (2001) used this methodology to investigate corporate Olympic sponsorships. The limitation of their study is that only data from the 1996 Summer Olympics held in Atlanta, Georgia were used, resulting in a data set of only 27 firms.
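The event-study computation just described can be sketched in a few lines: estimate a market model over a pre-event window, then cumulate abnormal returns over the event window. The returns below are simulated, with a 5% abnormal return injected on the event day; they are not data from the Super Bowl or Academy Awards samples, and the window lengths are conventional choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
n_est, n_evt = 250, 11                   # estimation window; event window -5..+5
mkt = rng.normal(0.0005, 0.01, n_est + n_evt)       # daily market returns
firm = 0.0002 + 1.2 * mkt + rng.normal(0.0, 0.002, n_est + n_evt)
firm[n_est + 5] += 0.05                  # injected abnormal return on event day

# Market model (return = alpha + beta * market) estimated pre-event.
slope, intercept = np.polyfit(mkt[:n_est], firm[:n_est], 1)

# Abnormal return = actual minus market-model prediction; cumulate over
# the event window to get the CAR, which should recover roughly the
# injected 5% up to sampling noise.
ar = firm[n_est:] - (intercept + slope * mkt[n_est:])
car = ar.sum()
```

In the papers that use this method, the CAR around the advertising announcement is the market's estimate of the strategy's value, tested against zero across the sample of advertisers.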
The purpose of this paper is to evaluate the returns to management's strategy to advertise in the Super Bowl, the Academy Awards, and the last episodes of the popular sitcoms Cheers and Seinfeld. These events were chosen because they represent, respectively, the highest rated annual sporting event, a highly prestigious annual entertainment event, and two highly popular and successful primetime series. This kind of measurement is particularly relevant when marketing managers are increasingly pressured to provide evidence of economic returns to their strategic decisions. American Advertising Federation members have also cited the necessity of demonstrating returns on investment in advertising as one of their significant concerns (www.aaf.org/news/press20030918_01.html). Thus the results of this analysis should enable managers to better allocate their advertising resources among the array of media and media events.
Estimation of Derived Demand for and Supply of Better Education in Louisiana
Dr. Donald R. Andrews, Southern University and A&M College, Baton Rouge, LA
Dr. Sung C. No, Southern University and A&M College, Baton Rouge, LA
Dr. Ashagre Yigletu, Southern University and A&M College, Baton Rouge, LA
The paper conducts a market analysis of demand for and supply of better education for the state of Louisiana. The 2SLS estimates indicate that as class size increases, school performance significantly deteriorates and demand for better education rises, and that as a school's performance score improves, more academically oriented parents seek improved schools and class size increases, resulting in a positively sloped supply curve for better education. Based on parameter estimates in the derived demand and supply, the study finds that the state average class size (K-8) is well above the desired class size and the average test score is about half of the desired test score. As a result of Hurricanes Katrina and Rita, Louisiana is in the process of rebuilding its economy. The New Orleans area has suffered a major disaster, and the recovery effort has been slow and bureaucratic. Even before the storms, the schools in the state, and especially in the New Orleans area, were performing well below national and state standards. One of the major concerns as the state rebuilds is providing incentives to improve performance in primary and secondary education. A high performing educational system is a prerequisite for the state to make the transformation from a low income, natural resource dependent area to a more entrepreneurial, knowledge-based, globally focused economy. Investments in human capital in the form of education and training will be a determining factor in the ability of this region to recover and move forward. It is a widely held belief that education provides the foundation for students to acquire many other skills required to achieve their lifelong goals; moreover, "quality" education enhances the possibility for students to excel in competitive markets. For this reason, the state of Louisiana has strived for better education by investing millions of dollars in its education system.
However, preliminary literature reviews indicate that despite the rising importance of better education in the state, little research has been conducted using a market analysis of the demand for and supply of better education in the state of Louisiana. The lack of research in this area is one motivator for the current study. The purpose of the paper is to empirically estimate demand for and supply of better performing educational outcomes in Louisiana. A conceptual model embodying supply and demand characteristics of better education is developed and estimated. More specifically, this study examines the impact that class size has on school performance of children from grades K-8. This represents the demand side for better education. It is hypothesized that as class size increases, school performance as measured by the school performance score significantly deteriorates and that demand by affected parents for better education rises. On the supply side, as a school's performance score improves, more academically oriented parents seek the improved school. This dictates the supply relation between school performance and class size. Research on educational production functions has provided a conceptual model for not only demand for, but also supply of, better education. Class size and performance scores are jointly determined endogenous variables, as suggested by Hoxby (2000). Datcher and Loury (1989), using data from the ETS-Headstart Longitudinal study on low income black children, find that differences in family behavior and attitudes have large and important long term effects on performance. Their findings suggest that as school performance deteriorates, more academically oriented parents demand better education for their children. Andrews et al. (1991) also suggest that school, family, and community inputs are significant in the educational process and should be considered in any attempt to explain the demand for better educational performance.
Perl (1973), Summers and Wolfe (1977), and Hanushek (1986) used micro-level data to analyze the impact of teacher and school characteristics on performance. These studies suggest that teachers and school inputs are important in academic achievement and that as a school's performance score improves, more academically oriented parents seek improved schools and thus class size will increase, resulting in a positively sloped supply curve for better education. School size is used as a proxy for teacher and school characteristics in this study. The demand and supply relations are specified as

Scores = α0 + α1 Class size + α2 Poverty + α3 Missdelta + α4 Location + e1   (1)
Class size = β0 + β1 School size + β2 Location + β3 Scores + e2   (2)

where Poverty is the percent of students on the state-funded lunch program, Missdelta indicates a school located in the Mississippi Delta region of Louisiana, School size is the number of students from K-8, Scores are test scores, Class size is the ratio of students to teachers, Location indicates whether a school is located in a small town, midsize city, or large city, and e1 and e2 are error terms. The paper hypothesizes that as class size increases, school performance significantly deteriorates and better education is demanded by more academically oriented parents. Thus, the parameter estimate on Class size in Equation (1) is considered the coefficient estimate for the derived demand relation and is expected to have a negative sign. Other demand characteristics in Equation (1), such as Poverty, Missdelta, and Location, are included as control variables suggested in previous studies. Furthermore, the paper hypothesizes that as a school's performance score improves, more academically oriented parents seek improved schools and thus class size will increase, resulting in a positively sloped supply curve for better education. Therefore, it is expected that β3 > 0. Other supply characteristics in Equation (2) are included to provide appropriate specification. Class size and Scores are jointly determined endogenous variables.
Thus ordinary least squares (OLS) is inappropriate for estimating Equations (1) and (2) because Class size and Scores are correlated with e1 and e2, which violates the standard assumptions of the OLS model and leads to inconsistent parameter estimates. To avoid these biases, two-stage least squares (2SLS) estimation is used to provide consistent estimates. The state of Louisiana is located in the lower Mississippi Delta. A comprehensive assessment of the Delta region was reported in The Mississippi Delta: Beyond 2000 Interim Report, published by the U.S. Department of Transportation. The Lower Mississippi Delta is defined as consisting of 219 counties in Louisiana, Mississippi, Arkansas, Tennessee, Missouri, Kentucky and Illinois. This study focused on transportation; human capital development (including education, community development, job training, health, and housing); natural and physical assets (agriculture, natural resources, and the environment); and business and industrial development (technological and entrepreneurial enterprise, small business development, and tourism).
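The reason 2SLS is needed, and what it does, can be sketched directly with simulated data: when the regressor is correlated with the error, OLS is biased, while replacing the regressor with its first-stage fitted values (the projection onto an instrument) recovers the true coefficient. All data-generating values here are illustrative and are not estimates from the Louisiana school data.

```python
import numpy as np

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(3)
n = 2_000
z = rng.normal(size=n)            # instrument, excluded from the y equation
u = rng.normal(size=n)            # unobserved confounder (source of endogeneity)
x = 0.8 * z + u + rng.normal(0, 0.5, n)        # endogenous regressor
y = 2.0 * x + 1.5 * u + rng.normal(size=n)     # true coefficient on x: 2.0

ones = np.ones(n)
b_ols = ols(np.column_stack([ones, x]), y)[1]  # biased upward by u

# Stage 1: project x onto [1, z]; Stage 2: regress y on the fitted values.
Z = np.column_stack([ones, z])
x_hat = Z @ ols(Z, x)
b_2sls = ols(np.column_stack([ones, x_hat]), y)[1]   # close to 2.0
```

In the paper's setting, Scores and Class size play the roles of y and x, with the exogenous demand and supply characteristics serving as instruments for each other's equation.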
Takeovers and Agency Problems: A Reexamination of the Pre-Acquisition Operating Performance of Targets
Dr. Rupendra Paliwal, Sacred Heart University, Fairfield, CT
Both the issue of agency problems in corporate takeovers and the role of takeovers as an external control mechanism have been addressed extensively in previously published empirical literature. This existing literature suggests that removal of inefficient management to improve operating performance is one of the key underlying motives for takeovers. However, the results of the analyses of the pre-acquisition operating performance of targets have not been conclusive concerning the efficacy of this motivation to improve underperformance in target firms. I propose that the existing research fails to adequately account for other factors that may also act as control mechanisms, such as managerial ownership, institutional holdings and leverage, which should also be considered when analyzing the pre-acquisition operating performance of targets. These alternative means of controlling agency problems may prevent managers from wasting resources. In this paper, I have sought to contribute to the debate on the inefficient management hypothesis. I do so by examining the pre-acquisition operating performance of targets in the presence of alternative control mechanisms such as insider holdings, institutional holdings and leverage. I have also investigated whether the takeover announcement abnormal returns are higher for targets with poor performance and potentially higher agency costs. I found that target firms are characterized by higher operating expenses compared to control firms. The results of my analysis suggest that targets with entrenched managers and low external monitoring have significantly higher operating expenses. I also found weak evidence that the announcement period abnormal returns are higher for targets with poor operating performance. It has been widely recognized that separation of ownership and control results in agency problems. 
For example, the free cash flow theory of Jensen (1986) argues that managers have incentives to expand firms beyond their optimal size, both to increase the resources under their control and because managerial compensation is often tied to firm size. Takeovers are often thought of as a primary mechanism through which agency problems can be alleviated. Therefore, managers who are not acting in the interests of shareholders may be more likely to see their firms become takeover targets and may even be fired. This motive of takeovers, improving the performance of firms by removing poorly performing managers, is referred to in the literature as the inefficient management hypothesis. There is extensive empirical literature addressing the role of corporate takeovers in alleviating agency problems. Although the majority of the empirical research to date has focused on comparing pre- and post-acquisition stock performance of acquirers, targets and merged firms, a few papers have also looked at the pre-acquisition operating performance of takeover targets. However, the results on the inefficient management hypothesis based on analyses of the pre-acquisition operating performance of targets are not conclusive. While Agrawal and Jaffe (2003) do not find any evidence of underperformance in targets, Trimbath, Frydman and Frydman (2001) concluded that cost inefficiency is a determinant of the risk of becoming a takeover target. These previously published papers do not take into account alternative control mechanisms, such as managerial ownership, institutional holdings, external monitoring by blockholders and debt holders, and equity-based compensation for managers, which may exist in the target firms. These alternative means of controlling agency problems may prevent managers from wasting resources. Agrawal and Jaffe (2003) did not rule out the possibility that some takeovers are carried out to remove inefficient management.
They also noted that external control mechanisms (such as the threat of a takeover) may facilitate internal mechanisms (such as boards) in disciplining bad managers. Given the lack of conclusive evidence concerning the disciplinary nature of takeovers in the empirical literature, I believe the level and type of agency costs at target firms need to be investigated further. In this paper, I seek to contribute to the debate on the inefficient management hypothesis for target firms by investigating their pre-acquisition operating performance in the presence of alternative control mechanisms such as insider holdings, institutional holdings and leverage. Most of the empirical research on the inefficient management of takeover targets looks at the stock performance of target firms before and after acquisition. Some studies have tested the excess free cash flow hypothesis proposed by Jensen (1986) by examining possible overinvestment by managers. Specifically, these studies compared the target firm's capital expenditure with an industry benchmark to detect any potential overinvestment problem. Servaes (1994) did not find any increase in capital expenditure by target firms before their acquisitions. He suggests that target firms might be overinvesting in other assets, such as inventories or employees. However, the existing empirical literature provides very limited evidence about overinvestment in other assets; see Kaplan (1989) and Smith (1990). Hendershott (1996) argues that overinvesting would make firms more attractive takeover targets, but that such takeovers might not be successful. Therefore, he suggests that tests for target overinvestment should also include unsuccessful takeovers. He documents evidence of target overinvestment in a sample of firms that used highly leveraged transactions to avoid takeovers.
Healy, Palepu and Ruback (1992) did not find any post-acquisition change in capital expenditures and R&D expenses in a sample of 50 acquisitions. However, they observed a significant improvement in industry-adjusted asset productivity for the combined firm, which led to higher operating cash flow returns. Based on both operating performance and stock returns for a large sample of over 2,000 takeovers during 1926-1996, Agrawal and Jaffe (2003) found little evidence that target firms were performing poorly before being acquired. However, Hasbrouck (1985) concluded that the average Tobin's q of acquired firms is significantly below the average Tobin's q of control groups matched by size or industry, indicating underperformance by target firms. Similarly, Trimbath, Frydman and Frydman (2001) concluded that cost inefficiency is a determinant of the risk of being a takeover target. In this paper, I argue that tests for inefficient management at target firms should take into account the presence of alternative control mechanisms in these firms. Jensen and Meckling (1976) argue that if managers hold a large fraction of a firm's outstanding shares, then agency problems will be less severe. However, Morck, Shleifer and Vishny (1988) and Stulz (1988) suggest a nonlinear effect of managerial holdings. They argue that higher levels of managerial holdings entrench management's corporate control, protecting managers from the discipline of the market. Song and Walkling (1993) document that takeover targets have lower managerial ownership than a control sample. However, they did not examine the impact of low managerial ownership on the operating performance of target firms. The existing empirical literature also suggests that institutional ownership in a firm might provide an additional monitoring mechanism.
Jensen (1986) argues that when managers issue debt in exchange for stock, they are bonding their promise to pay out future cash flows in a way that cannot be accomplished by a simple dividend increase. Thus, additional debt reduces the agency costs of free cash flows by reducing the cash flows available for spending at the discretion of managers. Safieddine and Titman (1999) found that, on average, targets that terminate takeover offers significantly increase their leverage ratios subsequently. They also document that targets which increase their leverage ratios reduce their capital expenditures, sell assets, reduce employment, increase focus, and realize cash flows and share prices that outperform the benchmark in the five years following the failed takeovers. Thus, the existing evidence suggests that higher levels of leverage may reduce agency costs for shareholders. Therefore, in this paper I test for inefficient management at target firms by taking into account managerial ownership, institutional holdings and leverage. I expect that target firms with low or high insider holdings and low external monitoring by institutional shareholders and debtholders will be characterized by poor pre-acquisition operating performance.
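As a minimal illustration of the kind of benchmark comparison used in this literature, the sketch below computes an industry-adjusted operating-expense ratio for a target against the median of matched control firms. All figures and variable names are hypothetical assumptions, not data from the paper.

```python
# Hypothetical illustration of an industry-adjusted operating-performance
# comparison: a target's operating-expense ratio minus the median ratio
# of a set of matched control firms. A positive value indicates the
# target spends more, relative to sales, than its benchmark.
target_opex_ratio = 0.62                               # operating expenses / sales
control_opex_ratios = [0.48, 0.55, 0.51, 0.60, 0.53]   # matched control firms

control_median = sorted(control_opex_ratios)[len(control_opex_ratios) // 2]
industry_adjusted = target_opex_ratio - control_median
print(control_median, round(industry_adjusted, 2))     # 0.53 0.09
```

A positive industry-adjusted value of this kind is what the inefficient management hypothesis would predict for takeover targets with entrenched managers and weak external monitoring.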
Application and Enforcement of Two Specialized Lease Provisions: Radius Clause and Continuous Occupancy
Dr. J. Bruce Lindeman, University of Arkansas at Little Rock, Little Rock, AR
The radius clause is a lease clause that is sometimes found in a lease between a shopping center and a tenant. It states that the lessee merchant agrees not to open a similar store within a radius of x distance from the lessor shopping center. While radius clauses could appear in any lease, by any lessor, to a retail tenant, these clauses are most common in shopping mall leasing. However, enforcement is a different matter: courts usually require that the shopping center prove specific damage from an offending tenant's violation of the radius clause provision. This generally is very difficult to do with respect to a small tenant. However, this paper describes a situation in which enforcement of the radius clause against a small tenant was successful. Leases require the tenant to pay rent, but do they always require the tenant to occupy the rented space? A continuous occupancy clause requires that the tenant not only pay rent, but also remain in and actively use the leased premises. This paper presents a case in which continuous occupancy was enforced against a tenant even though there was no specific mention of continuous occupancy in the lease agreement. The radius clause is a clause that can appear in a lease between a shopping center and a tenant. It states that the lessee merchant agrees not to open a similar store within a radius of x distance from the lessor shopping center. While radius clauses could appear in any lease, by any lessor, to a retail tenant, these clauses are most common in shopping mall leasing. The reason for the radius clause is to protect the marketability of the shopping mall; nearby duplicate stores will "cannibalize" sales from the mall. By requiring duplicate stores to be at least a prescribed distance from the mall, mall management can "assure" that the competition from these stores is either outside the mall's predominant market area or, at least, no closer than the periphery of the market area.
(A shopping center's market area is the geographical space within which the shopping center is the predominant shopping destination.) The mall's interest is to attract as many shoppers as possible, since this will maximize rentals from the percentage leases that prevail among the mall's numerous small tenants. (A percentage lease is one in which the rent is determined, at least in part, by the amount of the tenant's gross sales; it is, therefore, a significant objective of mall management to increase tenant sales, which is best accomplished by increasing customer traffic to and within the mall.) Thus, the purpose of the radius clause is to prevent tenants from locating branches close enough to the lessor mall that they would drain away shoppers (and sales). Radius clauses are fairly common in mall leasing, but enforcement of them is a different matter: courts of equity will not allow enforcement unless the mall can prove damage. While it may not be all that difficult to prove such damage against an anchor tenant, it is rare that such proof can be amassed against a small tenant. This paper describes a situation in which a mall did successfully apply the radius clause against a tenant which occupied less than 1% of the mall's gross leasable area (GLA). One could assume that a typical mall retail tenant might feel the same way as the mall management: that another of its stores too close by might cannibalize (drain sales from) the mall store. However, the tenant's definition of "too close by" might be quite different from (and closer than) the mall management's. Also, a particular store chain might believe that its business might be enhanced by a nearby branch – especially if its management thinks that cannibalized mall sales will be greatly outweighed by additional trade attracted by a new location.
Perhaps in anticipation of this, one remedy that the radius clause often provides to the mall is that the offending close location's sales be added to the mall store's sales when calculating the percentage lease rent for the mall store. Thus, the tenant may not actually be prevented from opening a new location: rather, if an offending location is opened, the mall benefits from all its sales as well. On the other hand, it should be noted also that in a given contract the mall's remedy can be more draconian: closing the offending new site. Regardless of the remedy, it is, in fact, rare that a mall can enforce the clause because of the necessity to prove actual damage. That is, only if damage can be proven can the mall actually go ahead and require the closure of the offending store, or that its sales be included in the mall store's rent calculation. Additionally, the courts require not just that damage be shown, but that the damage to the mall is sufficient to warrant application of the penalty to the tenant. In making this decision the court also considers the damage to the tenant if the penalty is applied, and then weighs the "relative" damage to each litigant. This puts the mall in a tough spot: it must prove damage and also must show that its damage outweighs the damage the penalty would cause to the offending tenant. Even though civil suits require only "a preponderance of the evidence," it is difficult even to prove damage. The offending store is one small tenant among 100 or more other small tenants and several huge ones. To show damage requires demonstrating that, because of the offending new location within the "radius," sales of the mall store were lower than they otherwise would have been, and that the reduction can be directly attributed to the opening of one new offending store by one small tenant.
Measuring such an effect usually engenders too much statistical noise; therefore, it becomes very difficult to come up with a confidence interval, so to speak, that does not include zero. Even if this approach succeeds in demonstrating measurable damage, the damage to the mall must still be successfully measured as well, and shown to be sufficient to warrant application of the penalty to the tenant. These are difficulties encountered once a tenant actually has opened an offending location. Even more difficult would be proving damage before the fact – that is, proving, before an offending location has even opened, that the opening of such a location would cause measurable damage sufficient to warrant a remedy. Nonetheless, this paper describes a situation in which a mall prevailed in court against a small tenant that had not yet opened its offending branch. Needless to say, there are special circumstances with regard to this case; one cannot use it as a model for enforcement against a "typical" small tenant. (Also, because the eventual settlement was sealed and is not public knowledge, pseudonyms will be used in the discussion that follows.) These events occurred in City, one of the country's largest metropolitan areas. Mall is the lessor; Merchant is the tenant. The lease between Merchant and Mall includes a radius clause; within the radius is Othermall. Both Mall and Othermall are upscale shopping malls; in fact they are, by a considerable margin, the two "uppest-scale" malls in City. Competition between the two is fierce, although their markets are somewhat different. Mall is a nationally well-known shopping venue and hosts an unusually large number of shoppers from outside City and even the State. Othermall's customer base is much more predominantly local; however, it is located in and near some of the nation's wealthiest neighborhoods.
Mall’s radius clauses are specifically written so as to make Othermall off-limits to Mall’s tenants.
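The sales-inclusion remedy discussed above reduces to simple percentage-lease arithmetic. The sketch below shows how adding an offending location's sales into the mall store's gross sales raises the percentage rent owed; all dollar amounts, the percentage rate, and the variable names are hypothetical.

```python
# Percentage-lease rent with and without the radius-clause remedy.
# All dollar amounts and the percentage rate are hypothetical.
base_rent = 50_000.0          # fixed annual rent for the mall store
pct_rate = 0.05               # percentage-rent rate applied to gross sales
mall_sales = 1_200_000.0      # gross sales at the mall store
offending_sales = 400_000.0   # sales at the offending store inside the radius

rent_normal = base_rent + pct_rate * mall_sales
# Remedy: the offending location's sales are added to the mall store's
# sales before the percentage rent is computed.
rent_with_remedy = base_rent + pct_rate * (mall_sales + offending_sales)
print(rent_normal, rent_with_remedy)   # 110000.0 130000.0
```

Under these illustrative numbers the remedy transfers the percentage rent on the offending location's entire sales volume to the mall, which is why a tenant may still choose to open the new location if the added trade outweighs that cost.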
The Future of Taiwan Depends on Relationships of Taiwan, China, and United States of America
Dr. Raymond S. Chen, CPA, California State University Northridge, CA
Dr. James S. H. Chiu, CPA, CMA, California State University Northridge, CA
Economic development in Taiwan over the past forty-seven years has been truly miraculous. Per capita gross national product (GNP) increased from US$154 in 1960 to US$14,216 in 2000, an increase of over 92 times. Per capita GNP decreased in recent years, reflecting negative and low growth rates and the devaluation of Taiwan's currency. Although many factors contributed to the economic growth in Taiwan, this paper identifies the major economic, educational, and tax policies that helped propel Taiwan's prosperity. This paper also identifies the political and economic factors that will challenge the future of Taiwan. The future of Taiwan depends on the delicate relations among Taiwan, the People's Republic of China, and the United States of America. This paper will also discuss strategies that the Taiwan government can implement to further promote political stability and economic growth. About sixty years ago, Taiwan was basically a rural and insulated society, when Taiwan was returned to China from Japanese occupation after Japan was defeated in World War II. Even though industrialization had started during the period of Japanese rule, it had been largely crippled by United States bombing during World War II. When the Nationalist government of the Republic of China, led by Chiang Kai-shek, moved to Taiwan in 1949 at the time of the Communist takeover of mainland China, economic development in Taiwan was at a virtual standstill due to the civil war. The population of Taiwan increased significantly when a huge number of mainlanders migrated to Taiwan with the Nationalist Chinese government in 1949. Currently, the population of Taiwan is estimated to be about 23 million. The distribution of the population is influenced by the island's terrain.
The coastal plains and basins in the west are agriculturally cultivated areas where the population is dense as a result of transportation and industrial development. The population density in this area reaches over 2,500 per square kilometer, one of the highest in the world. But a vast amount of the land in Taiwan is mountainous and without many natural resources. Therefore, economic development in Taiwan depended on sound governmental policies that focused on developing its most precious resource: human resources. In addition, the government possessed the foresight to realize that, ultimately, education was the most important factor in the development of these human resources. In the 1950s, the first major economic and social reform enacted by the Nationalist Chinese government was the forced redistribution of land from major landowners to farmers. Shares of government-owned enterprises' common stock were issued to compensate the major landowners. Although these shares were considered close to worthless at the time, some of these major landowners have now realized a substantial appreciation in stock value as Taiwan's industrial development materialized and a capital market developed over the past forty-seven years. The government, realizing the importance of attracting investment from foreign countries for their technologies and capital, took an unprecedented step by enacting the Statute for Encouragement of Investment. This statute, enacted and promulgated on September 10, 1960, simplified time-consuming governmental procedures for business activities and stimulated industrial development through tax and other incentives. An example of these tax incentives was a five-year exemption from income tax for certain targeted industries. This measure, coupled with the Regulation for Income Tax Relief Standards of 1966, stimulated significant economic development by promoting foreign investment in Taiwan.
As a result of these governmental policies, along with the highly educated, skilled labor available through Taiwan's educational system, much foreign investment entered Taiwan. These foreign investments created job opportunities for the increasing population through labor-intensive industries. During this period, many multinational corporations, especially in consumer electronics and toys, established low-tech assembly-line production in Taiwan. In the 1970s, the government undertook the "Ten Construction Projects" to upgrade the infrastructure of Taiwan. The construction projects included railroads, highways, harbors, an international airport, and nuclear power plants. These projects paved the road for further economic development. Continuing improvements to infrastructure were reflected in 1986's economic planning for another twelve construction projects and a six-year national development plan launched in 1992. With the improvement of transportation systems and a sufficiently educated and skilled labor supply, Taiwan has become one of the best environments for manufacturing activities by multinational corporations. In 1999, Fortune magazine ranked Taipei, Taiwan the fifth-best city in Asia for business. The criteria used by Fortune in ranking the best cities for business included the caliber of the local work force, good transportation networks, pro-business legal systems, and a generally high quality of life. The average annual industrial growth rates of the private sector in the 1960s and 1970s were 23 and 17 percent, respectively. Manufacturing output peaked at 39.7 percent of gross domestic product in 1986. These growth rates were impressive in comparison with the growth rates of other industrialized nations. The initial direct benefit of multinational corporations' manufacturing operations was the creation of job opportunities.
Their operations have impacted and transformed practices of production, marketing, management, control, and financial reporting. The impact of the practical training of personnel was even more profound than that of the manufacturing process itself. Local business enterprises as well as the government have observed and learned much from these multinational corporations. With increasing labor costs in recent years, Taiwan has lost its appeal with respect to labor-intensive industries. Many Taiwanese companies have established manufacturing plants in China. For instance, Taiwan's companies today produce two-thirds of the world's notebook PCs and many components for desktop PCs; however, Taiwan's companies in China make most of these machines and parts. Taiwan now targets high-technology industries for future economic development and actively encourages foreign investment in these areas. Operations of high-technology multinational corporations require highly educated personnel, research and development facilities, and an environment of incentives.
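The growth figures quoted at the start of this article can be checked with a few lines of arithmetic. The sketch below verifies that US$154 in 1960 growing to US$14,216 in 2000 is roughly a 92-fold increase, equivalent to about 12% compound annual growth over the 40-year span; the variable names are mine, introduced only for the calculation.

```python
# Verifying the per-capita GNP growth cited in the article.
gnp_1960 = 154.0       # US$ per capita, 1960
gnp_2000 = 14_216.0    # US$ per capita, 2000
years = 40

multiple = gnp_2000 / gnp_1960                     # about 92.3, i.e. "over 92 times"
cagr = (gnp_2000 / gnp_1960) ** (1 / years) - 1    # about 0.12 compound annual growth
```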
The Two-stage Optimal Matching Loan Quality Model
Chuan-Chuan Ko, National Chiao Tung University, Taiwan
Dr. Tyrone T. Lin, National Dong Hwa University, Taiwan
Chien-Ku Liu, Jin-wen University of Science & Technology, Taiwan
Hui-Ling Chang, Ming Chuan University, Taiwan
This study attempts to optimize the loan quality requirement objectives of the depositor, financial institution and investment agent in a two-stage loan market. Assuming that the financial institution may completely or partially fail to discharge its liabilities when a loan claim occurs at the end of each stage, mathematical analysis is employed to identify the threshold of required loan quality and to optimize the allocation of loan amounts in this two-stage loan market. This study defines the financial institution as an enterprise that relies heavily on financial leverage built on minimal capital investment, and whose operating profit mainly derives from the interest spread of making loans with deposit volume; meanwhile, the depositor makes deposits to obtain a steady stream of interest income. However, because of their different lending criteria, the financial institution and the depositor have conflicting interests. The financial institution wishes to increase loan credit, but loan volume is in fact the balance held by the depositor. Therefore, the depositor asks the financial institution to raise loan quality to better guarantee the deposit. Furthermore, the securitization of financial assets has also provided the investor with an alternative financial commodity. The manner in which the financial institution re-packages and offers this financial asset securitization, and the manner in which the investor purchases this commodity, will also generate different perspectives regarding the loan quality of securitized assets subsequently represented by the investment agent among the financial institution, the depositor, and the investor. Lockwood et al.
(1996) found that when enterprises undertake asset securitization, the wealth of automobile manufacturers increases after securitization whereas the wealth of banks decreases, and that a financial institution should improve its capital structure and promote its financial health before securitization. The financial institution attempts to offer secured loans to protect creditors. Dietsch and Petey (2002) designed an optimized capital placement and lending portfolio by calculating the value of small loans in investment portfolio risk with an internal credit risk loan model for medium and small enterprises in France. Stiroh and Metli (2003) identified a recent deterioration of loan quality in the US financial industry, with credit defects concentrated among small-scale borrowers while loan volume was restricted among large-scale banks and industries. Lin and Lo (2006) provided three credit-risk roles (deposit account, financial institution, and rating organization) for evaluating single-term loans; their required and matching loan quality models show that developing a method of improving the risk management mechanism is the key point for the financial institution in controlling loan quality under the supervision of the rating organization and depositors. Lehar (2005) modeled a measurement method for banking system risk and estimated the dynamics and correlations among bank asset portfolios. The bank asset portfolios, including loans, tradable securities, and numerous other items, are refinanced by debt and equity. Banks that increase equity capital substantially reduce systemic risk. Stein (2005) designed a simple quantitative cut-off approach that makes lending decisions more flexible and profitable. The framework can be used to optimize the cut-off point for lending decisions based on the cost function of the lender.
Instefjord (2005) investigated the phenomenon of financial innovation possibly increasing bank risk in the credit derivative market, despite the importance of credit derivatives for hedging and securitizing credit risk. Commercial success determines the overall success of new credit derivative instruments. This study extends the model of Lin and Lo (2006), describes the credit risk for the single-term evaluation model, and discusses the required loan qualities with multiple objectives for the deposit account, financial institution, and rating investment agent in the two-stage loan market. Suppose that the financial institution may fully clear, partially clear, or fail to clear its debt at the end of each stage, and that the most suitable loan models are sought for the participants over the two stages only. The numerical analysis also focuses on designing a two-stage loan ratio and discussing the loan placement best suited to the two-stage loan market. A single financial institution exists in the loan market, one investment agent (the purchaser of the financial asset securitization commodity) operates in this market, and a single depositor provides deposits to this financial institution. Loan decisions on portfolios held by the financial institution comprise two stages (each assumed to be of fixed duration); the financial institution's equity is not permitted to provide financing during the second stage of the loan market, but the loan operation may be completely executed in the first stage after the financial institution sets aside the deposit reserve. The interest rate for the depositor during the two stages remains unchanged, and the depositor receives fixed deposit interest. The investment agent who purchases the financial asset securitization commodity (issued by the financial institution to guarantee loan credit) may obtain part of the warrant provided by the financial institution.
The definitions and symbols of the relevant parameters are as follows. A = E + D, where A is the total assets at the beginning of the first stage, D is the total deposits in the financial institution, and E is the equity held by the financial institution; ρ: the deposit reserve rate applied by the financial institution; r_l: the fixed lending rate of the financial institution; r_d: the fixed deposit interest rate for the depositor; r_e: the estimated rate of equity return of the financial institution; r_s: the estimated profit rate of the financial institution, which issues the financial asset securitization commodity at the end of each stage; r_i: the estimated profit rate required for the financial institution to invest in the financial asset securitization; r_f: the interest rate paid on the financial institution's deposits with the central bank (represented by the overnight call loan rate of the central bank); r_b: the discount rate (treasury bond rate plus risk premium); α: the guarantee ratio provided by the financial institution in issuing the asset securitization commodity; p_d: the probability of the financial institution successfully recovering loans while matching the depositor; p_a: the probability of successful recovery under the financial institution matching the investment agent. The relative sizes of these profit rates are assumed to follow a fixed ordering. The model covers the loan qualities of the three participants, the discharge ranking of the financial institution for the depositor and investment agent, and the process of asset securitization. Assume the total amount lent by the financial institution to the borrower is L, where L_1 represents the estimated loans during the first stage and L_2 represents the quota of second-stage loans extended to the borrower after the amount loaned during the first stage has been recovered successfully; the second-stage quota is cancelled in the event that retrieval fails.
If the balance of the loan is fully extended to the borrower after the financial institution has deducted the appropriate deposit reserve, the total size of the loan at the beginning of the first stage is L = L_1 + L_2 = (1 − ρ)D + E. (1) From Eq. (1), the available loan quota in the first stage is L_1 = (1 − ρ)D + E − L_2. If the loan is successful at the end of the first stage, the recovered first-stage loans amount to L_1(1 + r_l). Furthermore, at the beginning of the first stage, after the financial institution packages the whole debt as a financial asset securitization commodity, the funds gathered may be represented by S. If the loan has been recovered successfully, S(1 + r_s) will be paid to the investment agent at the end of the first stage; if it cannot be recovered successfully, the most that will be paid to the investment agent is αS. When the estimated second-stage loan amount L_2 and the financial institution's deposit reserve ρD have been deposited at the central bank at the overnight call loan rate r_f, the amount (L_2 + ρD)(1 + r_f) can be obtained at the end of the term. If the financial institution has successfully recovered its loans, the total assets at the end of the first stage will be L_1(1 + r_l) + (L_2 + ρD)(1 + r_f).
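As an illustration of the first-stage balance-sheet arithmetic described above, the sketch below plugs numbers into the reserve, lending, and central-bank deposit steps. The parameter values and variable names are assumptions of mine for illustration only, not the paper's data or notation.

```python
# Hypothetical first-stage figures for the two-stage loan model.
E = 10.0      # equity held by the financial institution
D = 90.0      # total deposits
rho = 0.10    # deposit reserve rate
r_l = 0.08    # fixed lending rate
r_f = 0.02    # central-bank overnight call loan rate
L2 = 20.0     # loan quota reserved for the second stage

A = E + D                    # total assets at the start of the first stage
L = (1 - rho) * D + E        # total loanable funds after the deposit reserve
L1 = L - L2                  # amount actually lent in the first stage
# If the first-stage loans are recovered, end-of-stage assets combine the
# repaid loans with the funds parked at the central bank (second-stage
# quota plus the deposit reserve, earning the overnight rate):
A1 = L1 * (1 + r_l) + (L2 + rho * D) * (1 + r_f)
print(A, L, L1, round(A1, 2))   # 100.0 91.0 71.0 106.26
```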
Worldwide Sourcing Practice of Malaysian Electrical and Electronics Companies
Dr. Abdul Latif Salleh, University of Malaya, Malaysia
The advent of globalization and the increasing intensity of competition have put immense pressure on companies to intensify their worldwide sourcing activities. However, it is questionable whether these companies understand the extent of commitment and scope of operations, and possess the resources and capabilities to coordinate and handle complex and sophisticated worldwide sourcing activities. These issues are particularly relevant in the case of companies in developing countries. Hence, the primary objective of this study is to explore the extent of worldwide sourcing and the practice of supply chain management among Malaysian electrical and electronics firms. Specifically, this paper examines the benefits, challenges, and critical success factors of worldwide sourcing as perceived by these companies. Organizational success is often attributed to the ability of business firms to develop or acquire competitive advantage. Acquiring competitive advantage is becoming increasingly difficult but is vital to the survival of any organization operating in today's global environment. Thus, understanding how to achieve competitive advantage in today's fast-changing and often unpredictable environment should be a major concern for companies that manufacture and market their products or services around the world. To that end, understanding the benefits and challenges of globalization is crucial, and is particularly critical to firms originating from or operating in developing countries. One area where companies can begin to capture the benefits of globalization is global sourcing. With the lowering of trade barriers, the survival of a company depends heavily on its ability to compete globally, and global sourcing has become a prerequisite for venturing to compete in the global market. As suggested by Kotabe (1998), the ultimate objective of a company's global sourcing strategy is to exploit both its own competitive advantages (e.g.
R&D, manufacturing, and marketing skills) and the comparative location advantages (e.g., inexpensive labor costs, certain skills, mineral resources, government subsidies, and tax advantages) of various countries in global competition. While firms that pursue global sourcing would be in a more advantageous competitive position than domestic-bound companies, it is questionable whether these companies understand the extent of commitment and the resultant scope of operations, and possess the resources and capabilities to coordinate and handle complex and sophisticated worldwide sourcing activities. This issue is particularly relevant in the context of business firms originating from and traditionally operating in developing economies. This paper seeks to examine the benefits and problems faced by such companies in their practice of worldwide sourcing. The primary objective is to identify the benefits of worldwide sourcing, the challenges to successful worldwide sourcing, and the critical success factors in worldwide sourcing among Malaysian electrical and electronics firms. The electrical and electronics industry is Malaysia’s leading industrial sector and was the largest contributor to exports, output, and employment in 2004. The industry accounted for RM 241.5 billion in exports, which represented 64.1% of total exports of manufactured products. In terms of contribution to output and employment, the industry accounted for RM 183.1 billion, or 44.9%, of total manufacturing output and provided 369,488 jobs, or 36.6% of total employment, in the manufacturing sector (MIDA, 2005). Today, Malaysia is among the leading exporters of semiconductors and room air conditioners, while telecommunications equipment, computers, and computer peripherals continue to expand rapidly (FMM, 2004). 
The electrical and electronics industry can be divided into two sub-sectors: the electrical sector, which is made up of electrical appliances, wire and cables, and electrical industrial apparatus; and the electronics sector, which includes computers and peripherals, semiconductors and components, telecommunication equipment, and consumer electronics. In 2001, the largest electrical and electronics export product group was electrical machinery, apparatus, and appliances. This product group was valued at RM 79.4 billion out of total electrical and electronics exports of RM 189.4 billion, making up 41.9% of the total. Major traditional export destinations were the United States, Singapore, and Japan, while China, Hong Kong, and Taiwan have emerged as new markets for Malaysia’s electronic products. As for imports, electrical and electronics accounted for RM 132.1 billion, or 51.9%, of Malaysia’s total imports of manufactured goods in 2001 (FMM, 2004). According to Monczka and Trent (2002), international purchasing refers to a commercial transaction between a buyer and a supplier located in different countries. This type of purchase is typically more complex than a domestic purchase. Global sourcing differs from international purchasing in scope and complexity: it may involve the proactive aggregation of volumes and the coordination of common items, practices, processes, designs, technologies, and suppliers across worldwide procurement, design, and operating locations. Further, global sourcing requires horizontal integration between product design and development groups, as well as between supply and demand planning activities. Additionally, vertical integration is required with primary and secondary suppliers. 
Given these differences, the internationalization of the sourcing process takes place as firms evolve or progress from domestic purchasing only to global coordination and integration of common items, processes, designs, technologies, and suppliers across worldwide locations (Monczka & Trent, 1991). Firms tend to evolve along a continuum as they pursue and mature within their sourcing efforts (Rajagopal & Bernard, 1993); this progress happens slowly, and companies do not move from stage to stage overnight (Kohn, 1993).
The Effect of Organizational Change Readiness on Organizational Learning and Business Management Performance
Dr. Chih-Chung Chen, Aletheia University (Matou campus), Tainan, Taiwan
The purpose of this research was primarily to explore the influence of employees' readiness for organizational change on organizational learning and business management performance when they are facing organizational change. A questionnaire was used for data gathering, and the sample was drawn from the top 500 business organizations ranked by the China Credit Service, Ltd. A total of 500 questionnaires were released and 175 (35%) valid responses were received. This research revealed that the level of preparedness for organizational change influenced organizational learning and business management performance. A negative, passive attitude among employees resulted in a negative effect on organizational learning and business management performance, while a positive, proactive attitude and collaborative coordination with relevant activities resulted in a positive effect on both. Our conclusions also suggest that organizational learning can improve organizational business management performance, especially when there is an emphasis on the creation and distribution of the organization's internal knowledge. In a dramatically competitive and changing business climate, organizations must constantly adjust their organizational structures and strategies. Organizational change, although vital for organizational development (Piderit, 2000), is a ceaseless process. Past studies about organizational change have focused on ways to overcome resistance to change (Kotter & Schlesinger, 1979; Rosenberg, 1993), when organizational change occurs (Nadler & Shaw, 1995; Kanter, Stein, & Todd, 1992), and factors related to resistance (Clarke, Ellett, Bateman, & Rugutt, 1996), while others have focused on other organizational factors that influence organizational changes (Jermias, 2001). 
Whether the desired effect of organizational change can be achieved depends on the collective behavior of organizational members in reaction to the change (Kozlowski, Chao, Smith, & Hedlund, 1993). Kotter (1995) suggested that the core issues of organizational change are not strategies, structures, cultures, or systems, but how to alter people's behavior. Thus, as an organization adjusts its structure, changes also occur in the behavior of its employees, which affects the success of the change effort. Whether organizational changes can be smoothly promoted and materialized usually depends on the psychological reactions of employees and their process of behavioral adaptation. Within this process, employees’ readiness for organizational change is especially important (Malone, 2001). Thus, one of our research goals was to investigate employees' readiness toward organizational changes when they are facing such changes. Second, Helleloid and Simonin (1994) contended that an organization should learn continuously and implement knowledge management to improve its core competence, reducing the negative impact of external competition and creating more growth space and niches. Thus, the second goal of this research was to understand the influence of readiness for organizational change on organizational learning when employees face such change. Finally, one of the key factors of effective management is the ability to increase profit during changes in the business environment and to adjust an organization effectively in response to those changes (Sauser & Sauser, 2002). Ettlie and Reza (1992) also indicated that organizational change is the very essence of the life of an organization. New production techniques, new processing procedures, and new organizational structures can all be used to create an effective response to an increasingly dynamic competitive environment. 
Thus, the final goal of this research was to understand business management performance when employees face organizational change. Jones (2001) proposed that organizational change refers to the results created by the interaction between the internal and external environments, including the interaction between an organization itself and the macro environment and that between an organization and other organizations. Some researchers have dealt with the necessity for organizational change (Nadler & Shaw, 1995; Kanter, Stein, & Todd, 1992), referring to organizational changes as a series of systematic changes designed to improve organizational efficiency and to help an organization react to changes in the environment. Thus, definitions of organizational change have included a wide range of change activities and have covered individuals, groups, and the overall organization. Cherinton (1989) proposed that organizational changes allow an organization to avoid decline, staleness, and rigidity. Wong and Millette (2002) contended that organizational change refers to a process of dynamic change for every unit in an organization and its surrounding environment, so that a re-adjustment of the organization’s current status is made possible. Thus, the definition of organizational change for the purpose of this research is a process by which an organization incessantly adjusts its behaviors in order to adapt to the environment. Psychological experts have suggested that the process of converting attitudes into actual behaviors is influenced by many factors. However, employees' readiness has been held to be the key factor influencing work behaviors and organizational interactions (Silverman, 1968). Armenakis, Harris, and Mossholder (1993) also indicated that the promotion of organizational change and the readiness of members are closely correlated. Thus, it is important to understand employee readiness for organizational change. 
Huber (1991) contended that organizational learning is a process through which an organization can cause behavior changes. Gao (1996) suggested that organizational learning is the procedure by which organizational behaviors are improved through knowledge acquisition, sharing, and use. This research defines organizational learning in terms of Gao’s approach. As to the types of organizational learning, Qui (2001) outlined three: knowledge creation, knowledge storage, and knowledge expansion. Marquardt (1996) described learning within learning organizations in terms of four types: knowledge acquisition, creation, storage and conversion, and application. Leavitt (1976) suggested organizational changes could be implemented through organizational structures, organizational members, and the behaviors and techniques of organizational members. Helleloid and Simonin (1994) suggested organizations should implement continuous learning activities and knowledge management simultaneously to improve organizational core competence, so that negative external impacts can be reduced by reaching more niches and growth space. Argyris (1993) indicated that organizational learning is an effective mechanism for overcoming resistance to organizational change. Thus, employees' achievement in organizational learning would be of significant help in moving readiness for organizational change toward a positive attitude. Based on this, the following hypothesis was developed:
Applying Analytic Hierarchy Process to Evaluate the Development Strategies of Intellectual Capital for Fabless Integrated Circuit Design Houses in Taiwan
M. C. Kao, Yuan-Ze University, Taiwan
In the era of the knowledge economy, developing intellectual capital has become a key issue for business. This study employs the analytic hierarchy process (AHP) to evaluate the priority of intellectual capital development strategies for fabless integrated circuit design houses in Taiwan. The results indicate that the most important construct of intellectual capital is human capital, followed by innovation capital. Furthermore, the highest-weighted intellectual capital development strategy is to improve team quality and cultivate capability (13.8%), followed by encouraging technological innovation (10.6%) and investing in R&D resources (8.5%). These strategies are consistent with the development direction of the industry. With the coming of the knowledge economy, intellectual capital has become the main source of competitiveness and the key resource for wealth creation. Prior studies (Lev and Zarowin, 1999; Lev, 2002) mention that nearly 80% of corporate market value is not reflected in financial reports. Kaplan and Norton (2004) also point out that about 75% of the market value of U.S. firms comes from intellectual capital. This phenomenon is even more pronounced for knowledge-based enterprises (Edvinsson and Malone, 1997). Many studies have identified intellectual capital as the value driver of an enterprise (Amir and Lev, 1996; Edvinsson and Malone, 1997; Stewart, 1997; Ittner et al., 1997; Bontis, 1999; Sullivan, 2000). Intellectual capital thus can be considered the core competence in business. The fabless integrated circuit design (FICD) industry is knowledge-intensive, and intellectual capital is the core element of its value creation. The Taiwanese FICD industry ranks second only to that of the U.S., representing one-third of global market value. Among the numerous FICD houses, MediaTek, Novatek, VIA, Realtek, Sunplus, and Mstar rank in the top 20. 
According to statistics from the IEK-ITIS Project of the Industrial Technology Research Institute (ITRI), Taiwan had a total of 268 FICD houses in 2005. The operating income of these houses during 2006 totaled NT$323 billion, an increase of 13.5% over 2005, and the growth rate of total Taiwan FICD industry production value is expected to reach 14.3% in 2007. Apparently, intellectual capital exerts a key influence on FICD houses' competitiveness. Hence, it is crucial to set up intellectual capital development strategies properly. Although the concept of intellectual capital is easy to understand, there are bottlenecks in its practical application. Prior studies (Kaplan and Norton, 1996; Edvinsson and Malone, 1997; Stewart, 1997; Johnson, 1999; Ramona, 2000; Deeds, 2001) focus on intellectual capital measurement and the relationship between intellectual capital and firms’ value. However, there is a lack of empirical studies examining how to construct intellectual capital development strategies. This is especially true for the FICD industry. This study applies the analytic hierarchy process to assess intellectual capital development strategies for FICD houses in Taiwan. It is vital for enterprises to develop intellectual capital step by step and consolidate their core competence. Intellectual capital can be thought of as the total stock of capital or knowledge-based equity possessed by a company. Intellectual capital thus can be the end result of a knowledge transformation process, or knowledge itself that is transformed into a firm's intellectual property or intellectual assets. Stewart (1997) proposes that knowledge is intellectual capital, defined as the sum of the knowledge of a firm's employees that provides the firm with a competitive edge. 
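To illustrate the mechanics behind AHP, the following is a minimal sketch (in Python) of deriving priority weights from a pairwise comparison matrix using the geometric-mean approximation. The comparison values below are purely hypothetical illustrations and are not taken from the study's data:

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights via the geometric-mean method:
    take the geometric mean of each row, then normalize to sum to 1."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical pairwise comparisons among three intellectual-capital
# constructs (human, innovation, structural): matrix[i][j] states how
# much more important criterion i is than criterion j (Saaty 1-9 scale).
pairwise = [
    [1,   2,   4],    # human capital
    [1/2, 1,   2],    # innovation capital
    [1/4, 1/2, 1],    # structural capital
]
weights = ahp_weights(pairwise)
```

For a perfectly consistent matrix such as this one, the geometric-mean method reproduces the exact eigenvector weights; in practice, AHP studies also report a consistency ratio to check the coherence of the expert judgments before trusting the resulting priorities.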
Hall (1992) classifies intangible resources into assets and skills, where assets include trademarks, patents, copyrights, registered designs, contracts, trade secrets, reputations, and networks (personal and commercial relationships), and skills comprise know-how and culture. In a survey of 95 firms, Hall identifies company reputation, product reputation, and employee know-how as the most significant contributors to overall success. Hudson (1993) defines intellectual capital as an individual asset comprising genetic inheritance, education, experience, and attitudes about life and business. Brooking (1996, 1997) defines intellectual capital as market assets, human-centered assets, intellectual property assets, and infrastructural assets. Edvinsson and Malone (1997) value intellectual capital as the difference between a firm’s market value and book value. Moreover, they point out that a firm’s intellectual capital comprises human capital, structural capital, and customer capital. These three capitals capture a firm in motion as it transforms skills and knowledge into competitiveness. Nahapiet and Ghoshal (1998) move away from a personal definition and instead focus on organizational intellectual capital. They use the term intellectual capital to refer to the knowledge and knowing capability of a social collectivity, such as an organization, intellectual community, or professional practice. The literature does not contain any clear definition of intellectual capital; however, most existing definitions use the same words: knowledge, skills, know-how, experience, intangible assets, information, processes, and value creation. 
In sum, intellectual capital includes at least the following components: knowledge and experience embodied in individuals, either formalized (patents, copyrights, brands, etc.) or tacit (competences of individual employees); organizational systems and processes, such as internal processes, procedures, and administrative systems; innovation and technology; and business relationships, such as relationships with customers, suppliers, and strategic partners (reputation and image, customer loyalty, coordination procedures with suppliers). Although previous studies have agreed on the significance of intellectual capital as a resource underpinning organizational performance, there is a lack of consensus on its precise definition. To establish a common understanding of the terminology used in this study, intellectual capital is divided into four constructs: human capital, structural capital, customer capital, and innovation capital.
Apply Delphi and TOPSIS Methods to Identify Turnover Determinants of Life Insurance Sales Representatives
Dr. Chiang Ku Fan, Shih Chien University, Taipei, Taiwan
The high sales representative turnover rate usually confronts a life insurance company with difficult dilemmas. The first purpose of this study is to identify the major turnover determinants of life insurance sales representatives in Taiwan by interviewing experienced human resource managers in life insurance companies. The second purpose is to rank those turnover determinants. A modified Delphi study and a mixed-methods approach were employed. The qualitative data were coded relative to themes explored through the questions asked in each interview. The technique for order preference by similarity to the ideal solution (TOPSIS) was used to rank the turnover determinants for life insurance sales representatives. According to the research results, the major determinants are emotional exhaustion or job stressors; the manager's mentoring process for career enhancement and psycho-social functions for subordinates; wage rate; career opportunities; and length of working hours. Human resource managers can thus determine which turnover determinants should be addressed first in the very short run. According to data reported by the Taiwan Insurance Institute (2004), life insurance sales representatives’ average 13th-month retention ratio from 1996 to 2003 in Taiwan was just 48.1%. This means that more than 50% of life insurance sales representatives terminate their jobs in the first year. The high turnover rate in the first career year usually confronts life insurance companies with a difficult dilemma. On the one hand, an insurance organization may try to discourage turnover by designing competitive compensation packages, better benefits, and efficient training programs for its sales representatives. On the other hand, human resource managers face the risk that well-trained sales representatives will become an attractive potential workforce for other life insurance companies (Wong and Law, 1999). 
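For readers unfamiliar with TOPSIS, the sketch below (in Python) shows how the closeness coefficient used to rank alternatives is computed: normalize the decision matrix, weight it, find the ideal and anti-ideal points, and score each alternative by its relative distance to them. The determinants, criteria, ratings, and weights are hypothetical illustrations, not the study's data:

```python
import math

def topsis_rank(matrix, weights, benefit):
    """Score alternatives by closeness to the ideal solution (TOPSIS).
    matrix: rows = alternatives, columns = criteria scores.
    benefit[j] is True if criterion j is better when larger."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
         for i in range(n_alt)]
    # Ideal and anti-ideal points, per criterion.
    best = [max(col) if benefit[j] else min(col)
            for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    # Closeness coefficient: distance to anti-ideal / total distance.
    scores = []
    for row in v:
        d_best, d_worst = math.dist(row, best), math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical expert ratings (1-10) of three turnover determinants on
# two criteria: impact on turnover (benefit) and cost to remedy (cost).
ratings = [[9, 3],   # emotional exhaustion / job stressors
           [7, 5],   # wage rate
           [5, 2]]   # length of working hours
scores = topsis_rank(ratings, weights=[0.6, 0.4], benefit=[True, False])
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
```

Determinants with higher closeness coefficients sit nearer the ideal point and would be ranked as the more pressing ones to address.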
Thus, identifying employee turnover determinants is an important organizational issue that merits thorough exploration. Unfortunately, to our knowledge, there have been very few studies of the determinants of turnover among Taiwanese life insurance sales representatives. The first purpose of this study is to identify the possible turnover determinants of life insurance sales representatives in Taiwan via a modified Delphi study. The second purpose is to rank and explore those determinants. Accordingly, human resource managers can determine which turnover determinants should be addressed first in the very short run. This study was guided by three research questions: According to Delphi panelists, what are the possible determinants in the turnover of life insurance sales representatives in Taiwan? What are the rankings of turnover determinants for life insurance sales representatives? How should the major turnover determinants for life insurance sales representatives be interpreted? Employees are recognized as a very important organizational asset because firms invest considerable capital in human resources. Researchers have argued that employee turnover is an important topic since such movements represent potential costs to organizations in terms of the loss of valuable human resources and the disruption of ongoing activities (Cascio, 1991). Furthermore, organizational costs include employees quitting their jobs and the subsequent hiring of replacement personnel (Darmon, 1990), new-hire training (Smith & Watkins, 1978), and general costs for administration (Griffeth and Hom, 2004), which can be tremendous in terms of personal, work-unit, and organizational re-adjustment (e.g., Griffeth and Hom, 2004; Lee and Mitchell, 1994). Turnover intentions reflect the probability that an individual will change his or her job within a certain time period (Hartog et al., 1988; Hartog and Ophem, 1996). 
Many psychologists have analyzed turnover intentions (e.g., Cohn, 2000; Hom et al., 1992; Mobley, 1977; Sager et al., 1998; Wright and Cropanzano, 1998). Turnover intentions and actual turnover are strongly correlated (Sousa-Poza and Henneberger, 2004). This finding presents an interesting alternative for analyzing turnover determinants: predictors of turnover intention are similar to determinants of actual turnover. The prediction and understanding of employee turnover has been studied from many different perspectives. Based upon the theoretical perspectives of economic theory and psychological theory, many studies have identified several determinants of job turnover. Gender has been shown to influence actual turnover (e.g., Blau and Kahn, 1981; Royalty, 1998). In some countries, women have higher levels of job satisfaction, which generally reduces job-change inclinations (Sousa-Poza, 2000). In contrast, Booth and Francesconi (1999) found no significant differences in job-to-job mobility between genders. Age has been found to be negatively correlated with the probability of changing jobs (e.g., Campbell, 1997; Kidd, 1994), and marriage has been found to have a possible negative effect on the probability of changing jobs, since it is usually more costly if a family has to move (Bates, 1997). The research results of Royalty (1998) showed that level of education has a positive effect on the probability of changing jobs, since higher education is often associated with better labor-market alternatives. Working time may influence job-to-job mobility in a positive manner, since lower working hours could imply that a worker is less integrated in a firm (Garcia-Serrano, 1998). It is also conceivable that long working hours may increase the desire to change one’s job. 
An inverse relationship is assumed between the wage rate and the probability of a job change; this relationship has received the most attention in the literature (e.g., Hall and Lazear, 1984; Mclaughlin, 1991). Other job and employer characteristics, such as fringe benefits, flexible working schedules, promotion expectations, firm-specific training, and firm size, have been shown to be related to turnover (e.g., Idson, 1996; Winter-Ebmer and Zweimuller, 1999; Zweimuller and Winter-Ebmer, 2000). The psychological literature on the determinants of turnover intentions is extremely vast and multifaceted (e.g., Cohn, 2000; Sager et al., 1998). Sousa-Poza and Henneberger (2004) analyzed job-turnover intentions in 25 countries using data from the 1997 International Social Survey Program. Results showed that the determinants of turnover intentions vary substantially among countries; however, job satisfaction, job security, and organizational commitment were found to be significant in most countries. Meanwhile, in several psychological models of turnover, factors related to career commitment, job commitment, organizational commitment (Cohn, 2000), and job satisfaction are considered determinants of turnover intentions. A social work study by Freund (2005) found that the career commitment of social workers significantly influenced withdrawal intentions and thoughts of quitting the organization.
The Outlook for Taiwan’s Domestic Air Marketing Strategy and Trend Development
Shyue-Yung Ho, China Institute of Technology, Taipei, Taiwan
Since the open sky policy of 1987, marketing strategy in the airline industry has been one of the major concerns for Taiwan’s domestic airlines. The deregulation policy led Taiwan’s domestic air market into a new era of intense competition. Market characteristics of the airline industry, such as fleet planning, fares, route operations, and frequent flyer programs, have become the main factors in marketing strategy and have significantly impacted the domestic air market. Because of changes in the competitive air transport environment in Taiwan, there is a need to review marketing strategies in this particular field. With the market recession and competition from the recently launched high-speed railroad, Taiwan’s domestic airlines face a severely competitive market environment. Thus, this paper investigates the competition among the four domestic carriers (Uni, Far Eastern, TransAsia, and Mandarin Airlines) and analyzes the future development of domestic air transportation from the perspective of the local carriers and the market itself. The extreme changes in Taiwan’s domestic air transportation marketplace were accelerated by the advent of the open sky policy in 1987. Deregulation was expected to result in a larger number of airlines competing for passengers and freight traffic. The number of Taiwan’s domestic airlines did increase initially, which also provided a clearer picture of entry into and exit out of the domestic airline market following the development of marketing strategy. It should be noted, however, that several strategic acquisitions resulted in different competing airline groups in the domestic air market. At present, four carriers still operate domestic air transportation: Far Eastern (EF), TransAsia (GE), Uni Air (B7), and Mandarin (AE). Due to their unique features, Taiwan’s domestic airlines faced intermodal competition in aircraft types, scheduling, and flight duration. 
In addition, intramodal competition in rates and services presented a challenging marketing environment. The crisscrossing routes of domestic airlines in Taiwan fall into three areas of operation: the Western island, the Eastern island, and the Off islands. There are five different routes operating in the Western island, four in the Eastern island, and four in the Off islands (see Table 1). The TSA/KHH route has the largest passenger market share in Taiwan, and the TSA/TNN route is second. Together, the TSA/KHH and TSA/TNN routes occupy more than 43% (2006) of the passenger market share in domestic air transportation (see Table 2). The domestic airline market in Taiwan has been shrinking at a dramatic rate over the past ten years, and even more severely in the past five years (see Figure 1). Passenger numbers have been declining in domestic air transport as the battle heats up in this specific market. Excess capacity was endemic to the domestic airline market: total supply exceeded actual demand by a wide margin, which led to an ever-lower annual domestic load factor. This phenomenon prompted domestic airlines to reduce the frequency of flights from their hub (TSA) to connected networks. Both the long market recession and the airlines' financial deficits are factors contributing to the four domestic carriers' struggle for survival in this ailing market environment. The Taiwanese government’s deregulation of the domestic airline market and the open sky policy in 1987 stimulated air transportation, not only in the growth in the number of passengers but also in the increase in new carriers. These new carriers developed a prosperous air transport market that accompanied Taiwan's economic growth over the last decade. 
Under the deregulated sky policy, the Civil Aeronautics Administration (CAA)-Taiwan maintained its position as the managerial authority and also oversaw the establishment of new carriers. As a result, the number of new airlines in Taiwan increased dramatically. According to the CAA-Taiwan historical report, at its peak there were nine carriers simultaneously operating domestic air transportation. With so many carriers sharing the market on a small island, severe competition emerged. However, restrictions on fare adjustments and the scarcity of time slot allocations caused inefficient new entrants in the domestic air market either to exit or to merge with other carriers. Deregulation of the domestic airline industry can only attract investment if there are sufficient prospects for long-term development and profitability. To clarify this, we take a look at the carriers’ corporate objectives for a clue as to what really happened. All of the domestic airlines’ objectives can be organized into three stages. Short-term objectives focused on air traffic demand in the domestic market; unfortunately, due to heavy air traffic control and limited resources, CAA-Taiwan's policy was to no longer release time slots or renew route allocations. These policies put the new domestic airlines at a great disadvantage with regard to scheduling and route planning. This disadvantage was reflected in the frequencies of departures and route allocations, especially departures from the hub airports TSA and KHH. In the most competitive markets, the TSA/KHH and TSA/TNN business routes, there was either unfair frequency or unfair competition among the different domestic airlines. Medium-term objectives strove for the opportunity to share international air traffic rights with China Airlines or EVA Air, both of which represented the flag carriers of Taiwan. 
The domestic airlines' names “Far Eastern” and “TransAsia Airways” well represent their corporate objective of international (regional air market) route development. Unfortunately, according to CAA-Taiwan regulations, domestic airlines only had the opportunity to serve secondary cities or routes on scheduled international flights. Domestic airlines were thus constrained, facing unfair conditions in the development of international air traffic. In the long term, it was expected that direct cross-strait flights between Taiwan and Mainland China, suspended due to the political struggle since 1949, would be allowed. With the advent of Taiwanese business investments in Mainland China, cross-strait air transportation had recently become a golden route, with high passenger demand and a high-yield market. However, passengers were still required to change airplanes in Hong Kong or Macao, since Taiwan’s airlines are not allowed to fly directly to Mainland China. Domestic airlines were overly optimistic about a political compromise between the two sides of the Taiwan Strait (Taiwan and Mainland China), causing failures in their marketing strategy planning.
Marketing Ecological Communities: Experience from the Eco-Community Pilot Projects in Tainan of Taiwan
Dr. Kang-Li Wu, National Cheng-Kung University, Taiwan
With the promotion of the concepts of sustainable development and ecological design, developing ecological communities has become an important policy goal in Taiwan. However, how the concept of ecological communities should be promoted to potential homebuyers remains an unanswered research question. This paper explores a marketing approach for promoting the concept of ecological communities. Through an examination of the pilot ecological community project at the Tainan Salun High Speed Rail Station in Taiwan, this study identifies the core values, target markets, and demand for services of ecological communities. By incorporating research methods involving interviews, field surveys, STP analysis, and a questionnaire survey, this study finds that demand for the facilities and services of ecological communities is related to household income and to the values of ecological communities perceived by homebuyers. Based on the results of the empirical studies, this study develops a set of marketing strategies for promoting the development of ecological communities. The concept of the ecological community has received widespread attention in Taiwan in the past decade. This concept outlines a vision for building a living environment that integrates considerations of ecological integrity, economic efficiency, and social equity. However, since the concept of the ecological community proposes a new type of community development and lifestyle that differs from current housing-market products, how this concept can be efficiently implemented in current community planning and real estate practices in Taiwan has become a critical issue. 
Employing research methods involving field investigation, in-depth interviews, and STP (segmentation, target market, positioning) analysis of the proposed pilot ecological community project around the Tainan High Speed Rail station of Taiwan, this research attempts to explore two research questions: (1) How should decision-makers in real estate and land development identify the key elements and facilities in building eco-communities in Taiwan? (2) How should we develop suitable marketing strategies to promote eco-communities in the existing real estate market? Through our empirical study, a set of workable marketing strategies is suggested in order to promote the concept of the ecological community in Taiwan. The concept of sustainable development provides a theoretical foundation for the construction of ecological communities. With the promotion of the concepts of sustainable development and ecological design, the notion of ecological communities has received much attention in Taiwan. Sustainable development is a broad concept. According to the Brundtland Commission, sustainable development refers to "development that meets the needs of the present without compromising the ability of future generations to meet their own needs" (WCED, 1987). This popular definition introduces the concepts of long-term environmental sustainability and inter-generational equity, but it has also come under attack for overemphasizing an anthropocentric approach. Other widely cited definitions of sustainable development include those proposed by the International Union for the Conservation of Nature (IUCN) and the World Conservation Union (WCU). 
The International Union for the Conservation of Nature (IUCN, 1986) pointed out that "sustainable development should seek to respond to five broad requirements: (1) the integration of conservation and development, (2) the satisfaction of basic human needs, (3) the achievement of equity and social justice, (4) the provision of social self-determination and cultural diversity, and (5) the maintenance of ecological integrity." The World Conservation Union, in its 1991 report Caring for the Earth, defined sustainable development as "improving the quality of human living within the carrying capacity of supporting ecosystems" (WCU, 1991). These popular definitions, together with related interpretations of sustainable development, provide a new framework for examining many critical housing and community development problems associated with current land development patterns and our behavior toward the use of natural resources. The concept of the ecological community includes the meanings of both ecology and community. It was born out of a need to integrate the concepts of sustainable development and ecological design with those of community design and management. The notion of the ecological community includes an important meaning of "succession," which applies not only in the ecological aspect but also in economic and social aspects. As noted by Roseland (2000), a sustainable (ecological) community is a community that uses its resources to meet current needs while ensuring that adequate resources are available for future generations. The US Environmental Protection Agency defined the meaning of community from an eco-community point of view, pointing out that the key idea is that the people involved in any "community" have a common interest in protecting an identifiable, shared environment and their quality of life (USEPA, 1999). 
In summary, an ecological community resembles a living system and community governing institution where human, natural, and economic elements are interdependent and draw strength from each other. In addition, rather than being a fixed community development pattern, an ecological community will continually adjust itself to meet the social and economic needs of its residents while at the same time striving to preserve the integrity of its supporting ecological system. This new approach to community design and governance provides an alternative model of community development that many researchers suggest can avoid current planning problems such as the destruction of natural capital, unmanaged urban sprawl, declining quality of life, loss of species, and increasing social inequality. Since the community is the key element of spatial planning and regional governance, as well as one of the most important action units for promoting sustainable regional development, implementing the concept of the ecological community may help build a sustainable future aimed at maintaining environmental integrity, promoting economic efficiency, and promoting social equity and environmental justice.
Using Grey Prediction Model to estimate Inbound Visitors to Taiwan
Dr. Ching-Yaw Chen, Shu-Te University, Taiwan
Dr. Pao-Tung Hsu, Shu-Te University, Taiwan
Chi-Hao Lo, Shu-Te University, Taiwan
Yu-Je Lee, Tak-ming College, Taiwan
Che-Tsung Tung, Tak-ming College, Taiwan
This research uses the Grey Prediction Model to estimate the possible number of future visitors to Taiwan, based on the official figures on visitor arrivals to Taiwan published annually. After formulating the most suitable forecast model for estimating the number of inbound tourists, its forecasts are compared with actual annual visitor numbers to assess accuracy and error values. It is intended that the research findings will not only provide the related governmental departments and industry with a foundation on which to base their decision-making, but also act as a reference point for further research by fellow academicians. In recent years, the vibrant growth of the tourism industry has driven up the number of tourists worldwide and brought about a great increase in total economic production. Facing such bright prospects and great potential, finding a way to blend nature and heritage so as to satisfy the conditions of tourism attractiveness and develop the industry is a great challenge. We can see from this that the development of tourism is pivotal to the economy not just of an individual country, but of the world. In addition, because tourism products are perishable, travel demand exhibits peak and low seasons, or seasonal fluctuation, which results in a stable cyclical pattern in the number of tourists to Taiwan. Under such productivity constraints, it is advantageous for countries to estimate the number of tourist arrivals through forecast methodology and then establish the necessary policies to ensure growth in the industry. Therefore, Tourism Demand Forecasting is not just a necessary area of tourism research, but a key one. Given that the progress of tourism has by now become a guide to the economic development of a country, Tourism Demand Forecasting has indeed already developed into a key area of research. 
However, research on an efficient and accurate forecasting method for tourist arrivals, to ensure adequate supply and anticipate future tourism trends, should not be overlooked; this is the first motive of this research. Traditional methods such as the regression model, econometric models, ARIMA or REGARIMA models, artificial neural networks, and the Box-Jenkins model require large amounts of historical data and must also satisfy certain statistical assumptions. The Grey System Theory, on the other hand, can bypass model uncertainty and data incompleteness, and it requires only a minimal data sequence to construct a system model. It uses forecasting and decision-making to explore and understand the system in a more convenient and simplified way. This is the second motive of this research. Given these motivations, we can make use of minimal data to conduct statistical forecasting, avoiding the construction of complicated equations and formulas, to reach an accurate forecast at reduced cost. Hence, this research uses the most suitable model, the "Grey Model with first order and one variable, GM(1,1)," derived from Grey system theory (Deng, 1989), to assess the number of visitor arrivals to Taiwan, and uses related forecasting methods to determine the model's accuracy and error values. It is hoped that the research findings will not only provide the related governmental departments and industry with a foundation for decision-making, but also act as a reference point for further research by academicians. In Taiwan, knowledge of forecasting techniques has gradually matured, and discussions among academicians are frequent and fruitful. The research direction has slowly shifted from discussions of substance to using scientific methods to forecast more accurately. 
The construction of forecast models has also changed with changing research methods, developing from simple linear regression models to more complicated techniques such as neural networks and the REGARIMA model. As for forecast methods for tourism demand, a search of the local and foreign literature shows that tourism demand is usually forecast using more scientific, established methods. To quote several examples: Witt and Martin (1987) used econometric models for forecasting international tourism demand; Wu, Lai and Liu (1992) used different forecasting methods to conduct forecast analysis on the number of inbound visitors; Chan (1993) made use of several forecasting methods to forecast the number of inbound travelers to Singapore; Wong (1997) combined a linear trend and a sine function to forecast the number of inbound travelers to Hong Kong; Chu (1998) used the ARIMA model, combining seasonal and non-seasonal factors, and the sine-wave nonlinear regressive forecasting model to forecast international inbound visitors; and Hu (2002) gave in-depth discussions of the implementation of the complicated Box-Jenkins methodology and ANN modeling techniques in the context of international tourism demand forecasting. Unlike many previous studies in tourism demand forecasting that used simple ranking comparisons, these scientific methods used an overall performance index (OPI) to assess forecasting techniques' overall performance. Cho (2003) made use of three types of time-series forecast techniques (exponential smoothing, univariate ARIMA, and artificial neural networks, or ANN) to forecast the number of international inbound visitors to Hong Kong. Preez and Witt (2003), who processed an empirical investigation of tourism demand from four European countries to the Seychelles, showed an absence of such a "rich" structure, and that ARIMA exhibited better forecasting performance than univariate and multivariate state-space modeling. 
They also showed “One implication that an absence of a ‘rich’ cross-correlation structure holds for econometric modeling is that explanatory variables which are strongly correlated with the tourist flow series are likely to be uncorrelated across origin countries”.
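The GM(1,1) procedure the abstract relies on (accumulated generating operation, least-squares estimation of the development coefficient a and grey input b, and an exponential time-response function) can be sketched in a few lines. This is a minimal illustration with standard-library Python only; the sample arrivals series and all variable names are our own, not the paper's data.

```python
from math import exp

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to series x0 and forecast `steps` ahead."""
    n = len(x0)
    # 1. Accumulated generating operation (AGO): x1(k) = sum of x0(1..k)
    x1 = [x0[0]]
    for v in x0[1:]:
        x1.append(x1[-1] + v)
    # 2. Background values: z1(k) = mean of consecutive AGO terms
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # 3. Least squares for the grey equation x0(k) = -a*z1(k) + b
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(p * q for p, q in zip(z, y))
    det = m * szz - sz * sz
    a = -(m * szy - sz * sy) / det        # development coefficient
    b = (szz * sy - sz * szy) / det       # grey input
    # 4. Time-response function: x1_hat(t) = (x0(1) - b/a)*exp(-a*(t-1)) + b/a
    def x1_hat(t):
        return (x0[0] - b / a) * exp(-a * (t - 1)) + b / a
    # Forecast by inverse AGO (first differences of the fitted AGO series)
    return [x1_hat(n + s) - x1_hat(n + s - 1) for s in range(1, steps + 1)]

# A hypothetical arrivals series growing about 10% a year
arrivals = [100.0, 110.0, 121.0, 133.1, 146.41]
print(gm11_forecast(arrivals, steps=1))
```

Because GM(1,1) assumes roughly exponential behavior in the accumulated series, it tracks a geometric arrivals series closely from only five observations, which matches the abstract's point that the grey model needs a minimal data sequence and no distributional assumptions.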
Fuzzy Neural Model for Bankruptcy Prediction
Dr. Chokri Slim, Manouba University, ISCAE, Tunisia
In this paper, we present a novel fuzzy neural model (FNM) as an alternative technique to both Linear Discriminant Analysis (LDA) and classical backpropagated neural networks (CBNNs) for forecasting corporate solvency. Each method is applied to a dataset of 68 bankrupt and non-bankrupt Tunisian firms for the period 2000-2005. The results of this study indicate that the FNM provides superior results to those obtained from either the LDA method or CBNNs using the standard backpropagation algorithm. Beaver's (1966) study is considered the pioneering work on bankruptcy prediction models. For Beaver, the firm is viewed as a "reservoir of liquid assets, which is supplied by inflows and drained by outflows. A firm's solvency can be defined in terms of the probability that the reservoir will be exhausted, at which point the firm will be unable to pay its obligations as they mature." Using this framework, the author states four propositions: the larger the reservoir, the smaller the probability of failure; the larger the net liquid-asset flow from operations, the smaller the probability of failure; the larger the amount of debt held, the greater the probability of failure; and the larger the fund expenditures for operations, the greater the probability of failure. In recent years, much attention has been given to the choice of methodology for bankruptcy prediction models. Methods like recursive partitioning, neural networks, and genetic programming are commonly applied to the bankruptcy prediction problem. Morris (1998) offers a survey of both new and traditional approaches to bankruptcy prediction. Recent work, such as that of Poddig (1995), suggests that CBNNs are a superior methodology to LDA in terms of the ability to accurately predict corporate distress. At first glance, this should come as no surprise, as CBNNs are capable of constructing highly complex decision surfaces, while LDA is limited to linear hyperplanes. 
Moreover, CBNNs are not encumbered by unequal group dispersions or group distributions that stray from the multivariate normal. CBNN models do, however, have drawbacks, which have attracted some criticism in recent years (Shah, 1992). In this paper, we develop a neuro-fuzzy system to predict the probability of business failure, where the rules are induced through an approach that combines neural networks and fuzzy logic, typically referred to in the literature as a "neuro-fuzzy approach." Accordingly, Section 2 briefly reviews some fundamental notions of fuzzy systems and neuro-fuzzy models. Section 3 describes the FNM, while Section 4 presents the results and performance evaluation of its implementation in bankruptcy prediction. Finally, Section 5 advances some conclusions and recommendations for further research. Since its introduction by Zadeh (1965), fuzzy set theory has found applications in a wide variety of disciplines. Modeling and control of dynamic systems belong to the fields in which fuzzy set techniques have received considerable attention, not only from the scientific community but also from industry. However, many systems are not amenable to conventional modeling approaches due to a lack of precise, formal knowledge about the system, strongly nonlinear behavior, high degrees of uncertainty, or time-varying characteristics. Fuzzy modeling, along with related techniques such as neural networks, is recognized as a powerful tool that can facilitate the effective development of models (Chokri et al., 2003). Further, fuzzy models can be seen as logical models that use typical "if-then" rules to establish qualitative relationships among the variables in the model. In essence, fuzzy sets serve as a smooth interface between the qualitative variables involved in the rules and the numerical data at the inputs and outputs of the model. 
The rule-based nature of fuzzy models allows the use of information expressed in the form of natural-language statements and consequently makes the models transparent to interpretation and analysis. At the computational level, fuzzy models can be regarded as flexible mathematical structures, similar to neural networks, which can approximate a large class of complex nonlinear systems to a desired degree of accuracy. Recently, a great deal of research activity has focused on developing methods to build or update fuzzy models from numerical data, in order to generate fuzzy models automatically from measurements. The purpose of fuzzy logic is to map one space (input) to another (output) with relative precision, normally using if-then rules. It is a better tool for simulating human thinking and allows the computer to understand and compute in much the same way as a human being. The advantages of fuzzy logic include ease of understanding; tolerance for imprecise data; the ability to bring human knowledge directly into the system; smooth integration with other systems, for example neural networks and control systems; stronger power in solving difficult nonlinear problems; and great flexibility. In a fuzzy system, the process of generating output begins with taking the inputs and fuzzifying them, and then executing all the active rules from the rule bases. The outputs of the active rules are aggregated into a single output, and after defuzzification, a new output is generated. The first step of fuzzy logic is to define a fuzzy set. The second step is to define a membership function (MF), a function or curve that defines how each point in the input space (sometimes called the universe of discourse) is mapped to a value in the output space (a value between 0 and 1). The next step is to define rules. Fuzzy logic implementation thus has four steps: fuzzification, rule execution, aggregation, and defuzzification (Takagi, 1995).
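The four steps just described (fuzzification, rule execution, aggregation, defuzzification) can be illustrated with a toy single-input fuzzy system. The membership functions, the two-rule base, and the "risk" consequent values below are invented for illustration only; they are not the paper's FNM, whose rules are induced by the neuro-fuzzy learning procedure.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def failure_risk(liquidity):
    # Steps 1-2: fuzzify the input against "low" and "high" fuzzy sets
    low = tri(liquidity, -1.0, 0.0, 1.0)
    high = tri(liquidity, 0.0, 1.0, 2.0)
    # Step 3: fire the rules; each pairs a firing strength with a consequent:
    #   "if liquidity is low then risk is high (0.9)"
    #   "if liquidity is high then risk is low (0.1)"
    rules = [(low, 0.9), (high, 0.1)]
    # Step 4: aggregate and defuzzify by the weighted-average method
    den = sum(w for w, _ in rules)
    return sum(w * r for w, r in rules) / den if den else 0.5

print(failure_risk(0.0))   # low liquidity: risk 0.9
print(failure_risk(1.0))   # high liquidity: risk 0.1
print(failure_risk(0.5))   # partially in both sets: risk between the two
```

The weighted-average defuzzification here is the simple Sugeno-style variant; a Mamdani system would aggregate output fuzzy sets and take a centroid, but the four-step flow is the same.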
Mergers and Scale Economies in Taiwan’s CPA Firms
Dr. Chung-Cheng Yang, National Yunlin University, Taiwan
Tsung-Yi Tsai, Shin Chien University Kaohsiung Campus, Taiwan
A Translog cost function model is specified to analyze the relationship between mergers and scale economies among CPA firms in Taiwan. Estimation of the model uses balanced panel data for 120 CPA firms for the period 1997-2001. The survey data are based on the "Survey Report of CPA Firms in Taiwan" from the Department of Statistics, Ministry of Finance, Taiwan, ROC. The empirical results indicate that scale economies are prevalent in Taiwan's public accounting industry. Taiwan's public accounting industry underwent a brief period of dramatic change in recent years. After a spate of mergers, by the end of 1999 the Big 6 CPA firms had become the Big 5. The recent increase in merger activity is often attributed to Certified Public Accounting (CPA) firms' efforts to avail themselves of scale economies (Banker et al., 2003), so merger has been an important strategy adopted by CPA firms for growth and development. In an ever-changing environment, new management strategies are designed and undertaken, and CPA firms have also consolidated so that their professional skills could be combined and their services made more comprehensive. Mergers have benefited CPA firms by increasing scale economies, allowing multi-dimensional development, increasing market share, combining resources, reducing total costs, and maintaining the market standing of the firms (Wootton et al., 1990, 1994). The United States General Accounting Office (GAO, 2003) also pointed out that mergers could produce scale economies in the public accounting industry. We address the question of whether these incentives are consistent with merger activities among Taiwan's CPA firms by constructing and estimating the public accounting industry's cost function using balanced panel data published in the "Survey Report of CPA Firms in Taiwan" for the years 1997-2001 for 120 CPA firms in Taiwan, ROC. 
In the past, research on CPA firm mergers often focused on auditor concentration, market share, and audit fees (Minyard and Tabor, 1991; Payne and Stocks, 1998; Tonge and Wootton, 1991; Wootton et al., 1990, 1994). Mergers could lead to an increase in market share and market power (Lee, 2005) and, by increasing market share, CPA firms could also increase profit margins (Owen, 2003). Although prior research acknowledges the benefits of mergers, they have not been examined in Taiwan's public accounting profession. To observe these phenomena, we include merger variables in establishing the CPA firms' cost function. Our empirical results show that scale economies prevail in the public accounting industry in Taiwan. The remainder of the paper is organized as follows. Section 2 constructs the cost function. Section 3 describes the data and the estimation model. Section 4 explains the empirical results, and the last section offers conclusions. Considering the diversified services offered by CPA firms, Banker et al. (2003) defined three output variables: accounting and auditing services, tax services, and management advisory services. Cheng, Wang and Weng (2000) and Chen, Chen, and Lee (2002) defined the three output variables for CPA firms in Taiwan as auditing revenue (Y1), tax revenue (Y2), and consultation revenue and other income (Y3). We use these three variables as the composite elements of total output (total revenue, Y). Ordinarily, CPA firms have four main inputs: labor, computer and communication devices, office facilities, and buildings and other facilities. Let the factor prices for these inputs be PL, PI, PO, and PR, respectively. However, since the public accounting profession is a knowledge-intensive and labor-intensive industry (in which knowledge has an important marketable value), labor is the most important input for production. 
To simplify the analysis, this paper follows Hicks (1946) in isolating the labor input and treating the other three inputs as a composite good, namely capital input (K). In addition, let the corresponding factor price be PK, which can be considered a price index formed by the corresponding factor prices PI, PO, and PR. A dummy variable M is included to examine the effects of mergers (1 = merger; 0 = non-merger). In examining the relationship between mergers and scale economies, the change in the number of branches (B) must also be considered. In theory, the behavior of a company can be analyzed from the production side, the cost side, or the profit side. Since this paper observes the relationship between mergers and scale economies, we address the question from the cost side. Regarding the behavior of CPA firms, we hypothesize that CPA firms attempt to produce a given level of output at minimum cost, from which the cost function for CPA firms is derived. Christensen, Jorgenson and Lau (1973) proposed the Translog cost function for its agreement with theory, simplicity of calculation, and flexibility. In the Translog cost function of this paper, b denotes the estimated coefficients, with i, q = 1, 2, 3 and i ≠ q, and j, r = L, K and j ≠ r. In addition, the cost function must satisfy the criterion of first-degree homogeneity in factor prices. Caves et al. (1985) defined scale economies through a scale elasticity measure (SCALE): if SCALE is greater than one, then scale economies are prevalent in the public accounting industry, and CPA firms can reduce their average cost by providing a broader range of professional services and increasing the number of branches. 
Because the economic model of the CPA firms' cost function is estimated on balanced panel data, it is possible that other explanatory variables were neglected, or that approximation error and unpredictable random behavior occurred. Thus, an error term (e) is added to Eq. (4), and the annual regression estimation model is rewritten accordingly.
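To make the SCALE criterion concrete: in a Translog cost function of the form lnC = b0 + sum_i b_i lnY_i + (1/2) sum_i sum_q b_iq lnY_i lnY_q (plus price terms), the cost elasticity of output i is dlnC/dlnY_i = b_i + sum_q b_iq lnY_q, and a common scale measure is the inverse of the sum of these elasticities. The sketch below uses invented coefficient values, not the paper's estimates, and assumes outputs are normalized so that lnY = 0 at the sample mean.

```python
# Hypothetical first- and second-order output coefficients of a fitted
# Translog cost function (b_iq stored symmetrically); illustrative only.
b = {1: 0.35, 2: 0.25, 3: 0.20}
biq = {(1, 1): 0.02, (2, 2): 0.01, (3, 3): 0.015,
       (1, 2): -0.01, (1, 3): -0.005, (2, 3): -0.008}

def cost_elasticity(i, lnY):
    """e_i = dlnC/dlnY_i = b_i + sum_q b_iq * lnY_q."""
    return b[i] + sum(biq[(min(i, q), max(i, q))] * lnY[q] for q in (1, 2, 3))

def scale_measure(lnY):
    """SCALE = 1 / sum_i e_i; SCALE > 1 indicates scale economies."""
    return 1.0 / sum(cost_elasticity(i, lnY) for i in (1, 2, 3))

# At the normalization point (lnY = 0) the elasticities reduce to b_i,
# summing to 0.80, so SCALE = 1.25 > 1: costs rise less than
# proportionally with output, i.e. scale economies.
print(scale_measure({1: 0.0, 2: 0.0, 3: 0.0}))
```

With these illustrative coefficients a proportional expansion of all three service lines raises cost by only about 80% as much, the kind of result the paper reports for Taiwan's CPA firms.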
Antecedents of Turnover Intention toward a Service Provider
Dr. Ipek Kalemci Tuzun, Baskent University, Ankara, Turkey
This paper reports an investigation of the variables that may be predictive of turnover intention and tests a model that includes mediating variables. A total of 578 bank employees participated in the study. Participants completed measures of organizational identification, perceived external prestige, turnover intention, and job satisfaction. Structural equation modeling with LISREL 8.30 (Jöreskog and Sörbom, 1993) was used to assess the research model. The results indicate that PEP is positively related to job satisfaction and identification. Identification and job satisfaction are in turn negatively related to turnover intentions. The findings show that the relationship between perceived external prestige and turnover intentions is mediated by both job satisfaction and identification. The current paper is useful for understanding the determinants of turnover intention. Future research should apply a longitudinal design to fully understand the process of turnover intention, and this study could be repeated with a larger sample with a wider range of demographic and socio-cultural features. The study of employee turnover attracts academic attention in the field of human resources management. By identifying the determinants of turnover, researchers can predict turnover behaviors (Mobley et al., 1978; Newman, 1974). Scholars have studied job satisfaction and identification, but little attention has been paid to the role of perceived external prestige as a determinant of turnover (Carmeli, 2005; Herrbach et al., 2004). This article attempts to link perceived external prestige (PEP) and turnover intention through job satisfaction and identification. These variables were selected because research has shown their influence on important human resources outcomes such as turnover. The current study suggests a mediation model in which the relationship between PEP and turnover intention is mediated by both identification and job satisfaction. 
Turnover intention is an individual's own estimated (subjective) probability that he or she will permanently leave the organization at some point in the near future (Vandenberg and Nelson, 1999, p. 1315). Intention to quit is probably the most important antecedent of the turnover decision (Elangovan, 2001). PEP refers to the employee's own beliefs about how people outside the organization evaluate its status and prestige (Smidts et al., 2001). Several authors propose that PEP affects organizational identification (Mael and Ashforth, 1992; Pratt, 1998; Smidts et al., 2001). The strength of a member's organizational identification has been shown to relate to positive organizational behaviors (Pratt, 1998; Schrodt, 2002; Wiesenfeld et al., 1998). A sense of identification reflects the degree to which a member associates him- or herself with the organization's goals and values (Miller et al., 2000). Moreover, members who identify with an organization may be more likely to exhibit supportive behavior toward the organization (Shamir, 1990). Previous research found that positive identification with the organization improves motivation, job satisfaction, and commitment, while decreasing turnover and conflict within the organization (Pratt, 1998). What members think outsiders think about their organization (Dutton and Dukerich, 1991) influences members' identification with the organization. Carmeli (2005) argues that the more favorable the organization's prestige, the higher the employees' job satisfaction and identification. Identification with the organization has also been investigated as a negative predictor of intention to quit in much research (Abrams et al., 1998; Mael and Ashforth, 1995; van Knippenberg and van Schie, 2000; Harris and Cameron, 2005), and several models have postulated job satisfaction to be an antecedent of turnover (Williams and Hazer, 1986; Farkas and Tetrick, 1989; Spector, 1997; Tan and Akhtar, 1995). 
The main aim of this research is to determine the relationships among PEP, job satisfaction, identification, and turnover intention. The paper offers a model in which PEP influences turnover intention with job satisfaction and identification as mediators. The present study examines the mediating role of job satisfaction and identification in predicting turnover intentions in a multicultural, non-Western environment. The research is based on a survey of a sample of 578 bank employees. PEP is also called "construed external image" (Dutton et al., 1994). Dukerich, Golden and Shortell (2002) found that the attractiveness of the construed external image is strongly related to the strength of an employee's identification with the organization. Many definitions of organizational identification exist in the literature. Mael and Ashforth (1992, p. 103) define organizational identification as "the perception of oneness with or belongingness to an organization, where the individual defines him or herself in terms of the organization(s) of which he or she is a member." Dutton, Dukerich and Harquail (1994) define organizational identification as the extent to which an organizational member defines his or her self-identity in terms of the organization's identity. Dukerich, Golden and Shortell (2002, p. 194) state that organizational identification is "a cognitive linkage between the definition of the organization and the definition of the self." When an individual perceives the organization's construed external image as increasingly positive, the individual may be more inclined to identify with the organization (Dutton et al., 1994). When organization members believe that important outsiders or stakeholders see the organization positively, identification with the organization is enhanced. Thus, the more prestigious the employee perceives the organization to be, the greater the potential for identification (Smidts et al., 2001). 
Moreover, PEP (Smidts et al., 2001) is also known as organizational image. A favorable image fosters the identification of an employee with his or her organization (Dutton and Dukerich, 1991). Job satisfaction is defined as how people feel about their jobs and about different aspects of their jobs (Spector, 1997). Job satisfaction is a function of the perceived relationship between what one wants from one's job and what one perceives it offers (Locke, 1969). According to Fogarty (1994), job satisfaction refers to the extent to which employees gain enjoyment from their efforts in the workplace. Satisfaction is a positive or negative evaluative judgment one makes about one's job or job situation (Weiss, 2002). When an employee has a higher level of job satisfaction, he or she can develop more positive attitudes toward the job. Moreover, PEP provides employees with a basis for comparison with working in other companies: if jobs likely to be found elsewhere do not appear to be better than those with the present employer, the individual will have less reason to want to leave (Price, 2001). If the employee perceives his or her organization to be better than others, he or she will not leave the organization. An employee is less likely to be satisfied with an organization if he or she does not perceive a certain level of prestige related to the organization (Carmeli, 2005). Carmeli (2005) claims that the more favorable the organization's prestige, the higher the employees' job satisfaction and identification. Herrbach, Mignonac and Gatignon (2004) propose that perceived external prestige has an indirect effect on turnover intentions through job satisfaction. When outsiders perceive the company positively, this positively influences a cognitive bias in the evaluative process on which satisfaction is based. 
A strong PEP influences how individuals evaluate their work: an employee high in positive affect may selectively perceive the favorable aspects of the job, thereby increasing his or her job satisfaction (Herrbach et al., 2004).
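The mediation structure the abstract describes (PEP affecting turnover intention through job satisfaction and identification) can be sketched with a simple regression-based decomposition on simulated data. All variable names, coefficient values, and the data draw below are hypothetical illustrations, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 578  # sample size matching the study's bank-employee survey

# Simulated standardized scores (illustrative data, not the study's).
pep = rng.normal(size=n)
satisfaction = 0.5 * pep + rng.normal(size=n)      # PEP -> job satisfaction
identification = 0.4 * pep + rng.normal(size=n)    # PEP -> identification
turnover = -0.3 * satisfaction - 0.2 * identification + rng.normal(size=n)

def ols(y, *xs):
    """Return OLS coefficients (intercept first) of y on the given regressors."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Path a: PEP -> mediator; path b: mediator -> turnover (controlling for PEP
# and the other mediator). The indirect effect is the product a * b.
a_sat = ols(satisfaction, pep)[1]
b_sat = ols(turnover, satisfaction, identification, pep)[1]
indirect_via_satisfaction = a_sat * b_sat
print(f"indirect effect of PEP via satisfaction: {indirect_via_satisfaction:.3f}")
```

With these simulated coefficients, the indirect effect is negative: higher perceived prestige raises satisfaction, which in turn lowers turnover intention, mirroring the mediated path the model proposes.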
An Analysis of the Major Source of Finance for Small Businesses in Developing Countries
Kisembo K. Deogratius, Breyer State University, London Centre
Many smaller businesses in developing countries end up using the owner's personal assets to start up and/or stay afloat, and this creates obvious problems. These assets are often needed for other purposes, and they can run out quickly, leaving the (former) business owner in serious trouble when he can no longer pay his bills, his business goes under, and he has nothing left in the bank with which to save himself. When this happens, the individual may have to sell assets or property in order to keep the business going, pay bills, and get out of the debt created through the business. While unfortunate, this is all too common among small businesses, especially in developing countries, where banking options are more expensive than in more developed nations. This paper therefore addresses the issue of funding for small-scale businesses in developing nations. The discussion concentrates on foreign direct investment (FDI) as the only way to rescue small businesses in developing nations from financial constraints. To fully understand how businesses in developing countries continue to grow and prosper, several issues must be considered. The main concerns are foreign direct investment (FDI), the growth taking place in developing countries, and case studies of various areas such as Africa. The African region is of great importance to the study of developing countries and FDI because it has small and medium-sized businesses as well as big business, and many of these businesses get financial help from companies and corporations in other countries through foreign direct investment. It is becoming increasingly difficult, however, for smaller businesses to receive FDI, because big businesses seem to take much of it.
Large businesses have bigger advertising budgets, and they are able to make presentations to companies, banks, and others as to why they should receive the funding. Smaller businesses cannot compete in this way and instead must often seek funding from other sources, including their own finances. It is not only the large businesses in developing countries that get help, although they are the majority, nor is it only the businesses that move from other, more developed countries into less developed ones. All businesses in all countries can be affected by FDI, and it reaches smaller businesses more often than many people realize, provided the business is able to catch the attention of larger corporations or of other sources of funding such as the World Bank or other bank credits. For these reasons, FDI is addressed first here, since an understanding of it is necessary for the rest of the paper. Foreign direct investment has been around for some time, and this history is important to understand. More recently, however, FDI has moved into many more countries, quite a few of which are still developing, and many of which have a multitude of small businesses, such as those found in villages and small towns across Africa. Those that have invested in already developed countries have, in general, done well with these investments, because the economies of those countries are growing strongly. Those that invest in developing countries are also doing well, but in a more long-term way. When someone, or some business, invests in a country that is still developing, there is no great expectation of immediate wealth. Many of these countries do not have a lot of money, and their economies are troubled and somewhat sluggish. Since the economies of these countries are slow to perform, the businesses in these countries share the same problems.
This is especially true of smaller businesses, because they cannot support themselves as strongly as larger, more established businesses can. Despite this, these countries are developing, and development has accelerated in recent years as society becomes more global and as outsourcing takes place. Because these countries are starting to expand and grow, they are becoming more interesting to investors and other business individuals. As their growth increases, so will the volume of direct investment required for this rapid expansion. In turn, this will stimulate further growth, boosting their economies and prompting others to invest in these countries more strongly. On this basis, not only will the investors prosper, but the business communities in developing countries will prosper as well, benefiting everyone involved. This is not to say that foreign direct investment is always good, or that it helps every country completely, because economic growth brings changes to the way people live and work, and some of these changes are not always wanted. Some changes are welcome, such as more people having nicer things thanks to new technology and a better economy. Others are not: FDI can be a source of problems such as unemployment due to mechanisation, and of societal unrest such as criminal activity targeting those who have wealth. Overall, however, foreign direct investment appears to be a fast-moving and growing practice that is providing much to small businesses in developing countries, although without further study of the issue this cannot be completely determined. At this point, it is necessary to understand exactly what foreign direct investment means. According to the IMF and OECD, direct investment reflects the aim of obtaining a lasting interest in an enterprise in another economy, through money, land, or some other contribution to that enterprise.
This "lasting interest" implies a long-term relationship between the investor and the investment enterprise, as well as a significant degree of influence by the investor over the management of the enterprise. Naturally, where foreign direct investment is concerned, this kind of enterprise is located in a country other than the investor's own. In short, foreign direct investment, or FDI, occurs when a company in one country provides money, capital, and other resources to expand its business in another country. The country the company invests in is called the 'host country,' and it often receives many benefits (as well as some problems) from allowing other companies to come in and set up shop there. It is not always the host country as a whole, however, that is invested in. Sometimes it is an industry in that country, or a specific business, and this is often where small businesses lose out, because they are overlooked or not judged important enough to be offered the chance at a large amount of FDI. FDI nevertheless provides many things for both the host country and the company or corporation that has moved in, and this is a delicate balance that must be adjusted frequently to ensure a minimum number of problems and a maximum number of benefits for everyone involved.
Post-Offering Performance of Convertible Bond Issuers: The Information Effect of Poison Put Covenants
Dr. Claudia Kocher, University of Michigan-Dearborn, Dearborn, MI
Dr. Hei Wai Lee, University of Michigan-Dearborn, Dearborn, MI
The objective of this study is to examine the information content of convertible bond offerings with and without poison puts by comparing long-term changes in the operating performance of issuing firms around their convertible bond offerings. The results show that poison put users experience smaller decreases in operating profit margin from the pre-issuance period through the first two fiscal years following issuance, especially in the 1990s. These results are not surprising, given the relatively favorable market reactions reported in Nanda and Yun (1996) for convertible bond issuers who use poison puts. The use of poison put covenants in convertible bond contracts appears to convey a favorable information effect regarding the issuing firm's operating performance for users relative to non-users. Event risk covenants (ERCs) are a recent innovation in corporate bond contracts. The stated purpose of ERCs is to protect bond investors from wealth losses due to the issuer's leverage-increasing restructuring events. Poison puts are the most common form of ERCs; they are designed to allow bondholders to redeem their bonds prior to maturity, often for par value plus a specified premium, if a predefined change in the ownership of the issuing firm occurs and the bond is downgraded. The earliest poison puts, introduced in the U.S. in 1986, required that restructuring events be declared hostile by the issuing firm's management in order for the covenants to be triggered. After the RJR Nabisco leveraged buyout in late 1988, this declaration requirement was dropped. Poison puts remained popular in the 1990s. David (2001) reports that $141 billion of bonds with poison puts, accounting for 15% of all debt issued by non-financial corporations, were issued during the period 1991-1997. While much of the literature on event risk covenants and poison puts focuses on their use by issuers of nonconvertible bonds, convertible bond issuers also use poison puts.
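The trigger mechanics described above (early redemption at par plus a premium, conditional on both a change in ownership and a downgrade) can be sketched as follows. The function name, par value, and premium figure are illustrative assumptions, not terms drawn from any actual bond contract.

```python
from typing import Optional

def poison_put_redemption(par: float, premium_pct: float,
                          change_of_control: bool,
                          downgraded: bool) -> Optional[float]:
    """Early-redemption amount if the poison put is triggered, else None.

    Sketches a post-1988-style covenant: triggered by a change of control
    plus a downgrade, with no requirement that management declare the
    restructuring event hostile.
    """
    if change_of_control and downgraded:
        return par * (1 + premium_pct / 100)
    return None

# Both conditions must hold; a change of control alone does not trigger the put.
print(poison_put_redemption(1000.0, 1.0, True, True))   # triggered: 1010.0
print(poison_put_redemption(1000.0, 1.0, True, False))  # not triggered: None
```

The two-condition trigger is what distinguishes poison puts from plain change-of-control puts: bondholders are protected only when the restructuring actually impairs credit quality.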
Nanda and Yun (1996) document that poison put use in convertible bonds issued by NYSE and AMEX firms increased from 7.7% in 1987 to 88.9% in 1989. David (2001) reports that issuers continued to include poison puts in their bond offerings through the mid-1990s. After 1997, the use of poison puts by issuers of convertible bonds decreased dramatically. In the first study to examine stock price reactions to the issuance announcement of convertible debt with and without poison puts, Nanda and Yun (1996) find that issuers of convertible bonds with poison puts experienced significantly less negative returns than those associated with the issuance of convertible bonds without poison puts. The average abnormal return for convertible bonds with poison puts is an insignificant -0.55%, while that for convertible bonds without poison puts is -1.78%, which is statistically significant at the 5% level. While several studies have examined stock price reactions to the announcement of bonds with poison puts, none has investigated the relation between poison put use and the post-issue operating performance of the issuing firm. Our research addresses this gap in the literature and seeks to provide further insight into the information content of poison put covenants in convertible bond contracts by examining long-term changes in the operating performance of issuing firms around their convertible bond offerings. Consistent with the difference in stock market reactions to announcements by the two groups of bond issuers (users versus non-users of poison puts), our results show that issuers of convertible bonds with poison puts experience significantly less unfavorable post-issue operating performance than those without poison puts.
While the literature documents that convertible bond offerings are generally associated with unfavorable information content regarding the issuing firms, Nanda and Yun (1996) and our study show that the use of poison puts by convertible bond issuers can convey the quality of the issuing firms and hence mitigate the negative stock price reactions to their issuance announcements. Jensen and Meckling (1976) show that stockholders can decrease the stockholder-bondholder conflict through the use of protective covenants or convertible debt. Lehn and Poulsen (1991) examine covenant use in 1989, after the RJR Nabisco leveraged buyout, and provide evidence that bond issuers use poison puts instead of conversion options to mitigate stockholder-bondholder agency problems due to event risk. Thus, their findings suggest that poison puts are substitutes for convertible bonds. Nanda and Yun (1996), however, examine convertible bond covenants during the period 1986-1992, and note that poison puts were included in approximately 80% of convertible bond contracts for bonds issued between 1988 and 1992, even though they were introduced only two years earlier, in 1986. Their results suggest that poison puts are complements to convertible bonds. The existing literature on poison puts focuses on two hypotheses that explain their use (see Cook and Easterwood, 1994; Nanda and Yun, 1996; Roth and McDonald, 1999). The stockholder wealth hypothesis states that poison puts reduce agency conflicts between stockholders and bondholders, and hence maximize shareholder wealth. The entrenchment hypothesis states that poison puts increase the cost and decrease the likelihood of hostile takeovers, resulting in increased job security for managers. Several studies examine stock price reactions to issuance announcements of bonds that include poison puts, and find statistically significant positive impacts of the poison put provision on the market reactions.
Cook and Easterwood (1994) and Roth and McDonald (1999) find a less negative reaction to the issuance announcement of nonconvertible bonds with poison puts than that for nonconvertible bonds without poison puts. Nanda and Yun (1996) document similar findings on differential stock price reactions for the case of convertible bond issuance. These findings are consistent with the stockholder wealth explanation for the role of the poison put provision in bond covenants.
Impact of Exchange Rate Changes on Domestic Inflation: The Turkish Experience
Dr. Cem Saatcioglu, Istanbul University, Istanbul, Turkey
Levent Korap, Marmara University, Istanbul, Turkey
Dr. Ara Volkan, Florida Gulf Coast University, Fort Myers, Florida
This paper examines the extent to which changes in exchange rates result in changes in Turkish domestic inflation. Specifically, we determine whether there has been a change in the magnitude of this impact from the pre-2003 period to the post-2003 period, when the exchange rates were allowed to float. Employing monthly frequency data, we estimate two sets of impulse-response functions and pass-through coefficients, one for the April 1994-December 2002 period using 1994 price indices as the base (100) and the other for the January 2003-December 2006 period using 2003 price indices as the base (100). We confirm that exchange rate shocks feed into domestic inflation, first at the level of manufacturers' prices and then at the level of consumer prices, and that the impact of the shocks on the price variables differs across the stages of the supply chain. Our findings indicate that the magnitude of the impact declined in the post-2003 period by nearly one-half compared to the pre-2003 period during the early stages of the production process, reflecting the predominance of the manufacturer price index in determining Turkish inflation rates. In addition, the decline in the exchange rate pass-through impact on domestic prices coincides with a 25 percent decline in post-2003 consumer price inflation. Regardless, the impact of exchange rate changes on the domestic inflationary process remains important when establishing monetary policies for the Turkish economy. The transmission of exchange rate fluctuations to domestic inflation rates has been an issue of interest in the contemporary economics literature. From a developing country perspective, exchange rate stabilization policies have serious consequences for the efficiency of other ex-post economic policy implementations. The Turkish economy constitutes an interesting case study, having been subject to chronic, double-digit inflation over the 1983-2002 period.
While Ertuğrul and Selçuk (2002) present a brief outline of the Turkish economy for the post-1980 period, an extensive literature review of Turkish inflation can be found in Kibritcioglu (2001) and Saatcioglu and Korap (2006). In 2000, an anti-inflationary stabilization program led by a quasi-currency board was established to fight domestic inflation (Ozdemir and Sahinbeyoglu, 2000). The board set fixed domestic currency exchange rates against foreign currencies, aiming to anchor economic agents' expectations of exchange rate parities. While this approach seemed successful in halving inflation during the first 10 months of its implementation, the two subsequent economic crises ended the program with a huge depreciation and a loss in real incomes. Dornbusch (2001), Eichengreen (2001), Uygur (2001), Alper (2001), Ertugrul and Yeldan (2002), Akyuz and Boratav (2003), and Ekinci and Erturk (2007) critically analyze the reasons for and the outcomes of the Turkish 2000 stabilization program. Currently, Turkish policy makers are trying to establish an inflation targeting (IT) framework supported by a free-floating exchange rate system, with annual targets explicitly announced through the Central Bank of the Republic of Turkey (CBRT). In this manner, a forward-looking policy stance is provided and presented as the main characteristic of the IT framework (Leigh and Rossi, 2002). This paper conducts an empirical analysis to reveal the extent to which changes in exchange rates result in changes in Turkish domestic inflation. Specifically, we determine whether there has been a change in the magnitude of this impact from the pre-2003 period to the post-2003 period, when the exchange rates were allowed to float. The next section examines the process by which exchange rate changes pass through into domestic inflation. Then, we estimate empirical models of this process in the Turkish economy during the 1994-2002 and 2003-2006 periods.
Next, we conduct sensitivity analyses for the 2003-2006 period to demonstrate any changes that have occurred in the pass-through relationships established for the 1994-2002 period. The final section presents our summary, conclusions, and suggestions for future research. As emphasized by Choudhri and Hakura (2006), an important policy debate for contemporary monetary and exchange rate policy is to reveal the degree to which changes in exchange rates or import prices pass through into domestic consumer prices. Related to this policy issue is the fact that a low exchange rate pass-through gives policy makers freedom to pursue an independent monetary policy. Campa and Goldberg (2002) highlight such a process, in which low import-price pass-through of nominal exchange rate fluctuations leads to lower expenditure-switching effects in the domestic economy, thereby leaving monetary policy free to deal with real shocks. Otherwise, shocks transmitted through the pass-through effects of import prices and exchange rates make the domestic economy fragile and susceptible to trade linkages. Using data from the OECD countries, they estimate that macro variables do not have high explanatory power in describing the pass-through process. Instead, the composition of industries in a country's import basket plays a much more important role in determining the pass-through. In this sense, import-price pass-through would mainly reflect the pricing behavior of foreign firms. Frankel et al. (2005) give a brief summary of the factors affecting the pass-through of changes in import prices or exchange rates, via devaluations, to the inflationary framework in developed and developing countries. They emphasize that pass-through effects have historically been much higher in poor countries than in rich ones and are significantly higher in an environment of high inflation. They also observe that pass-through effects declined significantly in the 1990s.
They attribute this decline to barriers to arbitrage between different countries as well as to the ‘pricing to market’ phenomenon of Krugman (1987), indicating price discrimination by firms in different countries where foreign producers adjust their mark-ups to maintain a stable market share in the domestic economy thereby reducing the rate of pass-through (Korhonen and Wachtel, 2006).
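The estimation strategy the abstract describes, fitting a VAR to monthly series and reading a cumulative pass-through coefficient off the impulse responses, can be sketched on simulated data. The series, the VAR(1) lag order, and every coefficient below are illustrative assumptions, not the Turkish data or the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200  # hypothetical number of monthly observations

# Simulated monthly log-changes: exchange rate depreciation, then inflation.
# (Illustrative data only; the study uses Turkish price indices, 1994-2006.)
A_true = np.array([[0.3, 0.0],   # depreciation is mildly persistent
                   [0.4, 0.2]])  # depreciation feeds into domestic prices
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.01, size=2)

# Fit a VAR(1) by equation-by-equation OLS.
X = np.column_stack([np.ones(T - 1), y[:-1]])
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
A_hat = B[1:].T  # estimated 2x2 coefficient matrix

# Impulse responses to a unit exchange-rate shock over 12 months; the
# cumulative pass-through is the total price response relative to the
# total exchange-rate response.
shock = np.array([1.0, 0.0])
responses = [shock]
for _ in range(12):
    responses.append(A_hat @ responses[-1])
responses = np.array(responses)
pass_through = responses[:, 1].sum() / responses[:, 0].sum()
print(f"estimated cumulative pass-through: {pass_through:.2f}")
```

Comparing this coefficient across two subsamples (pre-2003 versus post-2003) is, in spirit, how a decline in pass-through of the kind the authors report would be detected.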
Constructing Taiwanese Small-Enterprise Innovative Capital Indices by Using Fuzzy AHP
Dr. Jui-Kuei Chen, Tamkang University, Taiwan
I-Shuo Chen, National Dong Hwa University, Taiwan
The purpose of this study is to explore the ways in which innovative capital is used to upgrade innovative operations in small enterprises in Taiwan. Based on the literature and related research, the study extracts two related dimensions of innovative capital, Invisible Innovation and Visible Innovation, fitted to the characteristics of small enterprises. In addition, a hierarchical framework for evaluating the innovative capital of small enterprises is constructed from the two dimensions and the factors under each. Fuzzy Analytic Hierarchy Process (FAHP) methodology is used to analyze the opinions collected from a sample of experts on small enterprises in Taiwan. The results show that the most important innovative capital indices for small enterprises are "Innovative Culture" (0.378), "Number of New Designs" (0.370), "Copyright and Brand" (0.193), "Number of New Customers" (0.029), "Number of R&D Workers" (0.018), and "Outer Tech Connection" (0.013). A discussion of the key research findings and some suggested directions for future research are provided. The rise of the knowledge-based economy has been attributed to the increasing importance of intellectual capital as a key resource for companies' sustainable competitive advantage (Roos et al., 1997, as cited in Moon & Kym, 2006; Tan, Plowman, & Hancock, 2007; Sonnier, Carson, & Carson, 2007). A taxonomy of organizational resources and assets, with their suggested performance implications, has been analyzed from the point of view of intellectual capital (Ng, 2006). Therefore, both entrepreneurs and scholars have turned their attention to the subject of intellectual capital (Bontis, Keow, & Richardson, 2000; Lev & Feng, 2001; Guthrie, 2001; Bornemann & Leitner, 2002; Weatherly, 2003; Kaplan & Norton, 2004).
A recent concern is that the primary goals of most organizations, including small enterprises in Taiwan, are the production and diffusion of ideas, and their main investments are in research and human resources (Sanchez & Elena, 2006). Thus, their investments and outcomes are generally invisible assets, and few instruments exist with which to measure these precisely (Caddy, 2000; Dzinkowski, 2000; Canibano & Sanchez, 2004). It is appropriate for studies to examine intellectual capital through innovative capital, because intellectual capital almost always yields innovation as its outcome (Ahuja, 2000; Subramaniam & Venkatraman, 2001; Subramaniam & Youndt, 2005). Innovative capital is the idea that a firm's innovative ability is owned just as its services and operational procedures are, encompassing explicit intellectual property such as patent rights as well as implicit innovative research ability (Bassi & Van Buren, 1999; Edvinsson & Malone, 1997). The nature of innovation is ambiguous today, and because the literature provides different interpretations of its meaning, a definition is necessary (Camelo-Ordaz, Hernandez-Lara, & Valle-Cabrera, 2005). Subramaniam and Youndt (2005) indicate that innovation is about identifying and using opportunities to create new products, services, or work practices. Damanpour (1996) points out that innovation involves the adoption of an idea that is new to the organization adopting it. Generally, innovation can be defined along three dimensions (Camelo-Ordaz, Hernandez-Lara, & Valle-Cabrera, 2005): a product new to the business unit (Tushman & Nadler, 1986; Damanpour, 1996), a new process (O'Sullivan, 2000), and an attribute of the organization (Kimberly, 1981; Bantel & Jackson, 1989). The literature indicates that innovation can be divided into two dimensions: a visible part and an invisible part. Numerous innovation capital indices exist for evaluating small enterprises.
(See Table 1, which is cited from Wu, 2006.) The present study uses some of these indices to evaluate small enterprises in terms of innovation capital. Measurement of innovative capital has traditionally referred to medium-sized or large firms; however, growing interest has extended its scope to include small or public organizations (Sanchez & Elena, 2006). Several studies have also found that innovation plays a crucial role in organizations, especially in increasing profit. (See Table 2.) Based on the extant literature and related research, the present study argues that evaluating innovation capital will help small enterprises. Professor L. A. Zadeh first proposed fuzzy set theory in 1965 to deal with the fuzzy phenomena that exist in the real world, such as uncertain, incomplete, unspecific, and vague situations. Fuzzy set theory is better suited than traditional set theory to describing set concepts expressed in human language. It captures the unspecific and fuzzy characteristics of linguistic evaluations and uses the membership function concept, so that a fuzzy set permits situations such as "incompletely belonging to" and "incompletely not belonging to" a set. Let the universe of discourse U be the whole collection of targets under discussion; each target in the universe of discourse is called an element. A fuzzy set on U assigns to each element x in U a real number, its degree of membership. A fuzzy linguistic variable is a variable that reflects the different levels of human language, with values ranging over natural or artificial language expressions. To reflect the value or meaning of a linguistic variable precisely, an appropriate conversion is required. A human word or sentence can be divided into numerous linguistic criteria, such as equally important, moderately important, strongly important, very strongly important, and extremely important, as shown in Figure 2, with their definitions and descriptions as shown in Table 3.
For the purposes of the present study, the five criteria above (i.e., equally important, moderately important, strongly important, very strongly important, and extremely important) are used.
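The weighting step of the FAHP methodology described above can be sketched as follows, using the fuzzy geometric-mean (Buckley-style) variant on a hypothetical triangular fuzzy pairwise-comparison matrix. The three criteria, all judgment values, and the choice of defuzzification are illustrative assumptions; the study's actual matrix and aggregation variant may differ.

```python
import numpy as np

# Triangular fuzzy pairwise comparisons (l, m, u) for three hypothetical
# criteria; reciprocal entries are (1/u, 1/m, 1/l).
M = [
    [(1, 1, 1),         (2, 3, 4),       (4, 5, 6)],
    [(1/4, 1/3, 1/2),   (1, 1, 1),       (1, 2, 3)],
    [(1/6, 1/5, 1/4),   (1/3, 1/2, 1),   (1, 1, 1)],
]
n = len(M)

# Componentwise geometric mean of each row's triangular numbers.
geo = np.array([
    [np.prod([M[i][j][k] for j in range(n)]) ** (1 / n) for k in range(3)]
    for i in range(n)
])

# Fuzzy weights: divide each row mean by the column totals, pairing lower
# bounds with the upper total (and vice versa), as in fuzzy division.
total = geo.sum(axis=0)
fuzzy_w = np.column_stack([geo[:, 0] / total[2],
                           geo[:, 1] / total[1],
                           geo[:, 2] / total[0]])

# Defuzzify by the centroid (mean of l, m, u) and renormalize to sum to 1.
crisp = fuzzy_w.mean(axis=1)
weights = crisp / crisp.sum()
print(weights)
```

For this illustrative matrix the crisp weights come out in descending order for criteria 1, 2, and 3, which is how an index ranking such as the one reported in the abstract would be produced.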
Just in Time Manufacturing System and Traditional Turkish Uniform Accounting System on Accounting Recording Basis: A Comparative Study
Dr. Fatma Ulucan Ozkul, Bahçeşehir University, Istanbul, Turkey
As the competitive environment grows more intense every day, the competitive edge of enterprises in these market conditions depends to a great extent on their ability to reduce costs. The search for producing the highest-quality product at the lowest cost has caused the emergence of new methods in cost accounting applications. The just-in-time production system, which targets the elimination of inventories, reveals the errors hidden in an enterprise's inventories and helps in forming an effective inventory control system. When the reasons for rising costs in enterprises are examined, many costs are found to stem from activities that create no value and constitute waste. The just-in-time production system aims to remove the activities that do not generate value in the enterprise and thus to lower the costs created throughout the production process. Cost accounting as traditionally practiced in enterprises focuses especially on inventory valuation, and the density of its accounting records hinders the effective working of the whole process. With the JIT system, one of the modern approaches, the production process and cost accounting of the enterprise are changed by way of simplification, and a healthier working of the process is enabled. Because of rapidly changing consumer needs and demands and increasing competition, the service life of goods has shortened and enterprises' existing cost management systems have become insufficient. As consumers demanded goods at the price, quality, functionality, time, and place they desired, push-type manufacturing systems were replaced with pull-type ones. JIT manufacturing is an approach based on bringing only required or demanded activities into the system, at exactly the moment they are needed.
In this paper, the conceptual dimension of JIT is examined, its benefits and purposes are mentioned, and the changes it causes in the accounting recording system are studied through an example. During the 1970s, Toyota Motor Company entered the global market with a new production strategy that provides lower inventory, lower conversion cost, higher quality, and less waste and cost throughout the whole value chain (Minahan 1978). This approach gained great importance with the oil crisis of 1973, was used to curb rising costs, and became widespread in a very short time (Clement 1998). JIT is an inventory management system that deals with eliminating waste and residuals in the total production process (Türk-Özulucan 2001). The target of this approach is to reach zero inventory and a production process free from errors. Generally, JIT (Payne 1993): combines marketing and production strategies; constitutes a material management and control system; increases the efficiency of people and machinery; enables prompt deliveries; decreases inventories; develops the ability to respond to the market; and increases employee involvement. JIT's elimination of stocks aims not only at removing inventory costs but also, and mainly, at revealing the errors hidden in inventories. Buffer stocks and safety stocks hinder the solution of problems that appear in the production process (Hay 2000). While trying to reduce their stock levels, companies have faced several problems, among them quality problems, various obstacles, coordination problems, insufficiency, waste, and distrust of suppliers. To eliminate these problems and losses throughout the production process, JIT adopts the following four objectives: elimination of activities that create no value; high-level quality; continuous improvement; and facilitating the distinction of value-generating activities and giving importance to increasing them.
Once the objectives above are adopted, the JIT manufacturing system causes significant changes in all functions and management systems of the enterprise. Thus, fitting enterprises' cost and management accounting systems to this structure has become important (Hacırüstemoğlu-Şakrak 2002). The JIT manufacturing system has been widely used in Turkish companies. The application level of the system, which adopts the principle of zero inventory and an error-free production process, across different sectors in Turkey is shown below (ref.advancity.net/newsletters/2007/01/quality_management.htm). It has been observed that the application level of the raw and in-process inventory (RIP) method is 40% across sectors.
Challenges on Mode Decisions in South Dakota
Dr. Jack Fei Yang, Hsing-Kuo University, Taiwan
Dr. I-Chan Kao, The Open University of Kaohsiung, Taiwan
Min-Chun Chen, Hsing-Kuo University, Taiwan
Dr. Ching-Mei Hsiao, Hsing-Kuo University, Taiwan
Leading the development of online education with desired outcomes in higher-level learning requires knowledge of the impact of delivery systems and learner preferences. In South Dakota, a US leader in distance education, the writers asked faculty experienced in various modes of delivery what factors supported higher-level learning outcomes leading to the application of knowledge to practical work challenges. Faculty in South Dakota public universities generally thought that almost any distance delivery mode can produce higher-level learning outcomes, if labor costs are well funded and student motivation is high. But there is the rub: costs of cyberspace delivery modes vary widely; student acceptance is related to popularity and newness; and incentives for faculty vary widely between in-load and overload assignments. Higher-level learning outcomes for cyberspace programs have less to do with modes of delivery than with faculty teaching methods and choice of learning objectives. Educators who become involved in designing, delivering, or taking courses and programs online encounter a wide array of delivery modes, aggressive marketing advocacy of the merits of widely varying modes and methods, and a far-flung vocabulary of terms and teasers. Deciding whether to throw out long-established correspondence courses for the sake of keeping up with the newest WebCT system requires less advertising "noise" and more good information. As online offerings to teachers continue to expand, K-12 teachers should be aware of the pressures placed on university faculty as they try to offer courses in cyberspace. Simonson et al. (2000) presented a taxonomy of distance technologies including the following applications: correspondence, based on the use of copy machines and the postal system; prerecorded media, based on audio or video recording systems; two-way audio, based on the telephone system; two-way audio with graphics, based on a display-board transmitter or computer network.
One-way live video: based on use of a television classroom or video transmission system. Two-way audio, one-way video: based on use of the telephone system. Two-way audio and two-way video: based on use of a telecommunications network. Desktop two-way audio and two-way video: based on use of multimedia computing and a high-speed network connection. Distance education modes of delivery are generally classified into four types: print, audio, video, and computer (Keegan, 1986). Bates (1995; 2002) developed a simple chart to explain the relationship among media, technology, and distance educational applications (Table 1). Various delivery modes reach learners who have differing needs. Traditional distance education concepts matched specific modes to specific course content. However, a lack of instructional design that matches modes of delivery to appropriate levels of learning objectives can produce low levels of learning quality. Inflexible modes of delivery or technology-driven policies might compromise the equality of learning opportunities. For example, a computer-driven online distance learning environment prevents learners who do not have computers, Internet access, or computer skills from accessing such educational opportunities. University and public school educational leaders in South Dakota have faced continuing pressure from government to reform education, with significant pressure to use instructional technologies that require faculty to acquire new skills and perspectives. The pressure to use networks in South Dakota has generated a broad set of courses and programs offered online in cyberspace. The questions of how faculty perceive their work, how the modes of distance delivery affect learning outcomes, and in which technologies faculty might place their hopes for a better future were of primary interest to the researchers.
South Dakota is a rural state that has been at the forefront of distance education in the United States (Gosmire & Vondruska, 2001). South Dakota began to offer distance education in higher education in 1915, with correspondence courses offered from The University of South Dakota. In order to increase educational access across its rural geography, South Dakota has experienced many innovations and developments in distance modes of delivery, such as the following (Bauck, 2001): the satellite-based Rural Development Telecommunications Network (RDTN) in 1994, which consists of 18 two-way audio/video studios located throughout the state at universities and technical institutes; the Sanborn Interactive Video Network (SIVN) in 1996, a video consortium project offering classes for K-12 schools through a dedicated compressed video system; the Southeast Interactive Long Distance Learning (SILDL) project in 1998, which has provided a real-time, interactive, full-motion television video and audio system for post-secondary and university levels; and, in 2000, the Electronic University Consortium (EUC) of South Dakota, the largest distance learning network, established to provide distance education across the South Dakota System of Public Higher Education. South Dakota distance educational modes of delivery as of 2007 included the following types (Dakota State University, 2007; Electronic Education Consortium of South Dakota, 2007):
Decision Support for Hazardous Materials Routing and Facility Location
Dr. Kimberly Killmer Hollister, Montclair State University, Upper Montclair, NJ
Planning a hazardous waste management system is an extremely complex decision making process. In this paper, a framework within which decision makers can develop and evaluate hazardous waste routing and facility location plans is developed. This decision support system develops plans which are "robust" to uncertain realizations of the future. Our stochastic representation of the noxious facility location and materials routing problem provides planners with a tool with which they can develop plans which are "good" regardless of parameter outcomes. Planning a hazardous waste management system is an extremely complex decision making process. In this paper, a framework within which decision makers can develop and evaluate routing and facility location plans for hazardous waste management is developed. Planners are faced with the simultaneous problems of decreasing budgets and increasing regulatory mandates. The Environmental Protection Agency (2006) has indicated that it considers the reduction of risk to be the most important goal of any environmental policy. This type of environment makes the difficult task of planning a hazardous waste management system even more challenging. As a result of the complex nature of the problem, planning has emerged as a mechanism for improved decision making. The purpose of a decision aid is to assist decision makers, not to replace them. An appropriately designed decision support system (DSS) can be used to bridge the gap between policy makers and complex computerized models (Maniezzo et al., 1998). The main goal of the DSS is to reduce, to the extent possible, the overall impacts (economic, environmental, etc.) associated with the transportation and disposal of hazardous waste. In planning for hazardous waste management systems, planners are usually faced with the problem of evaluating a set of potential sites for waste disposal.
Our DSS is designed to assist the decision maker in evaluating a restricted number of planned alternatives; these alternatives are constructed on the basis of a list of proposed locations for capacity expansion. Similarly, the set of potential routes in the system is assumed to be prescreened to a restricted number of alternative routes. Moin and Salhi (2007) provide a comprehensive literature review of routing models. The DSS supports public policy makers in the design of alternative hazardous waste routing and facility location plans which are "robust" to uncertain realizations of the future. The stochastic representation of the noxious facility location and materials routing problem provides planners with a tool with which they can develop routing and siting plans which are "good" regardless of parameter outcomes. The definition of an alternative plan requires the identification of a combination of locations for capacity expansion, the assignment of waste generation sources to disposal facilities, and the set of paths for the transportation of waste from sources to destinations. The problem is a multi-criteria decision problem; the different impacts of each alternative are quantified and the results compared. Each alternative plan must meet two important constraints: system capacity must be sufficient to treat all waste transported from waste generators, and all waste generated must be transported away from the generation site. The DSS, shown in Figure 1, is composed of a user interface (both front and back end), a models management system, and a database management system.
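The two feasibility constraints above can be made concrete with a small check routine. The data structures, names, and quantities below are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical sketch: checking the two feasibility constraints that every
# alternative routing/siting plan must satisfy. All names and numbers are
# invented for illustration.

def plan_is_feasible(generated, assignment, capacity):
    """generated: waste produced at each source, e.g. {"S1": 40}
    assignment: quantity shipped per (source, facility) pair
    capacity: treatment capacity of each open facility."""
    # Constraint 1: every source ships away exactly what it generates.
    shipped = {}
    for (src, fac), qty in assignment.items():
        shipped[src] = shipped.get(src, 0) + qty
    if any(shipped.get(s, 0) != g for s, g in generated.items()):
        return False
    # Constraint 2: no facility receives more than its treatment capacity.
    received = {}
    for (src, fac), qty in assignment.items():
        received[fac] = received.get(fac, 0) + qty
    return all(received.get(f, 0) <= c for f, c in capacity.items())

generated = {"S1": 40, "S2": 25}
capacity = {"F1": 50, "F2": 30}
plan = {("S1", "F1"): 40, ("S2", "F2"): 25}
print(plan_is_feasible(generated, plan, capacity))  # True
```

A plan that leaves a generator's waste on site, or overloads a facility, fails the check and would be excluded from the set of alternatives.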
The database management system is responsible for the retrieval and update of all types of information required; the model management system is the mathematical model which generates potential alternative plans and provides a multicriteria analysis of those plans; finally, the user interface provides a means through which problem parameters and constraints can be entered, and presents ranked solutions to the user to aid in the decision making process. It has been argued that the user interface is the most important component of a DSS; in fact, Keenan (1998) notes that "the interface design may provide a framework for the entire DSS." The more accurately the user interface captures the view of the decision maker, the more effective the decision support can be. In the case of a routing and siting problem, the user's view of the problem is a spatial one, and the user can most easily interact with a system that incorporates this view. Therefore, the user interface should include a geographic representation of the study area, including a display of the road network as well as the locations of potential treatment sites and sites of waste generation. The idea of incorporating onscreen maps into the decision making process is not new; its origin dates back to the 1970s with the GADS system at IBM. The incorporation of geographic information systems (GIS) into a DSS transforms the DSS into a spatial decision support system (SDSS). Kohsaka (2000) discusses how these systems have been shown to facilitate decision making. The use of an SDSS for the routing and siting of hazardous waste is an obvious application of this type of model. The spatial representation should benefit decision makers in the definition of the problem, the expression of problem constraints, and the final evaluation of alternatives.
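As one hypothetical illustration of the route-prescreening step mentioned earlier, candidate routes between a waste source and a treatment site can be found by a shortest-path search over the road network, with edge weights that fold transport cost and exposure risk into a single number. The network, weights, and node names below are invented; the paper only assumes routes are prescreened by some such method:

```python
import heapq

# Sketch of hazmat route prescreening: Dijkstra-style shortest path over a
# road network whose edge weights combine cost and risk (e.g. cost + lambda *
# population exposure), already folded into one number here.

def cheapest_route(edges, source, target):
    """edges: {node: [(neighbor, weight), ...]}; returns (total_weight, path)."""
    heap = [(0.0, source, [source])]
    seen = set()
    while heap:
        w, node, path = heapq.heappop(heap)
        if node == target:
            return w, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, ew in edges.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (w + ew, nxt, path + [nxt]))
    return float("inf"), []  # no route exists

# Tiny invented network: source S1, intermediate nodes A and B, facility F1.
network = {
    "S1": [("A", 4.0), ("B", 2.5)],
    "A":  [("F1", 3.0)],
    "B":  [("A", 1.0), ("F1", 6.0)],
}
print(cheapest_route(network, "S1", "F1"))  # (6.5, ['S1', 'B', 'A', 'F1'])
```

Running the search with several risk weightings yields the restricted set of alternative routes the DSS then evaluates.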
In addition to decision maker preferences, there are other data inputs to the model; these include the transportation network, information on source location and quantity of waste generated, transportation and disposal cost, transportation and disposal risk, and the complete specification of scenario realizations. Figure 2 shows the data input requirements from the user into the model. Once the decision maker has input all the relevant data, the Robust Routing and Siting Plan (RRSP) model is run to generate alternative routing and siting plans; plans are then evaluated in the evaluation module. The mathematical formulation of each model is provided in the following section.
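The RRSP model itself is a stochastic program, but the flavor of "robust" plan selection over scenario realizations can be sketched with a simpler minimax-regret rule. The plans, scenarios, and costs below are hypothetical, and minimax regret is only one common robustness criterion, not necessarily the paper's:

```python
# Illustrative robustness screening: choose the plan whose worst-case regret
# across uncertain scenarios is smallest. All data here is invented.

def minimax_regret_plan(costs):
    """costs[plan][scenario] -> total impact of that plan under that scenario."""
    scenarios = next(iter(costs.values())).keys()
    # Best achievable cost in each scenario, over all plans.
    best = {s: min(c[s] for c in costs.values()) for s in scenarios}
    # A plan's regret in a scenario is its cost minus that scenario's best cost;
    # keep each plan's worst regret, then pick the plan minimizing it.
    regret = {p: max(c[s] - best[s] for s in scenarios) for p, c in costs.items()}
    return min(regret, key=regret.get)

costs = {
    "A": {"low_demand": 100, "high_demand": 180},
    "B": {"low_demand": 120, "high_demand": 150},
    "C": {"low_demand": 140, "high_demand": 145},
}
print(minimax_regret_plan(costs))  # prints "B"
```

Plan A is best if demand stays low and plan C is best if it is high, but plan B is never far from optimal in either scenario, which is the sense in which a plan is "good" regardless of parameter outcomes.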
The Impact of Cognitive Fit and Consensus on Acceptance of Collaborative Information Systems
Dr. Ming-Tien Tsai, National Cheng-Kung University, Tainan, Taiwan
Wenjywan Su, National Cheng-Kung University, Tainan, Taiwan
This paper incorporates a cognitive factor as an extension to the technology acceptance model (TAM) and empirically examines it in a collaborative working environment. In addition, it investigates the external variables influencing perceived usefulness and perceived ease of use from a cognitive fit perspective and evaluates the impact of consensus on appropriation (COA) and on perceived usefulness as a group factor in the team context. The paper concludes that (1) COA, task-tool fit and representation-task fit all have positive effects on perceived usefulness; (2) representation-task fit influences the perceived ease of using the collaborative system; and (3) perceived usefulness substantially explains system adoption in terms of perceived performance. Thus, the paper provides empirical and theoretical support for how the use of cognitive intervention, such as consensus on appropriation and cognitive fit, influences the acceptance of technology. This paper contributes to the extension of TAM external variables and provides team leaders with a cognitive viewpoint when implementing a collaborative information system. There is a growing body of research on various information systems applied in the business environment. Information systems have been designed to facilitate problem-solving (DeFranco-Tommarello & Deek, 2004; Smelcer & Carmel, 1997), entertainment (Hsu & Lu, 2004; Heijden, 2004), learning (Bondarouk, 2006; Liebowitz & Yaverbaum, 1998), and data-searching (Shih, 2004; Vandenbosch, 1997). Professional application software, like project management systems, gives practical help to users to resolve problems in their work effectively and efficiently and to improve their performance. Users of such systems can be regarded as problem-solvers who have the ability to distinguish how information representation will facilitate their work. System developers use cognitive fit theory to achieve higher problem-solving performance.
This theory should be able to explain the adoption of IT when we view end-users as occupational problem-solvers. Teamwork has become more and more important in global competition, and collaborative groupware has been developed to integrate and facilitate teamwork. Through the adoption and use of collaborative information systems (CIS), collective contributions can be identified among team members. However, the question remains: will high usage of a CIS have an effect on group cognition, causing more instances of consensus and conformity? The technology acceptance model (TAM) has been widely applied to information technology within different contexts. Factors contributing to the acceptance of information technology are likely to vary with the technology, target users, and context (Moon & Kim, 2001). External variables of TAM have been explored in terms of system functionality (Hong et al., 2002), quality assurance (Shih, 2004), and behavioral control (Chau & Hu, 2002). This paper not only attempts to fill the gap related to the investigation of external variables from an individual cognitive perspective but also investigates the impact of group cognition on the acceptance of information technology. In this study, TAM has been extended by including the concepts of cognitive fit and consensus on appropriation. Individual cognitive fit is an antecedent to perceived ease-of-use and perceived usefulness, and this construct involves two dimensions: representation-task fit and task-tool fit. This paper investigates individual acceptance behavior in order to find out how team consensus affects individual behavior in applying a collaborative production system. Adaptive structuration theory is a framework for studying the variations in organizational change that occur as advanced technologies are implemented and used (DeSanctis & Poole, 1994).
The act of bringing the rules and resources from an advanced technology or other structural sources into action is termed structuration. The theory focuses on the interplay between two types of structures, intended and actual, to gain a deeper understanding of the impact of advanced technologies on an organization and of the processes through which advanced technologies are implemented. The visible actions that indicate deeper structuration processes are referred to as “appropriations” of the technology. Consensus on appropriation is defined as the extent to which individuals agree on how jointly to use an advanced information technology intervention (Poole & DeSanctis, 1992; DeSanctis & Poole, 1994). This agreement is a prerequisite for users to employ the technology effectively. If consensus on appropriation is not reached, effective coordination of the users’ efforts may be difficult (Poole & DeSanctis, 1992; DeSanctis & Poole, 1994). Therefore, consensus on appropriation is a group-level phenomenon. Groupware can be divided into three categories based on the level of collaboration: communication tools, conferencing tools, and collaborative management (coordination) tools. Collaborative management tools facilitate and manage group activities; for example, project management systems schedule, track, and chart the steps in a project as it is being completed (Wikipedia). Project management systems focus on collective contributions to the project through coordination and cooperation; as users learn about employing an advanced information technology intervention, they will develop perceptions and opinions on this intervention (Fulk et al., 1990), which will affect how they apply the technology or system to their work. Effective use of collaborative groupware requires that the work team agree on applying the system. This study examines consensus on appropriation in a team using project management software.
Individual adoption and use of technologies is influenced by others (Fishbein & Ajzen, 1975). Although consensus brings out conformity of attitude (Conway & Schaller, 2005), individual attitude affects behavior. Therefore, acceptance behavior will be directed by the attitude toward the information system. Attitudes have been addressed in the form of ease-of-use and usefulness (Davis, 1989; Gopal et al., 1993). High levels of consensus on appropriation of the software will produce a favorable attitude toward usage of the IS (Poole & DeSanctis, 1992). Such a consensus can be formed by training, meeting facilitation, or a powerful local majority (Kameda et al., 1995) and, once formed, the resulting favorable attitude will convince users of the system’s usefulness. Teams that achieve a high consensus on appropriation of a project management system will strengthen the members’ perception of the system’s usefulness.
The Correlation between School Marketing Strategy and the School Image of Vocational High Schools
David W-S. Tai, National Changhua University of Education, Taiwan
Jorge W-C. Wang, National Changhua University of Education, Taiwan
C-E. Huang, National Changhua University of Education, Taiwan
This study focuses on investigating the correlation between school marketing and the school image of vocational high schools. The paper finds that the correlation is remarkably strong. In addition, quality, one of the strategies of school marketing, plays a dominant role in building a school’s image. As the economy in Taiwan has developed rapidly, the demand for people with higher education has been increasing. This increased demand has inspired the Taiwanese government to be more concerned with higher education. Furthermore, the increase in the number of universities and colleges contributes to a high percentage of students entering at an advanced level. According to the Ministry of Education in Taiwan (2006), statistics show that the percentage of students entering the advanced level climbed dramatically from 61.35% in 2000 to 89.08% in 2005; however, the percentage of students entering the advanced level in colleges reached just 58.02% in 2005. Consequently, most junior high students prefer to choose senior high schools rather than vocational high schools. Moreover, the government in Taiwan has changed its education policy. For instance, the ratio of the number of senior high schools to vocational high schools was 217:204 in 1996 and 314:157 in 2006, according to the Ministry of Education in Taiwan (2006). In other words, the increase in senior high schools has led to a decrease in vocational high schools, thereby extending the length of education. In the article “Broadening the Concept of Marketing,” Kotler and Levy (1969) first introduced the concept of marketing for non-profit organizations; they viewed a school as a kind of non-profit organization and utilized methods of corporate management, such as marketing, to adjust and adapt to changes in the modern age.
Today, because of the effects of Taiwan’s economic recession on educational institutions and the shortage of national tax revenue, schools should begin to use marketing tools to attract and acquire more social resources. Jiang and Xu (2005) state that marketing for schools is necessary, especially when schools face competition due to freedom of choice in the market. According to Zheng (1998), as the affairs and functions of schools become more complex, the administration and management of schools become more difficult. Thus, how to establish a fine school image through marketing, while fully utilizing the functions of a school, is particularly important. Moreover, one function of school marketing is to establish and modify the school image (Vander Schee, 2004; Hanson, 2003). Cai (1998) suggests that before managing the school image, a school needs to build its image based on its culture. To establish its image, the primary goal should be determining the purposes of founding the school and of educating its students. Only then can a school create its style and image and promote them through marketing, management, and public relations. In summary, vocational high schools must now accept the challenge of education as a market. Also, in order to gain recognition from the public, build the school image, establish school characteristics and brands, and obtain a sound reputation, a school must display its achievements, its quality, and the services it provides. According to a previous study of school marketing in Taiwan, schools tend to concentrate on staff and faculty instead of on students and their parents. It appears that these efforts do not follow the marketing guideline of attending to customer opinions and responses (Huang, 2006). In addition, Jane and Izhar (2006) point out that, according to foreign journals on school marketing, the methods of school marketing are practical but incoherent, and that the theory of school marketing is not suitable for all educational systems.
Additionally, in some domestic and foreign research, the discussion of the correlation between school marketing and the school image relies mainly on qualitative research and assorted opinions. To address this problem, this paper uses quantitative analysis to evaluate the correlation between strategies of school marketing and the school image of vocational high schools. In addition, the paper analyzes the strengths and weaknesses thereof by correctly understanding school image and school marketing and, accordingly, builds unique characteristics to enhance the competitive advantage of vocational high schools. The major purpose of this study is threefold: to study and analyze the current situation of school marketing; to find the correlation between school marketing and the school image of vocational high schools; and to predict the impacts of school marketing on the school image of vocational high schools. In August 2004, the American Marketing Association (AMA) stated in its latest definition that marketing is a series of procedures and organizational functions to create, communicate, and transmit value to customers, and a model for administering client relations that benefits organizations and their stakeholders. Armstrong and Kotler (2005) argue that marketing is a social and management process through which individuals and groups create or exchange in order to meet their needs and desires. Kotler and Fox (1994) explain that marketing is an activity of trading value. Likewise, most educational institutions gain the resources they need through exchange. Exchange, by definition, is an act of giving something while receiving something in return. To illustrate, educational institutions provide their target customers with programs, degrees, and preparation courses; they receive tuition, donations, volunteers, and subsidies in return for the services they provide.
Huang (2004) explains that school marketing occurs when a school plans and executes activities that help its students enjoy studying and its teachers enjoy instructing. In this way, communities and students’ parents can understand and support the perspectives behind founding the school as well as the educational activities needed to fulfill the expected goals of education.
Visionary Approaches to Management of Corporate Communication Strategy and Its Implications
Dr. Mohannad Khanfar, AL Ghurair University, Dubai, U.A.E.
Within the views of Newtonian science and the classical ontology of management, organizations are operated according to deterministic modes. This worldview implies that structures determine the information needed and that perceptions must be managed by feeding the `right' information and withholding information that might lead to disorder and chaos. The formal, planned approaches to strategic management have forced managers to be structured when communicating organizational goals and strategic issues. Current public relations theory in terms of management and corporate communication strategy is very much in line with the general strategic management views of structured planning and decision-making. A more recent approach to corporate communication has developed because fast changing environments demand more contingent methods. This has moved organizations toward postmodern approaches such as those described by chaos and complexity theory. In this paper I suggest a new approach to corporate communication strategy in line with these postmodern theories. I argue for a more participative approach, with high ethical and moral meaning creation through action science and research, rather than the structured approaches suggested by current corporate communication theorists. I furthermore call for relationship management based on basic interpersonal relationship principles, where ethics, integrity, trust, openness, and listening skills determine the success of relationships. Organizations that favor their shareholders above other stakeholders and believe that business determines success and drives policy should be replaced with organizations that function as responsible, moral, and honest citizens of a larger environment. This approach ensures a positive reputation for the organization through socially responsible change processes that have relational influences on a larger societal community structure.
Communication practitioners and students in the field of public relations often look for a step-by-step guide to follow in order to design a `proper' communication management strategy, one that will be accepted by top management structures in organizations and reflect the contribution this function makes to the overall success of the organization. Students are taught how to go through certain carefully designed processes, and they do assignments that are evaluated accordingly. Textbooks show detailed methods of long term strategic planning and design communication programs derived from strategic management theory (Broom, Casey, & Ritchey, 1997; Cutlip, Center, & Broom, 1994; D'Aprix, 1996; Ferguson, 1999; Kendall, 1992; Oliver, 2001; Smith, 2002; Steyn & Puth, 2000). There are now also theorists developing software that enables corporate communicators to plan strategies and communication plans that include budgeting and research aspects, a very worthwhile contribution to traditional corporate communication theory (Bütschi, 2004). New developments in management theory, as well as in corporate communication theory, have however extended the thinking surrounding strategic planning, and these new developments are what I will put forward in this paper. Before looking at these new developments I will briefly discuss the traditional approaches to communication management strategy and planning. The traditional ontology of management science relies very heavily on strategic planning and strategic thinking. Management sees its role within this paradigm as reducing conflict, creating order, controlling chaos, and simplifying all the complexities created by the environment. Goals and objectives are set, possible outcomes are predicted, alternatives for action are planned, and these are communicated throughout the organization.
The traditional approach to strategic management describes it as a process of analysis in which the strengths, weaknesses, opportunities, and threats of the organization are used to develop its mission, goals, and objectives (Harrison, 2003:6). Tactics, plans, and programs are short-term and adaptive, whereas strategy is more continuous, with changes geared toward the broader goals and vision of the organization. Structured and planned approaches to strategic management imply fixed patterns, plans, and positions that influence the way the organization is managed and controlled. "For most people, strategy is generally perceived as a plan - a consciously intended course of action that is premeditated and deliberate, with strategies realised as intended" (Graetz, Rimmer, Lawrence, & Smith, 2002:51). Strategy and management are constantly referred to as the way to provide a framework for planning and decision-making that controls and manages influences from the environment. Although flexibility is mentioned, it is still within the paradigm of a strong foundation and firm position. The planned approach to strategic management is a current overarching paradigm in the management literature, especially from the perspective of change and transformation (Genus, 1998). (Examples can be seen in Burnes, 1996; Cummings & Worley, 2001; Ghoshal & Bartlett, 2000; Gouillart & Kelly, 1995; Head, 1997; Hill & Jones, 2004; Mintzberg & Quinn, 1996; Sanchez & Aime, 2004; Senior, 1997.) With this approach the importance of strong leadership and strategic management teams is emphasized. This paradigm is tightly linked to strategy and to identifying and managing processes designed to make organizations more successful and competitive (Sanders, 1998). All these processes are focused on providing solutions to help management obtain improved productivity and competitive advantage.
Strategic planning makes results tangible, helps control processes, guides decision-making, and provides security around uncertainties. Current public relations theory in terms of management and corporate communication `strategy' is very much in line with the aforementioned general strategic management views of structured planning and decision-making. The public relations literature portrays a very traditional view of `strategic communication management', and the emphasis is very much on the planning process of campaigns and communication plans, a very tactical and technical view of the communication management process. The planning process is usually described as well defined steps or stages that follow one another, comprising broadly research (formative and environmental scanning), planning (sometimes called the strategy stage), implementation (or the tactics stage), and evaluation (Cutlip et al., 1994; Kendall, 1992; Oliver, 2001; Smith, 2002). The authors cited offer examples of this approach to communication management.
Modernization of EC Competition Law Enforcement: From Regulation 17/62 to Regulation 1/2003
Dr. Lung-Tan Lu, Fo Guang University, Taiwan, R.O.C.
Regulation 1/2003 on the implementation of the competition rules laid down in Articles 81 and 82 EC Treaty took effect on May 1, 2004. This article discusses and analyzes the centralized system set up by Regulation 17/62 and the modernization reform of EC competition law enforcement starting from the Commission White Paper. Regulation 1/2003 is the result of reform and brings several fundamental changes: (1) abolition of notification and implementation of self-assessment; (2) decentralization of enforcement; (3) supremacy of EC law; (4) broader investigation powers for the Commission; (5) the new relationship between the Commission and national authorities in the European Competition Network (ECN). These changes also bring some practical issues for undertakings: (1) the need for self-assessment; (2) dealing with an increased number of enforcement authorities; (3) increased costs for expert advice on complex procedural issues; (4) the legal basis of damage actions; and (5) the different procedural possibilities. Finally, this article proposes that the success of EC competition law enforcement will mainly depend on its implementation and application by the Commission, NCAs, and national courts, and on the degree of cooperation within the ECN. The centralized system of competition law enforcement governed by Regulation 17/62 provided a substantial degree of legal certainty to undertakings in the European Community for over four decades. However, the European Commission (hereafter: the Commission) could not deal with the agreements referred to it within a reasonable period of time or with individual exemptions upon which it was asked to adjudicate. A need to change the centralized enforcement of EC competition law was the root cause of reform, which is designed to cope with the enlargement of the European Union on May 1, 2004, and to strengthen the Commission’s position as the “competition watchdog” in the enlarged EU.
The reforms have been known as the “modernization” of EC competition law. In 1999, the Commission published its White Paper on the Modernization of the Rules (hereafter: the White Paper), which proposed a thorough overhaul of the existing enforcement system (Wesseling, 1999). The EU Member States agreed to undertake a fundamental reform of the enforcement rules. The Council of Ministers adopted Council Regulation (EC) No 1/2003 (hereafter: Regulation 1/2003) on December 16, 2002, which provided for the implementation of the rules on competition in Articles 81 and 82, previously Articles 85 and 86, of the EC Treaty (Venit, 2003). Regulation 17/62 was replaced by a decentralized system, Regulation 1/2003. Under Regulation 1/2003, national competition authorities (NCAs) and national courts of Member States and undertakings take more responsibility for enforcement. All documents in this reform are referred to collectively as “the Modernization Package.” Modernization brings challenges for the system and imposes greater responsibility on undertakings in their agreements (Mueller, 2004). This article is structured as follows. Section two starts with a brief discussion of Articles 81 and 82 EC with EC competition rules applied in a centralized system, and introduces the Commission’s White Paper. Section three discusses how Regulation 1/2003 introduces five major changes to the enforcement of the competition rules of the EC Treaty: (1) abolition of notification and implementation of self-assessment; (2) decentralization of enforcement; (3) supremacy of EC law; (4) broader investigation powers for the Commission; and (5) the new relationship between the Commission and national authorities in the European Competition Network (ECN). 
In the fourth section, this article discusses and analyzes some practical issues created by Regulation 1/2003: (1) the need for self-assessment; (2) dealing with an increased number of enforcement authorities; (3) increased costs for expert advice on complex procedural issues; (4) the legal basis of damage actions; and (5) the different procedural possibilities. Finally, section five closes this article by drawing its conclusions (Lenaerts and Gerard, 2004). It is useful to review the structure of Articles 81 and 82 EC and of Council Regulation No 17/62 (hereafter: Regulation 17/62) before analyzing the modernization reform. Article 81(1) EC states that "all agreements between undertakings, decisions, associations of undertakings, and concerted practices which may affect trade between Member States and which have as their object or effect the prevention, restriction or distortion of competition within the common market … shall be prohibited". A non-exhaustive list of such restrictive practices follows: (a) price-fixing or fixing of trading conditions; (b) quantitative restrictions; (c) market sharing; (d) discrimination against undertakings in order to reduce their competitiveness; and (e) tying agreements. According to Article 81(2) EC, such agreements are automatically void. Article 81(3) EC states that Article 81(1) EC may be declared inapplicable to an individual or a category of restrictive practices when the following conditions are met: (a) the result is an improvement of the production or distribution of goods or of technical or economic progress; (b) consumers are allowed a fair share of the benefits; (c) the restrictions are necessary in order to obtain the benefits; and (d) the restrictions do not enable the undertakings substantially to eliminate competition (Forrester, 2001). It is worth pointing out that Article 81(3) EC does not mention who may declare the provisions of paragraph 1 to be inapplicable. 
Regulation 17/62 gave the Commission a monopoly on applying Article 81(3) EC and required prior notification of agreements. Regulation 17/62 had two main components that together constituted a centralized system: (a) the notification procedure for agreements, under which agreements had to be notified to the Commission, which alone determined whether an agreement infringed Article 81(1) EC or qualified for an exemption decision under Article 81(3) EC; and (b) the procedure for investigating and sanctioning violations of the EC competition rules. The notification procedure results from the implementation of Article 81 EC through a system of prohibition subject to exemption by approval. The Commission had to be notified of all agreements potentially falling within Article 81(1) EC for assessment if they were to benefit from a negative clearance or an exemption under Article 81(3) EC. Regulation 17/62 also specified several kinds of decisions that the Commission could make: (a) a negative clearance decision if there was no infringement of Article 81(1) EC; (b) an infringement decision when undertakings were found in violation of Article 81(1) EC; (c) an exemption decision when a restrictive agreement was found to fall within the scope of Article 81(1) EC but the conditions for an exemption pursuant to Article 81(3) were met; and (d) a decision to impose a fine for infringing Article 81(1) EC (Forrester, 2000; Holmes, 2000). However, it should be pointed out that the EC Treaty does not use the term "exemption." Article 81(3) provides that Article 81(1) "may be declared inapplicable" to agreements which fulfil its conditions, without using the label "exemption." The Commission held exclusive power in applying Article 81(3) EC under the previous system, but it could not cope with the volume of notifications, simply because of the administrative burden. As a result, undertakings suffered extended delays before receiving a negative clearance or an exemption decision. 
The Commission was unable to produce sufficient numbers of individual decisions. Although the Commission had the power to grant individual exemptions, it rarely used that power. The practice of issuing “comfort letters” was introduced, in which the Commission would state either that an agreement appeared not to infringe Article 81(1) EC or that it appeared to satisfy the terms of Article 81(3) EC (Jones, 2001). In either case, the Commission would state that it was closing its file. These comfort letters were not, however, a sufficient substitute for adopting a formal decision. This situation created a certain degree of legal uncertainty. The enlargement to twenty-seven Member States on January 1, 2007, made the problem of maintaining this centralized system more critical (Roitman, 2006).
Reputation Herding in Corporate Investment: Evidence from China
Bei Ye, Wuhan University of Science and Technology and
Huazhong University of Science and Technology, Wuhan, P. R. China
The paper examines the relationship between corporate investment herding and managerial reputation concerns. I use the average return on equity ratio and the pay received by top directors as proxies for managers' ability and accumulated reputation, respectively. With a panel of 127 Chinese listed companies in the machinery, equipment and instrument manufacturing industry during the period 2001-2005, I find that reputation herding exists in corporate investment in China; that investment herding decreases with managerial ability but increases with accumulated reputation; and that reputation concerns seem to exert a greater effect on investment herding among state-controlling firms than among non-state-controlling ones. Herding is usually regarded as an irrational behavior in that the decision-maker imitates the actions of others while ignoring his own information and judgment. Investment herding prevails not only among financial investors, but also among corporate decision makers. In corporate settings, herding increases investment concentration. It may lead to industry-wide over-investment or under-investment during a certain period, and affect macroeconomic stability. In China, there have been many such lessons, such as the recent automobile-related investment boom, the over-investment in luxurious housing projects, and the shortage in primary industry investment. Behavioral finance research finds that though inefficient from a social standpoint, herding can be rational from the perspective of the managers. One explanation is that, given the principal-agent relationship, managers pursuing self-interest herd in order to preserve their accumulated reputation or to hide their low ability, so as to avoid penalty under relative performance evaluation. This idea has been theoretically illustrated by many scholars, such as Scharfstein and Stein (1990), Zwiebel (1995), Graham (1999), and Prendergast and Stole (1996). 
However, their analyses are mostly based on observation of financial analysts. To the best of our knowledge, few have attempted to study herding empirically in corporate settings, probably owing to the difficulty of observing real investment behavior. Bo (2006) initiated an empirical study in this area. She presents a reputation herding model and uses the pay received by the highest paid director of a firm as a proxy for managerial reputation. Testing with a panel of 564 UK public non-financial firms during the period 1994-2003, she finds that the higher the manager's reputation, the more likely the manager herds on the industry average. However, her study does not address the hiding-low-ability hypothesis proposed by Scharfstein and Stein (1990). Moreover, owing to cultural and governance differences, it is unclear whether her findings apply to a developing economy such as China. Given the existence of investment herding among Chinese companies, the purpose of this paper is to empirically examine the relationship between managerial career concerns and investment herding among Chinese listed companies. In this paper, I adopt Bo's (2006) method of measuring herding behavior according to how much the firm's investment level deviates from the industry average. Unlike Bo (2006), however, I first add a new variable, AOROE, to proxy the managers' true ability in order to test the hiding-low-ability hypothesis; second, I choose one specific industry (machinery, equipment and instrument manufacturing) as my sample, so as to exclude industry differences. In fact, two features of this industry make it an ideal sample for this study: first, it involves more fixed investment; second, it is more competitive and informationally transparent than industries such as the information industry and the biochemistry industry, and this may reduce the possible effect of information-driven herding (Devenow and Welch, 1996). 
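The deviation-based herding measure adopted from Bo (2006) can be sketched in a few lines of Python. This is only an illustration of the idea that a smaller deviation from the industry average signals stronger herding; the function name, the absolute-deviation form, and the sample investment-to-capital ratios are assumptions for illustration, not Bo's exact specification.

```python
def herding_measure(firm_investment, industry_investments):
    """Deviation of a firm's investment level from the industry average.

    A smaller deviation indicates stronger herding on the industry norm
    (illustrative version of the measure described above).
    """
    industry_avg = sum(industry_investments) / len(industry_investments)
    return abs(firm_investment - industry_avg)

# Hypothetical investment-to-capital ratios for five industry peers
industry = [0.10, 0.12, 0.08, 0.11, 0.09]
print(round(herding_measure(0.105, industry), 4))  # 0.005 -> herds on the average
print(round(herding_measure(0.30, industry), 4))   # 0.2   -> deviates from the herd
```

A regression of such a deviation measure on ability and reputation proxies (AOROE, director pay) is then the natural next step suggested by the text.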
The results of my empirical study support previous theoretical predictions: corporate investment herding increases with managers' accumulated reputation, but decreases with their ability. The paper proceeds as follows: the second section reviews the related literature and puts forward the hypotheses; the third section explains variables, data sets and models; then the paper provides the results, and concludes. The herding literature explains from different angles why agents mimic the decisions of others. According to the summary of Devenow and Welch (1996), herding may be rational or irrational from the perspective of the managers; rational herding may be reputation-driven or information-driven, and reputation herding is typically addressed when the principal-agent problem is present. Herding based on reputation concerns is closely related to the relative evaluation system. To exclude outside stochastic factors which obscure the accuracy of observing managerial efforts, managers are usually evaluated relative to their peers in the same industry. Scharfstein and Stein (1990) state that smart managers receive informative private information about the profitability of the project and their signals are positively correlated, while dumb managers receive uninformative signals. To pretend to be smart, dumb managers will herd on smart managers. Zwiebel (1995) shows that managers have an incentive to herd in order to minimize the negative consequences of underperformance on their reputation. Graham (1999) emphasizes that relative performance evaluation is the key assumption for reputation herding, under which reputation can be protected by herding. Prendergast and Stole (1996) compare the herding intentions of new/young managers (impetuous youngsters) and old/mature managers (jaded old-timers). 
They find that the impetuous youngsters are keen to show that they have ideas of their own by not following the herd, while the jaded old-timers are more likely to herd on the standard or on their past decisions, being unwilling to deviate in order to preserve their accumulated reputation. In short, present theoretical papers on herding all agree that a great deal of herding stems from managerial career concerns and penalties for deviating from the crowd. This finding coincides with Keynes's (1936, p. 158) famous observation that it is better for reputation to fail conventionally than to succeed unconventionally.
Emiratis' Demographics and their Reaction to TV Commercial Breaks: The Case of the Emirate of Sharjah (UAE)
Dr. Hussein Abdulla El-Omari, King Fahd University of Petroleum & Minerals, Kingdom of Saudi Arabia
The main objective of this research is to examine the relationship between Emiratis' demographics and their reaction to TV commercial breaks. To do so, convenience-sampling procedures were used and a total of 700 questionnaires were distributed evenly between the Mega and Sahara Shopping Malls in the Emirate of Sharjah (UAE). Of all the distributed questionnaires, 200 completed questionnaires were received. A follow-up study with 400 questionnaires was carried out using the same sampling procedure. The follow-up study resulted in 100 more completed questionnaires; therefore, a total of 300 completed questionnaires were received and used in the study. The researcher and his assistants did everything possible to encourage Emiratis to complete the questionnaires, but the response rate remained below 30% (i.e., 27.3%). The findings of this study showed that some significant relationships exist between some demographics and Emiratis' reactions to TV commercials. Local and international marketers should view the findings of this study with great care and interest, as the UAE market is considered a competitive one. Sharjah is considered the UAE's cultural centre. Tourists from all over the world, particularly from the AGCC countries, go there to enjoy every charming aspect of life. The Emirate is distinguished for its ever-growing modernization process, and for various activities such as seminars, lectures, fairs, plays, festivals and other cultural events. On the banks of Khalid Lake, one can enjoy every sort of water sport and spend dreamy romantic evenings. In its well-known market, built to embody the Islamic architectural style, one can enjoy shopping with family and friends. Sharjah does not have immense oil resources like those of the Emirate of Abu Dhabi. However, there are proven reserves that run into billions of barrels. Sharjah's complex geology makes exploration and production an expensive challenge. 
However, recent improvements in technology have enhanced oil discovery. Trade, agriculture and fishing are the traditional ways of life in Sharjah. Dates are grown extensively in the coastal plain and the highlands, and cattle are raised in many parts of the Emirate. The government of Sharjah is undertaking many development projects to modernize the economy, improve the standard of living, and become a more active player in the global marketplace. As a member of the UAE federation, Sharjah became a member of the World Trade Organization. It continues to amend its financial and commercial practices to conform to international standards. Sharjah is pursuing free trade agreements with a number of key trading partners. In its efforts to reduce its dependence on oil revenue received from the Emirate of Abu Dhabi and on expatriate labor, the government of Sharjah projects significant increases in spending on industrial and tourism-related projects to foster income diversification and job creation in the private sector. Government programs offer soft loans and propose the building of new industrial estates in population centers outside the area of the city of Sharjah. The government is giving greater emphasis to "Emirization" of the labor force, particularly in banking, hotels, and municipally sponsored shops benefiting from government subsidies. Efforts have also long been underway to liberalize investment opportunities in order to attract foreign capital. About 55% of the population lives in the city of Sharjah. A large number of expatriates live in Sharjah, most of whom are guest workers from South Asia, Egypt, Sudan, Jordan, Palestine and the Philippines. The government of Sharjah has given high priority to education to develop a domestic work force, which the government considers a vital factor in the country's economic and social progress. 
Sharjah has many universities, such as the American University of Sharjah, Sharjah University, and the Higher Colleges of Technology. Other post-secondary institutions include business schools, technical colleges, banking institutes and health sciences institutes. Many full and partial scholarships are awarded to students each year for study abroad. Currently, many private colleges and universities exist, with several more in the planning stage. A few of these private institutions offer four-year degrees, while the remainder provide two-year post-secondary diplomas. The government of Sharjah has embarked on reforms in higher education designed to meet the needs of a growing population. Today's media environment is more fragmented than ever. Viewers all over the world have more control than ever to switch to TV programs that appeal to them and to avoid TV commercials that they perceive as irrelevant. TV commercials are widely used because they are seen as a very effective mass-market promotional element. The influence of an individual's demographic factors on his behavior has long been a subject of interest to sociologists and social psychologists. Much evidence has been amassed about the relationship between consumers' demographics and their reaction to TV commercial breaks, and it is not surprising that marketing academics would focus on this phenomenon in an attempt to better understand how, when and why consumer reaction to TV commercial breaks develops.
Examining Financially Distressed Companies in Taiwan: Application of Survival Analysis
Ou-Yang Hou, National Cheng Kung University and Kun Shan University, Tainan, Taiwan
Dr. Shuang-shii Chuang, National Cheng Kung University, Tainan, Taiwan
In contrast to the traditional modeling of financial distress at the firm level, which uses only accounting ratio variables from financial statements and Logit regression, this essay provides an alternative method and offers variables to predict firm survival and financial distress across industries and over time. We use an earnings management index, accounting ratio variables, and corporate governance variables to form a Cox proportional hazard regression and to construct models of business financial distress. We adopt 63 Taiwanese companies in financial distress and 4,356 healthy companies during the period 1996 to 2006 as our sample. Because we use matching in the analysis, we construct seven warning models for financial distress to examine the effects of the earnings management index, accounting ratio variables and corporate governance variables on a firm's power to survive financial distress. Our empirical results reveal that companies with higher earnings management and directors' pledge ratios, and lower profitability, liquidity, and activity, enter financial distress much more easily. In particular, for the earnings management index, the discretionary accruals item is the most important key factor in a firm's survival probability; it has a positive effect on financial distress probability at the 1% significance level, with hazard ratios of 17.751, 5.594, 12.744 and 6.042 in Models 1, 3, 5 and 7. This paper provides evidence as to the key determinants of financial distress for publicly-listed companies in Taiwan, and our findings also provide substantial implications for and contributions to financial warning models of corporate distress. In recent finance and economic literature, considerable attention is directed toward issues concerning the effects of changes in the macroeconomic environment of Taiwan on company failure risk. 
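The hazard ratios quoted above (17.751 and so on) are read by exponentiating Cox regression coefficients: in a Cox proportional hazard model the hazard is h(t|x) = h0(t) * exp(b'x), so each coefficient b maps to a multiplicative change exp(b) in the distress hazard per unit increase in the covariate. A minimal sketch (the function name is illustrative):

```python
import math

def hazard_ratio(beta):
    """Map a Cox regression coefficient to its hazard ratio exp(beta)."""
    return math.exp(beta)

# A reported hazard ratio of 17.751 for discretionary accruals implies a
# coefficient of ln(17.751): each unit increase in the accruals index
# multiplies the distress hazard by 17.751.
beta = math.log(17.751)
print(round(hazard_ratio(beta), 3))  # 17.751

# A coefficient of zero leaves the hazard unchanged (ratio of 1)
print(hazard_ratio(0.0))  # 1.0
```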
The prediction of corporate financial distress employs a variety of predictive methodologies and models, including Multivariate Discriminant Analysis, Probit and Logit analysis, and Artificial Neural Networks. While often effective in predicting ultimate corporate failure, these approaches provide little analysis of, or insight into, the dynamics of corporate failure. As static predictors, their variables assume a steady state of progression toward financial distress, and omit 'time to failure' as an integral factor in corporate distress analysis. By contrast, this paper examines accounting ratio-based, earnings management and corporate governance determinants of the probability of financial distress for Taiwanese publicly-listed companies during the 1996 to 2006 period, with a view to improving the current understanding of company failure risk. Failure determinants are revealed from estimates based on a cross-section of 625 quoted firms, followed by an assessment of predictive performance based on a series of time-to-failure-specific survival functions, as is typical in the literature. Within the traditional cross-sectional data studies framework, a more complete model of failure risk is developed by adding to a set of traditional financial statement-based inputs the earnings management index (discretionary accruals) and two one-year-lagged variables that capture accounting scandal risk: unanticipated changes in the pledging ratio and in the director and supervisor shareholdings. Survival analysis is a dynamic technique that estimates the survival probability of a distressed company up to a specified time based on a selected set of indicator variables. We use the Cox proportional hazard form to assess the usefulness of traditional financial ratios, discretionary accruals and corporate governance variables as predictors of the probability of company endurance over a given time. A sample of publicly-listed Taiwanese companies is examined over the 1996 to 2006 period. 
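The survival function S(t) that such an analysis estimates, the probability that a firm remains healthy past time t given censored observations, can be illustrated with the nonparametric Kaplan-Meier product-limit estimator. The paper itself uses the semi-parametric Cox form; this simpler stand-in, with invented firm data, is only meant to show how survival probabilities are computed from distress and censoring times.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of the survival function S(t).

    times  : observation time for each firm (e.g., years tracked)
    events : 1 if the firm entered financial distress at that time,
             0 if it was still healthy when observation ended (censored)
    Returns [(t, S(t))] at each distinct distress time.
    """
    distress_times = sorted({t for t, e in zip(times, events) if e == 1})
    surv = 1.0
    curve = []
    for t in distress_times:
        at_risk = sum(1 for tt in times if tt >= t)
        distressed = sum(1 for tt, e in zip(times, events) if tt == t and e == 1)
        surv *= 1 - distressed / at_risk
        curve.append((t, surv))
    return curve

# Hypothetical data: five firms, three distress events, two censored
times = [1, 2, 2, 3, 4]
events = [1, 1, 0, 1, 0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))  # prints: 1 0.8 / 2 0.6 / 3 0.3
```

The Cox model refines this by letting covariates (accruals, pledge ratio, and so on) shift each firm's hazard multiplicatively around a shared baseline.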
Comparison of survival probabilities to a specified time, calculated from the model, with actual corporate survival times indicates that the selected financial ratios, discretionary accruals and corporate governance variables are efficient estimators of corporate endurance and effective in detecting financial distress. Although the findings of the present study may not be directly comparable to evidence from prior research in terms of individual accounting ratio significance and overall predictive accuracy, owing to differing data sets and model specifications, the results are intuitively appealing. First, the results affirm the important explanatory role of liquidity, activity, and profitability in the company failure process. Second, the findings for failure probability appear to demonstrate that shocks from unanticipated changes in the pledging ratio and in the shareholdings of directors and supervisors may matter as much as the underlying changes in firm-specific characteristics of liquidity, activity, and profitability. Our results speak to the important role of earnings management in financial distress and failure, while highlighting that changes in discretionary accruals should be an important ingredient of possible extensions of company failure prediction models. The paper is divided into six sections. The next section reviews extant literature. Section 3 presents our model classification. In Section 4, the data are described, the dependent variable is constructed, and the methodology is offered. Section 4.1 describes an examination of a sample of Taiwan-quoted firms from 1996-2006. Section 4.2 describes explanatory variables and Section 4.3 offers an empirical model. The estimation results are presented in Section 5, and Section 6 offers our conclusions. The literature on bankruptcy prediction is not new. 
Beaver’s 1966 study is considered the pioneering work on bankruptcy-prediction models and there is a voluminous literature on company failure dating back to Beaver (1966). Much literature argues that, at the firm level, company failure can be explained by economic inefficiency, debt financing and management mistakes. However, as extant finance literature argues, debt performs an important function in contingent control allocations (Aghion and Bolton, 1992), enabling involvement in complex financial contracts by creditors to take over firm control once default occurs, and resolve distress. Viewed in this light, debt is an invaluable controlling device, while default and bankruptcy serve as a particular kind of catalyst for restructuring claims. Consequently, financial distress may not necessarily entail exit from the industry and welfare loss.
An Automatic Hyperlink Generation Approach for Content Management
Dr. Jihong Zeng, New York Institute of Technology, Old Westbury, NY
This study develops a new approach to automatically generating hypertext links using keyword extraction, off-the-shelf database software, and proximity measuring techniques. Two approaches were developed, differing in whether the keywords were manually generated by a human author or mechanically extracted by an automatic process. In both prototypes, different types of intra-document and inter-document links were generated by using Salton et al.'s Vector Space Model based on occurrences of keywords. The first prototype was based on keywords that were produced by the document authors. A preliminary evaluation shows that this prototype system is comprehensive and sufficiently useful to help people with information searching and browsing tasks. A second prototype was developed to explore the possibility of generating hyperlinks through automatically extracted keywords. This prototype used the Oracle Text gist/theme generation tool plus a sliding window approach to extract keywords. Candidate links were generated using the same linking method identified in the first prototype. Although the automatically generated keywords were different from the keywords identified by the authors, there was a significant overlap between the sets of links generated by the two prototypes. This study also develops a new concept of keyword-based links in a content-based, one-to-many linking environment, and develops a new approach to automatically extracting keywords. The analysis indicates that link generation based on keywords holds promise for further exploration and needs a thorough evaluation. The traditional manual method of generating hyperlinks is impractical for large collections of documents because manual link generation has a number of limits. 
However, automatic link generation is a challenging task in that it involves advanced techniques, and only limited success has been achieved (Bernstein, 1990; Allan, 1995; Agosti et al., 1996; Kellogg and Subhas, 1996; Cleary and Bareiss, 1996; Green, 1998; Witten et al., 1999; Lempinen, 2000; Liu et al., 2004; Cerbah, 2004). There would be considerable potential benefits, especially in terms of reducing the cost of building a hypertext collection, to having a hypertext system that is capable of creating links automatically. An automatic system would be especially useful when adding documents to a large collection of inter-linked documents. While automatic methods might not produce links as good as those a skilled human editor would produce, they could be used to suggest links to an author/editor, who could decide whether or not to include each link. Therefore, even if an automatic system is not perfect, it might be useful as an aid to authors/editors. This preliminary study explores the applicability of using keywords in documents to automatically generate hyperlinks for large collections of documents. Central to our system is a belief that, because keywords are succinct descriptions of important topics and characterize document content, they can make good hypertext link anchors. The keyphrase linking system (Witten et al., 1999) implemented by the New Zealand Digital Library (NZDL) is one example of link generation based on automatic keyphrase extraction. Our second belief is that generating links based on similarity values between paragraphs in the document collection is a useful approach to help people with information searching and browsing tasks. In this study, we sought to develop a useful approach to generating both intra-document links (i.e., links within the same document) and inter-document links (i.e., links between documents) based on keywords in the documents. We first examined the proximity between manually created hyperlinks and keywords in the documents. 
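The paragraph-similarity linking idea can be sketched with Salton-style vector space machinery: represent each text unit as a vector of keyword occurrence counts, and propose a candidate link when the cosine similarity between two units clears a threshold. In this sketch the keywords, the two sample paragraphs, and the 0.5 threshold are all invented for illustration; the prototypes' actual parameters are not given in the text.

```python
import math

def keyword_vector(text, keywords):
    """Occurrence counts of each keyword in a text unit (Vector Space Model)."""
    words = text.lower().split()
    return [words.count(k) for k in keywords]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

keywords = ["hypertext", "link", "keyword"]
p1 = "a hypertext link connects documents and a keyword anchors the link"
p2 = "keyword extraction finds each keyword used as a hypertext anchor"
sim = cosine_similarity(keyword_vector(p1, keywords), keyword_vector(p2, keywords))

THRESHOLD = 0.5  # illustrative cut-off for proposing a candidate link
print(round(sim, 3), sim > THRESHOLD)  # 0.548 True
```

An author/editor could then review each proposed pair, matching the suggest-and-confirm workflow described above.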
We then developed two prototype systems. In the first prototype, we developed a method that used human-created keywords to generate hyperlinks and examined whether these hyperlinks would be useful to users of the documents. In the second prototype, we generated hyperlinks through keywords that had been automatically extracted and examined how good and useful these links were compared to those generated in the first prototype. The two prototypes thus differ in whether the keywords were manually generated by a human author or mechanically extracted by an automatic process. To test our beliefs, and as background for developing the automatic system, a content analysis of a representative web site was performed to examine the physical proximity, or closeness, between the keywords and the hyperlinks in a human-generated set of hypertext documents. Our objective was to gain some insight into how important and useful (to authors) the keywords are in generating hyperlinks in practice. One piece of evidence is the physical closeness of the keywords and the hyperlinks in the document. A collection of 33 Web documents from a university research and technology center (1) Web site was chosen as the subject of our content analysis. This Web site is believed to be a good sample because many of the research projects at the center involve the Web and its use in information dissemination. Each author was invited to identify four to eight keywords for the documents he or she authored. A total of 176 keywords/phrases were identified. We computed the number of keyword-based links in the documents, which we defined to be the sum of 1) the number of links that originate in the keywords, 2) the number of links that originate from the sentence containing the keywords, but not coincident with the keyword, and 3) the number of links that originate from the paragraph containing the keywords, but not from the sentence containing the keywords. 
For each document, we computed two measures of association between keywords and links. The average percentage of keyword-based links is the ratio of the number of keyword-based links to the total number of links in the document. The higher this percentage, the stronger the association between the keywords and links. Another important indicator we examined is the average number of links per occurrence of the keyword, which tells us how often a link is constructed for every occurrence of the keyword. It is the ratio of the total number of keyword-based links to the total frequency of occurrences of the keyword. The larger this number, the more directly keywords are related to links.
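Both association measures are simple ratios and can be made concrete with a short sketch; the counts used here (12 keyword-based links out of 20 total, and 30 keyword occurrences) are hypothetical, not figures from the study.

```python
def pct_keyword_based_links(keyword_based_links, total_links):
    """Share of a document's links that are keyword-based."""
    return keyword_based_links / total_links if total_links else 0.0

def links_per_keyword_occurrence(keyword_based_links, keyword_occurrences):
    """Average number of links constructed per occurrence of a keyword."""
    return keyword_based_links / keyword_occurrences if keyword_occurrences else 0.0

# Hypothetical document: 12 of its 20 links are keyword-based,
# and its keywords occur 30 times in total
print(pct_keyword_based_links(12, 20))       # 0.6
print(links_per_keyword_occurrence(12, 30))  # 0.4
```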
A Management Policy for Taiwan’s Water Treatment Plant Using Toxic Chemical Compound
Yao-sheng Hsu, National Cheng Kung University, Tainan, Taiwan
Dr. Su-Chao Chang, National Cheng Kung University, Tainan, Taiwan
A risk characteristic analysis of a chlorination disinfection system helps identify potential hazards and guide the modification management of water treatment plants (WTPs) that use toxic materials. This research discusses a policy under which such risk analysis can be applied and risk management improved in a WTP using toxic materials. The results show that the initial risk frequency at the WTP was fairly high; after modification of the chlorination disinfection system, the risk frequency decreased and safety efficiency increased. The analysis of serious hazards to humans at a WTP can serve as a standard model for the use and management of toxic materials. A policy goal of water purification is to treat natural (raw) water so that it meets the requirements of potable drinking water for human use (Gibbons and Laha, 1999). However, frequent changes in raw (natural) water quality cause deviations in drinking water quality at water treatment plants (WTPs). Worse, water pipes with rough inner linings generally facilitate the growth of microorganisms, and bacteria-mediated corrosion has been frequently reported (Holden et al., 1995). The harmful effect or health risk of pathogens is generally considered higher than that of chemical compounds (Craun, 1993; Downs, 1999), and pathogens and bacteria have been assessed to be detrimental to human health (Hunter et al., 2002; Ashbolt, 2004). Therefore, in order to avoid the occurrence of water-borne diseases caused by pathogens and to ensure safe drinking water quality, the use of disinfectants to destroy pathogens is accepted practice in WTPs. At Taiwan's WTPs, chlorine (Cl) chemical compounds are used in pre-chlorination to oxidize pollutants (raw water), in post-chlorination for sterilization (clean water), and in re-chlorination to maintain residual chlorine (drinking water), so as to ensure drinking water quality. 
Chlorine generally serves as a disinfectant to oxidize organic matter, pathogens and pollutants in WTPs; its toxicity is listed under "health hazards" in the Material Safety Data Sheet (MSDS) of Taiwan, and the concentration regarded as Immediately Dangerous to Life or Health (IDLH) is 30 ppm. According to the United States National Fire Protection Association, chlorine chemical compounds are on the third level, that is, the human threshold limit value is 0.5 ppm (Kirchsteiger, 2004), and an operator in the workplace needs to protect himself by wearing protective gloves and masks during routine management. Nevertheless, although disinfectants are generally toxic chemical compounds, few studies on the management policy for hazardous materials at WTPs that use toxic chemical compounds (e.g., chlorine disinfection systems) have been reported in the literature, and such assessment of Taiwan's state-operated WTPs has long been neglected by the government. Failure of a chlorination disinfection system at a WTP may not only cause an insufficient disinfectant dosage (chlorine residual) and unsafe drinking water quality; the dispersal of the toxic chemical compound (chlorine gas) into the atmosphere could also have serious, immediate impacts on the local environment and nearby residents. There is thus a need to guard against events arising from improper storage and application of chlorine chemical compounds in WTPs. Accordingly, a method is required that can help policy makers assess the external effects of hazardous materials (e.g., chlorine chemical compounds) used at WTPs, and that may also be useful in both prioritizing and assessing the safety benefits for human health and life. 
Two WTPs of the Sixth Branch, Taiwan Water Corporation (Nan-Hua WTP in 2003 and Wo-San-Tou WTP in 2004) were selected to examine the policy demands of hazard management for toxic chemical compounds used in the area close to the chlorination disinfection system. At the target WTPs, the chlorine disinfection systems are automatically operated for pre-chlorination (raw water) and post-chlorination (clean water). Auto-detectors for chlorine are installed to transmit concentration information and thereby control each WTP's chlorination disinfection system; the average daily chlorine demand is approximately 1,000 kilograms at Nan-Hua WTP and 50 kilograms at Wo-San-Tou WTP. The population near Nan-Hua WTP is approximately 9,600, with a village of 18,000 to the south; the area is sparsely populated in general, except for a high-density settlement of approximately 1,000 people and one school to the south-west, and Nan-Hua WTP lies in a non-tourism area. The population near Wo-San-Tou WTP is approximately 48,300 and becomes concentrated on holidays, since the plant lies in a tourism area. Because Taiwan has a sub-tropical climate, the two target WTPs are very similar in meteorological conditions, except for wind direction. We first analyzed the system using the toxic chemical compound (the chlorine disinfection system) at Nan-Hua WTP (2003) and created a model for quantitative hazard risk analysis. Using that model as a reference, management at Wo-San-Tou WTP (2004) was then improved so that the hazard risk could be decreased. The policy for hazard management at WTPs was developed in two steps: a hazard analysis process, followed by hazard management together with contingency or emergency response.
Risk is the cornerstone of the recent guidelines initiative from the World Health Organization, especially the analysis of bacterial effects (Fewtrell and Bartram, 2001; Davison et al., 2003; Ashbolt et al., 2001). Taiwan's government has long lacked a complete safety-management assessment program for the chlorine disinfection systems used at WTPs. A standardized hazard analysis process helps in understanding the effects and severity of possible events, and in predicting and effectively controlling leaks of toxic materials at hazard targets. QA/QC of the hazard analysis was adopted to transform subjective views of risk into systematic evaluation and management; detailed items will be discussed.
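The abstract does not give the quantitative form of its hazard risk model, but a frequency-times-severity screening index of the kind commonly used in such analyses can be sketched as follows. All event names, frequencies, and severity weights below are illustrative assumptions, not data from the study.

```python
# Hypothetical sketch of a frequency x severity risk screening for a
# chlorination disinfection system. Events, frequencies (events/year),
# and severity weights (1 = minor, 5 = catastrophic) are all invented.

events = {
    "valve leak at chlorine cylinder":  {"freq": 0.20, "severity": 4},
    "pipe joint leak in dosing line":   {"freq": 0.50, "severity": 2},
    "auto-detector failure (no alarm)": {"freq": 0.10, "severity": 5},
}

def risk_index(freq, severity):
    """Simple screening index: expected severity per year."""
    return freq * severity

# Rank events so modification effort targets the highest-risk item first.
ranked = sorted(events.items(),
                key=lambda kv: risk_index(kv[1]["freq"], kv[1]["severity"]),
                reverse=True)

for name, e in ranked:
    print(f"{name}: risk index = {risk_index(e['freq'], e['severity']):.2f}")
```

Lowering either the failure frequency (e.g., by adding redundant detectors) or the severity (e.g., by reducing stored chlorine) lowers the index, which matches the paper's observation that system modification decreased the risk frequency.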
Corporate and Organizational Citizenship: A Case from Turkey
Dr. Muberra Yuksel, Kadir Has University, Istanbul, Turkey
Recently, organizational citizenship behavior such as sharing knowledge and complying with procedures has become increasingly important as a source of competitive advantage. A lively debate is going on across the business world questioning the role of business, the balance of power between organizations, and the social agenda. While encouraging employees to act in line with organizational goals in terms of both results and competencies, organizations emphasize human resources and knowledge management so that integration toward innovative solutions to business problems becomes possible in the globalizing economy. Meanwhile, compliance with the norms of stakeholders concerning issues such as consumer and employee rights and environmental safety in developing countries is often among the top issues of corporate citizenship. Following Matten & Crane, I regard corporate citizenship in a broad sense that emphasizes the role of a corporation in administering individual citizenship rights, which distinguishes it from corporate social responsibility. Such a definition reframes citizenship by acknowledging that the corporation administers certain aspects of citizenship for other constituencies. These include traditional stakeholders, such as employees, customers, or shareholders, along with wider constituencies with no direct transactional relationship to the selected organization. Most studies on organizational citizenship focus either on personality aspects of employees or on organizational culture. This study aims to contribute to the literature by examining this important and diffuse issue empirically, and it paves the way for clarifying the conceptualization of organizational citizenship attitudes and behaviors. Building upon prior studies on citizenship in Turkey, I aim to show empirically how employees perceive organizational citizenship in a developing-country framework.
I will analyze perceptions of organizational citizenship based on the results of a survey of employees in the finance sector in Istanbul, Turkey. My first assumption is that citizenship requires organizations to increase the awareness of their employees as internal customers and of investors as internal stakeholders, and then to spread that awareness to external stakeholders. Consequently, my second assumption is that organizational citizenship precedes corporate citizenship. Thus, an effective human resource management that enables corporate values and priorities and cascades them down is presumed to be a precondition for employees. I expect identification to be high particularly among employees who believe both in the significance of their future role and competencies for their career and in their corporate status within the selected joint-venture bank. I have aimed at analyzing identification mainly from two perspectives: social identity theory and organizational impression management theory. There are generally two sets of psychological dimensions: normative and perceptual. The first concerns social developments and commitments, particularly at work, that provide expectations and limitations; the latter concerns personal cognitive processes and perceptions that help to interpret and organize information, particularly regarding the image of the organization as well as employees' self-concept. Drawing upon social identity theory, I argue that the salience of organizational identification leads to greater commitment and organizational citizenship behavior. In this vein, empirical research in this field of corporate communications may pave the way for a better understanding of employees' attitudes toward citizenship and its impact on their organizational identification as well as their alignment with organizational codes of conduct.
Organizational citizenship may facilitate organizational identification not only because it enhances the perceived organizational identity, but also because it contributes to an affirmative external image of the firm. On the whole, in this paper I argue that organization-oriented behavior is linked to the role organizations play in defining employees' social identities, given that the organization follows the ground rules and codes of conduct that are cascaded down through organizational culture. I aim to explain the intangible features of commitment to work, as well as to organizations, in a particular case study with a focus on social identity and work values. The normative aspects of organizational citizenship, emphasizing compliance with procedures, on the one side, and the perceptual measures concerning achievement, appreciation, and procedural justice, on the other, are expected to be the main determinants of organizational belonging and citizenship. In the globalization process, corporations have been gaining influence, often without engaging in the improvement of the common good for all stakeholders. Along with the affirmative impact of globalization, the negative effects of foreign investment on various stakeholders, such as environmental damage, financial market instabilities, exploitation of both employees and consumers, cultural hegemony, and erosion of local culture and community, have increased the debates on the accountability of foreign corporations and joint ventures (Matten & Crane, 2004). It is no longer sufficient for corporations to behave well in developed countries while violating basic norms of worker, consumer, environmental, and community protection elsewhere. Corporate Citizenship (CC) functions as a new way of presenting existing concepts of Corporate Social Responsibility (CSR), but applied to a wider and further-reaching set of issues (Kalkan & Yüksel, 2007).
Corporate citizenship focuses on corporate responsibilities; however, CC comprises individual, social, civil, and political citizenship rights and obligations that are conventionally granted and are transnational, most of which are protected by national polities and governments (Matten & Crane, 2005: 166). There are numerous conceptual and operational definitions of corporate citizenship; we have employed Maignan and Ferrell's (2005) definition, which emphasizes stakeholder management theory, in our preliminary evaluations through in-depth interviews with managers in our case study. We emphasized the legal, economic, and ethical responsibilities, along with the most significant discretionary responsibilities, imposed upon financial institutions by stakeholders such as employees, shareholders, business partners, suppliers, customers, competitors, legal and public authorities, and local communities. We checked the documents related to corporate citizenship and then decided that the most promising stakeholders are the traditional internal ones, i.e., the shareholders, partners, and employees.
The Work Adjustment of Taiwanese Expatriates
Dr. Hsin-Kuang Chi, Nanhua University, Taiwan
Dr. Cherng-Ying Chiou, The Overseas Chinese Institute of Technology, Taiwan
The purpose of this study is to explore the work adjustment factors that influence Taiwanese expatriates when they work in the U.S. Questionnaires were mailed to the HR departments of 93 subsidiaries, all selected from the TSEC (Taiwan Stock Exchange Corporation) Market in Taiwan. A total of 186 subjects were asked to respond to the questionnaire. The results indicated that language, support, relationship, role novelty, role ambiguity, role conflict, and role discretion were related to expatriates' adjustment to the work. However, previous experience was not related to work adjustment. Family support and satisfactory work adjustment were related to intent to stay in the overseas assignment. Equally important, support, role conflict, and role ambiguity were the most influential factors in expatriate work adjustment. It is important to develop and retain expatriates who possess global knowledge and experience in international business. Organizations have used several methods to help expatriates acquire global knowledge and experience. One of these methods is to have expatriates live and work in multicultural groups whose members have diverse cultural backgrounds (Adler, 1984). However, the failure rate for expatriate transfers commonly fell in the 20% - 40% range because of poor performance or the inability of the employee or the family to adjust effectively to the foreign work environment (Tung, 1981; Black, 1988; Mendenhall & Oddou, 1985). Moreover, there are few key studies in Taiwan on the overseas adaptation of Taiwanese expatriates working at Taiwanese subsidiary companies in the U.S. Therefore, understanding the determinants of adjustment might help organizations find a way to reduce the failure of international assignments by improving the adjustment process for expatriates. The study explores the interplay of individual factors, social factors, and work factors as they affect Taiwanese expatriates' work adjustment in the U.S.
The study of expatriate adjustment has contributed to business understanding and management of the problems contributing to expatriate failure, and to increasing management's knowledge of practices and of employee behaviors in different countries. The results of this study provide information for the expatriate and promote the corporate competitiveness of Taiwan. Work adjustment is the degree of fit between the expatriate and the work environment, both socio-cultural and work-related (Aycan, 1997). Work adjustment is marked by both reduced conflict and increased effectiveness in working. However, expatriate failures in foreign work assignments are caused by the inability to adjust to foreign social relationships and working conditions. Expatriate work adjustment is identified in the social support and work domains. Social support refers to an expatriate's progress in becoming fully effective in the society and his/her ability to handle problems in establishing relationships (Aycan, 1997). Work adjustment includes the demonstration of behaviors that result in effective accomplishment of an expatriate's required tasks, and the expression of positive attitudes toward the new work role (Aycan, 1997). These adjustments emphasize individual factors, social support factors, and work factors. Individual factors have been found to have substantial criterion-related validity for predicting job performance in its various dimensions (Church, 1982). In 1991, Black et al. examined the perceived importance of personal and situational variables for overseas adjustment and success. Family situation: Birdseye and Hill (1995) pointed out that families were the second set of individual-related elements in expatriate overseas adjustment. Chi and Yeh (2006) found that family and family support are important and positive factors in an expatriate's overseas assignment. A spouse's or family's inability to adjust to the new environment affects the expatriate's international assignment.
Spouse/family adjustment has been found to be significant to expatriate adjustment (Black et al., 1991; Mendenhall & Oddou, 1985; Church, 1982; Black & Stephens, 1989). Host language: Knowledge of the language of the host country is vital to success in living and working in that country (Ashamalla, 1998). Knowing the host language will also help an expatriate feel less isolated and build the kind of teamwork needed to succeed overseas (Dolainski, 1997). Host language skills reduce misunderstandings and miscommunications and help in understanding the world perspective of the people with whom one is working and living (Ashamalla, 1998). Previous experience: Previous knowledge of the host culture and prior experience are considered another important individual factor that leads to successful adjustment (Black, 1988; Tung, 1988). Caligiuri et al. (2001) stated that the more expatriates know about the host culture, the more accurate their expectations and the better their adjustment to the host country. Previous foreign experience and work experience are positively related to success in a foreign assignment (Black, 1988). Black (1988) proposed that previous work experience could provide expatriates with information about the work transition, thereby reducing uncertainty and increasing predictability, which results in an increase in the individual's familiarity with the transition. Studies clearly demonstrate that previous work experience and knowledge are a great source of support in work performance and adjustment.
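Relationships of the kind the abstract reports ("support ... related to expatriates' adjustment") are typically screened with a correlation coefficient over questionnaire scores. A minimal sketch, using invented Likert-scale responses rather than the study's actual data:

```python
# Hypothetical sketch: Pearson correlation between a predictor score
# (e.g., perceived support, 1-5 Likert scale) and a work-adjustment
# score. All responses below are invented for illustration.
import math

support    = [4, 5, 3, 4, 2, 5, 3, 4, 2, 1]   # predictor scores
adjustment = [4, 5, 3, 4, 2, 4, 3, 5, 2, 2]   # work-adjustment scores

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(support, adjustment)
print(f"r = {r:.2f}")  # a strong positive association in this toy sample
```

A value of r near +1 would correspond to the paper's finding that support is related to work adjustment; real studies would of course also test significance and use validated multi-item scales.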
Cyclical Cooperation and Non-cooperation in an Alliance
Hsien-Hao Ting, National Cheng Kung University and Shih Chien University, Taiwan
Len-Kuo Hu, National Chengchi University, Taiwan
Shaung-Shii Chuang, National Cheng Kung University, Taiwan
This study attempts to provide a model explaining the dynamic behavior of an individual's cooperativeness and the evolution of an alliance based on the implications of prospect theory. By endogenizing an individual's willingness to cooperate through its direct impact on his utility function, and by incorporating the factors of cooperation attitude and the individual's output share into a Ricardian type of production function, the model is able to describe the cyclical fluctuation of the individual's willingness to cooperate. This study concludes that, except under perfect alignment of the initial cooperativeness among its constituents, the dynamics of an organization's cooperation cycle will be very irregular. Gradually widening and divergent cooperation attitudes among its constituents will eventually lead to the collapse of the alliance. Throughout history, an individual's free choice and the government's central authority have gone hand in hand. Nowadays bottom-up mechanisms like democracy or capitalism seem to transcend top-down totalitarianism or communism. However, these two mechanisms will alternate if the evolution of our economic system is examined over a longer span of time. As Douglass North argued in his recent book (2005), the key to human evolutionary change is the intentionality of the players; the alternation of these mechanisms, therefore, is for the most part a deliberate process shaped by the perceptions of individuals about the consequences of their actions. Our study basically follows North's analysis and ascribes the above institutional change to the interaction of two opposing forces that dictate an individual's daily behavior: one favors the individual's autonomy and absolute freedom; the other favors gregariousness and is prone to yielding to central governance. This paper examines the deeper determinants of how these forces evolve and how economies change.
A firm would behave like a representative individual if we could ignore the problem of aggregation. Henceforth, we will treat the issue of inter-firm relationships as one of inter-personal relationships, and focus on a general theory of organization that can generate the cyclical consequence. Different alliance structures provide different incentives to the individuals in the alliance. The bottom-up system is primarily driven by an individual's self-interest. The alliance thus formed provides a platform to realize the benefits of cooperation based on a non-cooperative Nash solution concept. On the other hand, the individuals in the top-down system give up more of their own freedom of choice in return for a greater public benefit. They allocate more of their utility toward the public benefit rather than their own, so the alliance is constructed on a more cooperative basis. In general, the proportion of an individual's utility placed on the organization as a whole (altruism) versus the proportion placed on the individual himself should be dynamically determined in the system. Modern society has thrived on market prowess. The new upper-class generation has amassed great wealth through the corporate ladder. Giant multinational companies are leviathans navigating the flow of capital and global resources. They are the owners not only of physical capital but also of knowledge capital. Under the pretext of democracy and the free market, their self-interest can be unflinchingly stretched and extended to every corner of the world market, grabbing most of the fruits of production. This trend of development has left no other nexus between man and man than naked self-interest, than callous 'cash payment'. It has resolved personal worth into exchange value and reduced the family relation to a mere money relation.
This study attempts to provide a model explaining the dynamic behavior of an individual's cooperativeness and the evolution of an economic system based on the implications of the prospect theory developed by Kahneman and Tversky (1979). According to prospect theory, the objective function that a representative individual seeks to maximize is defined over gains or losses relative to some reference point. If the individual has accomplished more gain than loss in the past, his reference point will be raised by a Bayesian learning rule, thereby making further gains less likely and planting the seed of withdrawal from his initial economic choice (e.g., backing out of a regional agreement). Analogously, when the individual suffers more loss than gain and refrains from making an initial choice, his reference point will become lower and lower and facilitate gains from his further choices. The formation of the reference point in determining the gain or loss from advancing a relation with a counterpart is critical to overturning an initial decision and generating a cycle. According to the experience of our learning process, our reference point is closely related to our past history and the position of our peers. As for the source of value from which potential gain or loss might be derived, it is defined by the content of each issue. For instance, if we would like to evaluate the consequence of joining a regional agreement, either the factor ratio difference (as in the Heckscher-Ohlin model, 1933) or the diverse relative comparative advantage (as in the Ricardo model) among the member firms in the alliance is the driving force that causes the gain or loss of the value function. This study will reexamine human decision making by extending the implications of prospect theory in several dimensions. First of all, we categorize two different forces that drive the formation of our daily decisions, that is, the conforming (or cooperative) force vs.
the centrifugal (or self-loving) force. The former helps us adapt to the outer environment by conforming our decisions to the majority of society. The reference points or benchmarks for our decisions are imprints on our minds that are cultivated gradually through our education, experience, culture, and history. To secure our survivorship we tend to seek a mental and physical safe harbor by abiding by the majority rule. The latter (centrifugal or self-loving) force accounts for the formation of self-identification. By purposefully distinguishing ourselves from others we are able to assert our own identity and pride. Under the patronage of property and human rights an individual's character can be nurtured and developed. The resulting idiosyncrasy of our society contributes to innovation and the continuation of our growth.
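The reference-dependent value function and the reference-point adaptation the model builds on can be illustrated numerically. The functional form below is the standard Kahneman-Tversky specification with its commonly cited parameter estimates; the simple exponential-smoothing update of the reference point is an illustrative assumption, not the learning rule actually estimated in the paper.

```python
# Sketch of a prospect-theory value function with an adaptive reference
# point. ALPHA, BETA, LAMBDA follow the commonly cited Tversky-Kahneman
# estimates; the smoothing update rule and payoffs are illustrative.

ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def value(outcome, reference):
    """Perceived value of an outcome relative to the reference point."""
    x = outcome - reference
    if x >= 0:
        return x ** ALPHA               # diminishing sensitivity to gains
    return -LAMBDA * (-x) ** BETA       # losses loom larger than gains

def update_reference(reference, outcome, weight=0.3):
    """Bayesian-flavored adaptation: the reference drifts toward
    realized outcomes, so repeated gains raise the benchmark."""
    return (1 - weight) * reference + weight * outcome

ref = 0.0
for payoff in [10, 10, 10, -5]:
    v = value(payoff, ref)
    print(f"reference={ref:5.2f}  payoff={payoff:3}  perceived value={v:6.2f}")
    ref = update_reference(ref, payoff)
```

Running the loop shows the mechanism behind the cooperation cycle: as repeated gains pull the reference point up, the same payoff yields a shrinking perceived value, sowing the seed of withdrawal from the initial choice.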
Characteristics of Power Interruptions and Its Impacts on Tourist Hotels in Kenya
Nehemiah Kiprutto, Moi University, Eldoret, Kenya
Power outages can be characterized along a number of dimensions including frequency, duration, timing, warning time, and interruption depth (Adenikinju, 2005). Interruptions of electricity supply in Kenya are frequent, and have become normal and generally accepted. Consequently, operations in many industries, including tourism, have been affected negatively. Tourism in Kenya is concentrated in Masai land, the Kenyan Coast, and Nairobi. Hence, this study aimed at determining the frequency, duration, and warning time of scheduled power interruptions in two of these regions by analyzing interruption-of-supply announcements posted by the Kenya Power and Lighting Company in the Daily Nation newspaper and on the company's website. Through a questionnaire, the study also aimed to find out how tourist hotels behave in their attempts to mitigate outage losses. It was found that Nairobi and the Kenyan Coast experienced a high frequency of scheduled power interruptions lasting over eight hours. Due to these frequent outages, tourist hotels have had to acquire generators and photovoltaic (PV) panels to supplement the grid. As a result, they incur extra expenses in running and maintaining their generators, which in turn reduces their profit margins. Nevertheless, bookings in these hotels have not been affected by the outages. Infrastructure plays a positive role in economic development. It represents an intermediate input to production, and thus changes in infrastructure quality and quantity affect the profitability of production, and invariably the levels of income, output, and employment. Tourism infrastructure provides an important foundation for tourism development. Infrastructure increases the efficiency of privately producing and distributing tourism services. Electricity is an important infrastructure that has become central to achieving the interrelated economic, social, and environmental aims of human development.
The important role of electricity is notable. Even at the lowest economic levels, just above subsistence, radios and torches can make a significant improvement in living standards. The amounts of electricity involved here are tiny, but absolutely essential (Foley, 1995). Provision of electricity in many developing countries is the responsibility of the government. Private investors have been impeded from investing in electricity generation and supply by the high set-up cost, among other reasons. As with many poorly performing corporations in the hands of the public sector in the developing world, failures in the supply of electricity by the Kenya Power and Lighting Company (KPLC) do not take residents by surprise. Frequent scheduled and unscheduled power interruptions are a daily occurrence in the country. One or more parts of Nairobi and the Kenyan Coast experience some form of power interruption on an almost daily basis. Consequently, many services have been interrupted, causing great inconvenience that results in poor sales, and hence smaller profit margins, in various industries including tourism. Tourism is now Kenya's fastest growing industry and has become the second largest foreign exchange earner after tea, generating $803 million in 2006, up from $699 million in 2005 (UNWTO, 2007). There were 10,600 tourist accommodation establishments in 2003, with the majority located in Nairobi and on the Kenyan Coast. Nairobi is one of Africa's largest cities. It has a national museum, a cultural center, a national park, excellent restaurants, and shopping. The Kenyan Coast, on the other hand, comprises the North Coast & Mombasa and the South Coast. Historical sites like Fort Jesus, beaches, cultural parks, and snake parks are some of the attractions at the Kenyan Coast. Tourist accommodation establishments in these two regions range from small villas to 5-star hotels. They include motels, beach hotels, and tourist resorts.
Tourist hotels offer catering and accommodation services, have recreational facilities like swimming pools, and in some cases provide both transport and communication services. Most of these services need electricity, including cooking, food refrigeration, lighting, water heating, and so on. Therefore, frequent electricity interruptions have a direct effect on the operations of a tourist hotel, and are therefore unwelcome. Unreliability of power supply can be a key element of customer dissatisfaction. Interruptions, particularly at night, interfere with tourists' activities in the hotel, like indoor games and dancing. When such disruptions occur, the quality of their experience is affected. Thus, it becomes one of the numerous small encounters that form the overall impressions which determine the type of image tourists develop of Kenya. Destination image is not only the perception of individual destination attributes but also the holistic impression made by the destination (Echtner and Ritchie, 1991). Thus, frequent power interruptions are likely to create a negative image of Kenya as a tourist destination. To mitigate the effects of power outages, many hotels have resorted to expensive stand-by diesel generators to guarantee continuity of services for tourists. Although it is known that Kenya experiences frequent power interruptions, the actual frequency and duration have not been reported on an empirical basis. Thus, the objective of this study was to determine the frequency, duration, and warning time of scheduled power interruptions in the City of Nairobi and on the Kenyan Coast. These two regions have the majority of tourist hotels and receive 86% of all tourists in Kenya (GOK, 2004). The study also attempted to find out the measures hotels take whenever they experience a power interruption, to determine the general impacts of interruptions on hotels, and to establish their perception of the effect of interruptions on their businesses.
Thus, the important questions that this study sought to answer included: How many times a week do Nairobi and the Kenyan Coast experience scheduled interruptions of electricity supply? How long do these interruptions last? How much notice is given by KPLC prior to a scheduled power cut? How do tourist hotels in Nairobi and on the Kenyan Coast respond to these interruptions? The paper begins with a review of previous studies on power outages, followed by the methods employed in the study; the results are then presented and discussed, and conclusions are drawn.
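The three measures the study extracts from KPLC's published announcements (frequency per week, duration, and warning time) reduce to straightforward date arithmetic over the announcement records. A minimal sketch; the records below are invented placeholders, not actual KPLC data.

```python
# Hypothetical sketch: summarizing scheduled-outage announcements into
# outages per week, mean duration (hours), and mean warning time (days).
# All announcement records below are invented for illustration.
from datetime import datetime

announcements = [
    # (region, published, outage start, outage end)
    ("Nairobi", "2007-03-01 08:00", "2007-03-04 09:00", "2007-03-04 17:30"),
    ("Nairobi", "2007-03-03 08:00", "2007-03-06 09:00", "2007-03-06 18:00"),
    ("Coast",   "2007-03-02 08:00", "2007-03-05 10:00", "2007-03-05 16:00"),
]

FMT = "%Y-%m-%d %H:%M"

def summarize(records, region, weeks):
    """Compute the study's three measures for one region."""
    rows = [r for r in records if r[0] == region]
    durations, warnings = [], []
    for _, published, start, end in rows:
        p, s, e = (datetime.strptime(t, FMT) for t in (published, start, end))
        durations.append((e - s).total_seconds() / 3600)   # hours
        warnings.append((s - p).total_seconds() / 86400)   # days
    return {
        "outages_per_week": len(rows) / weeks,
        "mean_duration_h": sum(durations) / len(durations),
        "mean_warning_d": sum(warnings) / len(warnings),
    }

print(summarize(announcements, "Nairobi", weeks=1))
```

With real announcement data the same computation would directly answer the study's first three questions for each region.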
The Synergy of Brand Alliance: A Brand Personality Perspective for BenQ-Siemens
Wei-Lun Chang, Tamkang University, Taipei, Taiwan
Brand management is a significant issue in the competitive business environment; for example, differentiating a brand among the many choices facing consumers. The present paper explores the synergy and effects of brand alliance in the BenQ-Siemens case through a survey. The results reveal consumers' perception and cognition of BenQ-Siemens and the effects of brand personality on brand alliance. The relationships between customers and companies have become dynamic and unpredictable in today's business environment. Companies attempt to build a strong brand image in order to attract customer attention and strengthen these relationships. That is, a brand is not only a company name but an image embedded in customers' minds. Enterprises can attain high market share if they identify the personalities and specialization of their brands. Brand alliance is a special brand management strategy that combines two brands into a single brand with a unified name. Brand alliance is a branding strategy used in a business alliance; it has become increasingly prevalent and is defined as a partnership or long-term relationship that permits partners to meet their goals (Cravens, 1994). For instance, Sony-Ericsson is a successful paradigm of brand alliance through a joint venture. The synergy of brand alliance is unpredictable and potentially powerful. BenQ, one of the top 10 brands in Taiwan, acquired Siemens' mobile telecommunications division in 2005. Although the outcome was a failure for the two companies, the brand image and personality are worth exploring from the customer perspective. The present research explores customers' perceptions of the two brands and of the allied brand through survey analysis. The results demonstrate attitudes toward the brand personality of BenQ-Siemens based on customer perceptions.
Meanwhile, several contributions are identified in this work: (1) providing clues to customers’ perceptions of BenQ-Siemens, (2) offering a first exploration of the brand personality of BenQ-Siemens, and (3) furnishing a roadmap for brand alliance research. The rest of the paper is organized as follows: section 2 briefly defines brand and brand alliance from the literature, section 3 demonstrates the research framework, section 4 provides the data analysis, and a conclusion is furnished in section 5. The AMA (American Marketing Association) defines a brand as “a name, term, symbol, design, or a combination of the above, intended to identify the product or service and distinguish it from competitors”. Furthermore, Aaker (1991) defines a brand as “a specialized name or symbol”. That is, a brand is not only a tool to identify products, their source, or an assurance of quality; it also transmits a message about attributes, functions, and quality to customers. Chernatony and McWilliam (1989) specify the functions of a brand: (1) a tool for identification and differentiation from competitors, (2) an assurance of and commitment to product quality, (3) a way to project an image, and (4) an aid to decision making. Hence, the brand affects the customer’s decision to purchase products or services. Customers simplify their decision-making process through well-organized brand management, and companies earn higher profits and competitiveness. In addition, a brand alliance is a short-term or long-term partnership between two or more brands. Five types of brand alliance are described as follows. Bundled Products: Bundled products are a type of short-term or infrequent alliance. The benefit of this type is increased customer demand stimulated by pricing: for example, buying a bottle of shampoo and getting a bar of soap free, or buying a flight ticket and getting three hotel nights free.
Joint Sales Promotion: A joint sales promotion combines the resources of two or more brands in order to decrease costs and expand sales opportunities; for instance, earning airline mileage by using a credit card. Composite Brand Extension: Composite brand extension follows the principle of conceptual combination from psychology. Customers acquire new product information in the purchasing process through the combination of two or more brands; for example, Slim-Fast chocolate cake mix by Godiva. Ingredient Branding: Ingredient branding attempts to build perception of and preference for a product’s ingredients. One of the brands may play a vital role as a material or component; for instance, IBM laptops with Intel inside and Sony computers with the Dolby digital system. Co-branding: Co-branding indicates that two or more brands combine into a single product. The brands expect the alliance to strengthen brand preference and purchase intention under a new name, either A/B or B/A; for example, Sony-Ericsson and BenQ-Siemens. Desai and Keller (2002) argued that the main advantage of a brand alliance is that a product may be more uniquely and convincingly positioned by virtue of the multiple brands involved, thereby generating more sales and reducing the cost of product introduction. However, an unsatisfactory brand alliance could have negative repercussions for the brands involved. Most of the extant research focuses on how consumers’ attitudes toward the brand alliance and the images of the allied brands interact with each other. Park et al. (1996) compared co-brands to conceptual combinations in psychology and revealed how carefully selected brands could overcome the problem of negatively correlated attributes. Agarwal and Rao (1996) argued that a brand alliance could signal product quality when the loss of reputation (future profit) or the sunk investments were significant enough for the branded allies.
Simonin and Ruth (1998) found that consumers’ attitudes toward a brand alliance could influence subsequent impressions of each partner’s brand, although these effects also depended on other factors, such as product fit or image congruity. The Big Five model, whose lexical approach goes back to Galton (1884), is the best-known framework for measuring personality in psychology; it employs the lexical hypothesis to describe human personality. Allport and Odbert (1936) extended Galton’s approach, collecting 17,953 adjectives describing human personality. Cattell (1943) reduced the number of adjectives from 17,953 to 171. Next, Fiske (1947) used factor analysis to condense these adjectives into five factors of human personality. Finally, Norman (1963) reviewed the literature and, redoing the factor analysis, consolidated the Big Five model. The most widely used version of the Big Five model was refined by McCrae et al. (1986) and Goldberg (1990), with five factors: surgency, agreeableness, dependability, emotional stability, and culture. Hough and Schneider (1996) verified that the Big Five model is a good classification framework for measuring human personality. Borkenau (1992) and Peabody (1987) conducted empirical research on the Big Five model and confirmed the findings of McCrae and Goldberg.
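The reduction from thousands of trait adjectives to five factors rests on factor analysis of inter-adjective correlations. As a hedged illustration (synthetic data, not a real personality inventory), the sketch below plants five latent traits behind twenty adjective ratings and recovers the factor count with the Kaiser eigenvalue-greater-than-one criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
n, per_factor, k = 2000, 4, 5  # respondents, adjectives per factor, latent factors

# Latent trait scores for each respondent (hypothetical data, for illustration only)
traits = rng.normal(size=(n, k))
# Each observed adjective rating loads on exactly one latent trait, plus noise
loading = 0.9
noise_sd = np.sqrt(1 - loading**2)
ratings = np.repeat(traits, per_factor, axis=1) * loading
ratings += rng.normal(scale=noise_sd, size=ratings.shape)

# Factor extraction via eigenvalues of the correlation matrix
# (Kaiser criterion: retain factors with eigenvalue > 1)
corr = np.corrcoef(ratings, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)
retained = int((eigenvalues > 1).sum())  # expected: the five planted traits
```

With strong loadings and low cross-factor correlation, the five block eigenvalues sit near 3.4 and the rest near 0.2, so the criterion cleanly recovers the planted structure.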
Effect Estimation of Workplace Health Promotion Practice of Taiwan High-tech Industry: Using System Dynamics as an Example
Ching-Kuo Wei, Oriental Institute of Technology, Taiwan
Ming-Shu Chen, Far Eastern Memorial Hospital, Taiwan
Corporate practice of Workplace Health Promotion (WHP) can considerably reduce employee absence, employee turnover, and claims for health insurance expenses. Although there are many successful cases overseas, companies in Taiwan still believe that this measure will increase corporate expenditure. Moreover, under the current national insurance system, employers are relieved of much of the burden of medical insurance costs. Thus, companies in Taiwan lack incentives and motives to introduce WHP. However, past studies have indicated that the benefits of implementing WHP include an increase in employees’ available work hours, reduced turnover, reduced medical expenses, and improvements in important performance indicators such as lower personnel expenses and higher productivity. This paper used the financial statements of the leading company in the Taiwan high-tech industry, Taiwan Semiconductor Manufacturing Company, for the most recent five years, together with System Dynamics, to simulate the effect of WHP implementation. The results revealed that if WHP were implemented for 20 years under the high-effect scenario, personnel expenses could be reduced by 2.84% (NT$1,606,079) and productivity increased by 3.76% (NT$449/person-year) per year on average. Under the normal-effect scenario, personnel expenses could be reduced by 1.8% (NT$1,022,451/year) and productivity increased by 2.5% (NT$299/person-year) per year on average. Under the low-effect scenario, personnel expenses could be reduced by 0.77% (NT$442,048) and productivity increased by 1.25% (NT$149/person). In the first and second years of implementation, the effects were low or even negative. However, from the third year on the effect would become positive, and from the sixth year on it would be stable.
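The reported pattern (negative net effect in the first two years, turning positive from the third year and stabilising around the sixth) is the classic behaviour of a first-order stock-and-flow delay in system dynamics. A minimal sketch with hypothetical parameters (not TSMC’s actual figures) reproduces that shape:

```python
# All parameters are illustrative assumptions, not TSMC data
years = 20
program_cost = 1.0      # annual WHP cost (arbitrary currency units)
max_saving = 1.6        # steady-state annual saving in personnel expenses
adoption_delay = 3.0    # years for health behaviours to take hold

health_stock = 0.0      # accumulated workforce-health improvement, 0..1
net_effects = []
for year in range(1, years + 1):
    # First-order delay: health improvement builds toward its ceiling of 1
    health_stock += (1.0 - health_stock) / adoption_delay
    saving = max_saving * health_stock
    net_effects.append(saving - program_cost)
```

Years 1 and 2 show a net loss (the program cost outruns the still-building health stock), year 3 turns positive, and by year 6 the year-to-year change is small, matching the qualitative trajectory described in the abstract.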
Through simulation with the system dynamics tool, this research validated the hypothesis that “by promoting WHP, companies can improve their operational performance.” Domestic companies should strengthen support from executive levels and increase their willingness to implement WHP, so that public health organizations and medical institutions have opportunities to participate in its implementation; proper and feasible WHP promotion measures can thus be established. Corporate implementation of WHP can considerably reduce employee absence, employee turnover, and health insurance expenses. A healthy work environment has a very positive influence on organizational effectiveness. When employees’ morale, work satisfaction, and productivity increase, the organization’s liability for damage to employees’ health, and the related insurance expenses and compensation, are considerably reduced. The workplace is a very important environment that can directly or indirectly influence workers’ physical and mental health, and even the harmony of families and society; workplace health promotion therefore becomes relatively important. The WHO has indicated that the working population makes up about 58% of the world’s population and spends nearly one third of its time at work. This shows the importance of WHP, which extends the “three-level, five-grade” concept of health promotion in public health and strengthens workers’ health. There have been successful WHP models overseas, yet owing to the differences between the medical environments of Taiwan and foreign countries, WHP has not been easy to promote in Taiwan. Only TaiPower Corp., Shihlin Electric and Engineering Corp., Taiwan Semiconductor Manufacturing Company (TSMC), and a few other high-tech companies have implemented similar health management models. The reason is that executives believed that strengthening employees’ health conflicts with the goals of increasing productivity and reducing costs.
Under this belief, they neglected employees’ health. To discuss this issue, this study presented TSMC as an example and proposed a new evaluation concept. It combined theoretical tools from management science and economics, constructed a model matching actual organizational scenarios, resolved errors of past studies on the basis of the simulation results, and thereby promoted WHP as a win-win for both employees and companies. The WHO defines health as a state of physical, psychological, and social well-being, not merely the absence of disease or weakness. Past literature on the determinants of health showed that, among the critical factors for disease and death, lifestyle is the most important; other factors include heredity, environmental pollution, and medical progress. Those studies pointed out that, of the factors influencing human health, lifestyle accounted for 43%, biological factors for 27%, environmental factors for 19%, and medical progress for 11%. With regard to the top ten causes of death in Taiwan and the factors influencing people’s health, lifestyle accounted for 45%, biological factors for 25%, environmental factors for 20%, and medical progress for 10%. In 1975, Pender elaborated the factors influencing an individual’s decisions and actions in disease prevention, and started the wave of “health promotion”. Thereafter, several countries proposed “health promotion” plans according to those factors. In 1986, the WHO held the first World Health Promotion Conference in Ottawa, Canada, and defined “health promotion” as the process of enabling people to strengthen their capacity to control and improve their own health. The US Department of Health and Human Services defined “health promotion” as activity that helps an individual make proper lifestyle choices and thereby reach the goal of preventing chronic disease.
“Health promotion” means applying various educational, organizational, economic, and environmental supports to encourage the public to act responsibly for their health. In the 1970s, European and American countries started to value WHP, but they tended to focus on a single disease or risk factor, or on specific risky behaviors of certain people, while neglecting the health factors of the environment, society, and the whole organization.
Consumer Expectation and Consumer Satisfaction Measurements: A Case Study from India
Dr. Marwan Mohamed Abdeldayem, AL Ghurair University (AGU), Dubai, UAE
Dr. Muhannad Radi Khanfar, AL Ghurair University (AGU), Dubai, UAE
Previous research on consumer satisfaction has revealed that satisfying consumers’ needs and wants is critically important for the success of any business organization. Many prior studies have examined the antecedents of consumer satisfaction (e.g. Cho & Park, 2001; Devaraj, Fan & Kohli, 2002; Bloemer & Kasper, 1995; Jones & Suh, 2000; Szymanski & Henard, 2001; Teng et al., 2006; Teng & Hung, 2007). In this study we present a survey among two-wheeler buyers in India, in which satisfaction is studied in relation to variables such as prior expectation, actual product performance, demography, and confidence. For the purpose of this study, data on buyers’ expectations of the scooter and on disconfirmation (performance scores minus expectation scores) were cross-tabulated both among each other and against other variables (where associations were observed, chi-square tests were also carried out). While many of the findings of this study confirm past observations (mainly the ceiling-floor effect, the expectation effect, and the deleterious effect of involvement), the finding on the unique impact of very high expectations on satisfaction is both new and interesting and has serious implications for business and future research. The concept of consumer satisfaction is crucial in today’s market. This is because of competition in the market, the awareness of consumers, and the entry of multinational companies. The consumer can be satisfied by keeping demand and supply in balance: we must always keep in mind what consumers expect and how well the market (the shopkeeper) performs against those expectations, for this relationship plays a very important role in satisfying the consumer. Consumer satisfaction provides the basis for the marketing concept and has been shown to be a good indicator of future purchase behavior. Consequently, consumer satisfaction is a popular topic in the marketing literature.
Most models of consumer satisfaction maintain that discrepancies between ex ante expectations of a good or service and the product's ex post performance are the best indicators of the satisfaction or quality perceived by the customer (e.g., Oliver 1977, 1980; Parasuraman, Zeithaml, and Berry 1985, 1988; McQuitty et al. 2000). However, there are many alternatives to this approach (e.g., Clemons and Woodruff 1992; Oliver and DeSarbo 1988; Spreng, MacKenzie, and Olshavsky 1996; Westbrook and Reilly 1983), and there is controversy regarding the relationship between consumer satisfaction and service quality. If the differences between the various positions can be set aside, their most obvious similarity is the use of "gaps" models for measurement (McQuitty et al. 2000). Furthermore, consumer satisfaction is of prime importance to business in both the short term and the long term. Short-term benefits include word-of-mouth communication and higher repeat sales, while long-term benefits imply image and sustained market share; since short-term benefits pave the way to long-term benefits, it is understandable that consumer satisfaction surveys are in vogue, both in India and abroad, and companies using them are said to be reaping great benefits. The seventies and eighties in particular saw a spate of satisfaction surveys, although they pertained mostly to products. It was only in the late eighties that a separate tool for measuring satisfaction with services was developed by Parasuraman et al. (1988). Once the instrument was refined in the early nineties, satisfaction surveys soon spread from products to services. While satisfaction surveys are quite popular in Western countries, they are fairly new in India and China; and while service companies have realized the importance of satisfaction surveys, product manufacturers appear more or less oblivious of the development.
In addition, the notion that the degree of disconfirmation felt by a consumer should decrease over time recognizes that consumers learn as they gain experience with a product and should modify their expectations accordingly; otherwise, a consumer would be surprised every time he or she experienced the product again. However, people may not immediately adjust their expectations to match perceptions of the product’s performance. Oliver (1980) finds that expectations can be resistant to change, and Olson and Dover (1979) suggest that "After several disconfirmations, expectations may eventually coincide with post trial beliefs so that further disconfirmations are not possible" (p. 187). The rate at which consumers adjust their expectations to meet perceived product performance can be influenced by the variability of a product's performance, the degree of involvement with the product, the completeness and accuracy of the information that forms expectations, and the precision with which a product's level of performance is recalled. Adjustments to expectations are likely to be swift when the product is easily evaluated, but slow when a product is complicated and has many attributes. The present paper deals with a survey among two-wheeler buyers in India.
Satisfaction is examined vis-à-vis other variables such as prior expectation, demography, actual product performance, and consumer confidence. Consumer satisfaction has been a popular topic in marketing since Cardozo (1965), and the associated literature can be divided into three broad topics (see McQuitty et al., 2000): (1) the relationship between consumer expectations and appraisals of performance (e.g., Anderson 1973; Cardozo 1965); (2) the antecedents of satisfaction (e.g., Oliver 1977, 1980; Tse and Wilton 1988); and (3) the consequences of consumer satisfaction for purchase decisions, sales, and firm profitability (e.g., Anderson, Fornell, and Lehmann 1994; Fornell 1992; LaBarbera and Mazursky 1983). Research relating these and other constructs is explored below.
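The disconfirmation measure and the cross-tabulation with chi-square tests described above can be sketched in a few lines. The ratings below are invented for illustration (they are not the study's survey data), and the chi-square statistic is computed directly from its textbook definition:

```python
from collections import Counter

# Hypothetical 1-5 ratings for ten respondents (illustrative, not survey data)
expectations = [5, 4, 3, 5, 2, 4, 5, 3, 2, 4]
performance  = [3, 4, 4, 2, 3, 5, 3, 4, 3, 3]

# Disconfirmation = performance minus expectation (negative => dissatisfied)
disconfirmation = [p - e for e, p in zip(expectations, performance)]

# Cross-tabulate prior-expectation level against the sign of disconfirmation
def bucket(e, d):
    return ("high expectation" if e >= 4 else "low expectation",
            "satisfied" if d >= 0 else "dissatisfied")

table = Counter(bucket(e, d) for e, d in zip(expectations, disconfirmation))

# Pearson chi-square statistic for the 2x2 table: sum of (O - E)^2 / E
rows = ["high expectation", "low expectation"]
cols = ["satisfied", "dissatisfied"]
n = sum(table.values())
chi2 = 0.0
for r in rows:
    for c in cols:
        observed = table[(r, c)]  # Counter returns 0 for empty cells
        expected = (sum(table[(r, k)] for k in cols) *
                    sum(table[(k, c)] for k in rows)) / n
        chi2 += (observed - expected) ** 2 / expected
```

In this toy table every high-expectation respondent who saw lower performance falls in the dissatisfied cell, which is exactly the ceiling effect the study discusses; the statistic would then be compared with the chi-square critical value for one degree of freedom.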
The Wicker Basket Effect: A Special Case of the Expulsion Effect
José Villacís, Universidad San Pablo-CEU, Madrid, Spain
In a budget deficit situation where public expenditure is greater than taxes (G > T), the Treasury borrows money to pay the margin of expenditure, the budget deficit, by issuing national debt. In the case considered here, the Treasury borrows from the Central Bank, which, in turn, produces money to purchase the national debt. This operation, formally similar to an expansive open market operation, is not monetary policy. Moreover, such monetary disarray inexorably leads to very serious inflation. Although private savings are available to finance investment, they shrink in real terms due to inflation. Owing to this erosion, savings cannot finance planned investment; in other words, private investment is expelled as a result of the wicker basket effect on savings. Financial flows in an economy have an origin, a purpose connected to that origin, and full monetary support: they are money. Any economy that is to grow, or even maintain itself, requires savings, that is, not consuming now in order to consume more in the future. Those savings are used to finance maintenance or replacement investment and also, if possible, to increase the system's production capacity through net investment. Clearly, in both a monetary and a non-monetary economy, savings are required to finance investment. Any activity that diverts savings or causes them to vanish is harmful to the economic system, because it prevents the system from maintaining itself properly and/or from growing. Involvement of the public sector contributes to the annihilation of savings. The simplest example occurs when the state, facing a budget deficit, turns to the private economy (companies and households) to attract its savings and then pays the margin of public expenditure, that is, the budget deficit. We speak of an investment expulsion (crowding-out) effect when the monetary value of the reduction in investment equals the monetary value of the deficit.
Here we examine a particular case in which, given a budget deficit, the money to pay the margin of expenditure is borrowed from the Central Bank instead of from the private sector. The Central Bank finances the deficit by producing money, and therefore private savings remain available to finance investment. In this situation the expulsion effect should not occur, since savings have not been diverted from their destination, which is investment. However, the expulsion effect does occur, owing to causes external to savings that originate in the financing of the deficit. Indeed, the monetary financing of deficits generates very serious inflation that causes savings to vanish by reducing the purchasing power with which they would have financed investment. In an inflationary situation, holding savings is like holding water in a wicker basket; hence the name “wicker basket effect.” When expenditures in the State budget are greater than income from taxes, a deficit occurs, measured as the difference (G − T). At this point, money must be borrowed to pay this difference. The Treasury borrows by issuing and selling national debt, which, in this case, is purchased with money produced by the Central Bank. We suppose that such borrowing is possible and that there is no legal obstacle, as there is for European Union states within the euro system. Provided there are no obstacles, the Central Bank purchases national debt by creating money. This operation is called monetary financing of the deficit because the Central Bank directly finances the deficit by simply producing money. It generates an increase in the monetary base, since the newly produced money is credited to the Treasury's account at the Central Bank. Therefore, with regard to the creation of money, we can state that money can be created either by physically producing it or by means of an entry in the Treasury's account.
One way or the other, the result is the same: an increase in the monetary base, or high-powered money. An increase in the monetary base entails an increase in the money supply through the effect of the bank multiplier on the monetary base. The new money is collected by the Treasury, which gives it to the administration and to the economic actors that form the State; these public sector actors will spend the new money, measured as the difference (G − T). It is important to distinguish the two situations in which the Central Bank purchases national debt. In the first, it purchases old, or second-hand, national debt, that is, national debt from previous years; in this case, the monetary authority is applying an expansive open market monetary policy. In the second, the Central Bank purchases new national debt, corresponding to the period's deficit: it finances that deficit. This is the situation we analyze in this paper, and it is not monetary policy. As we saw in the previous section, there is a difference between monetary policy and the arbitrary creation of money to finance the deficit. Monetary policy reveals itself in several aspects. First, it appears in the calculation of the monetary magnitude to be introduced into the system in order to finance the real and monetary growth of domestic product and income; an estimate of the expected goals, such as the growth rate (%) of gross domestic product, should be calculated in advance. Another aspect of monetary policy is the time-based measurement of the monetary effect: a statistical, historical, and projective calculation should be made of the period between the implementation of a measure and its expected effect. Yet another aspect of monetary policy is the calculation of the amount and intensity of the monetary dose injected into the system.
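The two mechanisms just described (the multiplier expansion of the monetary base, then the inflationary erosion of real savings that produces the wicker basket effect) can be put into a small worked example. All figures are illustrative assumptions, not data from any actual economy:

```python
# Illustrative figures only; not drawn from any actual economy
deficit = 100.0          # G - T, financed by central-bank money creation
reserve_ratio = 0.10     # banks' required reserve ratio
money_multiplier = 1 / reserve_ratio

# The monetary base rises by the deficit; broad money rises by the
# multiplier times that increase
delta_base = deficit
delta_money_supply = delta_base * money_multiplier

# Wicker basket effect: inflation erodes the real value of private savings
savings_nominal = 500.0
inflation = 0.25         # severe inflation triggered by monetising the deficit
savings_real = savings_nominal / (1 + inflation)

# Planned investment can no longer be fully financed: investment is expelled
planned_investment = 450.0
shortfall = planned_investment - savings_real
```

Here a deficit of 100 expands broad money by 1,000, and 25% inflation shrinks 500 of nominal savings to 400 in real terms, leaving planned investment of 450 short by 50: the expulsion occurs even though savings were never lent to the Treasury.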
According to the above paragraphs, the golden rules of monetary policy are cautiousness, regularity, austerity, and, again, cautiousness. None of these aspects is present in the production of money to finance the deficit. First, the creation of money is determined by the arbitrary and irregular imposition of the deficit: money is created in the same proportion as the deficit grows. Because the State is involved, the amounts of money (or deficit) are characterized by the following factors:
A Study of Level of Service on the Departure System of the Taiwan Taoyuan International Airport
Chui-Yen Chen, Yuan-Ze University and Chin-Min Institute of Technology, Taiwan
As the economy changes, service management must change too. Since Taiwan joined the World Trade Organization (WTO) in 2002, airport operations have faced more turbulent and keenly competitive circumstances. With the spread of education and the widespread adoption of ISO 9001 concepts, customers are concentrating more on the service quality of airport operations. As a consequence, airport operators are devoting themselves to improving their service quality in order to enhance their reputation. Raising customer satisfaction may lead to a rise in customer loyalty, and airport operations will therefore retain good customers. The performance evaluation of airport operations is an important issue for the government. In recent years, some research studies have been published on this subject; the dimensions considered and the research methods used, however, are not satisfying. In this study, we use Process Capability Indices (PCI) to develop a methodology for performance evaluation in airport operations. We have selected the departure system of Taiwan Taoyuan International Airport to establish airport operations performance indices. The world economy is growing continuously, and the share of earned value contributed by service businesses is becoming higher and higher; service businesses therefore play a very important role in the world economic system. Thus, scholars have lately tried to study the characteristics of service businesses and their effective management in order to reach a level of service quality that satisfies customers. Owing to the booming economy and increasing national income in recent years, passengers value their time more highly, and fast air transportation has become people's preference. As an island country, Taiwan can connect internationally only by sea and air, and its passenger transportation is mainly by air.
Taiwan occupies a key position in Asia-Pacific air transportation, and we can see the importance of air transportation in the international transportation arena. Air transportation is also very important for local passenger transportation, although the high-speed railroad and highway systems are well developed in the region. In the air transportation system, the airport is the main location for transit as well as for the loading and unloading of passengers and freight. There are four main groups in air transportation operations: 1) passengers, 2) airline companies, 3) airport accredited offices (such as financial services, car rentals and hotels) and 4) airport management authorities. The airport terminal operator offers service to passengers through the airline companies and airport accredited offices; therefore, the service that passengers receive from an airport is crucial for its reputation. Furthermore, because airport income mainly comes from passengers, their appraisal of an airport should be the main evaluation index of airport service levels. How to evaluate the effectiveness of operations in an airport terminal is the most important topic to study if we intend to promote competition among airport terminals. The subjective standard of service quality is highly related to the following factors: 1) simplifying service procedures, 2) reducing handling time, 3) promoting a positive attitude among service staff and 4) reconciling the conflict in efficiency perceptions between customers and service staff. Our study therefore selects the departure system in Terminal 2 of Taiwan Taoyuan International Airport (which has the largest passenger flow) to examine the service achievement indices of each individual unit's facilities, and the overall service achievement index of the airport terminal, using the method of Process Capability Indices (PCI).
We hope to establish an airport service level evaluation model from the results of our study. In recent years there have been many studies of Process Capability Indices. PCI analysis can be used to set the bounds of process control; Process Capability Indices are therefore an effective tool for evaluating process quality. Many scholars have completed investigations in this field during the past few years, including Kane (1986), Chan et al. (1988), Boyles (1991), Pearn et al. (1992), Chen (1995), and Cheng (1994). Juran (1974) first offered the Cp index. He assumed that product quality follows a normal distribution and defined Cp as the ratio of the tolerance width to six times the process standard deviation. Kane (1986), Chou and Owen (1989), and Chou (1990) studied the statistical characteristics of Cp. The Cp index does not reflect shifts of the process mean, which it ignores, so it is not suitable for evaluation when the process mean is far from the midpoint of the specification range; Kane (1986) therefore offered the Cpk index to reflect the position of the process mean and avoid misleading evaluations when the mean approaches a specification limit. Further, Chan et al. (1988) offered the Cpm index, which incorporates a specific target value T into the Process Capability Indices and considers the deviation of the process mean from the target value; this index can be used with different specification limits. Pearn et al. (1992) combined the Cpk and Cpm indices into the Cpmk index, which considers both the distance of the process mean from the midpoint of the specification range and the deviation of the process from the target value. Although product and process quality evaluation methods are already well developed, service quality evaluation indices still need improvement.
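The four indices discussed above have standard textbook definitions, which can be sketched directly; the check-in handling-time numbers in the example are hypothetical, not airport data:

```python
from math import sqrt

def pci(mu, sigma, lsl, usl, target):
    """Textbook process capability indices for a normally distributed process."""
    cp = (usl - lsl) / (6 * sigma)                # Juran: tolerance width vs spread
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)   # Kane (1986): penalises off-centre mean
    tau = sqrt(sigma**2 + (mu - target)**2)       # Chan et al. (1988): deviation from target T
    cpm = (usl - lsl) / (6 * tau)
    cpmk = min(usl - mu, mu - lsl) / (3 * tau)    # Pearn et al. (1992): combines Cpk and Cpm
    return cp, cpk, cpm, cpmk

# Hypothetical check-in handling time (minutes): spec limits 7..13, target 10
cp, cpk, cpm, cpmk = pci(mu=10.5, sigma=1.0, lsl=7.0, usl=13.0, target=10.0)
```

With the mean shifted half a minute above target, Cp stays at 1.0 while Cpk, Cpm, and Cpmk each fall below it, illustrating how the later indices progressively penalise off-centre and off-target processes.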
The International Role of the ECB: Myth or reality?
Elisabeth Paulet, Jean Monnet Chair, ESCEM Groupe, France
Since its foundation in 1999, the ECB has been criticized for its weak position on the international stage. After recalling that its statutes explain this situation, we compare the Fed and the ECB to discuss the differences between the two institutions and to justify their respective roles. In the last step, some hypotheses are formulated regarding the future of the ECB. The aim of the third section is to give some necessary conditions for reaching more efficiency and stability in European financial markets. Both the ECB and the Fed are independent and obtain similar results as regards inflation (see table 1). The main difference is that the ECB concentrates its efforts on fighting inflation, whereas the Fed's targets are more broadly macroeconomic: among its objectives one could mention growth and unemployment (see table 2). To satisfy these goals, the Fed uses the interest rate to attract foreign direct investment, stimulate consumption, and so on. Table 1 provides evidence of this situation: short-term interest rates are more volatile in the USA than in the Euro zone, and long-term interest rates are much higher than those proposed by the ECB. In contrast to the European situation, budgetary policy is very proactive in the United States and the United Kingdom. For the USA, between 2000 and 2004 it represented an impulse of 5 points of GDP. Combined with a flexible monetary policy based strongly on consumer indebtedness (we develop this point in our second section), deficit spending (see table 2) is an important instrument of American economic policy, which led to a growth rate of 3.5% for 2005. Until recently, its level was not at the core of the government's preoccupations; however, just before his retirement, Alan Greenspan began to warn the authorities about the danger of the public debt, which could imply a decrease in growth (more than 3.2% for 2006).
This is essentially due to strong investment supported by the consumption of economic agents. Unfortunately, this level of consumption is possible only through the credit channel, which leads to a phenomenon of over-indebtedness among the American population. Bound by the Maastricht criteria, Europeans are more cautious about their deficits, which prevents them from undertaking a real policy mix. Budgetary policy remains at the discretion of the member states, which are constrained by an upper limit of 3% of Gross Domestic Product (GDP). Monetary policy is the responsibility of the ECB, whose major objectives are the control of inflation and of the exchange rate. Moreover, the beginning of recession in Italy and Germany could explain the gap in growth rates between the USA and the Euro zone. The integration of the Eastern European countries into the European economic framework could reinforce the above argument when growth is measured for Europe as a whole. These divergences of targets could justify the absence of reaction by the ECB as regards the level of interest rates, despite a common basis of independence criteria. Hence the specific organization of the ECB explains its limited role on the international stage. The common currency, the euro, does not in itself account for this absence of international standing. The next step of our argument is then the following: suppose that both institutions focus on the same objectives (that is, the ECB includes macroeconomic targets such as growth and unemployment in its policy). Could the ECB become as reactive as the Fed? Which elements support, or undermine, this proposition? Over the last years, economists have criticized the passivity of the ECB. This section aims to demonstrate that, even if both institutions pursued the same objectives, structural factors would prevent the ECB from being as reactive as the Fed. The first element is the functioning of capital markets, which is very different in the USA and in Europe.
In the United States, the main actors (the NYSE for mature firms and the NASDAQ for companies in expansion) are clearly identified. In Europe, a large number of financial centres still exist. The most important one in terms of capitalisation (see Table 3) is the London Stock Exchange, followed by Euronext, newly formed by the merger of the Amsterdam, Brussels, Paris and Lisbon exchanges with LIFFE. Moreover, the number of listed companies is not comparable to the figure for American firms. Enterprises rely more on bank credit than on equity markets to finance their projects (see Paulet, 2003). Except in the United Kingdom, where financial markets have been highly developed since the nineteenth century, French and German companies finance their projects through bank credit. Even if the beginning of the millennium saw firms turn increasingly towards markets for their financing, the successive speculative bubbles have not confirmed this movement. On the contrary, in countries where universal banks are dominant (such as Germany), banks have recovered their place. For the other partners, the structure of industry, in which small and medium-sized enterprises (SMEs) represent 80% of all companies, could justify the choice in favour of banks for investment projects. In the future, one could object that this argument may be subject to evolution as Euronext merges with the NYSE. Would this cooperation induce a change in the financing habits of European enterprises? Up to now, no evidence supports this argument. The fact that most of them are SMEs suggests that a structural change is improbable. Moreover, cultural factors regarding the funding of their investments argue against this eventuality.