What Is Management
Dr. Sidharth S. Raju
Associate Professor & Head of Department in Faculty of Hospitality Management at Vivekananda Global University.
What Is Management?
"Management is the distinct process consisting of planning, organizing, activating, and controlling, performed to determine and accomplish the objectives by the use of people and resources." If we give our attention to this definition, we find that Terry perceives management as a process: a systematic way of doing things. Four management activities are included under the process: planning, organizing, activating and controlling.
MANAGEMENT = MANAGE + MEN + T (TACTFULLY)

There are five concepts of management:
1. Functional concept
Management basically is the task of planning, coordinating, motivating and controlling the efforts of others towards the goals and objectives of the organization. According to this concept, management is what a manager does (planning, executing, and controlling).
2. Human relation concept
According to this concept, management is the art of getting things done through and with people in organized groups. It is the art of creating an environment in which people can perform as individuals and yet cooperate towards the attainment of group goals. It is the art of removing blocks to such performance, a way of optimizing efficiency in reaching goals.
3. Leadership and decision making concept
According to this concept, management is the art and science of preparing, organizing and directing human efforts applied to control the forces and utilize the materials of nature for the benefit of man.
4. Productive concept
According to this concept, management may be defined as the art of securing maximum prosperity with minimum effort, so as to secure maximum prosperity and happiness for both employer and employee, and to provide the best services thereby.
5. Integration concept
According to this concept, management is the coordination of human and material resources towards the achievement of organizational objectives as well as the organization of the productive functions essential for achieving stated or accepted economic goal.
The above definitions of management, given by different writers and authorities, convey different senses. The five concepts were developed by authorities emphasizing different aspects. However, many have realized that it is not fair to define management on the basis of any one aspect alone. Management can be taken as a process, managerial or social, engaged in planning, organizing, staffing, directing and controlling, mobilizing group activities to achieve corporate goals.
Levels of management are:
- Top level management.
- Middle level management
- Supervisory level, operational or lower level of management.
1. Top Level Management:
Top level management consists of the Chairman, Board of Directors, Managing Director, General Manager, President, Vice President, Chief Executive Officer (C.E.O.), Chief Financial Officer (C.F.O.), Chief Operating Officer, etc. It includes the group of crucial persons essential for leading and directing the efforts of other people. The managers working at this level have maximum authority.
2. Middle Level Management:
This level of management consists of departmental heads such as purchase department head, sales department head, finance manager, marketing manager, executive officer, plant superintendent, etc. People of this group are responsible for executing the plans and policies made by top level.
3. Supervisory Level/Operational Level:
This level consists of supervisors, superintendents, foremen, sub-departmental executives, clerks, etc. Managers in this group actually carry out the work or perform the activities according to the plans of top and middle level management.
The following facts come to light about the nature and features of planning:
(1) Planning Focuses on Achieving Objectives:
Management begins with planning, and planning begins with the determining of objectives. In the absence of objectives no organisation can even be thought of. Once the objectives are determined, the way to achieve them is decided during planning.
(2) Planning is Primary Function of Management:
Planning is the first important function of management. The other functions, e.g., organising, staffing, directing and controlling come later. In the absence of planning no other function of management can be performed.
This is the base of other functions of management. For example, a company plans to achieve a sales target of 112 crores a year. In order to achieve this target the second function of management, i.e., organising comes into operation.
Under it the purchase, sales, production and financial activities are decided upon. In order to complete these activities, different departments and positions are decided upon. The authority and responsibility of every position are decided upon.
(3) Planning is Pervasive:
Since the job of planning is performed by managers at different levels working in the enterprise, it is appropriate to call it all-pervasive. Planning is an important function of every manager, whether he is the managing director of the organisation or a foreman in a factory.
The time spent by the higher-level managers in the process of planning is comparatively more than the time spent by the middle-level and lower-level managers. It is, therefore, clear that all the managers working in an enterprise have to plan their activities.
For example, the decision to expand business is taken by the higher-level managers. The decision to sell products is taken by the middle-level and lower-level managers.
Importance of planning in management are:
Planning is the first and most important function of management. It is needed at every level of management. In the absence of planning all the business activities of the organisation will become meaningless. The importance of planning has increased all the more in view of the increasing size of organisations and their complexities.
Planning has gained further importance because of the uncertain and constantly changing business environment. In the absence of planning, it may not be impossible, but it is certainly difficult, to anticipate the uncertain events of the future.
The following facts show the advantages of planning and its importance for a business organisation:
(1) Planning Provides Direction:
Under the process of planning the objectives of the organisation are defined in simple and clear words. The obvious outcome of this is that all the employees get a direction and all their efforts are focused towards a particular end. In this way, planning has an important role in the attainment of the objectives of the organisation.
For example, suppose a company fixes a sales target under the process of planning. Now all the departments, e.g., purchase, personnel, finance, etc., will decide their objectives in view of the sales target.
In this way, the attention of all the managers will get focused on the attainment of their objectives. This will make the achievement of the sales target a certainty. Thus, in the absence of objectives an organisation becomes directionless, and the objectives are laid down under planning.
(2) Planning Reduces Risks of Uncertainty:
Planning is always done for future and future is uncertain. With the help of planning possible changes in future are anticipated and various activities are planned in the best possible way. In this way, the risk of future uncertainties can be minimised.
For example, in order to fix a sales target a survey can be undertaken to find out the number of new companies likely to enter the market. By keeping these facts in mind and planning the future activities, the possible difficulties can be avoided.
(3) Planning Reduces Overlapping and Wasteful Activities:
Under planning, future activities are planned in order to achieve objectives. Consequently, the problems of when, where, what and why are almost decided. This puts an end to disorder and suspicion. In such a situation coordination is established among different activities and departments. It puts an end to overlapping and wasteful activities.
Consequently, wastage approaches nil, efficiency increases and costs fall to their lowest level. For example, if it is decided that a particular amount of money will be required in a particular month, the finance manager will arrange for it in time.
In the absence of this information, the amount of money can be more or less than the requirement in that particular month. Both these situations are undesirable. In case, the money is less than the requirement, the work will not be completed and in case it is more than the requirement, the amount will remain unused and thus cause a loss of interest.
(4) Planning Promotes Innovative Ideas:
It is clear that planning selects the best alternative out of the many available. All these alternatives do not come to the manager on their own, but they have to be discovered. While making such an effort of discovery, many new ideas emerge and they are studied intensively in order to determine the best out of them.
In this way, planning imparts a real power of thinking in the managers. It leads to the birth of innovative and creative ideas. For example, a company wants to expand its business. This idea leads to the beginning of the planning activity in the mind of the manager. He will think like this:
Should some other varieties of the existing products be manufactured?
Should retail sales be undertaken along with the wholesales?
Should some branch be opened somewhere else for the existing or old product?
Should some new product be launched?
In this way, many new ideas will emerge one after the other. By doing so, he will become habituated to such thinking. He will always be thinking about doing something new and creative. Thus, planning creates a favourable situation for the company.
The Lorenz Curve is a graphical display of the distribution of the cumulative percent of events by the cumulative percent of people in the population.
Application of the Lorenz Curve
In the Health Inequities deliverable (Martens et al., 2010), the Lorenz curve is a graphical display of the distribution of the cumulative percent of events by the cumulative percent of people in the five neighbourhood income quintiles in the population, by increasing income.
The horizontal axis (x-axis) of the curve displays the cumulative percent of people in the population (by increasing neighbourhood income quintile group) and the vertical axis (y-axis) displays the cumulative percent of events in the population. The Lorenz curve can be expressed as what percentage of the population represented by the neighbourhood income quintile holds what percentage of the events in the population.
Each neighbourhood income quintile represents approximately 20% of the Manitoba population, divided into rural or urban (Winnipeg and Brandon). In a perfectly equitable situation, one would expect that 20% of events (i.e., premature deaths, teenage pregnancies, etc.) would occur in each income quintile group: U1 would contribute 20% of all events in the population; U2 would contribute another 20% of all events in the population and so forth. As a reference, a line of equality is also displayed on the graph to indicate this perfectly equitable situation; however, most cases present some inequality between the percentage of events and the income quintiles of the population. A Lorenz curve is generated when at least one of the income quintiles that captures N% of the population does not contribute the same N% on the Y axis. If a larger proportion of events occur in lower neighbourhood income quintile groups, the Lorenz curve will bend above the line of equality; if a larger proportion of events occur in higher neighbourhood income quintile groups, the Lorenz curve will bend below the line of equality (Lorenz, 1905).
The Gini coefficient is derived from the total area lying between the line of equality and the Lorenz curve (conventionally, twice this area, so that it ranges from 0 to 1); larger areas represent larger disparities between neighbourhood income groups and smaller areas represent smaller disparities between neighbourhood income groups. Please see the glossary term Gini coefficient for more information.
Overview: Approach to Generating the Lorenz Curve
This is the approach taken to generate the Lorenz Curve.
First, you fit a model for the outcome of interest (say, premature mortality). This model is adjusted by sex and age, using a Poisson or negative binomial distribution.
From this adjusted model, you will get an adjusted rate.
Now for the Lorenz Curve:
The Lorenz curve has an x-axis (cumulative proportion of the population - reported as the crude cumulative percent of the denominator) and a y-axis (cumulative proportion of the event - reported as the adjusted cumulative percent of the numerator).
Using the adjusted rate from the model, calculate the adjusted numerator:
adjusted numerator = adjusted rate * denominator
Determine the proportion each income quintile contributes to the x and y axes (e.g., U1 denominator share = 514408/2639878 = 0.19; U1 adjusted numerator share = 2730.21/8170.65 = 0.33).
Add up the cumulative percents (the sums for the denominator and for the adjusted numerator should each equal 1).
Each percentage value determines a point on the Lorenz Curve. (Please see the table below for an example of how to calculate the points for the Lorenz Curve).
You can use the values in the Lorenz Curve to calculate its Gini coefficient.
Example of a Lorenz Curve
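The steps above can be sketched in a short Python script. The U1 figures below are the ones quoted earlier (514,408 of 2,639,878 people; 2,730.21 of 8,170.65 adjusted events); the U2 to U5 figures are hypothetical placeholders chosen only so that the columns sum to the quoted totals, not real Manitoba data.

```python
# Population (denominator) and adjusted event counts (numerator) per
# income quintile, U1..U5. Only the U1 values come from the text above;
# the rest are illustrative placeholders.
denominators = [514408, 530000, 528000, 532470, 535000]
adj_numerators = [2730.21, 2100.00, 1500.00, 1040.44, 800.00]

def lorenz_points(den, num):
    """Cumulative (x, y) shares along the Lorenz curve, starting at the origin."""
    tot_d, tot_n = sum(den), sum(num)
    x, y, pts = 0.0, 0.0, [(0.0, 0.0)]
    for d, n in zip(den, num):
        x += d / tot_d
        y += n / tot_n
        pts.append((x, y))
    return pts

def gini(pts):
    """Twice the area between the Lorenz curve and the line of equality,
    with the area under the curve computed by the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return abs(2.0 * area - 1.0)

pts = lorenz_points(denominators, adj_numerators)
print(pts[-1])      # both cumulative shares end at 1.0
print(gini(pts))
```

With events concentrated in the lower income quintiles, the curve bends above the line of equality and the Gini coefficient comes out well above zero.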
Given a system of linear equations, Cramer's Rule is a handy way to solve for just one of the variables without having to solve the whole system of equations. They don't usually teach Cramer's Rule this way, but this is supposed to be the point of the Rule: instead of solving the entire system of equations, you can use Cramer's to solve for just one single variable.
Let's use the following system of equations:
2x + y + z = 3
x – y – z = 0
x + 2y + z = 0
We have the left-hand side of the system with the variables (the "coefficient matrix") and the right-hand side with the answer values. Let D be the determinant of the coefficient matrix of the above system, and let Dx be the determinant formed by replacing the x-column values with the answer-column values:
Similarly, Dy and Dz would then be:
Evaluating each determinant, we get D = 3, Dx = 3, Dy = −6, and Dz = 9.
Cramer's Rule says that x = Dx ÷ D, y = Dy ÷ D, and z = Dz ÷ D. That is:
x = 3/3 = 1, y = −6/3 = −2, and z = 9/3 = 3
That's all there is to Cramer's Rule. To find whichever variable you want (call it "β" or "beta"), just evaluate the determinant quotient Dβ ÷ D. (Please don't ask me to explain why this works. Just trust me that determinants can work many kinds of magic.)
- Given the following system of equations, find the value of z.
2x + y + z = 1
x – y + 4z = 0
x + 2y – 2z = 3
To solve only for z, I first find the coefficient determinant, which evaluates to D = −3.
Then I form Dz by replacing the third column of values with the answer column; this determinant evaluates to Dz = −6. Then:
z = Dz ÷ D = −6 / −3 = 2
The point of Cramer's Rule is that you don't have to solve the whole system to get the one value you need. This saved me a fair amount of time on some physics tests. I forget what we were working on (something with wires and currents, I think), but Cramer's Rule was so much faster than any other solution method (and God knows I needed the extra time). Don't let all the subscripts and stuff confuse you; the Rule is really pretty simple. You just pick the variable you want to solve for, replace that variable's column of values in the coefficient determinant with the answer-column's values, evaluate that determinant, and divide by the coefficient determinant. That's all there is to it.
Almost.
What if the coefficient determinant is zero? You can't divide by zero, so what does this mean? I can't go into the technicalities here, but "D = 0" means that the system of equations has no unique solution. The system may be inconsistent (no solution at all) or dependent (infinitely many solutions, which may be expressed as a parametric solution such as "(a, a + 3, a – 4)"). In terms of Cramer's Rule, "D = 0" means that you'll have to use some other method (such as matrix row operations) to solve the system. If D = 0, you can't use Cramer's Rule.
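The Rule is easy to sketch in code. The snippet below is illustrative, not an efficient solver: it expands a 3×3 determinant by cofactors and uses exact rational arithmetic so the quotients come out as clean fractions, returning None in the D = 0 case just discussed.

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer_solve_one(A, rhs, col):
    """Solve for the single variable in column `col` of A x = rhs.
    Returns None when D = 0 (no unique solution)."""
    D = det3(A)
    if D == 0:
        return None
    Ac = [row[:] for row in A]       # copy A, then swap in the answer column
    for r in range(3):
        Ac[r][col] = rhs[r]
    return Fraction(det3(Ac), D)

# The first worked system: 2x + y + z = 3, x - y - z = 0, x + 2y + z = 0
A = [[2, 1, 1], [1, -1, -1], [1, 2, 1]]
rhs = [3, 0, 0]
print([cramer_solve_one(A, rhs, c) for c in range(3)])  # x = 1, y = -2, z = 3
```

Because cramer_solve_one returns None when D = 0, the caller is forced to handle the inconsistent or dependent case by some other method, exactly as described above.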
In probability theory and statistics, kurtosis (from Greek: κυρτός, kyrtos or kurtos, meaning "curved, arching") is a measure of the "tailedness" of the probability distribution of a real-valued random variable. In a similar way to the concept of skewness, kurtosis is a descriptor of the shape of a probability distribution and, just as for skewness, there are different ways of quantifying it for a theoretical distribution and corresponding ways of estimating it from a sample from a population. Depending on the particular measure of kurtosis that is used, there are various interpretations of kurtosis, and of how particular measures should be interpreted; these are primarily tail weight, peakedness (width of peak), and lack of shoulders (distribution primarily peak and tails, not in between).
The standard measure of kurtosis, originating with Karl Pearson, is based on a scaled version of the fourth moment of the data or population. This number measures heavy tails, and not peakedness;[1][2] hence, the "peakedness" definition is misleading. For this measure, higher kurtosis means more of the variance is the result of infrequent extreme deviations, as opposed to frequent modestly sized deviations.
The kurtosis of any univariate normal distribution is 3. It is common to compare the kurtosis of a distribution to this value. Distributions with kurtosis less than 3 are said to be platykurtic. An example of a platykurtic distribution is the uniform distribution, which does not have positive-valued tails. Distributions with kurtosis greater than 3 are said to be leptokurtic. An example of a leptokurtic distribution is the Laplace distribution, which has tails that asymptotically approach zero more slowly than a Gaussian. It is also common practice to use an adjusted version of Pearson's kurtosis, the excess kurtosis, which is the kurtosis minus 3, to provide the comparison to the normal distribution. Some authors use "kurtosis" by itself to refer to the excess kurtosis. For the sake of clarity and generality, however, this article follows the non-excess convention and explicitly indicates where excess kurtosis is meant.
Alternative measures of kurtosis are: the L-kurtosis, which is a scaled version of the fourth L-moment; and measures based on four population or sample quantiles.[3] These correspond to the alternative measures of skewness that are not based on ordinary moments.
Pearson moments
The kurtosis is the fourth standardized moment, defined as

Kurt[X] = E[((X − μ)/σ)^4] = μ4 / σ^4,

where μ4 is the fourth moment about the mean and σ is the standard deviation. Several letters are used in the literature to denote the kurtosis. A very common choice is κ, which is fine as long as it is clear that it does not refer to a cumulant. Other choices include γ2, to be similar to the notation for skewness, although sometimes this is instead reserved for the excess kurtosis.
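As a sketch, the definition translates directly into code. The function below treats its input as a complete population (no sample-size bias correction), an assumption worth flagging since statistical packages usually apply one:

```python
def kurtosis(xs):
    """Pearson (population) kurtosis: the fourth standardized moment
    mu4 / sigma^4, computed for a finite list treated as the whole
    population (no bias correction)."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n   # variance, sigma^2
    m4 = sum((x - mu) ** 4 for x in xs) / n   # fourth central moment
    return m4 / m2 ** 2

print(kurtosis([-1, 0, 1]))   # ~1.5 for this three-point population
```

For [-1, 0, 1] the fourth and second central moments are both 2/3, so the kurtosis is (2/3) / (2/3)^2 = 1.5, comfortably above the lower bound of 1 that applies to any symmetric distribution.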
The kurtosis is bounded below by the squared skewness plus 1:[4]

μ4 / σ^4 ≥ (μ3 / σ^3)^2 + 1,

where μ3 is the third moment about the mean. The lower bound is realized by the Bernoulli distribution with p = 1/2, or "coin toss". There is no upper limit to the excess kurtosis of a general probability distribution, and it may be infinite.
Under the above definition, the kurtosis of any univariate normal distribution is 3. The excess kurtosis, defined as the kurtosis minus 3, then takes a value of 0 for the normal. Much of the statistics literature prefers to use the excess kurtosis ostensibly to match the fact that the fourth cumulant (not moment) of a normal distribution vanishes. Unfortunately, this has resulted in a schism in which the excess kurtosis is sometimes simply called "kurtosis" without the qualifier. Some software packages popular among the pure mathematics and science communities such as Mathematica, Matlab, and Maple all use "kurtosis" in the original manner defined above, for which the kurtosis of a normal distribution is 3. Other software packages popular among the statistics and finance communities including Excel and R return the excess kurtosis under "kurtosis" function calls, and SciPy's kurtosis function defaults to this behavior.
A reason why some authors favor the excess kurtosis is that cumulants are extensive. Formulas related to the extensive property are more naturally expressed in terms of the excess kurtosis. For example, let X1, ..., Xn be independent random variables for which the fourth moment exists, and let Y be the random variable defined by the sum of the Xi. The excess kurtosis of Y is

Kurt[Y] − 3 = (1 / (σ1^2 + ... + σn^2)^2) · Σi σi^4 (Kurt[Xi] − 3),

where σi is the standard deviation of Xi. In particular, if all of the Xi have the same variance, then this simplifies to

Kurt[Y] − 3 = (1/n^2) · Σi (Kurt[Xi] − 3).
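The equal-variance case can be checked exactly: the sum of n independent, identically distributed Bernoulli(p) trials is binomial(n, p), so its excess kurtosis should be 1/n times that of a single trial. A quick sketch using explicit probability mass functions (the choice n = 7, p = 0.3 is arbitrary):

```python
from math import comb

def pmf_binom(n, p):
    """Probability mass function of binomial(n, p) as (value, prob) pairs."""
    return [(k, comb(n, k) * p**k * (1 - p)**(n - k)) for k in range(n + 1)]

def excess_kurtosis(pmf):
    """Kurtosis minus 3, computed from an explicit pmf [(value, prob), ...]."""
    mu = sum(v * q for v, q in pmf)
    m2 = sum((v - mu) ** 2 * q for v, q in pmf)
    m4 = sum((v - mu) ** 4 * q for v, q in pmf)
    return m4 / m2**2 - 3.0

n, p = 7, 0.3
bern = pmf_binom(1, p)    # a single Bernoulli(p) trial
binom = pmf_binom(n, p)   # sum of n independent trials
print(excess_kurtosis(binom), excess_kurtosis(bern) / n)
```

The two printed values agree, matching the 1/n scaling above; the single-trial value also matches the closed form (1 − 6p(1 − p)) / (p(1 − p)) for a Bernoulli distribution.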
The reason not to subtract off 3 is that the bare fourth moment better generalizes to multivariate distributions, especially when independence is not assumed. The cokurtosis between pairs of variables is an order four tensor. For a bivariate normal distribution, the cokurtosis tensor has off-diagonal terms that are neither 0 nor 3 in general, so attempting to "correct" for an excess becomes confusing. It is true, however, that the joint cumulants of degree greater than two for any multivariate normal distribution are zero.
For two random variables, X and Y, not necessarily independent, the kurtosis of the sum, X + Y, is

Kurt[X + Y] = (1 / σ(X+Y)^4) · [ σX^4 Kurt[X] + 4 σX^3 σY Cokurt(X, X, X, Y) + 6 σX^2 σY^2 Cokurt(X, X, Y, Y) + 4 σX σY^3 Cokurt(X, Y, Y, Y) + σY^4 Kurt[Y] ].

Note that the binomial coefficients (1, 4, 6, 4, 1) appear in the above equation.
Data (/ˈdeɪtə/ DAY-tə, /ˈdætə/ DA-tə, or /ˈdɑːtə/ DAH-tə) is a set of values of qualitative or quantitative variables; restated, pieces of data are individual pieces of information. Data is measured, collected, reported, and analyzed, whereupon it can be visualized using graphs or images. Data as a general concept refers to the fact that some existing information or knowledge is represented or coded in some form suitable for better usage or processing.
Raw data, i.e. unprocessed data, is a collection of numbers and characters; data processing commonly occurs in stages, and the "processed data" from one stage may be considered the "raw data" of the next. Field data is raw data that is collected in an uncontrolled in situ environment. Experimental data is data that is generated within the context of a scientific investigation by observation and recording.
The word "data" originated as the plural of "datum", and still may be used as a plural noun in this sense. Nowadays, though, "data" is most commonly used in the singular, as a mass noun (like "information", "sand" or "rain").
Quantitative and Qualitative Data collection methods
Quantitative data collection methods rely on random sampling and on structured data collection instruments that fit diverse experiences into predetermined response categories. They produce results that are easy to summarize, compare, and generalize.
Quantitative research is concerned with testing hypotheses derived from theory and/or being able to estimate the size of a phenomenon of interest. Depending on the research question, participants may be randomly assigned to different treatments. If this is not feasible, the researcher may collect data on participant and situational characteristics in order to statistically control for their influence on the dependent, or outcome, variable. If the intent is to generalize from the research participants to a larger population, the researcher will employ probability sampling to select participants.
Typical quantitative data gathering strategies include:
Experiments/clinical trials.
Observing and recording well-defined events (e.g., counting the number of patients waiting in emergency at specified times of the day).
Obtaining relevant data from management information systems.
Administering surveys with closed-ended questions (e.g., face-to-face and telephone interviews, questionnaires, etc.).
Interviews
In quantitative research (survey research), interviews are more structured than in qualitative research.
In a structured interview, the researcher asks a standard set of questions and nothing more (Leedy and Ormrod, 2001).
Face-to-face interviews have a distinct advantage of enabling the researcher to establish rapport with potential participants and therefore gain their cooperation. These interviews yield the highest response rates in survey research. They also allow the researcher to clarify ambiguous answers and, when appropriate, seek follow-up information. Disadvantages include being impractical when large samples are involved, as well as being time-consuming and expensive.
Telephone interviews are less time-consuming and less expensive, and the researcher has ready access to anyone on the planet who has a telephone. Disadvantages are that the response rate is not as high as for the face-to-face interview, but considerably higher than for the mailed questionnaire. The sample may be biased to the extent that people without phones are part of the population about whom the researcher wants to draw inferences.
Computer Assisted Personal Interviewing (CAPI): is a form of personal interviewing, but instead of completing a questionnaire, the interviewer brings along a laptop or hand-held computer to enter the information directly into the database. This method saves time involved in processing the data, as well as saving the interviewer from carrying around hundreds of questionnaires. However, this type of data collection method can be expensive to set up and requires that interviewers have computer and typing skills.
Questionnaires
Paper-pencil questionnaires can be sent to a large number of people and save the researcher time and money. People are more truthful while responding to questionnaires regarding controversial issues in particular, due to the fact that their responses are anonymous. But they also have drawbacks. The majority of the people who receive questionnaires don't return them, and those who do might not be representative of the originally selected sample.
Web-based questionnaires: A new and inevitably growing methodology is the use of Internet-based research. This would mean receiving an e-mail in which you would click on an address that would take you to a secure web-site to fill in a questionnaire. This type of research is often quicker and less detailed. Some disadvantages of this method include the exclusion of people who do not have a computer or are unable to access a computer. Also, the validity of such surveys is in question, as people might be in a hurry to complete them and so might not give accurate responses.
Questionnaires often make use of checklists and rating scales. These devices help simplify and quantify people's behaviors and attitudes. A checklist is a list of behaviors, characteristics, or other entities that the researcher is looking for. Either the researcher or the survey participant simply checks whether each item on the list is observed, present, or true, or not. A rating scale is more useful when a behavior needs to be evaluated on a continuum. Rating scales are also known as Likert scales.
Qualitative data collection methods play an important role in impact evaluation by providing information useful for understanding the processes behind observed results, and for assessing changes in people's perceptions of their well-being. Furthermore, qualitative methods can be used to improve the quality of survey-based quantitative evaluations by helping generate evaluation hypotheses, strengthening the design of survey questionnaires, and expanding or clarifying quantitative evaluation findings. These methods are characterized by the following attributes:
they tend to be open-ended and have less structured protocols (i.e., researchers may change the data collection strategy by adding, refining, or dropping techniques or informants)
they rely more heavily on interactive interviews; respondents may be interviewed several times to follow up on a particular issue, clarify concepts or check the reliability of data
they use triangulation to increase the credibility of their findings (i.e., researchers rely on multiple data collection methods to check the authenticity of their results)
generally their findings are not generalizable to any specific population, rather each case study produces a single piece of evidence that can be used to seek general patterns among different studies of the same issue
Regardless of the kinds of data involved, data collection in a qualitative study takes a great deal of time. The researcher needs to record any potentially useful data thoroughly, accurately, and systematically, using field notes, sketches, audiotapes, photographs and other suitable means. The data collection methods must observe the ethical principles of research.
The qualitative methods most commonly used in evaluation can be classified in three broad categories:
in-depth interviews
observation methods
document review
Different ways of collecting evaluation data are useful for different purposes, and each has advantages and disadvantages. Various factors will influence your choice of a data collection method: the questions you want to investigate, resources available to you, your timeline, and more.
In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive or negative, or even undefined.
The qualitative interpretation of the skew is complicated. For a unimodal distribution, negative skew indicates that the tail on the left side of the probability density function is longer or fatter than the right side – it does not distinguish these shapes. Conversely, positive skew indicates that the tail on the right side is longer or fatter than the left side. In cases where one tail is long but the other tail is fat, skewness does not obey a simple rule. For example, a zero value indicates that the tails on both sides of the mean balance out, which is the case for a symmetric distribution, but is also true for an asymmetric distribution where the asymmetries even out, such as one tail being long but thin, and the other being short but fat. Further, in multimodal distributions and discrete distributions, skewness is also difficult to interpret. Importantly, the skewness does not determine the relationship of mean and median.
Consider the two distributions in the figure just below. Within each graph, the bars on the right side of the distribution taper differently than the bars on the left side. These tapering sides are called tails, and they provide a visual means for determining which of the two kinds of skewness a distribution has:
- negative skew: The left tail is longer; the mass of the distribution is concentrated on the right of the figure. The distribution is said to be left-skewed, left-tailed, or skewed to the left.[1]
- positive skew: The right tail is longer; the mass of the distribution is concentrated on the left of the figure. The distribution is said to be right-skewed, right-tailed, or skewed to the right.[1]
Skewness in a data series may be observed not only graphically but by simple inspection of the values. For instance, consider the numeric sequence (49, 50, 51), whose values are evenly distributed around a central value of 50. We can transform this sequence into a negatively skewed distribution by adding a value far below the mean, e.g. (40, 49, 50, 51).
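This can be checked with a few lines of Python. The function below computes the moment (population) skewness, the third standardized moment m3 / m2^(3/2):

```python
def skewness(xs):
    """Moment (population) skewness: third central moment divided by the
    standard deviation cubed, with no sample-size correction."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n   # variance
    m3 = sum((x - mu) ** 3 for x in xs) / n   # third central moment
    return m3 / m2 ** 1.5

print(skewness([49, 50, 51]))       # 0.0: values evenly spread around 50
print(skewness([40, 49, 50, 51]))   # negative: a value far below the mean
```

Adding 40 pulls the mean (47.5) below the median (49.5) and stretches the left tail, so the computed skewness is negative.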
Relationship of mean and median
The skewness is not strictly connected with the relationship between the mean and median: a distribution with negative skew can have the mean greater than or less than the median, and likewise for positive skew.
In the older notion of nonparametric skew, defined as (μ − ν) / σ, where μ is the mean, ν is the median, and σ is the standard deviation, the skewness is defined in terms of this relationship: positive/right nonparametric skew means the mean is greater than (to the right of) the median, while negative/left nonparametric skew means the mean is less than (to the left of) the median. However, the modern definition of skewness and the traditional nonparametric definition do not in general have the same sign: while they agree for some families of distributions, they differ in general, and conflating them is misleading.
If the distribution is symmetric, then the mean is equal to the median, and the distribution has zero skewness.[2] If, in addition, the distribution is unimodal, then the mean = median = mode. This is the case of a coin toss or the series 1, 2, 3, 4, ... Note, however, that the converse is not true in general, i.e. zero skewness does not imply that the mean is equal to the median.
Paul T. von Hippel points out: "Many textbooks teach a rule of thumb stating that the mean is right of the median under right skew, and left of the median under left skew. This rule fails with surprising frequency. It can fail in multimodal distributions, or in distributions where one tail is long but the other is heavy. Most commonly, though, the rule fails in discrete distributions where the areas to the left and right of the median are not equal. Such distributions not only contradict the textbook relationship between mean, median, and skew, they also contradict the textbook interpretation of the median."