Thesis MÉTODOS DE PRONÓSTICOS A TRAVÉS DE SERIES DE TIEMPO: UN ENFOQUE PARA LA TOMA DE DECISIONES.
Date
2011-06
Journal Title
Journal ISSN
Volume Title
Program
DEPARTAMENTO DE INDUSTRIAS. INGENIERÍA CIVIL INDUSTRIAL
Campus
Campus Vitacura, Santiago
Abstract
This research arises from the implicit need of organizations, at every level, to know the future as accurately as possible. In processes such as planning, strategy formulation, or project execution, what is called a forecast emerges. A forecast is a single figure that, depending on how accurately it is computed, can make the difference between success and failure. Recognized authors in the field, such as Makridakis, Hibon, and Armstrong, among many others, have devoted their careers to studying and analyzing forecasts. Advances since the 1960s have undoubtedly been remarkable in terms of the ability to work with increasingly complex models; however, it is still not possible to build a model that guarantees a forecast with complete certainty, and it probably never will be.
The work consists of two main phases: the first compiles information from related prior research, and the second is an empirical study. The first phase yielded important information on how forecasting has been approached over the years, highlighting the role forecasts play in value creation for any organization. Perhaps the most relevant related study is the first Makridakis competition (1979), which used 111 time series and applied 22 filters of different types, a filter being a quantitative method that computes a forecast for a given period. Its results were revealing and marked a new stage in the era of forecasting.
The second phase is a study similar to Makridakis (1979). A total of 126 time series were used, 20 different filters were applied, and the results were analyzed through two accuracy indicators: the mean absolute percentage error (MAPE) and the relative root mean squared error (RelRMSE, or U-Theil). The first indicator measures the average deviation of the forecast error; the second compares the results of a filter against a reference filter, in this case a random walk with seasonal adjustment.
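As an illustration of how these two indicators work, a minimal sketch could look as follows. This is not the thesis code, and the data values are invented for illustration only.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def rel_rmse(actual, forecast, baseline):
    """RelRMSE (U-Theil): RMSE of a filter divided by the RMSE of a
    reference filter. Values below 1 mean the filter beats the reference."""
    actual = np.asarray(actual, dtype=float)
    def rmse(f):
        return np.sqrt(np.mean((actual - np.asarray(f, dtype=float)) ** 2))
    return rmse(forecast) / rmse(baseline)

# Illustrative numbers only: one series, one filter's forecasts, and the
# forecasts of a reference filter (e.g. a seasonally adjusted random walk).
actual   = [102.0, 98.0, 105.0, 110.0]
forecast = [100.0, 99.0, 103.0, 108.0]
baseline = [101.0, 102.0, 98.0, 105.0]

print(round(mape(actual, forecast), 2))                # average percentage error
print(round(rel_rmse(actual, forecast, baseline), 3))  # below 1: beats the reference
```

In this toy setup the filter's RelRMSE is well below 1, i.e. it clearly outperforms the reference forecast.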
The results revealed a curious fact: sophisticated filters, that is, those that require user expertise and therefore specific software, are on average just as accurate as simpler filters such as a moving average, simple exponential smoothing, or a random walk. Nevertheless, the ARIMA filter and the SPSS 17® Expert Modeler proved somewhat more precise than the rest. Perhaps the most interesting result, however, was that a simple random walk performed almost as well as the models mentioned above. This seems somewhat paradoxical, since one tends to assume that the more resources a method demands, the better its results. A deeper analysis, however, made it possible to identify the factors that affect the accuracy of each filter, which will certainly add value for anyone deciding which filter is most appropriate when facing a given time series.
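To make the contrast concrete, each of the simple filters mentioned above can be sketched as a one-step-ahead forecast in a few lines. This is a sketch with invented data, not the thesis implementation; window and smoothing parameters are arbitrary choices.

```python
def random_walk(series):
    # Naive forecast: the next value is simply the last observed value.
    return series[-1]

def moving_average(series, window=3):
    # Forecast as the mean of the last `window` observations.
    return sum(series[-window:]) / window

def simple_exp_smoothing(series, alpha=0.3):
    # Level update: L_t = alpha * y_t + (1 - alpha) * L_{t-1};
    # the final level is the one-step-ahead forecast.
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

series = [20.0, 22.0, 21.0, 23.0, 24.0]
print(random_walk(series))                     # 24.0
print(round(moving_average(series), 3))        # 22.667
print(round(simple_exp_smoothing(series), 4))  # 22.1828
```

The appeal of these filters is exactly what the results suggest: they need no specialized software or expertise, yet on average they forecast about as well as far more demanding methods.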
Description
Keywords
DECISION MAKING, TIME SERIES ANALYSIS