Our approach fits functional data models from sparsely and irregularly sampled curves, overcoming a key limitation of existing methods, which struggle to fit more complex nonlinear models. Many current models cannot be consistently estimated unless one assumes that the number of observed points per curve grows sufficiently quickly with the sample size. We show that our random-forest-based approach produces consistent estimates without this assumption by using multiple imputation: by averaging over many unpruned classification or regression trees, a random forest intrinsically constitutes a multiple imputation scheme. We evaluate the method on multiple simulations and on a diverse selection of real datasets with artificially introduced missingness ranging from 50% to 90%. Compared with PACE and MICE, our method exhibits attractive computational efficiency and copes well with high-dimensional data.
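
The forest-as-imputation idea above can be illustrated with a minimal sketch, assuming curves observed on a common grid and using scikit-learn's `IterativeImputer` with a `RandomForestRegressor` (a missForest-style scheme). This is an illustrative stand-in, not the paper's exact estimator; the simulated data, missingness level, and all parameter choices here are assumptions.

```python
# Minimal sketch (NOT the paper's exact method): random-forest-based
# imputation of artificially introduced missing values, missForest-style.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulate 100 curves observed on a common grid of 20 time points.
t = np.linspace(0, 1, 20)
X = np.sin(2 * np.pi * np.outer(rng.uniform(0.5, 1.5, 100), t))
X += 0.1 * rng.standard_normal(X.shape)

# Artificially remove ~70% of observations (sparse, irregular sampling).
mask = rng.uniform(size=X.shape) < 0.7
X_sparse = X.copy()
X_sparse[mask] = np.nan

# Iteratively impute each grid point from the others with a random forest.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=5,
    random_state=0,
)
X_imputed = imputer.fit_transform(X_sparse)

# Error on the held-out (masked) entries.
rmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2))
print(f"imputation RMSE on masked entries: {rmse:.3f}")
```

Because each unpruned tree in the forest effectively draws its own plausible completion of the data, averaging over trees plays the role of averaging over multiple imputations.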