A. M. Abbaker, H. Wang, and Y. Tian, Voltage control of solid oxide fuel cell power plant based on intelligent proportional integral-adaptive sliding mode control with anti-windup compensator, Trans. Inst. Measur. Contr, vol.42, pp.116-130, 2020.

H. Abouaïssa and S. Chouraqui, On the control of robot manipulator: A model-free approach, J. Comput. Sci, vol.31, pp.6-16, 2019.

C. W. Anderson, D. C. Hittle, A. D. Katz, and R. M. Kretchmar, Synthesis of reinforcement learning, neural networks and PI control applied to a simulated heating coil, Artif. Intell. Engin, vol.11, pp.421-429, 1997.

K. J. Åström and T. Hägglund, Advanced PID Control, Instrum. Soc. Amer., 2006.

K. J. Åström and R. M. Murray, Feedback Systems: An Introduction for Scientists and Engineers, 2008.

O. Bara, M. Fliess, C. Join, J. Day, and S. M. Djouadi, Toward a model-free feedback control synthesis for treating acute inflammation, J. Theoret. Biology, vol.448, pp.26-37, 2018.
URL: https://hal.archives-ouvertes.fr/hal-01756743

J. M. Barth, J. Condomines, M. Bronz, G. Hattenberger, J. Moschetta et al., Towards a unified model-free control architecture for tail sitter micro air vehicles: Flight simulation analysis and experimental flights, AIAA Scitech Forum, 2020.

J. M. Barth, J. Condomines, M. Bronz, J. Moschetta, C. Join et al., Model-free control algorithms for micro air vehicles with transitioning flight capabilities, Int. J. Micro Air Vehic, vol.12, 2020.
URL: https://hal.archives-ouvertes.fr/hal-02542982

T. Baumeister, S. L. Brunton, and J. N. Kutz, Deep learning and model predictive control for self-tuning mode-locked lasers, J. Opt. Soc. Am. B, vol.35, pp.617-626, 2018.

M. Bekcheva, M. Fliess, C. Join, A. Moradi, and H. Mounier, Meilleure élasticité "nuagique" par commande sans modèle [Better cloud elasticity via model-free control], 2018.

F. Beltran-Carbajal, G. Silva-Navarro, and L. G. Trujillo-Franco, On-line parametric estimation of damped multiple frequency oscillations, Elec. Power Syst. Res, vol.154, pp.423-452, 2018.

N. Bourbaki, Fonctions d'une variable réelle, Hermann, 1976. English translation: Functions of a Real Variable, Springer, 2004.

S. L. Brunton, B. R. Noack, and P. Koumoutsakos, Machine learning for fluid mechanics, Annu. Rev. Fluid Mech, vol.52, pp.477-508, 2020.
URL: https://hal.archives-ouvertes.fr/hal-02398670

M. A. Bucci, O. Semeraro, A. Allauzen, G. Wisniewski, L. Cordier et al., Control of chaotic systems by deep reinforcement learning, Proc. Roy. Soc. A, vol.475, 2019.
URL: https://hal.archives-ouvertes.fr/hal-02406677

L. Buşoniu, T. de Bruin, D. Tolić, J. Kober, and I. Palunko, Reinforcement learning for control: Performance, stability, and deep approximators, Annual Rev. Contr, vol.46, pp.8-28, 2018.

J. Chen and T. Huang, Applying neural networks to on-line updated PID controllers for nonlinear process control, J. Process Contr, vol.14, pp.211-230, 2004.

K. Cheon, J. Kim, M. Hamadache, and D. Lee, On replacing PID controller with deep learning controller for DC motor system, J. Automat. Contr. Engin, vol.3, pp.452-456, 2015.

M. Clouatre and M. Thitsa, Shaping 800nm pulses of Yb/Tm co-doped laser: A control theoretic approach, Ceramics Int, 2020.

S. T. Qu, Unmanned powered paraglider flight path control based on PID neural network, IOP Conf. Ser. Mater. Sci. Eng, vol.470, p.12008, 2019.

J. Rabault, M. Kuchta, A. Jensen, U. Réglade, and N. Cerardi, Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control, J. Fluid Mech, vol.865, pp.281-302, 2019.

M. Radac, R. Precup, and R. Roman, Model-free control performance improvement using virtual reference feedback tuning and reinforcement Q-learning, Int. J. Syst. Sci, vol.48, pp.1071-1083, 2017.

M. Rampazzo, D. Tognin, M. Pagan, L. Carniello, and A. Beghi, Modelling, simulation and real-time control of a laboratory tide generation system, Contr. Eng. Pract, vol.83, pp.165-175, 2019.

B. Recht, A tour of reinforcement learning: The view from continuous control, Annu. Rev. Contr. Robot. Autonom. Syst, vol.2, pp.253-279, 2019.

V. Rocher, C. Join, S. Mottelet, J. Bernier, S. Rechdaoui-Guerin et al., La production de nitrites lors de la dénitrification des eaux usées par biofiltration - stratégie de contrôle et de réduction des concentrations résiduelles [Nitrite production during wastewater denitrification by biofiltration: a strategy for controlling and reducing residual concentrations], J. Water Sci, vol.31, pp.61-73, 2018.

S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 2016.

C. Sancak, F. Yamac, M. Itik, and G. Alici, Model-free control of an electroactive polymer actuator, Mater. Res. Expr, vol.6, p.55309, 2019.

T. J. Sejnowski, The unreasonable effectiveness of deep learning in artificial intelligence, Proc. Nat. Acad. Sci, 2020.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre et al., Mastering the game of Go with deep neural networks and tree search, Nature, vol.529, pp.484-489, 2016.

H. Sira-Ramírez, C. García-Rodríguez, J. Cortés-Romero, and A. Luviano-Juárez, Algebraic Identification and Estimation Methods in Feedback Control Systems, Wiley, 2014.

P. Stalph, Analysis and Design of Machine Learning Techniques, Springer, 2014.

M. Sugiyama, Statistical Reinforcement Learning: Modern Machine Learning Approaches, 2015.

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2018.

M. Ticherfatine and Q. Zhu, Fast ferry smoothing motion via intelligent PD controller, J. Marine Sci. Appl, vol.17, pp.273-279, 2018.

J. Villagra, C. Join, R. Haber, and M. Fliess, Model-free control for machine tool systems, 2020.

Y. Wang, H. Li, R. Liu, L. Yang, and X. Wang, Modulated model-free predictive control with minimum switching losses for PMSM drive system, IEEE Access, vol.8, pp.20942-20953, 2020.

H. Wang, S. Li, Y. Tian, and A. Aitouche, Intelligent proportional differential neural network control for unknown nonlinear system, Stud. Informat. Contr, vol.25, pp.445-452, 2016.

Y. Wang, K. Velswamy, and B. Huang, A novel approach to feedback control via deep reinforcement learning, IFAC-PapersOnLine, pp.31-36, 2018.

Z. Wang and J. Wang, Ultra-local model predictive control: A model-free approach and its application on automated vehicle trajectory tracking, Contr. Eng. Pract, vol.101, p.104482, 2020.

S. L. Waslander, G. M. Hoffmann, J. S. Jang, and C. J. Tomlin, Multiagent quadrotor testbed control design: integral sliding mode vs. reinforcement learning, IEEE/RSJ Int. Conf. Intell. Robot. Syst, 2005.

Y. Wu, Q. Song, and X. Yang, Robust recurrent neural network control of biped robot, J. Intell. Robot. Syst, vol.49, pp.151-169, 2007.

H. Yang, C. Liu, J. Shi, and G. Zhong, Development and control of four-wheel independent driving and modular steering electric vehicles for improved maneuverability limits, SAE Tech. Paper, 2019.

K. Yosida, Operational Calculus: A Theory of Hyperfunctions (translated from the Japanese), Springer, 1984.

Y. Zhang, S. X. Ding, Y. Yang, and L. Li, Data-driven design of two-degree-of-freedom controllers using reinforcement learning techniques, IET Contr. Theory Appl, vol.9, pp.1011-1021, 2015.

J. Zhang, J. Jin, and L. Huang, Model-free predictive current control of PMSM drives based on extended state observer using ultra-local model, IEEE Trans. Indus. Electron, 2020.

X. Zhang, M. Li, H. Ding, and X. Yao, Data-driven tuning of feedforward controller structured with infinite impulse response filter via iterative learning control, IET Contr. Theory Appl, vol.13, pp.1062-1070, 2019.

Y. Zhang, X. Liu, J. Liu, J. Rodriguez, and C. Garcia, Model-free predictive current control of power converters based on ultra-local model, IEEE Int. Conf. Indust. Technol, 2020.

X. Zhang, H. Wang, Y. Tian, L. Peyrodie, and X. Wang, Model-free based neural network control with time-delay estimation for lower extremity exoskeleton, Neurocomput, vol.272, pp.178-188, 2018.

X. Zhang, Z. Wei, R. Asad, X. Yang, and X. Wang, When does reinforcement learning stand out in quantum control? A comparative study on state representation, npj Quantum Inf, 2019.

L. Zhu, J. Ma, and S. Wang, Deep neural networks based real-time optimal control for lunar landing, IOP Conf. Ser. Mater. Sci. Eng, vol.608, 2019.
