A. M. Abbaker, H. Wang, and Y. Tian, Voltage control of solid oxide fuel cell power plant based on intelligent proportional integral-adaptive sliding mode control with anti-windup compensator, Trans. Inst. Measur. Contr, vol.42, pp.116-130, 2020.

H. Abouaïssa and S. Chouraqui, On the control of robot manipulator: A model-free approach, J. Comput. Sci, vol.31, pp.6-16, 2019.

C. W. Anderson, D. C. Hittle, A. D. Katz, and R. M. Kretchmar, Synthesis of reinforcement learning, neural networks and PI control applied to a simulated heating coil, Artif. Intell. Engin, vol.11, pp.421-429, 1997.

K. J. Åström and T. Hägglund, Advanced PID Control, Instrum. Soc. Amer, 2006.

K. J. Åström and R. M. Murray, Feedback Systems: An Introduction for Scientists and Engineers, 2008.

O. Bara, M. Fliess, C. Join, J. Day, and S. M. Djouadi, Toward a model-free feedback control synthesis for treating acute inflammation, J. Theoret. Biology, vol.448, pp.26-37, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01756743

J. M. Barth, J. Condomines, M. Bronz, G. Hattenberger, J. Moschetta et al., Towards a unified model-free control architecture for tail sitter micro air vehicles: Flight simulation analysis and experimental flights, AIAA Scitech Forum, 2020.

J. M. Barth, J. Condomines, M. Bronz, J. Moschetta, C. Join et al., Model-free control algorithms for micro air vehicles with transitioning flight capabilities, Int. J. Micro Air Vehic, vol.12, 2020.
URL : https://hal.archives-ouvertes.fr/hal-02542982

T. Baumeister, S. L. Brunton, and J. N. Kutz, Deep learning and model predictive control for self-tuning mode-locked lasers, J. Opt. Soc. Am. B, vol.35, pp.617-626, 2018.

M. Bekcheva, M. Fliess, C. Join, A. Moradi, and H. Mounier, Meilleure élasticité "nuagique" par commande sans modèle [Better cloud elasticity via model-free control], 2018.

F. Beltran-Carbajal, G. Silva-Navarro, and L. G. Trujillo-Franco, On-line parametric estimation of damped multiple frequency oscillations, Elec. Power Syst. Res, vol.154, pp.423-452, 2018.

N. Bourbaki, Fonctions d'une variable réelle, Hermann, 1976. English translation: Functions of a Real Variable, Springer, 2004.

S. L. Brunton, B. R. Noack, and P. Koumoutsakos, Machine learning for fluid mechanics, Annu. Rev. Fluid Mech, vol.52, pp.477-508, 2020.
URL : https://hal.archives-ouvertes.fr/hal-02398670

M. A. Bucci, O. Semeraro, A. Allauzen, G. Wisniewski, L. Cordier et al., Control of chaotic systems by deep reinforcement learning, Proc. Roy. Soc. A, vol.475, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02411475

L. Buşoniu, T. de Bruin, D. Tolić, J. Kober, and I. Palunko, Reinforcement learning for control: Performance, stability, and deep approximators, Annu. Rev. Contr, vol.46, pp.8-28, 2018.

J. Chen and T. Huang, Applying neural networks to on-line updated PID controllers for nonlinear process control, J. Process Contr, vol.14, pp.211-230, 2004.

K. Cheon, J. Kim, M. Hamadache, and D. Lee, On replacing PID controller with deep learning controller for DC motor system, J. Automat. Contr. Engin, vol.3, pp.452-456, 2015.

M. Clouatre and M. Thitsa, Shaping 800nm pulses of Yb/Tm co-doped laser: A control theoretic approach, Ceramics Int, 2020.

B. Kiumarsi, K. G. Vamvoudakis, H. Modares, and F. L. Lewis, Optimal and autonomous control using reinforcement learning: A survey, IEEE Trans. Neural Netw. Learn. Syst, vol.29, pp.2042-2062, 2018.

F. Lafont, J. Balmat, N. Pessel, and M. Fliess, A model-free control strategy for an experimental greenhouse with an application to fault accommodation, Comput. Electron. Agricul, vol.110, pp.139-149, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01081757

N. O. Lambert, D. S. Drew, J. Yaconelli, S. Levine, R. Calandra et al., Low-level control of a quadrotor with deep model-based reinforcement learning, IEEE Robot. Automat. Lett, vol.4, pp.4224-4230, 2019.

Y. LeCun, Quand la machine apprend [When the machine learns], Odile Jacob, 2019.

Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature, vol.521, pp.436-444, 2015.

S. Li and Y. Zhang, Neural Networks for Cooperative Control of Multiple Robot Arms, 2018.

S. Lucia and B. Karg, A deep learning-based approach to robust nonlinear model predictive control, IFAC-PapersOnLine, pp.511-516, 2018.

B. Luo, D. Liu, T. Huang, and D. Wang, Model-free optimal tracking control via critic-only Q-learning, IEEE Trans. Neural Netw. Learn. Syst, vol.27, pp.2134-2144, 2016.

F. Lv, C. Wen, Z. Bao, and M. Liu, Fault diagnosis based on deep learning, Amer. Contr. Conf, 2016.

N. Ma, G. Song, and H. Lee, Position control of shape memory alloy actuators with internal electrical resistance feedback using neural networks, Smart Mater. Struct, vol.13, pp.777-783, 2004.

N. Matni, A. Proutiere, A. Rantzer, and S. Tu, From self-tuning regulators to reinforcement learning and back again, 58th Conf. Decis. Contr, 2019.

N. Matni and S. Tu, A tutorial on concentration bounds for system identification, 58th Conf. Decis. Contr, 2019.

L. Menhour, B. d'Andréa-Novel, M. Fliess, D. Gruyer, and H. Mounier, An efficient model-free setting for longitudinal and lateral vehicle control: Validation through the interconnected Pro-SiVIC/RTMaps, IEEE Trans. Intel. Transp. Syst, vol.19, pp.461-475, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01515681

I. T. Michailidis, T. Schild, R. Sangi, P. Michailidis, C. Korkas et al., Energy-efficient HVAC management using cooperative, self-trained, control agents: A real-life German building case study, App. Ener, vol.211, pp.113-125, 2018.

W. T. Miller, R. S. Sutton, and P. J. Werbos (Eds), Neural Networks for Control, MIT Press, 1990.

V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness et al., Human-level control through deep reinforcement learning, Nature, vol.518, pp.529-533, 2015.

S. Moe, A. M. Rustand, and K. G. Hanssen, Machine learning in control systems: An overview of the state of the art, Artificial Intelligence XXXV, Lect. Notes Artif. Intel. 11311, pp.250-264, 2018.

I. N'Doye, S. Asiri, A. Aloufi, A. Al-Awan, and T. Laleg-Kirati, Intelligent proportional-integral-derivative control-based modulating functions for laser beam pointing and stabilization, IEEE Trans. Contr. Syst. Technol, vol.28, pp.1001-1008, 2020.

C. Nicol, C. J. Macnab, and A. Ramirez-serrano, Robust neural network control of a quadrotor helicopter, Canad. Conf. Elec. Comput. Engin, 2008.

B. Plumejeau, S. Delprat, L. Keirsbulck, M. Lippert, and W. Abassi, Ultra-local model-based control of the square-back Ahmed body wake flow, Phys. Fluids, vol.31, p.085103, 2019.

Z. Qin, Y. Xin, and J. Sun, Dual-loop robust attitude control for an aerodynamic system with unknown dynamic model: algorithm and experimental validation, IEEE Access, vol.8, pp.36582-36594, 2020.

S. T. Qu, Unmanned powered paraglider flight path control based on PID neural network, IOP Conf. Ser. Mater. Sci. Eng, vol.470, p.012008, 2019.

J. Rabault, M. Kuchta, A. Jensen, U. Réglade, and N. Cerardi, Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control, J. Fluid Mech, vol.865, pp.281-302, 2019.

M. Radac, R. Precup, and R. Roman, Model-free control performance improvement using virtual reference feedback tuning and reinforcement Q-learning, Int. J. Syst. Sci, vol.48, pp.1071-1083, 2017.

M. Rampazzo, D. Tognin, M. Pagan, L. Carniello, and A. Beghi, Modelling, simulation and real-time control of a laboratory tide generation system, Contr. Eng. Pract, vol.83, pp.165-175, 2019.

B. Recht, A tour of reinforcement learning: The view from continuous control, Annu. Rev. Contr. Robot. Autonom. Syst, vol.2, pp.253-279, 2019.

V. Rocher, C. Join, S. Mottelet, J. Bernier, S. Rechdaoui-Guérin et al., La production de nitrites lors de la dénitrification des eaux usées par biofiltration - stratégie de contrôle et de réduction des concentrations résiduelles [Nitrite production during wastewater denitrification by biofiltration: control strategy and reduction of residual concentrations], J. Water Sci, vol.31, pp.61-73, 2018.

S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 2016.

C. Sancak, F. Yamac, M. Itik, and G. Alici, Model-free control of an electroactive polymer actuator, Mater. Res. Expr, vol.6, p.55309, 2019.

T. J. Sejnowski, The unreasonable effectiveness of deep learning in artificial intelligence, Proc. Nat. Acad. Sci, 2020.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre et al., Mastering the game of Go with deep neural networks and tree search, Nature, vol.529, pp.484-489, 2016.

H. Sira-Ramírez, C. García-Rodríguez, J. Cortés-Romero, and A. Luviano-Juárez, Algebraic Identification and Estimation Methods in Feedback Control Systems, Wiley, 2014.

P. Stalph, Analysis and Design of Machine Learning Techniques, Springer, 2014.

M. Sugiyama, Statistical Reinforcement Learning: Modern Machine Learning Approaches, 2015.

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction (2nd ed.), MIT Press, 2018.

M. Ticherfatine and Q. Zhu, Fast ferry smoothing motion via intelligent PD controller, J. Marine Sci. Appl, vol.17, pp.273-279, 2018.

J. Villagra, C. Join, R. Haber, and M. Fliess, Model-free control for machine tool systems, 2020.

Y. Wang, H. Li, R. Liu, L. Yang, and X. Wang, Modulated model-free predictive control with minimum switching losses for PMSM drive system, IEEE Access, vol.8, pp.20942-20953, 2020.

H. Wang, S. Li, Y. Tian, and A. Aitouche, Intelligent proportional differential neural network control for unknown nonlinear system, Stud. Informat. Contr, vol.25, pp.445-452, 2016.

Y. Wang, K. Velswamy, and B. Huang, A novel approach to feedback control via deep reinforcement learning, IFAC-PapersOnLine, pp.31-36, 2018.

Z. Wang and J. Wang, Ultra-local model predictive control: A model-free approach and its application on automated vehicle trajectory tracking, Contr. Eng. Pract, vol.101, p.104482, 2020.

S. L. Waslander, G. M. Hoffmann, J. S. Jang, and C. J. Tomlin, Multi-agent quadrotor testbed control design: integral sliding mode vs. reinforcement learning, IEEE/RSJ Int. Conf. Intell. Robot. Syst, 2005.

Y. Wu, Q. Song, and X. Yang, Robust recurrent neural network control of biped robot, J. Intell. Robot. Syst, vol.49, pp.151-169, 2007.

H. Yang, C. Liu, J. Shi, and G. Zhong, Development and control of four-wheel independent driving and modular steering electric vehicles for improved maneuverability limits, SAE Tech. Paper, 2019.

K. Yosida, Operational Calculus: A Theory of Hyperfunctions (translated from the Japanese), Springer, 1984.

Y. Zhang, S. X. Ding, Y. Yang, and L. Li, Data-driven design of two-degree-of-freedom controllers using reinforcement learning techniques, IET Contr. Theory Appli, vol.9, pp.1011-1021, 2015.

J. Zhang, J. Jin, and L. Huang, Model-free predictive current control of PMSM drives based on extended state observer using ultra-local model, IEEE Trans. Indus. Electron, 2020.

X. Zhang, M. Li, H. Ding, and X. Yao, Data-driven tuning of feedforward controller structured with infinite impulse response filter via iterative learning control, IET Contr. Theory Appli, vol.13, pp.1062-1070, 2019.

Y. Zhang, X. Liu, J. Liu, J. Rodriguez, and C. Garcia, Model-free predictive current control of power converters based on ultra-local model, IEEE Int. Conf. Indust. Technol, 2020.

X. Zhang, H. Wang, Y. Tian, L. Peyrodie, and X. Wang, Model-free based neural network control with time-delay estimation for lower extremity exoskeleton, Neurocomput, vol.272, pp.178-188, 2018.

X. Zhang, Z. Wei, R. Asad, X. Yang, and X. Wang, When does reinforcement learning stand out in quantum control? A comparative study on state representation, npj Quantum Inf, vol.5, 2019.

L. Zhu, J. Ma, and S. Wang, Deep neural networks based real-time optimal control for lunar landing, IOP Conf. Ser. Mater. Sci. Eng, vol.608, 2019.
