Voltage control of solid oxide fuel cell power plant based on intelligent proportional integral-adaptive sliding mode control with anti-windup compensator, Trans. Inst. Measur. Contr., vol.42, pp.116-130, 2020.
On the control of robot manipulator: A model-free approach, J. Comput. Sci., vol.31, pp.6-16, 2019.
Synthesis of reinforcement learning, neural networks and PI control applied to a simulated heating coil, Artif. Intell. Engin., vol.11, pp.421-429, 1997.
Advanced PID Control, Instrum. Soc. Amer., 2006.
Feedback Systems: An Introduction for Scientists and Engineers, 2008.
Toward a model-free feedback control synthesis for treating acute inflammation, J. Theoret. Biology, vol.448, pp.26-37, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01756743
Towards a unified model-free control architecture for tail sitter micro air vehicles: Flight simulation analysis and experimental flights, AIAA Scitech Forum, 2020.
Model-free control algorithms for micro air vehicles with transitioning flight capabilities, Int. J. Micro Air Vehic., vol.12, 2020.
URL : https://hal.archives-ouvertes.fr/hal-02542982
Deep learning and model predictive control for self-tuning mode-locked lasers, J. Opt. Soc. Am. B, vol.35, pp.617-626, 2018.
Meilleure élasticité « nuagique » par commande sans modèle (Better "cloud" elasticity via model-free control), 2018.
On-line parametric estimation of damped multiple frequency oscillations, Elec. Power Syst. Res., vol.154, pp.423-452, 2018.
Fonctions d'une variable réelle (Functions of a Real Variable), Hermann; English translation, 1976.
Machine learning for fluid mechanics, Annu. Rev. Fluid Mech., vol.52, pp.477-508, 2020.
URL : https://hal.archives-ouvertes.fr/hal-02398670
Control of chaotic systems by deep reinforcement learning, Proc. Roy. Soc. A, vol.475, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02406677
Reinforcement learning for control: Performance, stability, and deep approximators, Annual Rev. Contr., vol.46, pp.8-28, 2018.
Applying neural networks to on-line updated PID controllers for nonlinear process control, J. Process Contr., vol.14, pp.211-230, 2004.
On replacing PID controller with deep learning controller for DC motor system, J. Automat. Contr. Engin., vol.3, pp.452-456, 2015.
Shaping 800nm pulses of Yb/Tm co-doped laser: A control theoretic approach, Ceramics Int., 2020.
Unmanned powered paraglider flight path control based on PID neural network, IOP Conf. Ser. Mater. Sci. Eng., vol.470, p.012008, 2019.
Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control, J. Fluid Mech., vol.865, pp.281-302, 2019.
Model-free control performance improvement using virtual reference feedback tuning and reinforcement Q-learning, Int. J. Syst. Sci., vol.48, pp.1071-1083, 2017.
Modelling, simulation and real-time control of a laboratory tide generation system, Contr. Eng. Pract., vol.83, pp.165-175, 2019.
A tour of reinforcement learning: The view from continuous control, Annu. Rev. Contr. Robot. Autonom. Syst., vol.2, pp.253-279, 2019.
La production de nitrites lors de la dénitrification des eaux usées par biofiltration : stratégie de contrôle et de réduction des concentrations résiduelles (Nitrite production during wastewater denitrification by biofiltration: control strategy and reduction of residual concentrations), J. Water Sci., vol.31, pp.61-73, 2018.
Artificial Intelligence: A Modern Approach, 2016.
Model-free control of an electroactive polymer actuator, Mater. Res. Expr., vol.6, p.55309, 2019.
The unreasonable effectiveness of deep learning in artificial intelligence, Proc. Nat. Acad. Sci., 2020.
Mastering the game of Go with deep neural networks and tree search, Nature, vol.529, pp.484-489, 2016.
Algebraic Identification and Estimation Methods in Feedback Control Systems, 2014.
Analysis and Design of Machine Learning Techniques, Springer, 2014.
Statistical Reinforcement Learning: Modern Machine Learning Approaches, 2015.
Reinforcement Learning, 2018.
URL : https://hal.archives-ouvertes.fr/hal-00764281
Fast ferry smoothing motion via intelligent PD controller, J. Marine. Sci. App., vol.17, pp.273-279, 2018.
Model-free control for machine tool systems, 2020.
Modulated model-free predictive control with minimum switching losses for PMSM drive system, IEEE Access, vol.8, pp.20942-20953, 2020.
Intelligent proportional differential neural network control for unknown nonlinear system, Stud. Informat. Contr., vol.25, pp.445-452, 2016.
A novel approach to feedback control via deep reinforcement learning, IFAC PapersOnLine, pp.31-36, 2018.
Ultra-local model predictive control: A model-free approach and its application on automated vehicle trajectory tracking, Contr. Eng. Pract., vol.101, p.104482, 2020.
Multiagent quadrotor testbed control design: integral sliding mode vs. reinforcement learning, IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2005.
Robust recurrent neural network control of biped robot, J. Intell. Robot. Syst., vol.49, pp.151-169, 2007.
Development and control of four-wheel independent driving and modular steering electric vehicles for improved maneuverability limits, SAE Tech. Paper, pp.2019-2020, 2019.
Operational Calculus (translated from the Japanese), 1984. ,
Data-driven design of two-degree-of-freedom controllers using reinforcement learning techniques, IET Contr. Theory Appli., vol.9, pp.1011-1021, 2015.
Model-free predictive current control of PMSM drives based on extended state observer using ultra-local model, IEEE Trans. Indus. Electron., 2020.
Data-driven tuning of feedforward controller structured with infinite impulse response filter via iterative learning control, IET Contr. Theory Appli., vol.13, pp.1062-1070, 2019.
Model-free predictive current control of power converters based on ultra-local model, IEEE Int. Conf. Indust. Technol., 2020.
Model-free based neural network control with time-delay estimation for lower extremity exoskeleton, Neurocomput., vol.272, pp.178-188, 2018.
When does reinforcement learning stand out in quantum control? A comparative study on state representation, npj Quantum Inf., 2019.
Deep neural networks based real-time optimal control for lunar landing, vol.608, p.1, 2019.