[Solved] Module Activation/Deactivation Start/Stop Scripts
Posted: 11 May 2012, 10:01
Just a thought...
Modules should have a special folder that can be removed from the "Live Filesystem" after execution, like Slackware's /install/doinst.sh script, in order to bring up or take down services and perform cleanup when a module is activated or deactivated. For instance, it would be BAD to deactivate the nfs-utils module without first running umount -t nfs -a. The Apache webserver should stop itself before its module is deactivated, or Apache would ultimately crash because its files went missing in the blink of an eye. Likewise, Apache should start if its module is included for activation during startup or is activated manually.

The script could be run directly from the module's mountpoint to ensure conflicts in filenames, etc. don't occur. Once the module is mounted, the folder would be rm -rf'ed from the live filesystem side, hiding the startup scripts; of course they would still be accessible from the module mountpoint, which is where activate and deactivate would run them from. The folder could be deep in the system anyway; it would probably be proper to put it in something like an "/etc/rc.d/rc.live" subfolder, with "startmodule" and "stopmodule" as the executable names. I didn't include .sh because that file could be ANY executable, including an ELF binary that might handle first-time setup of the module if certain configuration files on the live system were missing, indicating that this was the first time the module had ever been activated. This procedure should take place for modules placed in the /modules subfolder, and could be ignored for modules residing in the /base subfolder, as those modules construct the main system: libraries, access to shell interpreters, etc.
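The flow above could be sketched roughly as follows; run_module_hook, the rc.live layout, and the mountpoint paths are all hypothetical names for illustration, not an existing API:

```shell
# A minimal sketch (assumed layout) of running a module's start/stop hook
# from its own mountpoint, then hiding the hook directory on the live
# (union) side so it disappears from the running system.
run_module_hook() {
    modpath="$1"    # module's read-only mountpoint, e.g. /mnt/live/modules/apache
    action="$2"     # "startmodule" or "stopmodule"
    live_root="$3"  # live filesystem root (normally /)

    hook="$modpath/etc/rc.d/rc.live/$action"
    # Run the hook from the module mountpoint so nothing depends on the
    # union copy, which gets removed just below.
    if [ -x "$hook" ]; then
        "$hook"
    fi

    # Hide the hooks on the live side; the originals stay readable
    # under $modpath for later deactivation.
    rm -rf "$live_root/etc/rc.d/rc.live"
}
```

Because the hook is executed via the module's own mountpoint, deleting the union copy afterwards doesn't break anything; deactivate can still find stopmodule under the mountpoint later.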
Even if the module stopped the module loader during startup in order to ask questions, you could very easily answer those questions or interact with whatever ncurses or shell-script interface comes from running startmodule when it detects a first-time run; afterwards the script, ELF binary, or ncurses interface would end and the module activation script would resume...
At the moment the only way to ensure a service starts is to include the rc file in something like /etc/rc.d/rc3.d, but that only executes if the module was activated at startup time, not if the module is activated manually. So far I've yet to see a Live Distro based on this system that implements such a plan (Slax or what have you)... However, I think it is needed to prevent problems in certain cases. Older modules not implementing this would simply be activated with no startup or stop script executed. My only complaint is that this could open a small security hole: since module activation runs as root, the shell script, ELF binary, etc. would also run as root, which could let a module literally destroy the system. A "cheatcode" could be implemented to prevent module startup actions from being executed. But really this isn't any worse than using any other module, since a bad module could be constructed to autostart by putting the right rc file in place during mount, causing execution during startup anyway, which once again runs as root.
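Such a cheatcode check might look like this; the "noscripts" name is made up here purely for illustration:

```shell
# Sketch: suppress module start/stop hooks when a hypothetical "noscripts"
# cheatcode appears on the kernel command line.
hooks_enabled() {
    cmdline="$1"    # normally: $(cat /proc/cmdline)
    case " $cmdline " in
        *" noscripts "*) return 1 ;;   # cheatcode present: hooks disabled
        *)               return 0 ;;   # default: hooks run
    esac
}
```

The activation code would then only call a module's startmodule/stopmodule when hooks_enabled succeeds.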
For further security, modules that aren't +x shouldn't mount and become part of the live filesystem. I realize the flag is meant for executing files, but since we don't execute modules directly, we can simply +x or -x the module files and leave them in the modules folder; only modules that are +x would activate on startup. This method wouldn't affect manual activation from the command line... Testing a file for execute permission is easy as pie; there are MANY examples of it in /etc/rc.d/rc.M...
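The execute-bit test is a one-liner; a sketch of the startup-time selection, assuming a Slax-style .lzm extension and a flat modules directory:

```shell
# Sketch: at startup, pick only module files carrying the execute bit.
# The .lzm extension and directory layout are assumptions.
list_autostart_modules() {
    dir="$1"
    for mod in "$dir"/*.lzm; do
        [ -e "$mod" ] || continue          # glob matched nothing
        [ -x "$mod" ] && printf '%s\n' "$mod"
    done
    return 0
}
```

A plain `chmod -x module.lzm` would then park a module without moving it out of the folder, and `chmod +x` would re-enable it.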
Posted after 22 minutes 30 seconds:
Keep in mind this could also be used to optimize module activation... At the moment activate runs a set of commands to ensure libraries, KDE application menus, etc. get put in place... However, not all applications will have a place in KDE or have libraries associated. To overcome this, simply use "flag" files: if /etc/rc.d/rc.live/run-ldconfig (a 0-byte file) exists, then the module has libraries and needs ldconfig run to ensure proper access. During startup you mount all modules first, then check the live filesystem directory instead of each module directory. If even one module has /etc/rc.d/rc.live/run-ldconfig, it will show up on the live filesystem; same if four modules have that file. But if none of them do, the file will be missing from the live filesystem. Modules completely missing the /etc/rc.d/rc.live directory should follow the old rules, and just run everything to ensure what needed to be done got done. Once again, perform cleanup with rm -rf /etc/rc.d/rc.live on the Live Filesystem.
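A rough sketch of that union-side flag check; run-ldconfig comes from the idea above, while run-kde-menu is a hypothetical second flag added only to show the pattern:

```shell
# Sketch: decide post-activation work from flag files visible on the
# union after all modules are mounted. Flag names are illustrative.
post_activate_steps() {
    live_root="$1"   # live filesystem root (normally /)
    flags="$live_root/etc/rc.d/rc.live"

    if [ ! -d "$flags" ]; then
        echo "legacy: run everything"   # no flag dir anywhere: old rules
        return 0
    fi
    [ -e "$flags/run-ldconfig" ] && echo "run ldconfig"
    [ -e "$flags/run-kde-menu" ] && echo "rebuild KDE menu"
    return 0
}
```

Since the union merges every mounted module, one existence test per flag covers all of them: the flag is present if any module shipped it, absent only if none did.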