Let #x_0(t)# and #x_1(t)# be two inputs with corresponding outputs #y_0(t)# and #y_1(t)#. The definition I usually see is that a system is causal if #x_0(t)=x_1(t)# for #a\leq t < b# implies #y_0(t)=y_1(t)# for #a\leq t < b#. This discussion has made me go back and look at the more precise definitions of causality that the more mathematical treatments of linear system theory often use, since the "output doesn't depend on future inputs" notion provided in introductory treatments leaves a lot of room for interpretation. The first derivative of your input is a well-defined step, and the second derivative is #\delta(t-t_0)#. If we allow generalized functions, as EEs very often do (usually without explicitly mentioning it), then there is no problem. A system has a domain and a range: if we are using classical analysis, then the logical domain of the differentiator would be a space of differentiable functions. You can simply do synthetic division to make the system function look something like #H(s) = c_0\, s^p + c_1\, s^{p-1} + \cdots#. If differentiators are not considered causal, then what the OP wants to prove is pretty trivial. The output is normally defined by an integral whose limits include #t_0#, so I had assumed they are causal.
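The agreement-on-an-interval definition above can be checked numerically. Here is a minimal discrete-time sketch (the systems, signal lengths, and interval are all illustrative assumptions, not from the thread): a unit delay is causal, so two inputs that agree on an initial interval produce outputs that agree there; a unit advance is not, so its outputs can already differ on that interval.

```python
import numpy as np

def delay(x):
    # Causal system: y[n] = x[n-1] (assumed zero initial condition)
    return np.concatenate(([0.0], x[:-1]))

def advance(x):
    # Non-causal system: y[n] = x[n+1] (output peeks at a future input)
    return np.concatenate((x[1:], [0.0]))

rng = np.random.default_rng(0)
x0 = rng.standard_normal(20)
x1 = x0.copy()
x1[10:] = rng.standard_normal(10)  # x0 and x1 agree only for n < 10

# Causal system: outputs agree wherever the inputs agreed.
print(np.allclose(delay(x0)[:10], delay(x1)[:10]))      # True
# Non-causal system: output at n = 9 depends on the inputs at n = 10.
print(np.allclose(advance(x0)[:10], advance(x1)[:10]))  # False
```

This is exactly the definition restated: the delay's output on #[0, 10)# is fixed by the inputs on that interval, while the advance's is not.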
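The synthetic-division step mentioned above can be sketched with `numpy.polydiv`: an improper rational #H(s)# splits into a polynomial part (the pure differentiator terms) plus a strictly proper remainder. The particular transfer function below is a made-up example, not one from the discussion.

```python
import numpy as np

# Example improper system function: H(s) = (s^3 + 2s + 1) / (s + 1)
num = np.array([1.0, 0.0, 2.0, 1.0])  # s^3 + 0*s^2 + 2s + 1
den = np.array([1.0, 1.0])            # s + 1

# Polynomial long (synthetic) division:
# H(s) = quotient(s) + remainder(s) / (s + 1)
quotient, remainder = np.polydiv(num, den)

print(quotient)   # [ 1. -1.  3.]  -> polynomial part s^2 - s + 3
print(remainder)  # [-2.]          -> strictly proper part -2/(s + 1)
```

So here #H(s) = s^2 - s + 3 - \frac{2}{s+1}#: the quotient carries the differentiator behavior, and the remainder is an ordinary proper (causal) system.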