First set-up (fixed laser)
A target sits on a rail, 1 km from a laser that’s fixed at the FAR end of the rail. The laser fires a pulse that takes a fixed time ‘t’ to reach the target.
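For concreteness, here’s what ‘t’ works out to; a minimal sketch in Python, using only the 1 km distance given above:

```python
# Baseline case: both laser and target fixed, 1 km apart.
d = 1_000.0        # rail length in metres (from the set-up above)
c = 299_792_458.0  # speed of light in m/s

t = d / c
print(f"t = {t * 1e6:.4f} microseconds")  # ~3.3356 us
```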
Now the target is placed close to the laser, then sent at high speed away from it. At the precise instant the target passes that same 1 km mark, the laser again fires a pulse at the now-receding target.
This time, when the pulse reaches the 1 km mark, the target will no longer be there. Instead, the target will detect the pulse at a later time t + ‘x’ (as measured by clocks at rest on the rail). Moreover, when the pulse is detected by the receding target, it will be stretched in duration, as will the wavelength of its light. The speed of that light, though, as measured at the target as the pulse passes, will remain at ‘c’, as ever.
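A minimal sketch of the chase arithmetic, assuming a recession speed of 0.5c (the post doesn’t specify one); times here are in the rail/laser frame:

```python
import math

d = 1_000.0           # rail length in metres
c = 299_792_458.0     # speed of light in m/s
beta = 0.5            # ASSUMED example speed: target recedes at 0.5c
v = beta * c

t = d / c             # travel time to a fixed target
t_chase = d / (c - v) # pulse must catch the receding target
x = t_chase - t       # the extra delay 'x'

# Relativistic Doppler: the received wavelength (and the pulse
# duration) is stretched by this factor, which depends only on
# the recession speed.
stretch = math.sqrt((1 + beta) / (1 - beta))

print(f"t       = {t * 1e6:.4f} us")
print(f"t + x   = {t_chase * 1e6:.4f} us  (x = {x * 1e6:.4f} us)")
print(f"stretch = {stretch:.4f}x")
```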
Second set-up (fixed target)
A laser sits on a rail, 1 km from a target that’s fixed at the FAR end of the rail. The laser fires a pulse that takes a fixed time ‘t’ to reach the target (as before).
Now the laser is placed close to the target, then sent at high speed away from it. At the precise instant the laser passes that same 1 km mark, it again fires a pulse back towards the target.
This time, when the pulse reaches the fixed target, conventional theory predicts that, just as when both were fixed, ONLY time ‘t’ will have elapsed (NOT t + ‘x’), because the speed of light is independent of the motion of its source. Yet the pulse will be stretched, as will the wavelength of its light, exactly as when the target was receding. The speed of that light, as measured at the target as it passes, will again remain at ‘c’, of course.
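The same sketch for this second set-up, with the same assumed 0.5c: in the target’s frame the pulse starts 1 km away and travels at ‘c’ regardless of how its source moves, so the arrival time is just ‘t’, while the Doppler stretch comes out identical because it depends only on the relative speed:

```python
import math

d = 1_000.0        # rail length in metres
c = 299_792_458.0  # speed of light in m/s
beta = 0.5         # ASSUMED example speed: laser recedes at 0.5c

# Light's speed at the target is 'c' whatever the source is doing,
# so the pulse covers the 1 km in the same time as the baseline case.
t = d / c

# Identical stretch factor to set-up 1: it depends only on the
# relative recession speed, not on which end is "moving".
stretch = math.sqrt((1 + beta) / (1 - beta))

print(f"arrival time = {t * 1e6:.4f} us")  # t, not t + x
print(f"stretch      = {stretch:.4f}x")    # same as set-up 1
```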
Now here’s the question: what’s the difference between a receding laser and a receding target? Why should the arrival time differ when the two objects are moving apart in exactly the same way, depending only on which one we treat as fixed and which as moving? Is ‘conventional theory’ wrong?