
I implemented a first draft of the Adams-Bashforth multistep solver in C++. To make this method work, you need coefficients. On Wikipedia these are the b_j in this formula:

y_(n+1) = y_n + h * sum_{j=0}^{s-1} b_j * f(t_(n-j), y_(n-j))
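To make the indexing concrete, here is a minimal sketch of one fixed-step Adams-Bashforth update (in Python with a hypothetical helper name, not the actual C++ code): `history[0]` holds f(t_n, y_n), `history[1]` holds f(t_(n-1), y_(n-1)), and so on, matching the ordering of the b_j above.

```python
import math

def adams_bashforth_step(y_n, history, b, h):
    """Advance y_n to y_(n+1) with coefficients b = [b_0, ..., b_(s-1)].
    history[j] is f(t_(n-j), y_(n-j)), newest first."""
    return y_n + h * sum(bj * fj for bj, fj in zip(b, history))

# Example: 2-step AB (b = [3/2, -1/2]) on y' = y, starting from exact history.
h = 0.01
y1 = math.exp(h)          # exact value at t_1
history = [y1, 1.0]       # f = y, so f_1 = y_1 and f_0 = y_0 = 1
y2 = adams_bashforth_step(y1, history, [1.5, -0.5], h)
print(abs(y2 - math.exp(2 * h)))  # small, since the local error of AB2 is O(h^3)
```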

I couldn't find any tables online that go beyond order 6 for these coefficients, so I tried calculating them myself with this MATLAB script:

    function bj = adamsBashforthCoefficients(order)
        % Interpolation nodes t_(n-j) in units of the step size: 0, -1, ..., -(order-1)
        nodes = -(0:(order-1));

        bj = zeros(1, order);
        for j = 1:order
            % j-th Lagrange basis polynomial over the nodes
            others = nodes([1:j-1, j+1:end]);
            L = @(x) arrayfun(@(xi) prod((xi - others) ./ (nodes(j) - others)), x);

            % b_j is the integral of the basis polynomial over one step
            bj(j) = integral(L, 0, 1, 'AbsTol', 1e-12, 'RelTol', 1e-12); % Test if tolerances were too high - they weren't
        end
    end
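As a cross-check that quadrature tolerance is not the culprit, the same coefficients can be computed in exact rational arithmetic. A sketch in Python using `fractions.Fraction` (hypothetical function name): it builds each Lagrange basis polynomial as a list of monomial coefficients and integrates it symbolically over [0, 1].

```python
from fractions import Fraction

def adams_bashforth_coefficients(order):
    """Exact b_j for the order-'order' Adams-Bashforth method."""
    nodes = [Fraction(-j) for j in range(order)]  # t_(n-j) in units of h
    b = []
    for j in range(order):
        # Build the j-th Lagrange basis polynomial as a coefficient list,
        # poly[k] = coefficient of x^k, starting from the constant polynomial 1.
        poly = [Fraction(1)]
        for m, node in enumerate(nodes):
            if m == j:
                continue
            denom = nodes[j] - node
            # Multiply poly by (x - node) / denom.
            new = [Fraction(0)] * (len(poly) + 1)
            for k, c in enumerate(poly):
                new[k + 1] += c / denom
                new[k] += -node * c / denom
            poly = new
        # Integrate over [0, 1]: sum of c_k / (k + 1).
        b.append(sum(c / (k + 1) for k, c in enumerate(poly)))
    return b

print(adams_bashforth_coefficients(3))
# [Fraction(23, 12), Fraction(-4, 3), Fraction(5, 12)]
```

Converted to floats, the order-6 result matches the values below, which suggests the MATLAB script is fine and the coefficients are not the problem.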

Up to order 6 this gives the same values as found online, so I assumed they are correct. However, while Adams-Bashforth works fine up to order 6, order 7 or higher frequently becomes unstable: some variables explode to huge values.

Can higher orders of Adams-Bashforth create instabilities where lower orders work? Should that already happen at order 7 (MATLAB's ode113 goes up to order 11 or 12)? Or do I simply have the wrong coefficients?

If the coefficients are wrong, I would greatly appreciate it if somebody could show me code to calculate them correctly, or simply point me to a table that goes to higher orders (at least 11-12). For comparison, here is what my script calculated for orders 6-8:

    // Order 6
    { 2.970138888888889, -5.502083333333333, 6.931944444444445, -5.068055555555556, 1.997916666666667, -0.329861111111111 }

    // Order 7
    { 3.285730820105820, -7.395634920634921, 11.665823412698412, -11.379894179894180, 6.731795634920635, -2.223412698412698, 0.315591931216931 }

    // Order 8
    { 3.589955357142857, -9.525206679894177, 18.054538690476193, -22.027752976190477, 17.379654431216931, -8.612127976190475, 2.445163690476190, -0.304224537037037 }
  • An exact method to systematically compute the coefficients by algebraic means is presented in my answer at math.stackexchange.com/questions/3473300/…. Check the literature for specifics; in short, you will find that the stability region of these methods shrinks very rapidly. The consequence is that the usable step sizes may reach the boundaries of the floating-point format, with the increasing number of steps giving a corresponding increase in the amplitude of the floating-point noise. Commented Oct 12, 2024 at 16:52
  • For large step sizes, the leading error term competes with the higher-degree error terms. For small step sizes, the numerical error competes with the accumulated floating-point error. You want the segment between these extremes. For stiff ODEs or methods with small stability regions, this segment might shrink to nothing, with effects like you described. Commented Oct 12, 2024 at 17:58
  • @LutzLehmann Indeed: further tests revealed that whether or not orders above 6 are stable depends on the step size. At somewhat smaller step sizes the higher orders were stable, so I think my coefficients are correct. Do you think it makes sense to use Adams-Moulton (with the next time step estimated by Adams-Bashforth) to increase stability? My ODE function is very expensive, so the usual two evaluations per step of Adams-Bashforth-Moulton are inefficient compared to one evaluation with Adams-Bashforth. But I could also just interpolate y_(n+1) at first. Do you think this would be precise enough? Commented Oct 13, 2024 at 20:04
  • If you need implicit methods, then the extrapolation will be lacking whatever method you use, as the solution is too curvy. The stability regions of the implicit methods are larger, but also finite and shrinking with higher orders. I'm not sure I remember this right, but the comparison in terms of function evaluations between comparable Runge-Kutta and multistep methods is rather tight. And using Newton-like methods is always better than simple fixed-point iteration. Commented Oct 13, 2024 at 20:29
  • I just discovered that there is also a hybrid method, which is basically a multistep Runge-Kutta (called Runge-Kutta-Nyström). As I understand it, it needs N ODE evaluations to achieve order N (maybe +/- 1). But it should be more stable than the Adams methods at higher step sizes. Can someone tell me what increase in step size one can expect (if that is possible)? Say I do 5 ODE evals for order 5. Stability aside, RKN would have to permit a 5 times bigger step size to compete with Adams efficiency-wise... Commented Oct 14, 2024 at 17:00
