Term Paper 3
This work is a submission for the course Modelling Complex Systems (IDC-631) for the 2020-21 Spring (Even) semester by Dhruva Sambrani, MS18163
Abstract
This paper mainly attempts to recreate Kuramoto's model of coupled oscillators as defined in [strogatz]. It consists of four sections:
§1. Describe and implement the model as in the paper
§2. Check that the model evolves as we expect it to and make pretty animations
§3. Show that simulated results agree with certain analytical properties
§4. Study the robustness of the model to different distributions
How to read the paper
This paper is made using the Pluto[pluto] package, which provides an interface that shows code and theory side by side; this will hopefully enhance the understanding of the work and also provide a reproducible implementation without having to go into its details. Those who wish to read the paper for the theory and results can easily skip over the code and still understand everything. Those who are attempting to recreate the work can use the code as a guide. However, in the interest of keeping the code aligned to the topic at hand, certain auxiliary functions have been hidden. The content of these hidden cells can be found in the executable version of this notebook here
§0 Packages used
The Plots.jl[plots] package is used for plotting and the Statistics[stats] package for some statistical analysis. Distributions.jl[distributions] is used to sample the natural frequencies of the oscillators from various distributions. PlutoUI.jl[plutoui] is used for some UI elements such as the ToC. JLD[jld] is used for storing data between runs. Images.jl[images] is used for Gaussian smoothing. LsqFit.jl[lsqfit] is used to fit the κ_c vs σ data. The Random standard library is used to fix the random seed.
hidden code cell
§1 - The Kuramoto Model
This is a model which attempts to simulate coupled oscillators. Consider a set of N limit-cycle oscillators, the i-th having phase θ_i and a natural frequency ω_i drawn from a distribution g(ω), with some coupling between every pair of oscillators.

If the coupling is weak and symmetric, this can be reduced to the mean-field form

    dθ_i/dt = ω_i + (κ/N) Σ_j sin(θ_j - θ_i),    i = 1, ..., N

Defining the order parameter r·e^(iψ) = (1/N) Σ_j e^(iθ_j), where r measures the coherence of the population and ψ is the average phase, this can be recast into the following equation

    dθ_i/dt = ω_i + κ·r·sin(ψ - θ_i)
Agent
Let us define a structure called Agent which holds two variables R and V, which correspond to the oscillator's phase θ and its natural frequency ω respectively.
mutable struct Agent
    R::Float64
    V::Float64
end

Now let's define how an agent updates, given κ, r, Ψ and dt, as defined in the equations before.
function update!(agent::Agent, κr::Float64, Phi::Float64, dt::Float64)
    agent.R += (agent.V + κr * sin(Phi - agent.R)) * dt  # θ = θ + dθ/dt⋅Δt = θ + (ω + κr⋅sin(Ψ-θ))⋅dt
end

Helper Functions
begin
    Statistics.mean(arr::Array{Agent})::Complex = mean(map(arr) do ag; cis(ag.R); end)
    init(n::Int, σ::Float64) = Agent.(1., rand(Normal(0., σ), n))
end

init(n::Int, d::Distribution) = Agent.(1., rand(d, n))

Evolution and Simulation
As always, we define a single-step evolve! function along with a multi-step simulate function.
The evolve! function takes a list of agents and updates them according to the dynamics explained above, returning the mean vector. The simulate function sets up the parameters and records the mean vector at every step for plotting and further analysis.
function evolve!(agents::Array{Agent}, κ::Float64, dt::Float64)
    meanvec = mean(agents)
    update!.(agents, κ*abs(meanvec), angle(meanvec), dt)
    meanvec
end

begin
    function simulate(steps::Int, n::Int, σ::Float64, κ::Float64, time::Int64)
        dt = time/steps
        agents = init(n, σ)
        means = Array{Complex,1}(undef, Int(steps))
        for step in 1:steps
            means[step] = evolve!(agents, κ, dt)
        end
        means
    end
    function simulate(steps::Int, agents::Array{Agent,1}, κ::Float64, time::Int64)
        dt = time/steps
        agents = deepcopy(agents)
        means = Array{Complex,1}(undef, Int(steps))
        for step in 1:steps
            means[step] = evolve!(agents, κ, dt)
        end
        means
    end
end

§2 Visualizing the model
Let us first make sure that the model is running correctly. We can either visualize the positions of the oscillators over time, or we can plot the magnitude of the order parameter |r(t)| and check how the onset of synchronization depends on κ.
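A minimal sketch of how the magnitude of the order parameter could be plotted from the means computed in the next cell (the variable names no_corr_means and corr_means are taken from that cell; the plotting itself is not part of the visible code, so this is only an illustration):

plot(abs.(no_corr_means), label="\\kappa = 1")   # stays near 0: no synchronization
plot!(abs.(corr_means), label="\\kappa = 7",     # grows and saturates near r_∞
      xlabel="step", ylabel="|r(t)|", title="Order parameter vs time")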
begin
    no_corr_means = simulate(100_000, 1000, 3., 1., 40)
    corr_means = simulate(100_000, 1000, 3., 7., 40)
end;

Animated visualization of the dynamics
For this, we define a simulate2 function which has the same form as the previous function but also records the agents at each time step. Then we plot the locations of the oscillators and the mean vector at each time step and make an animation. Due to computational limitations, only 100 particles are simulated for 10,000 timesteps, and the animation uses every 20th frame. Also, for reproducibility, the random seed is fixed; the seed was selected so that the results are clearer to show, but all observations hold for unseeded runs as well. To test this, simply comment out the seed line.
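The definition of simulate2 lives in a hidden cell; a minimal sketch of what it might look like, assuming it mirrors simulate while additionally storing a snapshot of the agents after every step:

# Sketch (assumption: the hidden definition may differ in details).
function simulate2(steps::Int, n::Int, σ::Float64, κ::Float64, time::Int64)
    dt = time/steps
    agents = init(n, σ)
    means = Array{Complex,1}(undef, steps)
    snapshots = Array{Array{Agent,1},1}(undef, steps)   # one copy of the population per step
    for step in 1:steps
        means[step] = evolve!(agents, κ, dt)
        snapshots[step] = deepcopy(agents)
    end
    means, snapshots
end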
begin
    Random.seed!(1901)
    ms, ags = simulate2(10000, 100, 3., 7., 40)
end;

hidden code cell
hidden code cell
§3 Simulations agree with analysis
In Section 3.3 of [strogatz], the author mentions the following observations about the simulations (the K in the quoted text is our κ).
r∞ vs κ
For all K less than a certain threshold K_c, the oscillators act as if they were uncoupled: the phases become uniformly distributed around the circle, starting from any initial condition. Then r(t) decays to a tiny jitter of size O(N^(-1/2)).

When K exceeds K_c, this incoherent state becomes unstable and r(t) grows exponentially, reflecting the nucleation of a small cluster of oscillators that are mutually synchronized, thereby generating a collective oscillation. Eventually r(t) saturates at some level r_∞ < 1, though still with O(N^(-1/2)) fluctuations.
Both of these claims are shown to be true in the first plot. For a "high" κ (κ = 7 > κ_c), |r(t)| grows and then saturates at some r_∞ with small fluctuations, while for a "low" κ (κ = 1 < κ_c), |r(t)| stays at a small jitter around 0.
Individual Oscillators
The population splits into two groups: the oscillators near the center of the frequency distribution lock together at the mean frequency and co-rotate with the average phase ψ(t), while those in the tails run near their natural frequencies and drift relative to the synchronized cluster.
This mixed state is often called partially synchronized. With further increases in K, more and more oscillators are recruited into the synchronized cluster, and r tends to 1.
Both of these claims are also shown to be true in the animation and in the plots of the effective velocities. In the animation, we clearly see that most of the oscillators are in a clump, but a few drift away from the mean and stay out of the cluster. In the effective-velocity plot, this is clearly evident. A careful look also shows that the particles which are not in sync are the ones which were far away from the mean in the initial distribution.
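The effective velocities are computed in a hidden cell; a minimal sketch of one way to obtain them, assuming the effective velocity of an oscillator is its total phase change over the run divided by the elapsed time (the phases stored in Agent.R are never wrapped, so no unwrapping is needed):

# Sketch (assumption): effective velocity = (final phase - initial phase) / total time,
# using the per-step snapshots `ags` returned by simulate2 above (total time 40).
effective_velocity(snapshots, time) =
    (map(ag -> ag.R, snapshots[end]) .- map(ag -> ag.R, snapshots[1])) ./ time

# e.g. compare natural frequencies against effective velocities:
# scatter(map(ag -> ag.V, ags[1]), effective_velocity(ags, 40))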
κc vs g(ω)
The paper also analyses the critical coupling κ_c and how it depends on the distribution g(ω) of the natural frequencies; the original arguments for this were heuristic and were refined over several attempts.
Nonetheless, we can be reasonably sure that Kuramoto and Nishikawa's second attempt (§6.2 in [strogatz]) was more accurate, and by extension so was Strogatz's own analysis. From their analysis, κ_c = 2/(π·g(0)).
In the case of the Gaussian distribution with zero mean and standard deviation σ, g(0) = 1/(σ·√(2π)).
Hence, κ_c = 2/(π·g(0)) = σ·√(8/π) ≈ 1.596·σ, which for σ = 3 gives κ_c ≈ 4.79.
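As a quick numerical check (a sketch; g0 and κc are helper names introduced here, not from the notebook):

g0(σ) = 1/(σ * √(2π))      # value of the Gaussian density at ω = 0
κc(σ) = 2/(π * g0(σ))      # Kuramoto critical coupling
κc(3.), √(8/π) * 3         # both ≈ 4.787, the value marked in the plot below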
Computational implementation
We define a function simulate_for_sigma which searches by bisection for the region of κ where r_∞ changes appreciably: starting from the interval [κmn, κmx], it keeps bisecting and simulating at the midpoint until either the κ interval or the change in r_∞ across it becomes small, so that sample points accumulate around the transition.
In the ideal case, we'd average r_∞ over many independent runs and a long settling time; due to computational limitations, we instead estimate r_∞ from a single run as the mean of |r(t)| over the last 10,000 steps (get_rinf).
get_rinf(arr::Array{Complex}) = mean(abs.(last(arr, 10_000)))

function simulate_for_sigma(σ::Float64, κmn::Float64, κmx::Float64, κdiff::Float64, rdiff::Float64)
    function inner_function!(d::Dict{Float64, Float64}, κmn::Float64, κmx::Float64)
        if ((κmx - κmn) < κdiff) || ((d[κmx] - d[κmn]) < rdiff)
            return
        end
        κ = (κmx + κmn)/2
        d[κ] = get_rinf(simulate(100_000, 1000, σ, κ, 100))
        inner_function!(d, κ, κmx)
        inner_function!(d, κmn, κ)
    end
    d = Dict{Float64, Float64}()
    d[κmn] = get_rinf(simulate(100_000, 1000, σ, κmn, 100))
    d[κmx] = get_rinf(simulate(100_000, 1000, σ, κmx, 100))
    inner_function!(d, κmn, κmx)
    return d
end

boxsmooth(arr, b) = [mean(arr[i-b:i+b]) for i in b+1:length(arr)-b]

Run new simulation?
begin
    ks = 0
    rinfs = 0
    if runsim
        d = simulate_for_sigma(3., 0., 10., 0.01, 0.01)
        ks = sort(collect(keys(d)))
        rinfs = map(ks) do k; d[k]; end
    else
        d = load("save_sigma_3_nice.jld")
        ks, rinfs = d["ks"], d["rinfs"]
    end
    plot(boxsmooth(ks, 3), boxsmooth(rinfs, 3), label="Data")
    vline!([√(8/π)*3], label="Theoretical prediction")
    vline!([1.56*3], label="\\kappa_{c} from fitting below")
    plot!(xlabel="\\kappa", ylabel="\\ r_∞", title="\\kappa vs r_∞", legend=:right)
end

Stability of κc over σ
Using the powerful computers at our disposal, we can run the simulation on multiple combinations of κ and σ and check whether the critical coupling κ_c indeed scales linearly with σ.
We measure the points of phase transition by taking a discrete derivative of the (Gaussian-smoothed) r_∞ surface along the κ direction, marking the points where this derivative exceeds a threshold, and fitting a straight line κ_c = m·σ through them.
Code to generate data
function graph(κs::Array{Float64}, σs::Array{Float64})
arr = zeros(Float64, length(κs), length(σs))
for (i, κ) in enumerate(κs), (j, σ) in enumerate(σs)
arr[i,j] = mean(
abs,
simulate(Int(1e5), 1000, σ, κ, 1e-4)[end - 1000:end]
)
end
arr
end
κs, σs = collect(0.0:0.1:2.5), collect(0.0:0.1:2.5)
zs = graph(κs, σs)
save("save.jld", "zs", zs, "ks", κs, "ss", σs)
begin
    kcs_data = load("save.jld")
    kcszs, kcsκs, kcsσs = kcs_data["zs"], kcs_data["ks"], kcs_data["ss"]
    kcsnzs = imfilter(kcszs, Kernel.gaussian(0.7))
    deriv = kcsnzs[2:end, :] - kcsnzs[1:end-1, :]
    j = findall(deriv .> 0.09)
    ys = map(j) do i; kcsκs[i[1]]; end
    xs = map(j) do i; kcsσs[i[2]]; end
    @. model(x, m) = m*x
    fit = curve_fit(model, xs, ys, [0.])
end;

Interactive plot below
Observations
The single run and the extended analysis both show the same result: the critical coupling κ_c grows linearly with σ, and the fitted slope (≈ 1.56) is close to, though slightly below, the theoretical prediction √(8/π) ≈ 1.60.
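As a quick comparison (a sketch; it assumes the fit object defined in the analysis cell above):

# Fitted slope of κ_c versus σ against the theoretical slope √(8/π) ≈ 1.596;
# the σ = 3 plot above quotes the fitted value as ≈ 1.56.
fit.param[1], √(8/π)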
§4 Robustness of results to distributions
As mentioned in the previous subsection, let us run the same simulation with some other distributions of natural frequencies.
kappas = 0:0.5:10

Gaussian Distribution
We've already run this before, but let's do it again.
gaussian_rs = map(kappas) do kappa
    mean(abs.(last(simulate(100_000, 1000, 3., kappa, 40), 1000)))
end;

Lorentz-Cauchy distributions
(cell output: cauchy_rs, the run-averaged r_∞ values for κ = 3:0.5:12, increasing from ≈ 0.027 at κ = 3 to ≈ 0.711 at κ = 12)
begin
    cauchy_kappas = 3:0.5:12
    cauchy_rs_1 = map(cauchy_kappas) do kappa
        mean(abs.(last(simulate(50_000, init(1000, Cauchy(0,3)), kappa, 40), 1000)))
    end
    cauchy_rs_2 = map(cauchy_kappas) do kappa
        mean(abs.(last(simulate(50_000, init(1000, Cauchy(0,3)), kappa, 40), 1000)))
    end
    cauchy_rs_3 = map(cauchy_kappas) do kappa
        mean(abs.(last(simulate(50_000, init(1000, Cauchy(0,3)), kappa, 40), 1000)))
    end
    cauchy_rs = (cauchy_rs_1 + cauchy_rs_2 + cauchy_rs_3)/3
end

Reflected exponential decay
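The red and rejectionSample functions used below are defined in hidden cells; a minimal sketch of what they might look like, assuming red(σ) returns the reflected-exponential (Laplace) density ω ↦ exp(-|ω|/σ)/(2σ) and rejectionSample draws n values from a density by rejection sampling over a finite window:

# Sketch (assumption: the hidden definitions may differ).
red(σ) = ω -> exp(-abs(ω)/σ) / (2σ)

function rejectionSample(f, n; lo=-30.0, hi=30.0)
    fmax = f(0.0)                        # the density peaks at ω = 0
    samples = Float64[]
    while length(samples) < n
        x = lo + (hi - lo) * rand()      # propose uniformly on [lo, hi]
        rand() * fmax ≤ f(x) && push!(samples, x)
    end
    samples
end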
red_ags = Agent.(0.0, rejectionSample(red(3), 1000));

begin
    red_ags_2 = deepcopy(red_ags)
    red_rs = map(kappas) do kappa
        mean(abs.(last(simulate(100_000, red_ags_2, kappa, 40), 1000)))
    end
end;

Observations
Like the Gaussian, the other distributions also show a slight discrepancy from the expected value. However, by a visual analysis, we can say that the theoretical results are close to what we would get in the infinite limit. For a more accurate analysis, one could fit the function mentioned in the paper to the data and then look at the value of κ_c obtained from the fit.
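For instance, for the Lorentz-Cauchy case the paper gives the exact branch r = √(1 - κ_c/κ) above onset, with κ_c = 2γ (so κ_c = 6 for γ = 3). A minimal sketch of such a fit, reusing the LsqFit pattern from before (sqrtlaw and cauchy_fit are names introduced here):

@. sqrtlaw(κ, kc) = sqrt(max(0, 1 - kc/κ))                         # 0 below onset, √(1 - κ_c/κ) above
cauchy_fit = curve_fit(sqrtlaw, collect(cauchy_kappas), cauchy_rs, [6.0])
cauchy_fit.param                                                   # estimated κ_c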
Another observation is that heavy-tailed distributions are much more prone to fluctuations in the partially coherent part of the graph. In order to reduce this, we average the results over 3 runs with random initial conditions for the Lorentz-Cauchy distribution.
References
strogatz
Steven H. Strogatz, From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators, Physica D: Nonlinear Phenomena, Volume 143, Issues 1–4, 2000, Pages 1–20. https://doi.org/10.1016/S0167-2789(00)00094-4
julia
Julia: A Fresh Approach to Numerical Computing. Jeff Bezanson, Alan Edelman, Stefan Karpinski, Viral B. Shah. (2017) SIAM Review, 59: 65–98. doi: 10.1137/141000671
plots
Plots - powerful convenience for visualization in Julia. Thomas Breloff. docs.juliaplots.org
pluto
Pluto - Reactive Notebooks for Julia. Fons van der Plas and Mikołaj Bochenski. https://github.com/fonsp/Pluto.jl
plutoui
PlutoUI. Fons van der Plas. https://github.com/fonsp/PlutoUI.jl
stats
Statistics - Julia stdlib for Statistics. https://docs.julialang.org/en/v1/stdlib/Statistics/
images
JuliaImages: image processing and machine vision for Julia. https://juliaimages.org/stable/
lsqfit
LsqFit.jl: Basic least-squares fitting in pure Julia under an MIT license. https://github.com/JuliaNLSolvers/LsqFit.jl
distributions
Dahua Lin. JuliaStats/Distributions.jl: v0.24.15 (2021). doi:10.5281/zenodo.4577319
jld
JLD.jl. JuliaIO. https://github.com/JuliaIO/JLD.jl