- Calculate & Visualize the Hopalong Attractor with Python
The "Hopalong" attractor*, invented by Barry Martin of Aston University in Birmingham, England, was popularized by A.K. Dewdney in the September 1986 issue of Scientific American. In Germany, it gained further recognition through a translation titled "Hüpfer" in Spektrum der Wissenschaft.
*Nicknamed by A.K. Dewdney.
The two Python programs provided calculate and visualize the "Hopalong" attractor by iterating the following recursive functions:

x_{n+1} = y_n - sgn(x_n) * sqrt(|b * x_n - c|)
y_{n+1} = a - x_n

Where:
- The sequence starts from the initial point (x_0, y_0) = (0, 0).
- x_n and y_n represent the coordinates at the n-th iteration.
- a, b and c are parameters influencing the attractor's dynamics.
- sgn is the sign (signum) function; in the code, however, math.copysign() is used, which returns a float with the magnitude of its first argument and the sign of its second (see the note on the copysign function further below).
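For clarity, here is a minimal sketch of a single iteration step in Python, using math.copysign in place of sgn exactly as the programs do further below (the function name and the example loop are illustrative only):

```python
import math

def hopalong_step(x, y, a, b, c):
    # One Hopalong iteration; math.copysign(1.0, x) takes the role of sgn(x).
    return y - math.copysign(1.0, x) * math.sqrt(abs(b * x - c)), a - x

# First few points for the example parameters a = -2, b = -0.33, c = 0.01, starting at (0, 0)
x, y = 0.0, 0.0
for _ in range(3):
    x, y = hopalong_step(x, y, -2.0, -0.33, 0.01)
    print(x, y)
```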
- Representation of the attractor as a density map that approximates a probability density function (PDF)
- Calculation with a very high number of iterations at high processing speed and with low memory requirements
- Two-pass algorithm with separate calculation of:
  - the spatial extent of the attractor trajectory (first pass)
  - the direct mapping of the sequentially generated floating-point values in continuous space to a discrete image, while tracking the number of pixel hits to display the density map (second pass)
- Just-in-time (JIT) compilation, supported by a low-complexity code structure
For further details regarding the two-pass approach, see Two-Pass Approach.
To run the programs, the following Python libraries or modules must be installed/imported:
- matplotlib *
- numpy *
- numba * (as of October 26, 2024, Numba only supports Python versions up to 3.12.7. Versions 3.13.x are currently not supported)
- math *
(* mandatory)
Optional (for performance tracking):
- time
- resource
Import the time and resource libraries if you want to track process time and system memory usage. Otherwise, please comment out the relevant code snippets in the import section and the main() function.
...
#import time
#import resource
...
...
# Start the time measurement
#start_time = time.process_time()
...
# End the time measurement
#end_time = time.process_time()
# Calculate the CPU user and system time
#cpu_sys_time_used = end_time - start_time
# Calculate the memory resources used
#memMb=resource.getrusage(resource.RUSAGE_SELF).ru_maxrss/1024.0/1024.0
#print(f'CPU User&System time: {cpu_sys_time_used:.2f} seconds')
#print (f'Memory (RAM): {memMb:.2f} MByte used')
...
When you run the programs, you will be prompted to enter the following parameters, which are crucial for determining the behavior of the Hopalong Attractor:
- a (float or integer): The first parameter affecting the attractor's dynamics.
- b (float or integer): The second parameter affecting the attractor's dynamics.
- c (float or integer): The third parameter affecting the attractor's dynamics.
- n (integer): The number of iterations to run (e.g. 1e6, 1_000_000 or 1000000).
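The exact prompts are not reproduced here; a hypothetical sketch of how such input might be read (the actual wording and validation in the programs may differ):

```python
# Hypothetical input prompts; the actual programs may phrase and validate these differently.
a = float(input("Enter parameter a: "))
b = float(input("Enter parameter b: "))
c = float(input("Enter parameter c: "))
n = int(float(input("Enter number of iterations n: ")))  # int(float(...)) also accepts "2e8"
```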
Example parameters:
- a = -2
- b = -0.33
- c = 0.01
- n = 2e8
Experimenting with different values of these parameters will yield diverse and intricate visual patterns.
The programs produce a visual representation of the Hopalong Attractor. The image displays the trajectory as a density map, where color intensity represents the frequency of points visited. Lighter areas indicate regions of higher density. This provides a striking visual of the attractor's complex structure.
The following variants of the program are available:
- Basic 2D Version: Calculates and displays the Hopalong Attractor along with the pixel density, represented by a color bar.
- Basic 3D Version: Calculates and displays the Hopalong Attractor along with the pixel density (normalized), represented by the Z-axis.
- Extended Version: Includes all features of the basic version, excluding the color bar, plus additional statistics and a visualization of the pixel hit count distribution.
The code of the Basic Version is prepared for both 2D and 3D; just comment out the corresponding "render_trajectory_image" function. See Recent Code Changes.
Examples of outputs can be found in the "Usage" section above.
In both program variants, pixels are color-coded based on the frequency of trajectory point "hits," referred to as the "pixel hit count."
Point to Pixel Mapping:
- The trajectory points generated are represented as floating-point values in a two-dimensional continuous space. To visualize these points on a discrete image, they must be mapped to integer pixel coordinates using scaling factors based on the trajectory's extents (minimum and maximum values) and the image dimensions. This scaling ensures that continuous coordinates fit within the pixel grid.
Integer Conversion and Density Representation:
- The floating-point coordinates are converted to integers to determine pixel locations. This conversion is "lossy," as closely spaced trajectory points can be assigned to the same pixel, leading to multiple hits for that pixel. The image array is initialized to zero, and each time a pixel is hit, the count at that pixel's index is incremented. Pixels with more hits represent higher density areas, indicating a greater concentration of trajectory points. The varying hit counts provide a discrete measure of point concentration, with the total number of hits reflecting the number of iterations.
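As a numeric illustration of this lossy mapping (hypothetical extents and image width; the same logic appears in compute_trajectory_and_image further below), three closely spaced points fall onto the same pixel column:

```python
# Illustrative numbers only: map trajectory x-values to an image column index.
min_x, max_x = -5.0, 5.0                 # extents from the first pass (assumed values)
width = 1000                              # image width in pixels (columns)
scale_x = (width - 1) / (max_x - min_x)

for x in (1.230, 1.233, 1.236):           # three closely spaced trajectory points ...
    print(int((x - min_x) * scale_x))     # ... all truncate to the same column index (622)
```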
Visualization with Colormap:
- Matplotlib's "hot" colormap is used to represent the hit count information. It applies normalization to scale the hit counts within the color range, producing a gradient* from dark colors (low hit counts) to light colors (high hit counts). This effectively visualizes areas of greater activity within the attractor.
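A minimal rendering sketch of this idea (illustrative only, not the repository's exact plotting code; a random array stands in for the hit-count image so the snippet runs on its own):

```python
import numpy as np
import matplotlib.pyplot as plt

# 'image' stands for the 2D hit-count array produced in the second pass.
image = np.random.poisson(2.0, size=(200, 200))

fig, ax = plt.subplots(figsize=(8, 8))
im = ax.imshow(image, origin='lower', cmap='hot')   # "hot": dark = few hits, light = many hits
fig.colorbar(im, ax=ax, label='pixel hit count')
plt.show()
```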
Remarks:

- METHOD: The method of mapping trajectory points to pixel coordinates and counting hits provides a discrete representation of point density, but it does not yield a true Probability Density Function (PDF). Instead, it offers an approximation that resembles a PDF, particularly in visualizing areas of higher concentration.
- VERIFICATION: This is illustrated in the following two pictures: the first shows the output from integer conversion and image mapping, while the second displays the results from np.histogram2d(..., density=True), which serves as a true PDF (a code sketch of this comparison follows after this list). Although both visualizations are similar, this does not imply that the original floating-point data follows a PDF. Rather, it indicates that the distribution exhibits regions of higher concentration, a characteristic common in chaotic systems, effectively captured by both methods.
- *GRADIENT: The intensity of the gradient when visualizing the density of a trajectory depends on the image resolution (number of pixels) or the number of bins used in a histogram (pixels per bin). A lower image resolution or fewer bins will result in a more intense gradient, because more trajectory points are concentrated in a smaller area.
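A sketch of the verification workflow mentioned under VERIFICATION, assuming the trajectory points are collected separately for the comparison (parameter values and bin count are arbitrary):

```python
import math
import numpy as np

# Collect a modest number of trajectory points ...
a, b, c, n = -2.0, -0.33, 0.01, 100_000
xs, ys = np.empty(n), np.empty(n)
x = y = 0.0
for i in range(n):
    xs[i], ys[i] = x, y
    x, y = y - math.copysign(1.0, x) * math.sqrt(abs(b * x - c)), a - x

# ... and bin them; density=True normalizes the counts so the histogram integrates to 1 (a true PDF).
pdf, x_edges, y_edges = np.histogram2d(xs, ys, bins=1000, density=True)
```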
The programs now utilize the math.copysign function, copysign(x, y), which returns a float with the magnitude (absolute value) of x but the sign of y. On platforms that support signed zeros, copysign(1.0, -0.0) returns -1.0.
This adjustment alters the behavior of certain parameter sets, resulting in intricate patterns instead of periodic orbits or a fixed point ((0, 0) with a = 0), which is the case when using the standard signum function.
Periodic orbits are trajectories in which the system returns to the same state after a fixed number of iterations.
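The difference matters exactly at x = 0 (and -0.0): a standard signum returns 0 there and removes the square-root term from the update, while copysign always returns ±1. A small comparison:

```python
import math

def sgn(x):
    # Standard signum: returns 0 for x == 0
    return (x > 0) - (x < 0)

print(sgn(0.0))                  # 0    -> the sqrt term vanishes whenever x == 0
print(math.copysign(1.0, 0.0))   # 1.0  -> the sqrt term keeps contributing
print(math.copysign(1.0, -0.0))  # -1.0 (signed zero per IEEE 754)
```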
For example, the following parameter combinations may yield complex patterns:

- a = 1, b = 2, c = 3
- a = 0, b = 1, c = 1
- a = 1, b = 1, c = 1
Despite using the copysign function, some parameter sets will still lead to periodic orbits instead of intricate patterns, such as:

- Set 1: a = p, b = 0, c = 0
- Set 2: a = p, b = 0, c = p
- Set 3: a = p, b = p, c = 0

Here, p is a constant parameter that remains the same within each of these sets.
In these cases, you may observe high-density cycles, where a relatively small number of pixels are hit repeatedly, suggesting that the system is likely settling into periodic orbits. Additionally, many of these "high-density cycle pixels" tend to be located at the boundaries of the attractor extents.
For example, with parameter set (3), where a = p, b = p and c = 0, the Hopalong equations become:

x_{n+1} = y_n - sgn(x_n) * sqrt(|p * x_n|)
y_{n+1} = p - x_n

and we observe the 3-cycle (0, 0), (0, p) and (p, p). The pixel density is n / 3:
start: (x0 , y0) = (0 , 0)
--> (x1 , y1) = (0 , p)
--> (x2 , y2) = (p , p)
--> (x3 , y3) = (0 , 0), cycle completed
Example a=5, b=5, c=0 (1_200_000 iterations):
If you want to experiment with this, it is recommended to reduce the total number of pixels by lowering the image resolution (e.g., 100x100) to achieve a clearer visual representation of the pixels bordering the minimum and maximum extents of the trajectory.
def main(image_size=(100, 100), color_map='hot'):
By the way, this scenario is an ideal use case for the extended version of the program with pixel hit count statistics.
Execution time and resources: Measurement starts after user input and records the CPU time for the entire process, including image rendering. It also tracks the system memory used.
Note: Since interactions with the plot window, such as zooming, panning, or mouse movements, are also measured, it is recommended to close the plot window automatically. This can be achieved using the commands plt.pause(1) followed by plt.close(fig). As long as there is no interaction with the plot window, the pause time from plt.pause() is not recorded by the time.process_time() function.
#plt.show()
plt.pause(1)
plt.close(fig)
The program leverages Numba's just-in-time (JIT) compilation for performance optimization. This avoids the overhead of Python's interpreter, providing a significant speedup over standard Python loops. JIT compilation translates Python code into machine code at runtime, allowing for more efficient execution of loops and mathematical operations.
Dummy Calls
Dummy calls are preliminary invocations of JIT-compiled functions that prompt the Numba compiler to generate machine code before the function is used in the main execution flow. This ensures that the function is precompiled, avoiding compilation overhead during its first actual execution. This process is akin to "eager compilation," as it occurs ahead of time, but it does not require explicit function signatures in the header.
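For comparison, a minimal example of true eager compilation with an explicit signature (not how the programs do it; they rely on dummy calls instead):

```python
from numba import njit

# Eager compilation: the explicit signature makes Numba compile at decoration time,
# so even the very first call runs the precompiled machine code.
@njit('float64(float64, float64)')
def add(a, b):
    return a + b

print(add(1.0, 2.0))
```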
Parallelization and Race Conditions
The parallel loop function prange from the Numba library is not suitable for cross-iteration dependencies, such as those encountered when calculating trajectory points of recursive functions. While it is possible to restructure the second pass to use prange for populating the image array, this could introduce race conditions, i.e. situations where multiple threads access and modify shared data simultaneously, leading to inconsistent or unpredictable results. Therefore, this approach was not implemented.
By separating the extent calculation (first pass) from trajectory point mapping (second pass), this approach allows for efficient sequential processing. Knowing the overall trajectory extents in advance enables direct and efficient mapping of points to image pixels, optimizing memory usage and maintaining consistent performance.
Advantages:

- Memory Efficiency: The two-pass approach reduces memory requirements by recalculating trajectory points, eliminating the need for large-scale caching.
- JIT Compatibility: The simple, sequential structure is well-suited for Just-In-Time (JIT) compilation, enhancing execution speed.
- Scalability: As the number of iterations grows, the two-pass approach's efficiency in memory usage and processing speed becomes much more advantageous.

Disadvantage: Trajectory points must be computed in both passes, but this trade-off is minimal. As the number of iterations increases, the benefits of memory efficiency and processing speed outweigh this drawback.
import numpy as np
from math import copysign, sqrt, fabs
from numba import njit

@njit  # njit is an alias for @jit(nopython=True)
def compute_trajectory_extents(a, b, c, n):
# Dynamically compute and track the minimum and maximum extents of the trajectory over 'n' iterations.
x = np.float64(0.0)
y = np.float64(0.0)
min_x = np.inf # ensure that the initial minimum is determined correctly
max_x = -np.inf # ensure that the initial maximum is determined correctly
min_y = np.inf
max_y = -np.inf
for _ in range(n):
# selective min/max update using direct comparisons avoiding min/max function
if x < min_x:
min_x = x
if x > max_x:
max_x = x
if y < min_y:
min_y = y
if y > max_y:
max_y = y
# signum function respecting the behavior of floating point numbers according to IEEE 754 (signed zero)
xx = y - copysign(1.0, x) * sqrt(fabs(b * x - c))
yy = a-x
x = xx
y = yy
return min_x, max_x, min_y, max_y
# Dummy call to ensure the function is pre-compiled by the JIT compiler before it's called by the interpreter.
_ = compute_trajectory_extents(1.0, 1.0, 1.0, 2)
@njit
def compute_trajectory_and_image(a, b, c, n, extents, image_size):
# Compute the trajectory and populate the image with trajectory points
image = np.zeros(image_size, dtype=np.uint64)
# pre-compute image scale factors
min_x, max_x, min_y, max_y = extents
scale_x = (image_size[1] - 1) / (max_x - min_x) # column
scale_y = (image_size[0] - 1) / (max_y - min_y) # row
x = np.float64(0.0)
y = np.float64(0.0)
for _ in range(n):
# map trajectory points to image pixel coordinates
px = np.uint64((x - min_x) * scale_x)
py = np.uint64((y - min_y) * scale_y)
# populate the image array "on the fly" with each computed point
image[py, px] += 1 # respecting row/column convention, update # of hits
# Update the trajectory "on the fly"
xx = y - copysign(1.0, x) * sqrt(fabs(b * x - c))
yy = a-x
x = xx
y = yy
return image
# Dummy call to ensure the function is pre-compiled by the JIT compiler before it's called by the interpreter.
_ = compute_trajectory_and_image(1.0, 1.0, 1.0, 2, (-1, 0, 0, 1), (2, 2))
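A hedged sketch of how the two passes might be wired together; the actual main() in the repository may differ in names and details:

```python
# Hypothetical glue code: run both passes, then hand the result to the rendering step.
params = {'a': -2.0, 'b': -0.33, 'c': 0.01, 'n': 200_000_000}
image_size = (1000, 1000)

extents = compute_trajectory_extents(params['a'], params['b'], params['c'], params['n'])
image = compute_trajectory_and_image(params['a'], params['b'], params['c'], params['n'],
                                     extents, image_size)
# render_trajectory_image(image, extents, params, 'hot')  # plotting as described above
```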
While the two-pass approach is the primary solution, it’s valuable to consider alternative one-pass methods, each with unique trade-offs in performance, memory usage, and complexity. Here’s an overview:
Full caching of trajectory points (one-pass):
Description: This method computes all trajectory points in a single pass and stores them in memory, enabling efficient calculation of trajectory extents and mapping to image pixels (a sketch follows after the list below).
- Advantages: Leveraging NumPy's vectorized operations, this approach efficiently computes and maps points in a single pass, potentially increasing performance.
- Disadvantages: Full caching requires substantial memory, especially for high iteration counts. This may lead to performance issues from system memory swapping or even memory overflow.
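A hedged sketch of this full-caching variant (not part of the repository; function names are hypothetical), mainly to make the memory trade-off concrete: at n = 2e8 iterations the point cache alone needs about 2 * 8 bytes * n, roughly 3.2 GB:

```python
import math
import numpy as np

def hopalong_points(a, b, c, n):
    # One-pass variant (for comparison only): cache every trajectory point.
    pts = np.empty((n, 2), dtype=np.float64)
    x = y = 0.0
    for i in range(n):
        pts[i, 0], pts[i, 1] = x, y
        x, y = y - math.copysign(1.0, x) * math.sqrt(abs(b * x - c)), a - x
    return pts

def points_to_image(pts, image_size):
    # Extents and pixel mapping can then use vectorized NumPy operations.
    min_xy, max_xy = pts.min(axis=0), pts.max(axis=0)
    scale = (np.array([image_size[1], image_size[0]]) - 1) / (max_xy - min_xy)
    px = ((pts[:, 0] - min_xy[0]) * scale[0]).astype(np.int64)
    py = ((pts[:, 1] - min_xy[1]) * scale[1]).astype(np.int64)
    image = np.zeros(image_size, dtype=np.uint64)
    np.add.at(image, (py, px), 1)   # accumulate hit counts, handling repeated indices
    return image
```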
Chunk-based or no-caching variants (one-pass):
Description: These approaches attempt to reduce memory consumption by either processing the trajectory in chunks or not caching trajectory points at all. However, since the full trajectory extents are unknown at the outset, each variation faces the same limitation: pixel mappings require recalculation whenever the trajectory extents change (the points are floating-point values in continuous space).

- Chunked: The trajectory is divided into manageable chunks, each cached temporarily.
- No Caching: Points are computed and mapped to pixels directly without storing them.
- Advantages: Limits memory usage.
- Disadvantages: Both approaches become impractical due to a major limitation, namely data loss and inaccuracy: as previously computed floating-point values are irrecoverably mapped to integer pixel coordinates, it becomes impossible to retrieve the exact values for remapping, leading to data loss and inconsistencies.*

*This also applies analogously to any versions that only process floating-point values.
No other one-pass method solutions have been investigated or considered to date. More sophisticated solutions would also contradict the minimum complexity design approach unless a significant performance improvement in calculations with a high number of iterations makes them worth considering.
Overall, the two-pass approach strikes the best balance between speed, efficiency and simplicity and is therefore ideal for attractor calculations with a high number of iterations. Although the trajectory points need to be computed in both passes, the pitfalls of alternative solutions are avoided.
OPTIONAL: Using a 3D graph to display pixel density (normalized) on the Z axis (Basic 3D version).
You can use or experiment with ax.contourf3D or ax.contour3D.
"""
def render_trajectory_image(image, extents, params, color_map):
# Render the trajectory image in 3D
# Create a meshgrid for X and Y coordinates
x = np.linspace(extents[0], extents[1], image.shape[1])
y = np.linspace(extents[2], extents[3], image.shape[0])
x, y = np.meshgrid(x, y)
# Plot with normalized density (hit count) as Z values
z = image / np.max(image) if np.max(image) > 0 else image
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.contourf3D(x, y, z, levels=100, cmap=color_map)
# Customize the plot
ax.set_title(f'Hopalong Attractor - 3D Density (Z) Plot\nParams: a={params["a"]}, b={params["b"]}, c={params["c"]}, n={params["n"]:_}')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.view_init(elev=75, azim=-95) # Adjust angle for better view
plt.show()
"""
Experiment with different image resolutions, color maps, or ways of populating the image array beyond using the hit count to explore new visual perspectives.
Also check out my simpler Rust version
© Ralf Becker
Nuernberg: November 2024
Barry Martin, "Graphic Potential of Recursive Functions," in Computers in Art, Design and Animation (J. Landsdown and R. A. Earnshaw, eds.), New York: Springer-Verlag, 1989, pp. 109-129. ISBN-13: 978-1-4612-8868-8, e-ISBN-13: 978-1-4612-4538-4

A. K. Dewdney, "Computer Kurzweil," Spektrum der Wissenschaft, 1988 (German edition of Scientific American). ISBN-10: 3922508502, ISBN-13: 978-3922508502
- NumPy Documentation: NumPy is a fundamental package for scientific computing in Python.
- Matplotlib Documentation: A library for creating static, interactive, and animated visualizations.
- Numba Documentation: Numba is a just-in-time compiler for optimizing numerical computations.
- Python Built-in Functions: Overview of built-in functions available in Python.
- Python Math Module: Access mathematical functions defined by the C standard.
- Python Time Module: Time access and conversions.
- Python Resource Module: Interface for getting and setting resource limits.