Original file (1,000 × 1,000 pixels, file size: 27 KB, MIME type: image/png)
This is a file from the Wikimedia Commons. Information from its description page there is shown below. Commons is a freely licensed media file repository. You can help.
Summary
Description: Generalized Hebbian algorithm on 8-by-8 patches of Caltech101.png
English: Generalized Hebbian algorithm, running on 8-by-8 patches of Caltech101.
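For reference, the weight update implemented by the script below is Sanger's rule (the generalized Hebbian algorithm): for a flattened input patch $x$, outputs $y = Wx$, and learning rate $\eta$,

$$\Delta w_{ij} \;=\; \eta\, y_i \left( x_j - \sum_{k \le i} w_{kj}\, y_k \right),$$

so the rows of $W$ tend, approximately and in order, toward the leading principal components of the patch distribution, which is what the plotted 8×8 features show.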
Matplotlib code:

import torch
import torchvision
import matplotlib.pyplot as plt
import numpy as np
from torchvision import transforms
from PIL import Image
from tqdm import trange

# Load the Caltech101 dataset
caltech101_data = torchvision.datasets.Caltech101('/content/', download=True)
data_loader = torch.utils.data.DataLoader(caltech101_data, batch_size=16, shuffle=True)

# Initialize GHA parameters
input_size = 8 * 8
output_size = 8 * 8
weights = torch.randn(output_size, input_size) * 0.01

def extract_random_patch(image, patch_size=8):
    """Extract a random patch from an image."""
    # Convert PIL Image to tensor and handle grayscale conversion
    if isinstance(image, Image.Image):
        # Ensure the image is large enough
        if image.size[0] < patch_size or image.size[1] < patch_size:
            image = image.resize((patch_size*2, patch_size*2))
        # Convert to tensor
        to_tensor = transforms.ToTensor()
        image = to_tensor(image)
    # Convert to grayscale if it's RGB
    if image.shape[0] == 3:
        image = 0.299 * image[0] + 0.587 * image[1] + 0.114 * image[2]
    elif image.shape[0] == 1:
        image = image.squeeze(0)
    # Ensure we have valid dimensions
    assert image.dim() == 2, f"Expected 2D tensor, got shape {image.shape}"
    h, w = image.shape
    assert h >= patch_size and w >= patch_size, f"Image too small: {h}x{w}, need at least {patch_size}x{patch_size}"
    # Get valid patch coordinates
    i = np.random.randint(0, h - patch_size + 1)
    j = np.random.randint(0, w - patch_size + 1)
    # Extract and flatten patch
    patch = image[i:i+patch_size, j:j+patch_size].reshape(-1)
    # Normalize patch
    patch_mean = patch.mean()
    patch_std = patch.std()
    if patch_std == 0:
        patch_std = 1e-8
    patch = (patch - patch_mean) / patch_std
    assert patch.shape[0] == patch_size * patch_size, f"Patch size mismatch. Expected {patch_size * patch_size}, got {patch.shape[0]}"
    return patch

def train_gha(weights, data_loader, epochs, patches_per_epoch, learning_rate_base):
    """Train the GHA network."""
    device = weights.device
    for epoch in trange(epochs):
        learning_rate = learning_rate_base / (epoch + 1)
        for _ in range(patches_per_epoch):
            # Get random image from dataset
            idx = np.random.randint(len(caltech101_data))
            image, _ = caltech101_data[idx]
            # Extract random patch and ensure it's a column vector
            x = extract_random_patch(image)
            x = x.reshape(-1, 1).to(device)
            # Forward pass
            y = torch.matmul(weights, x)
            # Update weights using GHA rule
            for i in range(output_size):
                # Calculate sum term
                sum_term = sum(weights[k] * y[k] for k in range(i+1))
                # Update weights for this output neuron
                weights[i] += learning_rate * (y[i].item() * x.squeeze() - y[i].item() * sum_term)
        if (epoch + 1) % 10 == 0:
            print(f"Completed epoch {epoch + 1}/{epochs}")
    return weights

def plot_learned_features(weights):
    """Plot learned features in an 8x8 grid."""
    fig, axes = plt.subplots(8, 8, figsize=(10, 10))
    for i in range(8):
        for j in range(8):
            idx = i * 8 + j
            feature = weights[idx].reshape(8, 8)
            axes[i, j].imshow(feature.detach(), cmap='gray')
            axes[i, j].axis('off')
    plt.tight_layout()
    return fig

# Training parameters
epochs = 50
learning_rate_base = 0.01
patches_per_epoch = 100

# Train the network
print("Starting training...")
trained_weights = train_gha(weights, data_loader, epochs, patches_per_epoch, learning_rate_base)

# Plot the results
fig = plot_learned_features(trained_weights)
plt.show()
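For readers who prefer a vectorized form: the per-neuron loop in train_gha corresponds to the single matrix update ΔW = η·(y xᵀ − tril(y yᵀ) W). The two are equivalent up to one detail: the loop updates rows in place, so later rows see already-updated earlier rows, while the matrix form applies the update to all rows at once using the pre-update weights. A minimal sketch of that matrix form follows; gha_step is a hypothetical helper name and is not part of the uploaded script.

import torch

def gha_step(W, x, lr):
    # Hypothetical helper (not in the original script): one Sanger's-rule step.
    # W: weight matrix of shape (outputs, inputs); x: flattened, normalized patch of shape (inputs,).
    y = W @ x  # outputs, shape (outputs,)
    # torch.tril keeps only the k <= i terms, matching sum_term in the loop above
    return W + lr * (torch.outer(y, x) - torch.tril(torch.outer(y, y)) @ W)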
Date:
Source: Own work
Author: Cosmia Nebula
Licensing
I, the copyright holder of this work, hereby publish it under the following license:
This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.
- You are free:
- to share – to copy, distribute and transmit the work
- to remix – to adapt the work
- Under the following conditions:
- attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.
File history
| Date/Time | Thumbnail | Dimensions | User | Comment |
| --- | --- | --- | --- | --- |
| current: 20:36, 18 November 2024 | | 1,000 × 1,000 (27 KB) | Cosmia Nebula | Uploaded while editing "Generalized Hebbian algorithm" on en.wikipedia.org |
File usage
The following 2 pages use this file:
Metadata
This file contains additional information, probably added from the digital camera or scanner used to create or digitize it.
If the file has been modified from its original state, some details may not fully reflect the modified file.
Software used:
Horizontal resolution: 39.37 dpc (dots per centimetre, ≈ 100 dpi)
Vertical resolution: 39.37 dpc (dots per centimetre, ≈ 100 dpi)