roundabout,
created on Saturday, 23 September 2023, 07:26:30 (1695453990),
received on Tuesday, 26 March 2024, 16:49:13 (1711471753)
Author identity: vlad <vlad.muntoiu@gmail.com>
453a1f5ca751c94b15996b31ae53643670209349
.ipynb_checkpoints/COPYING-checkpoint.md
@@ -0,0 +1,65 @@
Unless otherwise stated, the software (source code) in this repository is licensed under the GNU Affero General Public License, with an additional permission in section 7. A copy of the licence is located in the file `LICENCE-GNU-AGPL.md` in this directory.

> This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
>
> This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.
>
> You should have received a copy of the GNU Affero General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
>
> ##### Additional permission under the GNU Affero GPL version 3 section 7:
>
> If you modify this program, or any covered work, by linking or combining it with other code, such other code is not for that reason alone subject to any of the requirements of the GNU Affero GPL version 3.

Unless otherwise stated, the annotations in this repository are dual-licensed under either of these licences:

* [Creative Commons Attribution Share-Alike 4.0](https://creativecommons.org/licenses/by-sa/4.0/). To view a copy of this licence, read the `LICENCE-CC-BY-SA.md` file, visit http://creativecommons.org/licenses/by-sa/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
* [Creative Commons Attribution Non-Commercial 4.0](https://creativecommons.org/licenses/by-nc/4.0/). To view a copy of this licence, read the `LICENCE-CC-BY-NC.md` file, visit http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

You do not have to comply with the terms of both licences. Choose one and comply only with it.

The images in the repository are licensed depending on their source:

* If the file name has the `original` prefix, the images are licensed under the same terms as the annotations (see above).

The VIA application shipped in this repository (located at the `/via` path) is licensed under the BSD licence.

> Copyright (c) 2016-2021, Abhishek Dutta, Visual Geometry Group, Oxford University and VIA Contributors.
> All rights reserved.
>
> Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
>
> * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
> * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
>
> THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.ipynb_checkpoints/main-checkpoint.ipynb
@@ -0,0 +1,520 @@
{ "cells": [ { "cell_type": "markdown", "id": "9a67092a", "metadata": {}, "source": [ "# Training notebook" ] }, { "cell_type": "markdown", "id": "4dfbc71a-a78d-4ebb-ab77-276dab5821c9", "metadata": {}, "source": [ "## Please install the requirements\n", "(if you don't have them already)" ] }, { "cell_type": "code", "execution_count": null, "id": "f586162e-4998-4250-a025-d6450fabcda5", "metadata": {}, "outputs": [], "source": [ "%pip install Pillow\n", "%pip install matplotlib\n", "%pip install tqdm\n", "%pip install ipywidgets\n", "%pip install tensorrt" ] }, { "cell_type": "code", "execution_count": null, "id": "57bc5faa", "metadata": {}, "outputs": [], "source": [ "import os # for file operations\n", "import json # for loading the annotations file\n", "from PIL import Image, ImageDraw, ImageOps, ImageEnhance # for processing the image data\n", "import numpy as np\n", "from random import shuffle\n", "import nvidia.cudnn\n", "import tensorflow as tf\n", "from tensorflow.keras.applications import VGG19\n", "from tensorflow.keras.layers import Dense, GlobalAveragePooling2D\n", "from tensorflow.keras.models import Model\n", "from tensorflow.keras.optimizers import Adam\n", "from tensorflow.keras.utils import to_categorical\n", "from tqdm.keras import TqdmCallback" ] }, { "cell_type": "markdown", "id": "be56a194", "metadata": {}, "source": [ "## constants" ] }, { "cell_type": "code", "execution_count": null, "id": "8cecf74e", "metadata": {}, "outputs": [], "source": [ "DATA_PATH = \"./data/\" # must contain multiple subdirectories - one for each class\n", "ANNOTATIONS_PATH = \"./via-annotations.json\" # relative to the notebook\n", "\n", "BATCH_SIZE = 32\n", "IMAGE_RESOLUTION = (224, 224) # standard for VGG19\n", "EPOCHS = 80\n", "\n", "CLASS_NAMES = sorted([os.path.basename(f) for f in os.scandir(DATA_PATH) if f.is_dir()]) # automatically generated\n", "CLASS_COUNT = len(CLASS_NAMES)" ] }, { "cell_type": "code", "execution_count": null, "id": "2a907fb6", "metadata": {}, "outputs": [], "source": [ "with open(ANNOTATIONS_PATH, \"r\") as via_data:\n", " via_annotations = json.load(via_data)" ] }, { "cell_type": "code", "execution_count": null, "id": "a47f8d82", "metadata": {}, "outputs": [], "source": [ "print(CLASS_NAMES)" ] }, { "cell_type": "markdown", "id": "7ab8b487", "metadata": {}, "source": [ "## segmentation" ] }, { "cell_type": "code", "execution_count": null, "id": "932ddf4e", "metadata": { "scrolled": true }, "outputs": [], "source": [ "annotations = []\n", "\n", "num_images = len(via_annotations[\"_via_img_metadata\"])\n", "from tqdm.notebook import tqdm # the progress bar\n", "import random\n", "for i, image_data in tqdm(enumerate(via_annotations[\"_via_img_metadata\"].items()), total=num_images):\n", " image_info = image_data[1]\n", " image_id = image_data[0]\n", " filename = os.path.join(DATA_PATH, image_info[\"filename\"])\n", "\n", " # Load the image\n", " try:\n", " img = Image.open(filename)\n", " # Rotate JPEGs according to the data provided by the camera\n", " img = ImageOps.exif_transpose(img)\n", " except:\n", " print(f\"File not found: {filename}. 
Skipping.\")\n", " continue\n", " \n", " WIDTH, HEIGHT = img.size # original ones\n", " img = img.resize(IMAGE_RESOLUTION, resample=Image.NEAREST)\n", "\n", " # Create the region mask (1 means transparent)\n", " if image_info[\"regions\"]:\n", " mask = Image.new(\"1\", (WIDTH, HEIGHT))\n", " else:\n", " # If there aren't any regions, keep the whole image for now\n", " mask = Image.new(\"1\", (WIDTH, HEIGHT), \"white\")\n", " black = Image.new(\"RGB\", IMAGE_RESOLUTION, (0, 0, 0))\n", "\n", " # Extract regions (polygons, circles, rectangles, ellipses) from VIA annotations and draw them on the mask\n", " for region_info in image_info[\"regions\"]:\n", " shape_attributes = region_info[\"shape_attributes\"]\n", " region_shape = shape_attributes[\"name\"]\n", "\n", " pencil = ImageDraw.Draw(mask)\n", "\n", " if region_shape == \"polygon\":\n", " points = [(x, y) for x, y in zip(shape_attributes[\"all_points_x\"], shape_attributes[\"all_points_y\"])]\n", " pencil.polygon(points, \"white\")\n", "\n", " elif region_shape == \"circle\":\n", " x = shape_attributes[\"cx\"]\n", " y = shape_attributes[\"cy\"]\n", " r = shape_attributes[\"r\"]\n", " points = [(x-r, y-r), (x+r, y+r)]\n", " pencil.ellipse(points, \"white\")\n", "\n", " elif region_shape == \"ellipse\":\n", " ellipse = Image.new(\"RGBA\", IMAGE_RESOLUTION, (0, 0, 0, 0))\n", " pencil_ellipse = ImageDraw.Draw(ellipse)\n", " x = shape_attributes[\"cx\"]\n", " y = shape_attributes[\"cy\"]\n", " rx = shape_attributes[\"rx\"]\n", " ry = shape_attributes[\"ry\"]\n", " angle = shape_attributes[\"theta\"]\n", " \n", " # Since ellipses can't be easily rotated in PIL, I'll have to paste a rotated version on top of the mask\n", " points = [(x-rx, y-ry), (x+rx, y+ry)]\n", " pencil_ellipse.ellipse(points, \"white\")\n", " ellipse = ellipse.rotate(angle, expand=False, center=(x, y))\n", " mask.paste(ellipse, (0, 0))\n", "\n", " elif region_shape == \"rect\":\n", " x = shape_attributes[\"x\"]\n", " y = shape_attributes[\"y\"]\n", " w = shape_attributes[\"width\"]\n", " h = shape_attributes[\"height\"]\n", "\n", " points = [(x, y), (x+w, y+h)]\n", " pencil.rectangle(points, \"white\")\n", " \n", " # Resize the mask as well\n", " mask = mask.resize(IMAGE_RESOLUTION, resample=Image.NEAREST).convert(\"1\")\n", " masked = Image.composite(img, black, mask)\n", "\n", " # Plot the three images\n", " import matplotlib.pyplot as plt\n", " f, axarr = plt.subplots(1, 3)\n", " plt.axis(\"on\")\n", " axarr[0].imshow(img)\n", " axarr[0].set_title(\"Original\")\n", " \n", " axarr[1].imshow(masked)\n", " axarr[1].set_title(\"Combined\")\n", "\n", " axarr[2].imshow(mask, cmap=\"copper\")\n", " axarr[2].set_title(\"Mask\")\n", "\n", " plt.show()\n", " \n", " # Get the class label from the directory structure\n", " class_label = os.path.basename(os.path.dirname(filename))\n", "\n", " # Convert class label to one-hot encoding\n", " class_index = CLASS_NAMES.index(class_label)\n", " class_one_hot = to_categorical(class_index, num_classes=CLASS_COUNT)\n", " \n", " # Preprocess\n", " n_copies = random.randint(1, 6)\n", " f, axarr = plt.subplots(1, n_copies + 1)\n", " plt.axis(\"off\")\n", " for j in range(0, n_copies):\n", " random.seed(i+j // random.randint(1, 4) + random.randint(0, 16777216))\n", " \n", " preprocessed = masked.rotate(random.randint(0, 3) * 90 + random.randint(-22, 22))\n", " sharpness = ImageEnhance.Sharpness(img)\n", " color = ImageEnhance.Color(img)\n", " contrast = ImageEnhance.Contrast(img)\n", " brightness = ImageEnhance.Brightness(img)\n", " \n", " 
## model definition

```python
base_model = VGG19(weights="imagenet", include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation="relu")(x)
predictions = Dense(CLASS_COUNT, activation="softmax")(x)
model = Model(inputs=base_model.input, outputs=predictions)

# Freeze the convolutional base so only the new classifier head trains at first
for layer in base_model.layers:
    layer.trainable = False
```

## compilation

```python
model.compile(optimizer=Adam(learning_rate=0.0002),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

## training

```python
history = model.fit(
    images,
    labels,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    verbose=0,
    callbacks=[TqdmCallback()],
    validation_split=0.2
)
```

```python
model.save("model.h5")
```

### fine-tuning

```python
from tensorflow.keras.models import load_model

model = load_model("model.h5")
```

```python
# Unfreeze the whole network for fine-tuning
for layer in model.layers:
    layer.trainable = True
```

```python
from tensorflow.keras.optimizers import Adam

model.compile(
    optimizer=Adam(learning_rate=0.00001),  # much lower rate so the pretrained weights aren't destroyed
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
```

```python
history = model.fit(
    images,
    labels,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    verbose=0,
    callbacks=[TqdmCallback()],
    validation_split=0.2
)
```

```python
model.save("model.h5")
```
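`model.fit` returns a `History` object whose `history` dict holds the per-epoch metrics; an optional sketch (not in the original notebook) for plotting the curves of the most recent run:

```python
# Compare training and validation accuracy across epochs
plt.figure(figsize=(8, 6))
plt.plot(history.history["accuracy"], label="train")
plt.plot(history.history["val_accuracy"], label="validation")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```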
"execution_count": null, "id": "4ee9c4c1", "metadata": {}, "outputs": [], "source": [ "from tensorflow.keras.models import load_model\n", "from tensorflow.keras.preprocessing.image import load_img, img_to_array\n", "import numpy as np\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "id": "25342e64", "metadata": {}, "outputs": [], "source": [ "model = load_model(\"model.h5\", compile=False)" ] }, { "cell_type": "code", "execution_count": null, "id": "dec0324f", "metadata": {}, "outputs": [], "source": [ "input_image = load_img(\"../tester.jpg\", target_size=(224, 224))\n", "input_image = img_to_array(input_image)\n", "input_image = input_image / 255.0\n", "input_image = input_image.reshape((1,) + input_image.shape)" ] }, { "cell_type": "code", "execution_count": null, "id": "216cb87a", "metadata": {}, "outputs": [], "source": [ "predictions = model.predict(input_image)\n", "predicted_class = np.argmax(predictions, axis=1)[0]\n", "predicted_label = CLASS_NAMES[predicted_class]" ] }, { "cell_type": "markdown", "id": "0e038fbc", "metadata": {}, "source": [ "## plot" ] }, { "cell_type": "code", "execution_count": null, "id": "2bd664d5", "metadata": { "scrolled": true }, "outputs": [], "source": [ "plt.figure(figsize=(8, 6))\n", "plt.imshow(input_image[0])\n", "plt.axis(\"off\")\n", "plt.title(predicted_label, fontsize=18)\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "2e6ff4fb", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.17" } }, "nbformat": 4, "nbformat_minor": 5 }