From f4e1c2dd700edc7654fc74afe25217fc3ba321f2 Mon Sep 17 00:00:00 2001
From: mohammadnaseri
Date: Sun, 4 Feb 2024 13:08:45 +0000
Subject: [PATCH] Docs improvment (#2900)

* Improve doc

* Improve doc
---
 ...al-series-get-started-with-flower-pytorch.ipynb | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb b/doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb
index bbd916b3237..704ed520bf3 100644
--- a/doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb
+++ b/doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb
@@ -83,7 +83,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "It is possible to switch to a runtime that has GPU acceleration enabled (on Google Colab: `Runtime > Change runtime type > Hardware acclerator: GPU > Save`). Note, however, that Google Colab is not always able to offer GPU acceleration. If you see an error related to GPU availability in one of the following sections, consider switching back to CPU-based execution by setting `DEVICE = torch.device(\"cpu\")`. If the runtime has GPU acceleration enabled, you should see the output `Training on cuda`, otherwise it'll say `Training on cpu`."
+    "It is possible to switch to a runtime that has GPU acceleration enabled (on Google Colab: `Runtime > Change runtime type > Hardware accelerator: GPU > Save`). Note, however, that Google Colab is not always able to offer GPU acceleration. If you see an error related to GPU availability in one of the following sections, consider switching back to CPU-based execution by setting `DEVICE = torch.device(\"cpu\")`. If the runtime has GPU acceleration enabled, you should see the output `Training on cuda`, otherwise it'll say `Training on cpu`."
    ]
   },
   {
@@ -368,14 +368,14 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "def get_parameters(net) -> List[np.ndarray]:\n",
-    "    return [val.cpu().numpy() for _, val in net.state_dict().items()]\n",
-    "\n",
-    "\n",
     "def set_parameters(net, parameters: List[np.ndarray]):\n",
     "    params_dict = zip(net.state_dict().keys(), parameters)\n",
     "    state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})\n",
-    "    net.load_state_dict(state_dict, strict=True)"
+    "    net.load_state_dict(state_dict, strict=True)\n",
+    "\n",
+    "\n",
+    "def get_parameters(net) -> List[np.ndarray]:\n",
+    "    return [val.cpu().numpy() for _, val in net.state_dict().items()]"
    ]
   },
   {
@@ -485,7 +485,7 @@
     ")\n",
     "\n",
     "# Specify the resources each of your clients need. By default, each\n",
-    "# client will be allocated 1x CPU and 0x CPUs\n",
+    "# client will be allocated 1x CPU and 0x GPUs\n",
     "client_resources = {\"num_cpus\": 1, \"num_gpus\": 0.0}\n",
     "if DEVICE.type == \"cuda\":\n",
     "    # here we are asigning an entire GPU for each client.\n",
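The second hunk only reorders the tutorial's two parameter helpers, so their behavior can be checked standalone. The sketch below reproduces the helpers as they appear in the patch and round-trips weights between two models; the `nn.Linear(4, 2)` models are throwaway stand-ins, not part of the tutorial.

```python
from collections import OrderedDict
from typing import List

import numpy as np
import torch
import torch.nn as nn


def get_parameters(net) -> List[np.ndarray]:
    # Export all model weights as a list of NumPy arrays (server-friendly).
    return [val.cpu().numpy() for _, val in net.state_dict().items()]


def set_parameters(net, parameters: List[np.ndarray]):
    # Rebuild a state_dict from the NumPy arrays and load it into the model.
    params_dict = zip(net.state_dict().keys(), parameters)
    state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})
    net.load_state_dict(state_dict, strict=True)


# Round-trip check with two throwaway models (not from the patch):
net_a = nn.Linear(4, 2)
net_b = nn.Linear(4, 2)
set_parameters(net_b, get_parameters(net_a))
assert all(
    np.array_equal(p, q)
    for p, q in zip(get_parameters(net_a), get_parameters(net_b))
)
```

The order in which the arrays are produced and consumed must match `state_dict()` key order on both sides, which is why both helpers iterate the same `state_dict()`.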
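The third hunk corrects the comment describing `client_resources`, the dictionary Flower's simulation uses to schedule virtual clients. A minimal sketch of the logic the tutorial builds, assuming the tutorial's `DEVICE` variable; the value `0.5` in the comment is an illustration, not from the patch:

```python
import torch

DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# By default, each client will be allocated 1x CPU and 0x GPUs.
client_resources = {"num_cpus": 1, "num_gpus": 0.0}
if DEVICE.type == "cuda":
    # Reserve an entire GPU per client, as the tutorial does. A fractional
    # value such as 0.5 would instead let two clients share one GPU.
    client_resources = {"num_cpus": 1, "num_gpus": 1.0}
```

Because `num_gpus` is fractional, the simulation can pack several clients onto a single physical GPU; `0.0` simply means clients are scheduled without any GPU reservation.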