lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.

Server stats: 66 active users
#tensor

0 posts · 0 participants · 0 posts today
Anselm "Two Sheds" Schüler<p>tensors aren't really real<br>tensors are just vectors<br>covectors are just vectors</p><p>you can just interpret a vector as a tensor, which basically involves using different bases<br>what makes a covector a covector is that you use the "original" basis (actually the dual basis) for them<br>you can just as easily treat covectors as normal vectors with a normal basis (made up of covectors)</p><p><a href="https://ieji.de/tags/math" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>math</span></a> <a href="https://ieji.de/tags/vector" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>vector</span></a> <a href="https://ieji.de/tags/tensor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tensor</span></a></p>
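The dual-basis point above can be sketched numerically. In this minimal example (the basis-change matrix B and all component values are made up for illustration), vector components transform with B⁻¹ while covector components transform with Bᵀ; treated as plain arrays, both are "just vectors", and the pairing between them is basis-independent:

```python
import numpy as np

# Hypothetical change of basis: columns of B are the new basis vectors
# expressed in the old coordinates.
B = np.array([[2.0, 1.0],
              [0.0, 1.0]])

v_old = np.array([3.0, 4.0])       # vector components in the old basis
v_new = np.linalg.inv(B) @ v_old   # vectors transform contravariantly (B^-1)

w_old = np.array([1.0, 2.0])       # covector components in the old dual basis
w_new = B.T @ w_old                # covectors transform covariantly (B^T)

# The pairing w(v) does not depend on the basis chosen:
assert np.isclose(w_old @ v_old, w_new @ v_new)
```

Both `v_new` and `w_new` are ordinary 1-D arrays; what makes one a "covector" is only the transformation rule attached to it, which matches the post's point.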
Christos Argyropoulos MD, PhD<p>3 hours before the talk about going small and towards the <a href="https://mstdn.science/tags/edge" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>edge</span></a> with <a href="https://mstdn.science/tags/nanopore" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>nanopore</span></a> <a href="https://mstdn.science/tags/RNAseq" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RNAseq</span></a> <a href="https://mstdn.science/tags/sequencing" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>sequencing</span></a> <a href="https://mstdn.science/tags/flongle" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>flongle</span></a> and <a href="https://mstdn.science/tags/tensor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tensor</span></a> processing units!</p>
Open Risk<p><a href="https://mastodon.social/tags/ActivityPub" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ActivityPub</span></a> is the technical specification for decentralized (more precisely, federated) social networking (<a href="https://mastodon.social/tags/Fediverse" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Fediverse</span></a>) that underpins this very toot.</p><p>Our focus in the latest white paper in the "Connect the Dots" series is on mathematical (multilayer <a href="https://mastodon.social/tags/tensor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tensor</span></a>) representations of federated networks that succinctly encode important elements of their structure, with a view to enabling various analyses and simulations of network performance.</p><p><a href="https://www.openriskmanagement.com/2024-02-05-tensor-representations-activitypub-networks/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">openriskmanagement.com/2024-02</span><span class="invisible">-05-tensor-representations-activitypub-networks/</span></a></p><p><a href="https://www.openriskmanagement.com/wp-content/uploads/2024/02/OpenRiskWP15_020224.pdf" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">openriskmanagement.com/wp-cont</span><span class="invisible">ent/uploads/2024/02/OpenRiskWP15_020224.pdf</span></a></p>
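One way to picture the multilayer-tensor idea mentioned above, as a rough sketch only (the server names, layers, and weights here are hypothetical and not taken from the white paper): a federated network with several interaction types can be stored as a 3-way adjacency tensor indexed by (layer, source, target), which reduces to an ordinary adjacency matrix when summed over layers.

```python
import numpy as np

# Hypothetical multilayer network: three servers, two interaction layers.
servers = ["a.example", "b.example", "c.example"]
FOLLOWS, BOOSTS = 0, 1

# A[layer, source, target] is the weight of that interaction type.
A = np.zeros((2, 3, 3))
A[FOLLOWS, 0, 1] = 1.0   # a.example follows b.example
A[FOLLOWS, 1, 2] = 1.0   # b.example follows c.example
A[BOOSTS, 0, 1] = 5.0    # weights can count repeated interactions
A[BOOSTS, 2, 1] = 2.0

# Summing over the layer axis flattens the tensor to a single
# weighted adjacency matrix; summing over targets gives per-layer
# out-degrees. Different slicings support different analyses.
aggregate = A.sum(axis=0)
out_degree = A.sum(axis=2)

print(aggregate[0, 1])          # combined a->b weight across layers: 6.0
print(out_degree[FOLLOWS, 0])   # a.example's out-degree in the follow layer: 1.0
```

The tensor form keeps the layers separable, so layer-specific structure is not lost the way it would be in a single pre-aggregated matrix.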
Anna Konstorum<p>I am excited to announce a call for papers for a new topical collection on tensor methods in La Matematica, the flagship journal of the Association for Women in Mathematics (AWM). I am serving as co-guest editor, along with Anna Ma (UC Irvine) and Jamie Haddock (Harvey Mudd College). </p><p>The purpose of this topical collection is to invite research, survey, and review articles on novel theoretical, computational, and real-world application progress in <a href="https://mathstodon.xyz/tags/tensor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tensor</span></a> analysis. </p><p>The manuscript submission deadline is June 1, 2024. For more information, please see <a href="https://link.springer.com/journal/44007/updates/26489454" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">link.springer.com/journal/4400</span><span class="invisible">7/updates/26489454</span></a>.<br>Please share widely!</p>
Benjamin Carr, Ph.D. 👨🏻‍💻🧬<p><a href="https://hachyderm.io/tags/Chinese" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chinese</span></a> <a href="https://hachyderm.io/tags/Loongson" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Loongson</span></a> <a href="https://hachyderm.io/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> Promises <a href="https://hachyderm.io/tags/RX550" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RX550</span></a>-Level Performance, Likely Arriving in 2025<br>Hu Weiwu, Loongson's founder, asserted the <a href="https://hachyderm.io/tags/9A1000" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>9A1000</span></a> delivers comparable performance to the Radeon RX 550, an entry-level AMD GPU from six years ago. The Radeon RX 550 provides around 1.2 TFLOPS of FP32 performance. Weiwu emphasized that the 9A1000 will also support scientific computing and AI acceleration. The statement somewhat hints at an implementation similar to <a href="https://hachyderm.io/tags/Nvidia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Nvidia</span></a>'s <a href="https://hachyderm.io/tags/Tensor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Tensor</span></a> cores. <br><a href="https://www.tomshardware.com/pc-components/gpus/chinese-loongson-gpu-promises-rx-550-level-performance-likely-arriving-in-2025" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">tomshardware.com/pc-components</span><span class="invisible">/gpus/chinese-loongson-gpu-promises-rx-550-level-performance-likely-arriving-in-2025</span></a></p>
Pierre<p>Which one do you plan to get?</p><p>Preorder Google Pixel 8 with Pixel Buds Pro for $699 or Google Pixel 8 Pro with Pixel Watch 2 for $999</p><p><a href="https://mastodon.social/tags/google" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>google</span></a> <a href="https://mastodon.social/tags/pixel" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>pixel</span></a> <a href="https://mastodon.social/tags/googlepixel" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>googlepixel</span></a> <a href="https://mastodon.social/tags/pixel8" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>pixel8</span></a> <a href="https://mastodon.social/tags/pixel8pro" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>pixel8pro</span></a> <a href="https://mastodon.social/tags/budspro" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>budspro</span></a> <a href="https://mastodon.social/tags/pixelwatch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>pixelwatch</span></a> <a href="https://mastodon.social/tags/tensor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tensor</span></a> <a href="https://mastodon.social/tags/madebygoogle" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>madebygoogle</span></a> <a href="https://mastodon.social/tags/android" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>android</span></a> <a href="https://mastodon.social/tags/android14" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>android14</span></a> <a href="https://mastodon.social/tags/photography" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>photography</span></a> <a href="https://mastodon.social/tags/nightsight" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>nightsight</span></a> <a href="https://mastodon.social/tags/teampixel" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>teampixel</span></a></p>
Neuromatch<p>Graphics processing unit <a href="https://neuromatch.social/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> (W1D1)</p><p>Colab notebooks do not have access to a GPU by default. In order to start using GPUs, we need to request one from the runtime tab at the top of the page.</p><p>By following Runtime → Change runtime type and selecting GPU from the Hardware Accelerator dropdown list, we can start playing with sending tensors to GPUs.</p><p>Once you have done this, your runtime will restart and you will need to rerun the first setup cell to re-import <a href="https://neuromatch.social/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a>. Then proceed to the next cell.</p><p>For more information on the GPU usage policy, see the Appendix.</p><p>Compute Unified Device Architecture <a href="https://neuromatch.social/tags/CUDA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CUDA</span></a></p><p>CUDA is an API developed by Nvidia for interfacing with GPUs. PyTorch provides us with a layer of abstraction that lets us launch CUDA kernels using pure Python.</p><p>In short, we get the power of parallelizing our <a href="https://neuromatch.social/tags/tensor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tensor</span></a> computations on GPUs while writing only (relatively) simple Python!</p><p>Here we define the function 'set_device', which returns the device used in the notebook, i.e. 'cpu' or 'cuda'. Unless otherwise specified, we call this function at the top of every tutorial and store the result in a device variable:</p><p>DEVICE = set_device()</p><p>Let's define the function using the PyTorch package 'torch.cuda', which is lazily initialized, so we can always import it, and use 'is_available()' to determine whether our system supports CUDA. 
</p><p>Operations between CPU tensors and CUDA tensors<br>Note that the type of the tensor changed after calling .to(). What happens if we try to perform operations on tensors that live on different devices? We cannot combine CUDA tensors and CPU tensors in this fashion. If we want to compute an operation that combines tensors on different devices, we need to move them first! We can use the .to() method as before, or the .cpu() and .cuda() methods. Note that calling .cuda() will throw an error if CUDA is not enabled on your machine.</p><p>Generally, in this course, all <a href="https://neuromatch.social/tags/DeepLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepLearning</span></a> is done on the GPU, and any other computation on the CPU, so sometimes we have to pass things back and forth.</p><p><a href="https://neuromatch.social/tags/NeuromatchAcademy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NeuromatchAcademy</span></a> <a href="https://neuromatch.social/tags/neuromatch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuromatch</span></a> <a href="https://neuromatch.social/tags/neuromatchstodon" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuromatchstodon</span></a></p>
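The device-handling pattern described in the two posts above can be sketched as follows; this is a minimal version under the assumption that set_device just wraps torch.cuda.is_available() (the actual notebook's helper may print a warning or do more), and it runs on either a CPU-only or a CUDA machine:

```python
import torch

def set_device():
    """Return 'cuda' when a GPU is available, otherwise fall back to 'cpu'.

    torch.cuda is lazily initialized, so importing it is always safe;
    is_available() performs the actual capability check.
    """
    return "cuda" if torch.cuda.is_available() else "cpu"

DEVICE = set_device()

# Tensors live on a device; combining tensors on different devices
# raises an error, so move operands onto the same device with .to().
x = torch.ones(3)                 # created on the CPU by default
y = torch.arange(3.0).to(DEVICE)  # moved to (or kept on) the chosen device
z = x.to(DEVICE) + y              # both operands on one device, so this works

print(z.device.type)              # 'cpu' or 'cuda', depending on the machine
```

Using `.to(DEVICE)` rather than `.cuda()` keeps the code runnable on machines without CUDA, which is why the tutorials funnel everything through the single `DEVICE` variable.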
Neuromatch<p><a href="https://neuromatch.social/tags/Tensor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Tensor</span></a> Operations in <a href="https://neuromatch.social/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a> </p><p><a href="https://neuromatch.social/tags/neuromatch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuromatch</span></a> <a href="https://neuromatch.social/tags/NeuromatchAcademy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NeuromatchAcademy</span></a> <a href="https://neuromatch.social/tags/neuromatchstodon" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuromatchstodon</span></a> <a href="https://neuromatch.social/tags/DeepLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepLearning</span></a></p>
Mark Dunk<p>AI chip race: Google says its Tensor chips compute faster than Nvidia's A100 </p><p><a href="https://interestingengineering.com/innovation/google-faster-greener-supercomputer?utm_source=join1440&amp;utm_medium=email&amp;utm_placement=newsletter" rel="nofollow noopener" target="_blank"><span class="invisible">https://</span><span class="ellipsis">interestingengineering.com/inn</span><span class="invisible">ovation/google-faster-greener-supercomputer?utm_source=join1440&amp;utm_medium=email&amp;utm_placement=newsletter</span></a></p><p><a href="https://mastodon.education/tags/google" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>google</span></a> <a href="https://mastodon.education/tags/tensor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tensor</span></a> <a href="https://mastodon.education/tags/nvidia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>nvidia</span></a> <a href="https://mastodon.education/tags/technology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>technology</span></a> <a href="https://mastodon.education/tags/microchip" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>microchip</span></a></p>