What is the extension that you’re using, Sanyam? It looks like you could do a lot of things with the cells…
Also, Peter, who is a moderator on the PyTorch forums, has explained it really well here (basically it’s a function vs. class implementation).
Module classes are in torch.nn, and the same operations are available as functions in torch.nn.functional. I’m not sure why, maybe to support different coding styles.
AFAIK the object-oriented modules are just wrappers around the functional APIs, so they behave exactly the same way, like you said.
But I think the functional API is easier to use when prototyping, while the OO APIs make code cleaner and more maintainable by promoting reusability (and other OO patterns)?
Just to add, sometimes the OO APIs make certain tasks easier for us: for example, with torch.nn.functional.conv2d we have to pass the weight and bias parameters ourselves, while torch.nn.Conv2d creates and manages those parameters for us.
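A minimal sketch of that difference (the layer sizes here are arbitrary): the module owns its weight and bias, while the functional call expects them as arguments, and both compute the same result.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)  # dummy batch: N x C x H x W

# OO API: the module creates and stores weight/bias as learnable parameters
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
out_module = conv(x)

# Functional API: we supply the weight and bias tensors ourselves
out_functional = F.conv2d(x, conv.weight, conv.bias, padding=1)

print(torch.allclose(out_module, out_functional))  # True
```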
Found this useful
Does CUDA only work on NVIDIA cards? What support is available for tensors on an AMD Radeon eGPU?
Yeah, currently it works only on NVIDIA cards.
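For reference, a small sketch of the usual device-selection pattern, which falls back to the CPU when no CUDA device is found:

```python
import torch

# Use the GPU if a CUDA-capable (NVIDIA) device is present, else the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

t = torch.ones(3, 3, device=device)  # tensor lives on whichever device was found
```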
I thought I saw different shim layers for tensors on Radeon, but perhaps I was mistaken. Since macOS only has Radeon drivers, I suppose there is no good way to do GPU tensor work on macOS; Linux or Windows is required.
Twitter: @ElisonSherton
Blog: elisonsherton.github.io
I guess Vinayak is facing system issues; he is replying in the YouTube chat.
Not an expert in medical imaging, so I’d first try to understand what the data looks like and what kind of output we expect from a model: is it going to be a classification problem, or do we have to find areas in an image that MAY be of interest?
Also, what does our input look like? Is it a 3xHxW image as well, or does it have more/fewer dimensions?
Basically, I’d want to understand the data first.
I have a little experience from Kaggle. They are 3D images, and the methods for normalizing the data are quite different and require a little domain knowledge. But once we get past that, we can mostly get away with treating them like a regular image tensor.
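A minimal sketch of what that might look like, assuming a CT-style volume measured in Hounsfield units (the window bounds and shapes below are illustrative, not from any particular dataset):

```python
import torch

# Hypothetical CT volume: D x H x W voxels in Hounsfield units (HU)
volume = torch.randint(-1024, 3000, (32, 128, 128)).float()

# One common normalization: clamp to an HU window, then scale to [0, 1].
# The window bounds here are illustrative; the right values are domain-specific.
lo, hi = -1000.0, 1000.0
volume = volume.clamp(lo, hi)
volume = (volume - lo) / (hi - lo)

# After that it flows through the usual tensor machinery, just with an
# extra depth dimension: N x C x D x H x W, as Conv3d expects.
batch = volume.unsqueeze(0).unsqueeze(0)  # shape: (1, 1, 32, 128, 128)
print(batch.shape)
```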
Why does the cancer example apply Conv3d rather than Conv2d? I don’t quite understand. What is the principle? Thanks!