Storage resource

Hi all,
In the process industry, we have intermediate storage for each phase.
In a process order, confirmation happens operation-wise.
But we want to capture the phase-wise quantity in a specific storage location, so that we can calculate the exact consumption of input material.
The tank level is read every morning, and from the level difference the plant manager calculates the consumption figure for the input material. How can I map that storage tank and its level in SAP?
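For illustration, this is roughly the manual calculation the plant manager does today (a minimal sketch; the strapping factor, density and readings below are hypothetical, not from our system):

# Hypothetical example: input-material consumption from two morning tank-level readings.
LITRES_PER_CM = 125.0      # tank strapping factor, litres per cm of level (assumed)
DENSITY_KG_PER_L = 0.98    # density of the input material (assumed)

def consumption_kg(level_yesterday_cm, level_today_cm, receipts_kg=0.0):
    """Material consumed between two readings, corrected for any refills in between."""
    drop_litres = (level_yesterday_cm - level_today_cm) * LITRES_PER_CM
    return drop_litres * DENSITY_KG_PER_L + receipts_kg

# Example: level fell from 410 cm to 322 cm and 500 kg was pumped in during the day.
print(consumption_kg(410, 322, receipts_kg=500))   # 11280.0 kg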
Regards
suresh

Hi,
A storage resource is a storage location or production unit that has the characteristics of both a resource and a storage location. It is used for intermediate storage of materials. Temporary storage of material in a production process can be required either between operations that follow each other chronologically within a production stage or between several production stages.
There are two types of storage resources:
Storage resource on which no production process takes place (resource category Storage)
This type of storage resource can have an available capacity and material stock assigned to it. Its available capacity is defined in a volume/quantity unit. Storage resources that are only for storage are assigned to the material flow directly in the master recipe or in the process order.
Storage resource at which a production process takes place (resource category Processing unit / Storage)
This type of storage resource can also be scheduled in time, and material movements take place on it. Its available capacity is defined both in a time unit and in a volume/quantity unit.
Storage resources that are also processing units are copied to the master recipe (or process order) as primary resources.
Use
A storage resource that is only used for storage serves as an intermediate storage location for materials and is used to avoid bottlenecks in production planning.
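To illustrate the idea outside SAP (a rough sketch with made-up numbers, not SAP code): the available capacity of a storage resource simply caps how much intermediate material can sit in it at any one time, and that cap is what planning has to respect.

# Hypothetical illustration of a storage-resource capacity check (not SAP code).
tank = {"name": "INTERMEDIATE-TANK-01", "capacity_l": 20000.0, "stock_l": 14500.0}

def can_receive(tank, phase_output_l):
    """True if the output of a phase still fits into the storage resource."""
    return tank["stock_l"] + phase_output_l <= tank["capacity_l"]

print(can_receive(tank, 5000.0))   # True  (19500 <= 20000)
print(can_receive(tank, 6000.0))   # False (20500 > 20000) -> a planning bottleneck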
Regards,
Krishna Mohan

Similar Messages

  • Local Disk as Storage Resource in Oracle VM 3

    Hi
    We have a couple of servers on which we are going to install Oracle VM 3 and two guests per server. We are not going to use any HA features, so we will create server pools, each consisting of one VM server.
    The problem is that each server only has one local disk (physically two, but mirrored), and from the documentation it seems that if we want to use a local disk as a storage resource, this disk must not contain a partition, which is impossible in our case because we are installing the VM Server on one of the partitions.
    We wonder how we can work around this problem. We don't have an external storage array or an NFS server.
    Thanks all

    info_oraux wrote:
    > We wonder how can we workaround this problem?
    The only way I can think of is to use a small USB key for the OS/booting and dedicate the entire RAID1 set to VM storage, i.e. don't partition it at all.

  • Retired server still showing in Storage Resources

    We removed an old server from eDir and it continues to show in Storage Resources with "Unable to read attribute value; Result = 15". Any idea how to get it out of the list? I did a storage rebuild and it's still there.

    roehmdo,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://forums.novell.com/

  • Storage Resource question

    Hi,
    I have a question: can I use storage resources in discrete manufacturing? If so, is there a specific configuration for it?
    This question comes up because I have discrete manufacturing in my plant, and there is a process where I make a liquid base preparation and then keep it in a tank so I can bottle it later in different sizes and flavours, on different machines. Is there a way in discrete manufacturing to model this tank so I can link my liquid base preparation, which is made with one PPM, to the final product from another PPM?
    Thank you in advance,
    Fernando

    Hi,
    I have written an article about storage resources in discrete manufacturing in SNP Optimizer. Please take a look.
    http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/bpx-community/supply-chain-management/how%20to%20plan%20with%20constrained%20storage%20in%20apo%20supply%20network%20planning%20with%20optimization-based%20planning.pdf
    In PP/DS it is a different story.
    Regards
    Frank

  • Local storage resource: need and use. How does it differ from the actual VM instance drives?

    Hi,
    I am not quite able to understand the use of the local storage resource that we configure in the service definition file. Local storage is not durable and provides file access just like the local file storage we would typically use, e.g. c:\my.txt.
    Since this local storage is not durable, and neither is information stored on, for example, the C drive of the role instance VM, what advantage do we get by using a local storage resource? We could instead save to the C drive of the role instance VM. Why is local storage recommended instead of using the VM's drive?
    Please let me know your views.

    Hi,
    >>So what is the advantage we get by using local storage resource?
    Because local storage is not durable, on a cloud service we can create a small local storage area where you can save temporary files. This is a powerful model for some scenarios, especially highly scalable applications. Please have a look at the articles below for more details.
    #http://vkreynin.wordpress.com/2010/01/10/learning-azure-local-storage-with-me/
    #http://www.intertech.com/Blog/windows-azure-local-file-storage-how-to-guide-and-warnings/
    Best Regards,
    Jambor

  • Cluster Vols not appearing properly in Storage Resource List

    Still getting started with NSM4.0......
    When running GSR, I'm seeing lots of errors for "Error #28 (File system path could not be found.)" even though I can actually find the path in the browsers. Looking in the Storage Resource List, instead of seeing clusterresource\volumename (which is the path in the string that is erroring above), I'm seeing clusternode\volumename. This is what is causing the issue in the previous thread. I tried a rebuild and got the same results. Users see the volumes as clusterresource\volumename, and that is how they are mapped in the login script.
    How/where do I look to correct this?
    thanks in advance

    On 12/5/2014 9:44 AM, NFMS Support Team wrote:
    > On 12/5/2014 6:16 AM, dbgallo wrote:
    >>
    >> Still getting started with NSM4.0......
    >>
    >> When running GSR, seeing lots of errors for " Error#28 (File system path
    >> could not be found.)" when I can actually find the path in the browsers,
    >> looked in the Storage Resource List and instead of seeing
    >> clusterresource\volumename (which is the path in the string that is
    >> erroring above) I'm seeing clusternode\volumename. This is what is
    >> causing the issue in the previous thread. Tried to do a rebuild and got
    >> the same results. Users see the volumes as clusterresource\volumename,
    >> and that is how they are mapped in the login script,
    >>
    >> How/where do I look to correct this?
    >>
    >> thanks in advance
    >>
    >>
    > Make sure that the nsmproxy object has Supervisor rights on the virtual
    > server and its resources. We've very rarely seen cases where inheritance
    > of Supervisor rights from the root of the tree stops at that level on
    > cluster volumes.
    >
    > -- NFMS Support Team
    After that, rebuild the Storage Resources cache.
    -- NFMS Support Team

  • The usage of storage resources

    Hi gurus,
    Suppose I have a PP-PI scenario like this:
    The milk is processed in a "can" through every operation and phase. This "can" is transported to each resource, so we can say that one "can of processed milk" equals one process order. The problem is that the company has two different "cans": one is a small can and the other is a medium can. If the process order quantity is small enough, the small can is used; if it is large enough, the medium can is used. The company wants to monitor the management of these cans in their system. If I use a fixed lot-size procedure, that only works for one type of can.
    Maybe storage resources could be a solution for this problem, but I don't know how to use them or their logic. I need advice for this scenario.
    Thanks for your attention.

    Hello,
    I think your scenario can be managed with a standard resource as well. Imagine that you have a lot-size key that is not fixed (FX) but lot-for-lot (EX) with a minimum (for the small can) and a rounding value, so that minimum + rounding = maximum = medium can.
    Then you can define two different resources and two production versions with different lot-size ranges (different master recipes pointing to each resource).
    When the system creates a process order, it will select the suitable production version, and the desired resource will then be loaded (in terms of capacity).
    I think you can also work with a storage resource (and add some features such as capacity), but the basic issue can be solved with my first suggestion; see the sketch below.
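    To make the idea concrete, here is a rough sketch of that lot-sizing logic outside SAP (the quantities are made up, not from any system):

    # Hypothetical illustration of the "minimum + rounding" lot-size idea (not SAP code).
    SMALL_CAN_L = 50.0                        # minimum lot size = small can (assumed)
    ROUNDING_L = 70.0                         # rounding value (assumed)
    MEDIUM_CAN_L = SMALL_CAN_L + ROUNDING_L   # maximum lot size = medium can (120.0)

    def select_can(order_qty_l):
        """Pick the can (and hence the production version / resource) for an order quantity."""
        if order_qty_l <= SMALL_CAN_L:
            return "small can"
        if order_qty_l <= MEDIUM_CAN_L:
            return "medium can"
        return "split into several process orders"

    for qty in (30.0, 80.0, 200.0):
        print(qty, "->", select_can(qty))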

  • Add Storage Resource missing

    I have a set of shares on a Windows server missing from my Resource List, under Scan Collection. The documentation says to click the Storage Resources drop-down and choose Add Storage Resource, but this option is missing. The only option I see is Rebuild Storage Resource List. Any ideas?

    Stober,
    You are correct, when running the NSM integrated mode, the storage
    resources are provided via NSM. What version of NSM are you using?
    thanks,
    NFR Development
    On 2/27/2011 11:06 PM, stober wrote:
    >
    > Sorry, I've been out of the office. We're also running Storage Manager,
    > so there was no choice for Automatic or Manual modes. I'm assuming since
    > Add Storage Resource is only available with Manual, that the default
    > installation with Storage Manager is Automatic? Is there any way to
    > change this?
    >
    > The volumes I'm trying to get to are DFS namespaces, not "true" shares.
    > Is that why they don't show up in my Storage List?
    >
    > Novell File Management Suite Team;2079132 Wrote:
    >> stober,
    >> Is the DSI installed in Automatic or Manual mode? The Add Storage
    >> Resource is only available in Manual mode.
    >>
    >> thanks,
    >> NFR Development
    >>
    >> On 2/21/2011 11:06 AM, stober wrote:
    >>>
    >>> I have a set of shares on a Windows server missing from my Resource
    >>> List, under Scan Collection. The documentation says to click the
    >> Storage
    >>> Resources drop-down and choose Add Storage Resource, but this option
    >> is
    >>> missing. The only option I see is Rebuild Storage Resource List. Any
    >>> ideas?
    >>>
    >>>
    >
    >

  • Where do javafx.io.Storage resources end up?

    My first post here ...
    Well, I have made a desktop application where I use javafx.io.Storage and javafx.io.Resource. At first I thought I would find the file in the directory tree of my JavaFX desktop app, but it is not there. I am using Mac OS X, and I have looked everywhere on my disk and cannot find anything that has the same file name as my Storage:
    m_entry = Storage {
       source: "bldconfig.xml"
    }
    I can load and save data to the file, that is no problem. But it would be interesting to know where the data ends up in case I ever need to delete the file or the data. It would be good to know this for all platforms, actually.
    Thank you in advance

    I played a bit with this API when JavaFX 1.2 was out. I found out where the data was stored, but for Windows XP... I used Sysinternals' ProcMon to watch hard disk activity and spot where the data was written... Perhaps you have a similar tool for Mac? Or you can deduce the destination from my findings...
    Here is the code I used:
    // Introduced in JavaFX 1.2
    import javafx.data.Pair;
    import javafx.util.Properties;
    import javafx.io.Storage;
    import javafx.io.Resource;
    import java.io.FileOutputStream;
    import java.io.FileInputStream;
    def FILE_NAME = "properties.java.txt";
    def propFile = Storage { source: "properties.jfx.txt" }
    var pairSequence: Pair[] = [
         Pair { name: "WWW", value: "HTML" },
         Pair { name: "Sun", value: "JavaFX" },
         Pair { name: "Adobe", value: "Flex" },
         Pair { name: "Microsoft", value: "Silverlight" },
         Pair { name: "WWW", value: "Ajax" },
         Pair { name: "French", value: "Àvéc dès âccênts àrbîtrâïrês !" },
         Pair { name: "Français", value: "Yes, I am..." },
         Pair { name: "A=B", value: "true" },
         Pair { name: "Foo Bar", value: "false" },
         Pair { name: "#Foo", value: "bar" }
    ];
    // Note that WWW is twice in the sequence!
    var propsOut: Properties = Properties {};
    function PutPair(props: Properties, pair: Pair) {
         props.put(pair.name, pair.value);
    }
    for (pair in pairSequence) {
         print("{pair.name} ");
         PutPair(propsOut, pair);
    }
    println("");
    // Writes information in source's path
    var fosJ: FileOutputStream = new FileOutputStream(FILE_NAME);
    propsOut.store(fosJ);
    fosJ.close();
    // Writes information in C:\Documents and Settings\PhiLho\Sun\JavaFX\Deployment\storage\muffin\
    // with a 78e9b9eb-1b6c06c9 (for example) file holding the data and a 78e9b9eb-1b6c06c9.muf
    // pointing to where the file is temporarily extracted (in the place where the .class files are).
    // Obviously, the path is for Windows XP; it will vary on other systems.
    var fosJFX = propFile.resource.openOutputStream(true);
    propsOut.store(fosJFX);
    fosJFX.close();
    // We lose one WWW (duplicate name)
    var propsIn: Properties = Properties {};
    var fisJ: FileInputStream = new FileInputStream(FILE_NAME);
    propsIn.load(fisJ);
    fisJ.close();
    println("{propsIn.get("Sun")} / {propsIn.get("WWW")} / {propsIn.get("French")} / {propsIn.get("Français")} / {propsIn.get("A=B")} / {propsIn.get("#Foo")}");
    var fisJFX = propFile.resource.openInputStream();
    propsIn.load(fisJFX);
    fisJFX.close();
    println("{propsIn.get("Sun")} / {propsIn.get("WWW")} / {propsIn.get("French")} / {propsIn.get("Français")} / {propsIn.get("A=B")} / {propsIn.get("#Foo")}");Not rocket science, but interesting... :-)

  • Which is better ASM or file system storage

    Hi all, I need urgent help from all you great DBAs.
    I have to justify to my client which is better to use, ASM or file-system based storage, and why.
    So can anyone give me a write-up along these lines?

    Ok, how about this
    Today's large databases demand minimal scheduled downtime, and DBAs are often required to manage multiple databases with an increasing number of database files. Automatic Storage Management lets you be more productive by making some manual storage management tasks obsolete.
    The Oracle Database provides a simplified management interface for storage resources. Automatic Storage Management eliminates the need for manual I/O performance tuning. It simplifies storage to a set of disk groups and provides redundancy options to enable a high level of protection. Automatic Storage Management facilitates non-intrusive storage allocations and provides automatic rebalancing. It spreads database files across all available storage to optimize performance and resource utilization. It also saves time by automating manual storage tasks, thereby letting DBAs manage more and larger databases with increased efficiency. Different versions of the database can interoperate with different versions of Automatic Storage Management; that is, any combination of releases 10.1.x.y and 10.2.x.y for either the Automatic Storage Management instance or the database instance interoperates transparently.
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14220/mgmt_db.htm

  • Switching resource group in 2 node cluster fails

    hi,
    I configured a 2-node cluster to provide high availability for my Oracle DB 9.2.0.7.
    I created a resource group and named it oracleha-rg,
    and later I created the following resources:
    oraclelh-rs for logical hostname
    hastp-rs for the HA storage resource
    oracle-server-rs for oracle resource
    and listener-rs for listener
    Whenever I try to switch the resource group between nodes, it gives me the following in dmesg:
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hafoip_stop> for resource <oraclelh-rs>, resource group <oracleha-rg>, node <DB1>, timeout <300> seconds
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource oraclelh-rs status on node DB1 change to R_FM_UNKNOWN
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource oraclelh-rs status msg on node DB1 change to <Stopping>
    Feb  6 16:17:49 DB1 ip: [ID 678092 kern.notice] TCP_IOC_ABORT_CONN: local = 010.050.033.009:0, remote = 000.000.000.000:0, start = -2, end = 6
    Feb  6 16:17:49 DB1 ip: [ID 302654 kern.notice] TCP_IOC_ABORT_CONN: aborted 0 connection
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource oraclelh-rs status on node DB1 change to R_FM_OFFLINE
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource oraclelh-rs status msg on node DB1 change to <LogicalHostname offline.>
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hafoip_stop> completed successfully for resource <oraclelh-rs>, resource group <oracleha-rg>, node <DB1>, time used: 0% of timeout <300 seconds>
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource oraclelh-rs state on node DB1 change to R_OFFLINE
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hastorageplus_postnet_stop> for resource <hastp-rs>, resource group <oracleha-rg>, node <DB1>, timeout <1800> seconds
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource hastp-rs status on node DB1 change to R_FM_UNKNOWN
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource hastp-rs status msg on node DB1 change to <Stopping>
    Feb  6 16:17:49 DB1 SC[,SUNW.HAStoragePlus:8,oracleha-rg,hastp-rs,hastorageplus_postnet_stop]: [ID 843127 daemon.warning] Extension properties FilesystemMountPoints and GlobalDevicePaths and Zpools are empty.
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hastorageplus_postnet_stop> completed successfully for resource <hastp-rs>, resource group <oracleha-rg>, node <DB1>, time used: 0% of timeout <1800 seconds>
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource hastp-rs state on node DB1 change to R_OFFLINE
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource hastp-rs status on node DB1 change to R_FM_OFFLINE
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource hastp-rs status msg on node DB1 change to <>
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 529407 daemon.error] resource group oracleha-rg state on node DB1 change to RG_OFFLINE_START_FAILED
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 529407 daemon.notice] resource group oracleha-rg state on node DB1 change to RG_OFFLINE
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 447451 daemon.notice] Not attempting to start resource group <oracleha-rg> on node <DB1> because this resource group has already failed to start on this node 2 or more times in the past 3600 seconds
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 447451 daemon.notice] Not attempting to start resource group <oracleha-rg> on node <DB2> because this resource group has already failed to start on this node 2 or more times in the past 3600 seconds
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 674214 daemon.notice] rebalance: no primary node is currently found for resource group <oracleha-rg>.
    Feb  6 16:19:08 DB1 Cluster.RGM.global.rgmd: [ID 603096 daemon.notice] resource hastp-rs disabled.
    Feb  6 16:19:17 DB1 Cluster.RGM.global.rgmd: [ID 603096 daemon.notice] resource oraclelh-rs disabled.
    Feb  6 16:19:22 DB1 Cluster.RGM.global.rgmd: [ID 603096 daemon.notice] resource oracle-rs disabled.
    Feb  6 16:19:27 DB1 Cluster.RGM.global.rgmd: [ID 603096 daemon.notice] resource listener-rs disabled.
    Feb  6 16:19:51 DB1 Cluster.RGM.global.rgmd: [ID 529407 daemon.notice] resource group oracleha-rg state on node DB1 change to RG_OFF_PENDING_METHODS
    Feb  6 16:19:51 DB1 Cluster.RGM.global.rgmd: [ID 529407 daemon.notice] resource group oracleha-rg state on node DB2 change to RG_OFF_PENDING_METHODS
    Feb  6 16:19:51 DB1 Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <bin/oracle_listener_fini> for resource <listener-rs>, resource group <oracleha-rg>, node <DB1>, timeout <30> seconds
    Feb  6 16:19:51 DB1 Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <bin/oracle_listener_fini> completed successfully for resource <listener-rs>, resource group <oracleha-rg>, node <DB1>, time used: 0% of timeout <30 seconds>
    Feb  6 16:19:51 DB1 Cluster.RGM.global.rgmd: [ID 529407 daemon.notice] resource group oracleha-rg state on node DB1 change to RG_OFFLINE
    Feb  6 16:19:51 DB1 Cluster.RGM.global.rgmd: [ID 529407 daemon.notice] resource group oracleha-rg state on node DB2 change to RG_OFFLINE
    and the resource group fails to switch...
    any help please?

    Hi,
    this forum is for Oracle Clusterware, not Solaris Cluster. You probably should close this thread and open your question in the corresponding Solaris Cluster forum, to get help.
    Regards
    Sebastian

  • Failover Cluster Manager 2012 Showing Wrong Disk Resource - Fix by Powershell

    On Server 2012 Failover Cluster Manager, we have one Hyper-V virtual machine that is showing the wrong storage resource.  That is, it is showing a CSV that is in no way associated with the VM.  The VM has only one .vhd, which exists on Volume 16. 
    The snapshot file location and smart paging file are also on Volume 16.  This much is confirmed by using the Failover Cluster Manager to look at the VM settings.  If you start into the "Move Virtual Machine Storage" dialog, you can see
    the .vhd, snapshots, second level paging, and current configuration all exist on Volume 16.  Sounds good.
    However, if you look at the resources tab for the virtual machine, Volume 16 is not listed under storage.  Instead, it says Volume 17, which is a disk associated with a different virtual machine.  That virtual machine also (correctly) shows Volume
    17 as a resource.
    So, if everything is on Volume 16, why does the Failover Cluster Manager show Volume 17, and not 16, as the Storage Resource?  Perhaps this was caused by an earlier move with the wrong tool (Hyper-V manager), but I don't remember doing this.
    In Server 2003, there was a "refresh virtual machine configuration" option to fix this, but it doesn't appear in Failover Cluster Manager in Server 2012.
    Instead, the only way I've found to fix the problem is in PowerShell.
      Update-ClusterVirtualMachineConfiguration "put configuration name here in quotes"
    You would think that this would be an important enough operation to include GUI support for it, possibly in the "More Actions" right-click action on the configuration file.

    Hi,
    Thanks for sharing your experience!
    You experience and solution can help other community members facing similar problems.
    Please copy your post and create a new reply, then we can mark the new reply as answer.
    Thanks for your contribution to Windows Server Forum!
    Have a nice day!
    Lawrence
    TechNet Community Support

  • Access blob storage files by specific domain. (Prevent hotlinking in Azure Blob Storage)

    Hi,
    My application is deployed on Azure, and I manage all my files in blob storage.
    When I create a container with public permission, it is accessible to all anonymous users: when I hit the URL of a file (blob) from a different browser, I get that file.
    In our application we have some important files and images that we don't want to expose. When we render an HTML page, the <img> tag uses src="{blob file url}"; with this, the public file is accessible, but if I copy the same URL into another browser it is still visible. My requirement is that only my application's domain should be able to access that public file in blob storage.
    Amazon S3 provides a bucket policy where we can define that a file is accessible only from a specific domain ("Restricting Access to a Specific HTTP Referrer"); see http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html

    hi Prasad,
    Thanks for your post back.
    Both SAS and CORS could work, but neither is comprehensive.
    For your requirement ("only my application domain should be able to access that public file in blob storage"): if you want to stop other domains' sites from accessing your blob, you may need to set CORS for your blob. The origin domain of the request is checked against the domains listed in the AllowedOrigins element. If the origin domain is included in the list, or all domains are allowed with the wildcard character '*', then rules evaluation proceeds; if the origin domain is not included, the request fails, so other domains cannot access your resource. You can also try Gaurav's blog:
    http://gauravmantri.com/2013/12/01/windows-azure-storage-and-cors-lets-have-some-fun/
    If you access a CORS resource, you also need to use SAS authentication.
    However, SAS means that you can grant a client limited permissions to your blobs, queues, or tables for a specified period of time and with a specified set of permissions, without having to share your account access keys. The SAS is a URI that encompasses in its query parameters all of the information necessary for authenticated access to a storage resource (http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-1/). So if your SAS URI is available and not expired, this URI can also be used from another domain's site; I think you can test that.
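    As a rough sketch of the SAS part in Python (using the newer azure-storage-blob package, which post-dates this thread; the account, container and blob names are placeholders):

    # Hypothetical sketch: generate a short-lived, read-only SAS URL for one blob.
    from datetime import datetime, timedelta
    from azure.storage.blob import BlobSasPermissions, generate_blob_sas

    account_name = "mystorageaccount"   # placeholder
    account_key = "<account-key>"       # placeholder
    container_name = "private-images"   # placeholder
    blob_name = "logo.png"              # placeholder

    sas_token = generate_blob_sas(
        account_name=account_name,
        container_name=container_name,
        blob_name=blob_name,
        account_key=account_key,
        permission=BlobSasPermissions(read=True),
        expiry=datetime.utcnow() + timedelta(minutes=15),  # short lifetime limits hotlinking
    )
    url = f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}"
    print(url)  # embed this URL in the <img> tag instead of the bare blob URL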
    If I misunderstood, please let me know.
    Regards,
    Will

  • How to assign container resource to routing

    Hi,
    We have three container resources representing three storage tanks with capacity in kilograms. I am not able to assign these storage resources to a routing. We are using discrete routings.
    Please help.
    Thanks,
    Andy

    Hi andrew3,
    Container resource functionality is available only in PP-PI manufacturing and not in discrete manufacturing, which is why you are not able to assign these resources in routings.
    Thanks and regards
    Sravan maturu

  • Storage capacity to be considered in planning

    Hello PP Gurus,
    A liquid product is produced and has to be stored in a specific storage tank. The product has to be produced as per the requirements from SOP; at the same time, planning has to consider the capacity of the storage tank. (Goods issue from the tank happens in an irregular fashion, and I am using PP-PI.)
    Do you have any suggestions on how to consider this storage constraint in the planning cycle?
    Do we have any workaround such as a message during planned order/process order?
    Thanks in advance,
    G.Madhvaraj.

    Hello,
    I tried it. My understanding is that a storage resource cannot be used with PP-PI alone; it can be used only when PP-PI is combined with PFS/APO. I use only PP-PI.
    Am I correct here? Do you have any suggestions to my issue?
    Thanks in Advance,
    G.Madhvaraj.
