FD & FSD

Hi gurus,
What are free-of-charge delivery and free-of-charge subsequent delivery for, and how do they differ?
Hemant

Hi hemant,
Free-of-charge subsequent delivery: a sales document used in complaints processing for making a subsequent delivery of goods to the customer, free of charge.
Use
You can create a free-of-charge subsequent delivery if, for example, a customer receives too few goods, or if the goods have been damaged in shipment. The system uses the free-of-charge subsequent delivery to create a delivery.
Free-of-charge delivery: a sales document for delivering goods to a customer free of charge.
Use
You can create free-of-charge deliveries for sending samples of your products to the customer. The system will then generate a delivery based on the free-of-charge delivery.
Please reward if it helps.
Thanks & Regards
Sadhu Kishore

Similar Messages

  • App-V 4.6 - fsdfs.fsd or cache cleanup without reboot

    Hello,
    I seem to be having a bit of an issue with the App-V cache setup.
    The App-V cache and the "fsdfs.fsd" file are located on a persistent disk in a Citrix PVS environment, so following the suggestion on
    http://technet.microsoft.com/en-us/library/cc843790.aspx (setting HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SoftGrid\4.5\Client\AppFS\State to 0) is a no-go:
    it doesn't do anything; after a server reboot the cache file is still there.
    I'm trying to find if there is another mechanism to clean the cache while the machine is booted, so far it doesn't look good:
    "Rebooting the Staging Client VM is necessary, because it is not the App-V Client Service that locks the file. Instead, it is the driver that still is active after the Service has been stopped. After the reboot, the driver can’t be initialized, because
    the Service has been disabled."
    The only crazy idea is to modify the image itself, boot from it while the app-v service is disabled and proceed to cleanup the old cache files, which doesn't seem like a good scenario given the number of server machines involved. Or boot affected servers
    from an image that has the registry state set to 0 so they reset on each reboot.
    Initially my idea was to use a script to clean up the cache from individual servers which are near the cache size cap.
    I would appreciate suggestions for this scenario.
    Thank you.

    You mentioned Citrix, PVS, and Server machines, but you didn't mention your OS. Since you said Server machines, I'm going to assume you're using 2008 R2 or 2012/2012 R2, which are all 64-bit operating systems. In that case, you're pointing to
    the wrong registry value. Since App-V 4.6 is a 32-bit process, its configuration is all under the WOW6432Node location, so instead try the following:
    HKEY_LOCAL_MACHINE\SOFTWARE\wow6432node\Microsoft\SoftGrid\4.5\Client\AppFS\State = 0
    If this doesn't fix the issue for you, report back with the OS that you are using.
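    If you prefer to apply the change as an importable registry file, a sketch based on the 32-bit path above would look like the following (value name and DWORD type as used by the App-V 4.x client; assumes a default install):

```reg
Windows Registry Editor Version 5.00

; Ask the App-V 4.6 client to reset its file system cache state.
; This is the 32-bit (WOW6432Node) registry view suggested above.
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\SoftGrid\4.5\Client\AppFS]
"State"=dword:00000000
```

    Import it with regedit or `reg import`, then reboot (or restart the client service) for the cache reset to take effect.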

  • FSD Value getting rounded off in ECC

    Hi Experts
    I am facing an issue when transferring the FSD document from TM to ECC. My FSD has a value of approximately 3000 USD, but when it goes to ECC this value gets converted to just 30 USD, i.e. the service PO and service entry sheet get created for 30 USD only. I am using only one charge type, FB00, in my TCM.
    I am not able to figure out how the automatic rounding is taking place. Even the payload from TM is bringing this value (3000) to ECC.
    Note: my FSD document currency and company code currency are the same (USD).
    I would appreciate your comments or inputs on this.
    Thanks
    Edited by: Sanjeev Bose on Jan 20, 2012 9:57 PM

    Hi Michael
    Thanks for your reply
    Well, somehow I was able to resolve this, but it should not have been required. I feel there might be a bug.
    Anyway, I went to ECC > IMG > SAP NetWeaver > General Settings > Currencies > Set decimal places for currencies, and maintained blank or zero for USD.
    That somehow resolved our issue, but I seriously don't think this should have been necessary. Maybe it just worked. After digging deeper, we found that if no entry is maintained in table TCURX, the system will by default apply rounding up to 4 decimal places for the document currency you are using.
    Thanks
    Sanjeev Bose
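    For what it's worth, the factor of 100 in the 3000 → 30 conversion is consistent with a decimal-places mismatch: amounts are stored assuming 2 decimals unless the currency's TCURX entry says otherwise, so interpreting a value against 4 decimals shifts it by 10^(2-4). A quick sketch of the scaling arithmetic (not SAP code, just the rule):

```shell
# Scaling rule: transferred_amount * 10^(standard_decimals - assumed_decimals)
# With the default 2 decimals vs. an assumed 4, 3000 becomes 30.
echo 3000 4 | awk '{ printf "%g\n", $1 * 10^(2 - $2) }'
# prints: 30
```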

  • Tables for FWSD and FSD

    Can someone provide me the SE16 table names for viewing FWSD and FSD?
    Also, if I want to view the party roles for FSD creation, from which table will I get the data?

    Hi,
    You can view the data via transaction BOBT: enter the relevant BO name of the FWSD/FSD, which should be /SCMTMS/CUSTFREITINVREQ, query by attributes, and use the associations to retrieve the other nodes' data.
    Hope this can help.
    BR, Dawson

  • SAP TM Credit Memo (FSD and FWSD)

    hi,
    I am trying to send the SAP TM credit memo to ECC, and both fail: the FWSD XML does not have the charges (not sure why), and the FSD does not pick up the vendor. Is there something I am missing in config? Please help; this is urgent.

    The XML needs the partner number.

  • FSD creation after PGI

    hi,
    There is a requirement to create the FSD document automatically after the PGI of the delivery. Could you suggest a way to do that? Please note this is for a shipper scenario.
    Options that were explored and discarded:
    1. Transaction /n/scmtms/sfir_create - the report does not have anything related to the PGI status.
    2. Use a selection profile to filter out freight orders (using additional selection attributes and selection values) for which the deliveries are PGI'ed.
    3. Conditions in the selection profile - could not find a condition type appropriate for freight order selection.
    4. Transaction /n/scmtms/sfir_create with the freight order lifecycle status - lifecycle statuses are not directly dependent on the PGI status.
    Could you suggest me the options?
    Thank You,
    Vasu

    Hi Vasudevan,
    An output could be triggered at shipment which creates an IDoc (SHPMNT05) to send the PGI status to TM. Then create an FSD with the program /SCMTMS/SFIR_CREATE_BATCH.
    Code can be written in the save strategy of the FO that calls the action CREATE_SFIR, which is at TOR root level; check the class /SCMTMS/CL_CREATE_SAVE_METHODS.
    Also check the lifecycle status in /SCMTMS/TOR ROOT. I hope this helps.
    Regards,

  • How can I run a script last?

    Hiya,
    I have a single backup job running a single dataset. That single dataset has an include statement to include a dataset directory, which holds individual datasets for the several hosts I want to back up, each with potentially different parameters. This works well, the hosts back up in some order (I don't care), everything's happy.
    However, what I want to do is add a step on the end of this process to run a report on the admin server after all the backups are complete. That is, to run a script once, last.
    If I add an "after backup" command in the dataset, it runs it (as stated in the manual) on the client servers, after every backup, which is no good to me. I've tried creating a separate job and scheduling it at a lower priority a few minutes later than the backup job, but that seems to just run when it feels like it, I presume because there aren't any conflicts so why shouldn't it?
    Can anyone advise how I should be doing this?
    Related question... I've tried putting an "after backup" step on the Catalog backup job, but that appears to get run twice (I presume because there are two "include" steps in the catalog dataset)! Is this expected? How can I get it to just run once, after the catalog backup is complete?
    Thanks,
    David.

    Thanks Donna, but... no. It didn't work.
    We've actually purchased a license for OSB, so I've fired this question at our official support channel now, and they came back with the same answer at the same time (are you the Oracle techie looking at my problem, by any chance?). For the record here's (with some tweaks for readability in this context) what I've sent back in response to that article.
    From the article, I thought perhaps the following might do the trick:
    include dataset /Hosts {
    after backup optional "/usr/local/osb/pre-n-post after-fsd-hosts mail"
    }
    But, from the log:
    2010/11/05.07:26:58 trouble parsing dataset in FSdaily line 1 - open brace appears where not allowed (OB dataset mgr)
    2010/11/05.07:26:58 offending line is "include dataset /Hosts {"
    2010/11/05.07:26:58 trouble parsing dataset in FSdaily line 3 - bad nesting (OB dataset mgr)
    2010/11/05.07:26:58 offending line is "}"
    So it appears I’m not actually allowed to do that. I don’t understand why that would be.
    The only other idea I have is to include a dummy backup step, backing up something really small (because OSB won’t let you run a backup without backing anything up - sigh!), tack a script onto that, and hope like heck OSB decides that it should run that last. All the documentation I’ve read gives the impression that datasets are all about scope, not order, so I’m not altogether confident that this will work. In any case, it seems a pretty kludgy way of doing it! And, given the next paragraph below, I’m not all that sure it’ll work in any case.
    My idea of scheduling a catalog backup for five minutes later than the client backups, with a lower priority, so that it runs when the client backups finish, also has a flaw - if I use two tape drives in parallel, it runs the catalog job in parallel with the last client job, which is completely useless. I want to put on the end of the tape a backup of the catalog as at just after the client backups, so that in the case of a disaster I can restore that and I’ll be good to restore everything else.
    In addition to being completely useless for the purposes of putting an “after” catalog backup on the end of the tape, it’s also completely useless for the purposes of running a script last - I tried the following:
    include catalog {
    after backup optional "/usr/local/osb/pre-n-post after-catalog mail"
    This ran the pre-n-post script twice, once for each component of the catalog, which is altogether not what I want it to do.
    I can’t think of any way to achieve a catalog backup on the end of the script except for scheduling it for some time later and hoping the dataset backups always finish by then. Ugly.
    The only way I can think to achieve the run-a-script-last is to munge all the datasets together into one humongous dataset file, and do stuff as in the article to try to bend OSB to my will (again, hoping that OSB obeys the order of statements in the dataset). Which, when I’m given the ability to use “include dataset /Hosts” to make it easy to maintain, seems a bit of a mean trick to pull on me. And, again, with two tape drives available I’m not at all sure it’ll work in any case.
    I'll post further results as they come to light...
    David.

  • What is the Roles Of Bridging Account

    *What Are The Roles Of These Accounts Bridging, Non-Invoiced Sales Orders And Non-Invoiced Revenue in Periodic Account Assignments form? [ID 1335054.1]*
    The Expense account is there to accommodate European accounting practices.
    The Bridging account (as seen in WIP accounting classes) is used only in France.
    The bridging account (and possibly others) was included at the request of the Global Accounting Engine team on behalf of France in the setup form at design time, but it has never been used, as it was decided that the transfer to GL would remain manual for the moment. In other words, today PAC only generates distributions based on GAAP/Brazil accounting.
    In other words: in US accounting, when you put something in inventory it is accounted for as an asset; in France it is accounted for as an expense. So during a period you total all expenses, and at the end of the period, in order to produce a fiscal profit and loss, you "convert" to assets by using a bridging account: you debit the inventory account and credit the bridging account. This is done by product nature, so you have a bridging account for raw materials, for finished goods, and for semi-finished goods.
    The fields 'Non-invoiced Sales Orders' and 'Non-invoiced Revenue' were set up with a view to replacing the functionality (used in Europe) that is currently provided by the Global Accounting Engine (AX). However, this has not yet been implemented in the apps, which currently support only US GAAP and Brazilian accounting as methods of generating distributions from PAC.
    User Guide
    2 – 72 Oracle Cost Management User’s Guide
    Bridging: This account is optional.
    You can also optionally enter an Analytical Invoice Price Variance, Analytical Purchase Mirror, Non-Invoiced Sales Order, Non-Invoiced Revenue, Analytical Revenue Mirror, Analytical Margins of Goods Sold, and Average Cost Variance account.
    Edited by: Deepkumar Sivanandan on Aug 25, 2012 11:56 PM
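    As a worked illustration of the period-end conversion described above (the amount is invented for the example; the entry is repeated per product nature):

```
Period-end conversion of raw-material expenses to inventory (example: 10,000):
    Dr  Inventory - Raw Materials        10,000
        Cr  Bridging - Raw Materials             10,000
```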


  • Synchronization issues on OneDrive for Business

    Hi,
    Our OneDrive for Business experience is very confusing here. We are in the middle of a few synchronization problems that our users don't understand.
    Could not insert my screenshots, so please pardon me if the description is not clear. Full topic with screenshots can be found here: http://community.office365.com/en-us/f/172/p/273549/836906.aspx
    Description of the problem
    We have a lot of users using OneDrive for Business to synchronize three shared libraries stored on a on-premise installation of SharePoint 2013. The first setup has run (almost) smoothly, and all my users had a synchronized version of the files on the server
    (around 2GB of data). We are in the beginning of our deployment, so the users have not used it very intensively yet, and probably this is why problems start appearing.
    On computer 1, a file/folder reorganization has been realized within one folder (folder "TRAININGS") of one the libraries: creating new folders, moving files from one folder to another, renaming other folders and deleting old folders. Changes have
    been processed, and Computer 1 is now in sync with server: they now share the same file/folder organization under the "TRAININGS" folder. Also, on Computer 1, all folders and files show the "green tick" on their icon, showing that sync
    is completed.
    On computer 1, the OneDrive icon in the Windows task bar shows the synchronization is complete and does not show any sync issues.
    On computer 2, the folder "TRAININGS" is a mess. The folder "TRAININGS"'s icon has shown the "blue arrows" for a while. Inside, I have a mix of "green tick", "blue arrows" and no icon status.
    Every now and then, the folder CATEGORIES turns to "green tick", then goes back to "blue arrows".
    If we drill down inside the sub-folders, I have the same behaviour: some folders have the "green tick", some others "blue arrows", and some others no icon status. I do not understand the logic.
    On Computer 2, no files or folders inside the "TRAININGS" folder have ever been opened by user "Fra".
    However, on SharePoint, I see that some files in the tree of the "TRAININGS" folder have "last modified by" set to the name of the user on Computer 2, even though this user "Fra" never modified those precise files. Only user "Cla"
    did some modifications on the files and folders.
    For example, we are missing folder "TRAININGS\CATEGORIES\CATALOG STRUCTURE" on Computer 2.
    I do not understand what is going wrong there...
    Questions already answered:
    To save time for diagnosis:
    1. Does the issue only happen on Computer 2? Please choose a PC and log in with a problematic user who has a different issue to see what will happen.
    No, it happens also on a different PC, but with different status for each file.
    2. Have the files they upload met the restrictions for syncing SharePoint libraries?
    None of the restrictions are violated:
    No special character in File Name
    524 files and 257 folders in the library
    Max length of full folder and file name is 157 characters.
    No invalid file types
    I have no idea if any file is open on any computer, and have no way of knowing it, with users all over the world.
    3. Have you tried to repair OneDrive for Business via right-clicking the OneDrive for Business icon then clicking repair? If not, please give it a shot and provide us with the result.
    Repair has been tested and does not help.
    4. Could you please clean the cache of OneDrive for Business to see if the issue persists?
    I have already tried deleting the Office offline cache for a similar problem one week ago. It is a pain in the neck, since I have to do it for all my users who have the problem (which is starting to be a lot); it involves re-downloading the whole contents of the libraries
    (2GB); and it does NOT solve the problem, since it reappeared one week later on a different set of files.
    A full delete of the Office offline files temporarily solves the incident until the problem occurs again later, so that doesn't help either.
    Please keep in mind I have over 30 other computers that have these libraries synced in 4 different countries, and I do not know whether they are properly synced or not. Repairing or resetting on the client computers is not an option if I am not sure I have
    completely solved the problem and I know for sure it will not happen again.
    5. How many files are in these shared libraries? Generally, when using OneDrive for Business, we don't recommend uploading/syncing a lot of files at a time.
    The folder "TRAININGS" where all changes have been made contains only 64 files and 42 folders.
    The library which contains the "TRAININGS" folder contains 524 files and 257 folders. But these files and folders have not been touched, except for the "TRAININGS" folder.
    I believe I am way under the 100 files that have moved at the same time.
    6. How long has the problem persisted? Have you tried letting it resolve by itself?
    Today, after 12 hours, Computer 2, 3 and 4 are still in the same state, synchronization is stuck.
    Can you please help me diagnose the issue and make sure it does not happen again?
    Thanks a lot.

    New update that might help.
    I have noticed on Computer 2 an abnormal growth of the "%USERPROFILE%\AppData\Local\Microsoft\Office\15.0\OfficeFileCache" folder.
    The total synced libraries (both  with Office365 and with on-premise SharePoint) have a size of 5.5 GB.
    The OfficeFileCache folder has a size of 21.6GB and is growing by the minute.
    If I pause the sync, the folder stops growing. If I resume the sync, the folder grows.
    The changes are the following:
    File CentralTable21873.accdb has a different modified date, but its size remains the same.
    Files FSD-{id}.FSD (131,072 bytes) and FSF-{id}.FSF (114 bytes) are created. These files are created in pairs, almost every second.
    Please find below extract of the contents of the OfficeFileCache on Computer 2. I stopped the sync at 12:07.
    2014-10-29 14:13:25.137808900 +0700 CentralTable21873.accdb
    2014-10-29 14:06:39.464552500 +0700 CentralTable21873.laccdb
    2014-10-29 13:49:47.221305400 +0700 FSD-{61B6B74C-BA6C-46A1-9CA0-F1031E1D4477}.FSD
    2014-10-29 13:49:43.225225000 +0700 FSD-{FF593C23-B2F6-4857-9B5D-3C278C2DF524}.FSD
    2014-10-29 13:49:43.225225000 +0700 FSF-{19C2F7EF-66A1-47D2-90C6-21104002B292}.FSF
    2014-10-29 13:46:23.720925000 +0700 FSD-{FF57C26D-73A3-41A9-B7A1-09BE0E94487F}.FSD
    2014-10-29 13:46:23.720925000 +0700 FSF-{FC35858F-8532-4D17-A94B-BC7A003D7750}.FSF
    2014-10-29 13:46:23.000910600 +0700 FSD-{B6715853-B58A-4538-9D40-C285355AA959}.FSD
    2014-10-29 13:46:23.000910600 +0700 FSF-{C367A019-6343-4998-BBDB-EEFBEFC48A53}.FSF
    2014-10-29 13:46:15.510760800 +0700 FSD-{2A81B7AF-6835-416C-9D39-759C69BCC6F9}.FSD
    2014-10-29 13:46:15.510760800 +0700 FSF-{D5DD04B0-3C47-4CB2-920A-1E61DC1C5987}.FSF
    2014-10-29 13:46:14.700744600 +0700 FSD-{B1588DF5-6985-4C3E-87C1-23259AFF8B3D}.FSD
    2014-10-29 13:36:29.610487100 +0700 FSD-{BA96AD01-7552-4893-840D-2FD804956CCC}.FSD
    2014-10-29 13:36:29.500484900 +0700 FSF-{41A2C30E-D125-471F-8885-2F50843B8218}.FSF
    2014-10-29 13:36:29.490484700 +0700 FSD-{B74D8923-28BD-4B5A-ACA2-CCD7D3A7ABA5}.FSD
    2014-10-29 12:07:28.600679500 +0700 FSD-{9FBD798B-E954-4D31-B5B8-25C1C9B9EA11}.FSD
    2014-10-29 12:07:28.600679500 +0700 FSF-{6657D798-2B7E-46C1-AA1F-F88E5C2AE906}.FSF
    2014-10-29 12:07:28.370674900 +0700 FSD-{C7F4EB47-C23F-4D35-98BD-2AE73C6F3612}.FSD
    2014-10-29 12:07:28.370674900 +0700 FSF-{0F334A91-F4B5-424B-B79B-0EA03BC46859}.FSF
    2014-10-29 12:07:28.140670300 +0700 FSD-{15A8F28E-502C-414C-B38C-1E3A9975468E}.FSD
    2014-10-29 12:07:28.140670300 +0700 FSF-{1C0AF415-373F-4C01-8C23-62E942E56E7E}.FSF
    2014-10-29 12:07:27.920665900 +0700 FSD-{58828162-A8D6-403F-AD42-889FA5F8D193}.FSD
    2014-10-29 12:07:27.920665900 +0700 FSF-{11C27630-BBD0-4797-94F0-81332B1BE598}.FSF
    2014-10-29 12:07:27.730662100 +0700 FSD-{879D25AE-37F5-47FD-9CF3-8DFBCF6C9A91}.FSD
    2014-10-29 12:07:27.730662100 +0700 FSF-{ACE9056D-ECE1-45FF-8569-437A8386001D}.FSF
    2014-10-29 12:07:27.176650700 +0700 FSD-{2E4495AE-5C09-4201-9E83-A6BDC83968EE}.FSD
    2014-10-29 12:07:27.176650700 +0700 FSF-{55EA3316-D2BE-4041-9599-E7030D5166D6}.FSF
    2014-10-29 12:07:26.986646900 +0700 FSD-{81883BA5-0FC1-4A01-91C4-B1B27930A20E}.FSD
    2014-10-29 12:07:26.986646900 +0700 FSF-{EFA9EF58-0DC0-44B1-A520-379ADAC135E5}.FSF
    2014-10-29 12:07:26.796643100 +0700 FSD-{765935BA-542C-4970-9EE3-5A6098E60D62}.FSD
    If I delete those files, and restart OneDrive, they just keep being recreated.
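    To quantify the growth over time without screenshots, a small helper can log the cache folder's size while the sync runs (a sketch for Git Bash/Cygwin on the client; the path is the one from the post and is an assumption about your install):

```shell
# Report the size (in KB) of a folder; point it at the OfficeFileCache path,
# e.g. "$LOCALAPPDATA/Microsoft/Office/15.0/OfficeFileCache" (hypothetical path).
cache_size_kb() {
    du -sk "$1" | cut -f1
}

# Example: run repeatedly (or in a loop with sleep) while the sync is active,
# once with sync paused and once resumed, to confirm the growth pattern.
# cache_size_kb "$LOCALAPPDATA/Microsoft/Office/15.0/OfficeFileCache"
```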

  • Issue in Credit Memo Posting in ECC against SAP TM Freight Order

    Hello Experts,
    I have created Credit Memo against TM Freight Order and Transferred to ECC.
    Sender Outbound Service Interface on TM Side "TransportationOrderSUITEInvoicingPreparationRequest_Out" was successful.
    But Receiver Inbound Service Interface on ECC Side is failing "TransportationOrderSUITEInvoicingPreparationRequest_In"
    XML analysis gives below error message:
    No instance of object type Purchase Order has been created. External reference
    PO header data still faulty
    Enter a vendor
    Please enter items first
    I was able to create a Service PO and Service Entry Sheet for FSD created against same Freight Order, so I don't see any issue in Config.
    Thanks and Regards,
    Arnab Biswas

    Hi Arnab,
    As I understand it, the standard interface creates a Service PO in ECC for your freight order in SAP TM.
    Then you are trying to create an invoicing document linked to the created Service PO, which is just like the FWSD of the freight order. The inbound XML data will give you a thorough picture of this issue.
    General analysis:
    When an FWSD is created in TM for a freight order, the interface "TranspOrderSUITEInvoicingPreparationRequest_Out" is triggered; it passes the same TM data of the freight order to create an invoice in ECC, not a Service PO.
    Kindly check your inbound XML with the data in it.
    You might need to make a change in the outbound interface BAdI when sending data out from SAP TM, or in the inbound BAdI to modify the incoming data in ECC.
    In SAP TM, search SPRO for the following:
    BAdI for TranspOrderSUITEInvoicingPreparationRequest_Out
    or search for the inbound BAdI on the ECC side to make changes to the inbound data.
    Reward if found useful.
    Thanks and Regards,
    Kalpesh

  • Receive against the PO upon Docs Reviewed

    1. An inbound delivery is created in SAP through interface I14 (FSD ASN Creation), with the quantity maintained as per the ASN document.
    2. Delivery document table LIKP is extended to capture the additional data transmitted by the freight forwarder; the document reviewed status is also maintained in the LIKP extension. This is a separate enhancement (E23).
    3. The reviewer will check the inbound delivery document in VL32N.
    4. After checking, the reviewer sets the field XXXX status to "Document reviewed".
    5. Save the inbound delivery document.
    6. After saving the inbound delivery document, an output type will be triggered only if the document reviewed field is set. The steps for the output type are mentioned below.
    7. The enhancement will use the BAPI "BAPI_GOODSMVT_CREATE" to create a goods receipt for each purchase order.
    Steps for output type determination:
    1. Go to transaction NACE; under application area E1 for inbound delivery, choose Access sequence and define an access sequence.
    2. Define a condition record.
    3. Create a new output type ZDLV as specified in the screenshot below. Enter a new Z program name. This program will be triggered on each save and needs to validate the 'Doc reviewed' status on the delivery.
    4. In Processing routines, change the value of the field 'Transmission medium' to '8 - Special function'. Specify the Z program and subroutine name here.
    Within the Z program, use the BAPI BAPI_GOODSMVT_CREATE for each line item within the inbound delivery. The GR needs to be against the PO specified in the inbound delivery item details; the PO number is to be taken from the LIPS-VGBEL field.
    Could anyone shed some light on this? I want to know whether we need to include this BAPI in the print program when the document reviewed field is set. Please suggest the design approach.
    Moderator message: "spec dumping", please work yourself first on your requirement.
    Edited by: Thomas Zloch on Nov 13, 2011 1:26 PM

    I have a custom subscription to that business event that should call a pl/sql package when that business event is raised.
    However since the business event is not being raised, I am not able to call that pl/sql package

  • QFS mount options not getting noticed.

    Hi, I have a small issue: it seems that none of my QFS directives are getting noticed. I'm not sure if this is true or not, but this is what I have tried, and I am unable to verify that these mount options are actually taking effect.
    This is QFS version 4, revision 4.0.26FA.
    In my /etc/opt/SUNWsamfs/samfs.cmd I have the following options:
    fs = qfs1
    nosuid
    sync_meta = 0
    qwrite
    readahead = 4096
    writebehind = 1024
    After mounting my file system, why are these options not listed in mnttab?
    qfs1 /qfs1 samfs dev=1d80036 1048280993
    Using dd/time to write data to this file system, I'm getting the exact same performance no matter what options I set. Another reason why I think these options are being ignored is the following:
    if I set, for example, fs = qfs1 with high = 70 and low = 30, when I check the samu [m] option I don't see those values reflected in the m output. So what I'm asking is: is there any way to verify that my mount options are being set accordingly?
    Yes, I did use mount -o nosuid,qwrite,etc. /qfs1; still no difference, and even placing the options in vfstab does not seem to make any difference.
    Last but not least, if I do a samtrace -V while mounting my /qfs1 filesystem I see the following:
    00100000 readahead (1048576) <<--- still seems to use the default value, 1024
    00080000 writebehind (524288) <<-- 512; look at my samfs.cmd file, this should not be the value.
    Any assistance would be nice.

    Hi There! Sorry, it took so long to get back to you. This should help:
    The samu N display shows all the mount options. You can also use the samfsinfo command to see file system build information. If the samfs.cmd options are not being picked up, make sure the file is formatted correctly (no odd characters) and is in the proper location, /etc/opt/SUNWsamfs/samfs.cmd . Then "pkill -HUP sam-fsd" to assure the daemon has picked up the file's contents, and then mount the file system.

  • What is the roles of technical and functional consultants in ESS/MSS area?

    Gurus,
    What are the roles of technical consultants in the ESS/MSS area?
    What are the roles of functional consultants in the ESS/MSS area?
    Please help me see the differences.
    Thanks,

    Hi Thaman,
    Responsibility of the functional consultant in ESS/MSS: functional configuration under SPRO > Employee Self-Service and Manager Self-Service, then preparation of the functional specification design (FSD) if there is any deviation from standard SAP, unit test plan preparation, and all functionality testing.
    Responsibility of the technical consultant: preparation of the technical specification design document (TSD), activation of workflow and creation of a workflow if an additional one is required, and creation of forms such as the vacancy requisition form, the new position creation form, etc.
    There is also some technical configuration in SPRO for workflow and forms.
    Regards,
    Purnima

  • Large OLTP data set to get through the cache in our new ZS3-2 storage.

    We recently purchased a ZS3-2 and are currently attempting to do performance testing. We are using various tools (swingbench, vdbench, and dd) to simulate load within our Oracle VM 3.3.1 cluster of five Dell M620 servers. The OVM repositories are connected via NFS. The Swingbench load-testing servers have a base OS disk mounted from the repos and NFS mounts via NFS v4 from within the VM (we would also like to test dNFS later in our testing).
    The problem I'm trying to get around is that the 256G of DRAM (a portion of which is used for the ARC) is large enough that my reads are not touching the 7200 RPM disks. I'd like to create a large enough data set that the amount of random reads cannot possibly be stored within the ARC cache (NOTE: we have no L2ARC at the moment).
    I've run something similar to this in the past, but have adjusted the "sizes=" to be larger than 50m. My thought here is that, if the ARC is up towards around 200 or so GB, and I create the following on four separate VMs and run vdbench at just about the same time, it will be attempting to read more data than can possibly fit in the cache.
    * 100% random, 70% read file I/O test.
    hd=default
    fsd=default,files=16,depth=2,width=3,sizes=(500m,30,1g,70)
    fsd=fsd1,anchor=/vm1_nfs
    fwd=fwd1,fsd=fsd*,fileio=random,xfersizes=4k,rdpct=70,threads=8
    fwd=fwd2,fsd=fsd*,fileio=random,xfersizes=8k,rdpct=70,threads=8
    fwd=fwd3,fsd=fsd*,fileio=random,xfersizes=16k,rdpct=70,threads=8
    fwd=fwd4,fsd=fsd*,fileio=random,xfersizes=32k,rdpct=70,threads=8
    fwd=fwd5,fsd=fsd*,fileio=random,xfersizes=64k,rdpct=70,threads=8
    fwd=fwd6,fsd=fsd*,fileio=random,xfersizes=128k,rdpct=70,threads=8
    fwd=fwd7,fsd=fsd*,fileio=random,xfersizes=256k,rdpct=70,threads=8
    rd=rd1,fwd=fwd1,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd2,fwd=fwd2,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd3,fwd=fwd3,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd4,fwd=fwd4,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd5,fwd=fwd5,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd6,fwd=fwd6,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
    rd=rd7,fwd=fwd7,elapsed=900,interval=30,fwdrate=max,format=yes,pause=30,openflags=fsync
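    For what it's worth, a back-of-envelope estimate of the working set implied by the fsd definition above (assuming vdbench creates width^depth = 9 leaf directories with files=16 files each, and that sizes=(500m,30,1g,70) means 30% of the files at 500 MB and 70% at 1 GB) comes out to roughly 120 GB per VM, which across four VMs should comfortably exceed a ~200 GB ARC:

```shell
# Rough working-set estimate for the fsd above (assumptions in the lead-in).
awk 'BEGIN {
    files = 3^2 * 16                  # width^depth leaf dirs * files per dir
    avg   = 0.30 * 500 + 0.70 * 1024  # weighted average file size in MB
    printf "files=%d avg=%.1fMB total=%.1fGB\n", files, avg, files * avg / 1024
}'
# prints: files=144 avg=866.8MB total=121.9GB
```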
    However, the problem I keep running into is that vdbench's Java processes will throw exceptions:
    ... <cut most of these stats.  But suffice it to say that there were 4k, 8k, and 16k runs that happened before this...>
    14:11:43.125 29 4915.3 1.58 10.4 10.0 69.9 3435.9 2.24 1479.4 0.07 53.69 23.12 76.80 16384 0.0 0.00 0.0 0.00 0.0 0.00 0.1 7.36 0.1 627.2 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:13.071 30 4117.8 1.88 10.0 9.66 69.8 2875.1 2.65 1242.7 0.11 44.92 19.42 64.34 16384 0.0 0.00 0.0 0.00 0.0 0.00 0.1 12.96 0.1 989.1 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:13.075 avg_2-30 5197.6 1.52 9.3 9.03 70.0 3637.8 2.14 1559.8 0.07 56.84 24.37 81.21 16383 0.0 0.00 0.0 0.00 0.0 0.00 0.1 6.76 0.1 731.4 0.0 0.00 0.0 0.00 0.0 0.00
    14:12:15.388
    14:12:15.388 Miscellaneous statistics:
    14:12:15.388 (These statistics do not include activity between the last reported interval and shutdown.)
    14:12:15.388 WRITE_OPENS Files opened for write activity: 89 0/sec
    14:12:15.388 FILE_CLOSES Close requests: 81 0/sec
    14:12:15.388
    14:12:16.116 Vdbench execution completed successfully. Output directory: /oracle/zfs_tests/vdbench/output
    java.lang.RuntimeException: Requested parameter file does not exist: param_file
      at Vdb.common.failure(common.java:306)
      at Vdb.Vdb_scan.parm_error(Vdb_scan.java:50)
      at Vdb.Vdb_scan.Vdb_scan_read(Vdb_scan.java:67)
      at Vdb.Vdbmain.main(Vdbmain.java:550)
    So I know from reading other posts that vdbench will do what you tell it (Henk brought that up). But based on this, I can't tell what I should do differently in the vdbench file to get around this error. Does anyone have advice for me?
    Thanks,
    Joe

    Ah... it's almost always the second set of eyes. Yes, it is run from a script. And I just looked and realized that the last line didn't have the # in it. Here's the line:
       "Proceed to the "Test Setup" section, but do something like `while true; do ./vdbench -f param_file; done` so the tests just keep repeating."
    I just added the hash to comment that out and am rerunning my script. My guess is that it'll complete. Thanks Henk.
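    For anyone hitting the same "Requested parameter file does not exist" error: the root cause here was an instruction line pasted into the wrapper script without a leading #, so the shell tried to execute the prose. A minimal sketch of the fixed wrapper (the file name and the echo stand-in for vdbench are hypothetical):

```shell
# Write a tiny wrapper like the one described in the thread; the instruction
# text copied from the docs is commented out, so only the real command runs.
cat > /tmp/vdbench_wrapper.sh <<'EOF'
#!/bin/sh
# Proceed to the "Test Setup" section, repeating tests in a loop:
#   while true; do ./vdbench -f param_file; done
echo "vdbench -f param_file would run here"
EOF
sh /tmp/vdbench_wrapper.sh
# prints: vdbench -f param_file would run here
```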

  • [Smartforms] Problems in fetching text edit control text

    Hi gurus,
    I made an application using screen programming (or dialog programming, as you might say). I used 3 tab strips; in the last tab strip I used a text edit control to write remarks in multiple lines.
    I save those remarks (data element TEXT255) in a custom table, where they are stored as "sdfsdfsdf##df##sd##fsd##fsd##fsd##f##sd##f". Fair enough, as the ENTER key is interpreted as "##".
    I fetched them easily in display mode using the get_stream method. All good.
    But the problem arose when I needed to fetch those saved remarks in a smartform.
    When I fetched those remarks, they came out as "sdfsdfsdf##df##sd##fsd##fsd##fsd##f##sd##f" in the smartform's text box.
    I tried a find-and-replace, but "##" couldn't be processed, because "##" is just the representation of the ENTER key.
    I need them as
    sdfsdfsdf
    df
    sd
    fsd
    fsd
    fsd
    f
    sd
    f
    Is there any way to get a multiple-line text box in smartforms, or can I remove those "##" from the text?
    Regards,
    Mansoor Ahmed.

    I am saving it as text in the TEXT255 data element.
    But in the text edit control, when I press Enter it adds ## to the text, and ## cannot be processed via the FIND and REPLACE commands.
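    The "##" pairs are the text edit control's unprintable carriage-return/line-feed bytes (hex 0D 0A), which the GUI renders as two '#' characters. In ABAP you would split the saved string at CL_ABAP_CHAR_UTILITIES=>CR_LF into an internal table of lines before passing it to the smartform text element. The byte-level idea can be sketched outside ABAP like this (sample text from the post):

```shell
# Each "##" is really the CR+LF pair; deleting the CR leaves ordinary
# newline-separated lines, one per ENTER press in the text edit control.
printf 'sdfsdfsdf\r\ndf\r\nsd\r\nfsd' | tr -d '\r'
# prints:
# sdfsdfsdf
# df
# sd
# fsd
```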
