Best practice file SMB18_SE37_O011_Q78_878.TXT

Hi! I am going through the Best Practices site for the Chemical industry. I am keen to find out how the data is structured for PROACT01 IDoc processing. Does anyone happen to have the file, or a sample file, for the inbound stock data handling? I have gone through the BP website but could only find the configuration and scenario guides.
Regards
SF

No answer

Similar Messages

  • Best practice file management

    Hello everyone
    I'm hoping to receive some advice on the best way to store a large collection of music, pictures and video, whilst keeping my computer as empty as possible to maximise its processing power for professional video editing in the future. The computer is for both personal and business use, and I share it with my partner, who is a music fiend but not computer savvy.
    I have recently purchased a new 27" iMac with the standard specs (3.1GHz Quad-Core Intel Core i5, 4GB memory, 1TB hard drive).  It is currently running Snow Leopard (v. 10.6.6), and I plan to update to Lion when I purchase a new broadband account.
    I also have:
    One external 1TB Western Digital drive, currently at 85% capacity, with music, videos and pictures
    One external 1TB Western Digital drive, currently empty
    A new 2TB time capsule which is not yet set up
    Apple TV, not yet set up
    A mobile USB modem and basic account, soon to be replaced by a fairly high speed broadband modem with a fairly large download cap
    So far, I have:
    Created three user profiles - one administrator, and two users
    Set up iTunes so each user shares a single library and iTunes Media folder
    We would like to also digitise a large collection of CDs and records.
    I was thinking about using the Time Capsule not only for storage/backup, but also to create a wireless network, allowing it to be stored with my printer, while the hard drives can be stored in a different room to the computer, away from sight. The computer is so very gorgeous on its own, after all... I've been advised also to use it as a wireless router when we purchase our broadband account, so I'm assuming the modem should also be connected to the Time Capsule in the other room.
    Rather than droning on about what I think I should do, I wondered if one of you experts could advise me on the best way to set everything up? I'm not sure that it's the best idea to set it up so the computer is always having to find files wirelessly from the time capsule and connected drives...  Wouldn't that be slow?
    The advice I've read in the various forums has been rather confusing, so your advice would be really appreciated!!
    Cheers
    Fiona

    Hi Fiona,
    I like your question; I'm in the same boat, new to the iMac altogether, and want to set up a backup and sharing strategy via best practice right up front. Did you get any response, or any good best practice you ran across in your research that you can share with me? Thanks.

  • Best Practice: files that should never be moved or deleted

    Hi there.
    I've just gone through my computer for the first time in a couple of years and done a massive cleanup, but without really knowing the consequences, I guess you could say, and now things are acting a little strange.
    (I feel silly even writing this, knowing that all the IT heads out there are sighing a unanimous 'what an idiot'.)
    Can anyone please provide a sort of 'best practice' guide for the files/file types that should absolutely stay on the hard drive, or in the Applications folder, Utilities folder, etc.? And are there any that should 'never' be deleted?
    ... or even some program that sort of saves you from yourself by keeping those files safe from being a) moved and b) deleted?
    Would be a huge help!
    Thanks in advance ~

    Apple has already hidden about half of the folders on a typical OS X install - the ones that make up the Unix core of the OS. So unless you know how to find those, you'll never have to worry about them.
    Aside from that, you just want to keep away from the System folder and any of the Library folders. Nothing else should be too crucial to the OS running.
    Of course if you wanted to elaborate on "a little strange" we might be able to be a bit more helpful.

  • What are the best practices in naming a PDF

    Using underscores, spaces, dashes, commas, etc. What is the best way to name a PDF so it can be opened on all browsers and all platforms?

    Hi Gina D 3333,
    Best-practice file-naming conventions for PDF files are the same as for any other file. I found this article on Apple's website that contains some useful pointers to consider when you're naming files that will be used on multiple platforms: OS X: Cross-platform filename best practices and conventions
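    If you want to enforce such a convention in code, here is a minimal Java sketch; the specific rules encoded (ASCII-only, lowercase, underscores for spaces) are illustrative assumptions, not an official list:

        import java.text.Normalizer;

        public class FileNameSanitizer {
            /** Makes a file name safe for most browsers, OSes and web servers. */
            public static String sanitize(String name) {
                // Decompose accented characters, then drop the combining marks.
                String ascii = Normalizer.normalize(name, Normalizer.Form.NFD)
                                         .replaceAll("[^\\p{ASCII}]", "");
                return ascii.toLowerCase()
                            .replaceAll("\\s+", "_")          // spaces -> underscores
                            .replaceAll("[^a-z0-9._-]", "")   // drop everything else
                            .replaceAll("_{2,}", "_");        // collapse repeated underscores
            }

            public static void main(String[] args) {
                System.out.println(sanitize("Q3 Report (Final) é.pdf")); // q3_report_final_e.pdf
            }
        }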
    I hope this helps!
    Best,
    Sara

  • Best Practice regarding using and implementing the pref.txt file

    Hi All,
    I would like to start a post regarding what is best practice in using and implementing the pref.txt file. We have reached a stage where we are about to go live with Discoverer Viewer, and I am interested to know what others have encountered or done with their pref.txt file and Viewer look and feel.
    Have any of you been able to add additional lines into the file, please share ;-)
    Look forward to your replies.
    Lance

    Hi Lance
    Wow, what a question and the simple answer is - it depends. It depends on whether you want to do the query predictor, whether you want to increase the timeouts for users and lists of values, whether you want to have the Plus available items and Selected items panes displayed by default, and so on.
    Typically, most organizations go with the defaults with the exception that you might want to consider turning off the query predictor. That predictor is usually a pain in the neck and most companies turn it off, thus increasing query performance.
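    For reference, turning the predictor off is a one-line change in pref.txt; illustratively (the surrounding section and default values vary by Discoverer version, and you need to run the applypreferences script for the change to take effect):

        # pref.txt fragment - disable the query predictor
        QPPEnable = 0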
    Do you have a copy of my Discoverer 10g Handbook? If so, take a look at pages 785 to 799 where I discuss in detail all of the preferences and their impact.
    I hope this helps
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com

  • BPC 5 - Best practices - Sample data file for Legal Consolidation

    Hi,
    we are following the steps indicated in the SAP BPC Best Practice: http://help.sap.com/bp_bpcv151/html/bpc.htm
    A Legal Consolidation prerequisite is to have the sample data file, which we do not have: "Consolidation Finance Data.xls"
    Does anybody have this file or know where to find it?
    Thanks for your time!
    Regards,
    Santiago

    Hi,
    From this address you can obtain the .zip file for the Best Practice, including all scenarios and the csv files (under the misc directory) used in these scenarios: https://websmp230.sap-ag.de/sap/bc/bsp/spn/download_basket/download.htm?objid=012002523100012218702007E&action=DL_DIRECT
    Consolidation Finance Data.txt is in there as well.
    Regards,
    ergin ozturk

  • Attachement file "AdditionalJavaCode.zip" in SAP Best Practice BI V1.31

    Hi All,
    According to the documentation about integration of BP BI V1.31, we can download AdditionalJavaCode.zip from SAP note 1319671 and bypass the SSO between NWBC and BO...
    I am working on a demo environment of course.
    But I can't find this attached zip file... Has the location changed? Is there a URL?
    Thanks in advance
    Cédric

    Hello, I am installing Best Practices for ECC 6.0 EhP3 for Venezuela. After we upload the scope file (XML) we proceed to upload the TXT files, but it stops with an error in the BC sets:
    TDC is not defined in corresponding ECATT/BC SET Object
    Can you tell me how to solve this?
    Kind Regards

  • Best Practices:: How to generate XML file from a ResultSet

    Hi all,
    Could someone please suggest best practices for generating an XML file from a ResultSet? I am developing a web application in Java with an Oracle database, and one of my tasks is to generate an XML file when the user, for example, clicks a "download as XML" button on the JSP. The application is basically like an Order with line items. I am using Struts, and my first thought was to have an action class which extends Struts's DownloadAction and, through StAX's iterator API, creates an XML file. I intend to have a POJO with properties for all columns of my order and line items tables, so that for each order I get all line items and:
    1. Write order details then
    2. Through an iterator write line items of that order to an XML file.
    I will greatly appreciate for comments or suggestions on the best way to do this through any pointers on the Web.
    alex
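    For the record, a stripped-down sketch of the StAX approach described above might look like this; the table and column names are invented for illustration:

        import java.io.OutputStream;
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import javax.xml.stream.XMLOutputFactory;
        import javax.xml.stream.XMLStreamWriter;

        public class OrderXmlExporter {
            /** Streams one order and its line items to out as XML, one element per row. */
            public void export(Connection con, long orderId, OutputStream out) throws Exception {
                XMLStreamWriter w = XMLOutputFactory.newInstance().createXMLStreamWriter(out, "UTF-8");
                w.writeStartDocument("UTF-8", "1.0");
                w.writeStartElement("order");                 // 1. write order details
                w.writeAttribute("id", String.valueOf(orderId));
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT item_no, quantity, description FROM line_items WHERE order_id = ?")) {
                    ps.setLong(1, orderId);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {                   // 2. iterate and write line items
                            w.writeStartElement("lineItem");
                            w.writeAttribute("no", rs.getString("item_no"));
                            w.writeAttribute("qty", rs.getString("quantity"));
                            w.writeCharacters(rs.getString("description"));
                            w.writeEndElement();
                        }
                    }
                }
                w.writeEndElement();                          // </order>
                w.writeEndDocument();
                w.close();
            }
        }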

    Use an OracleWebRowSet, from which an XML representation of the result set may be obtained.
    http://www.oracle.com/technology/sample_code/tech/java/sqlj_jdbc/files/oracle10g/webrowset/Readme.html
    http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/jcrowset.htm
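    A minimal sketch of that route using the standard javax.sql.rowset factory (Java 7+; the Oracle-specific OracleWebRowSet in the links above is used the same way, just constructed differently):

        import java.io.Writer;
        import java.sql.ResultSet;
        import javax.sql.rowset.RowSetProvider;
        import javax.sql.rowset.WebRowSet;

        public class WebRowSetExport {
            /** Writes the given result set to out in the standard WebRowSet XML format. */
            public static void toXml(ResultSet rs, Writer out) throws Exception {
                WebRowSet wrs = RowSetProvider.newFactory().createWebRowSet();
                wrs.populate(rs);   // copy the JDBC result set into the in-memory rowset
                wrs.writeXml(out);  // emit the WebRowSet XML representation
                wrs.close();
            }
        }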

  • When I share a file to YouTube, where does the output file live? I want also to make a DVD. And is this a best practice or is there a better way?

    I want also to make a DVD, but can't see where the .mov files are.
    And is this a best practice or is there a better way to do this, such as with a master file?
    thanks,
    /john

    I would export to a file saved on your drive as H.264, same frame size. Then upload that to YouTube.
    I have never used FCP X to make a DVD, but I assume that it will build the needed VOB MPEG-2 source material for the disc.
    I used to use Toast & iDVD. Toast is great.
    To "see" the files created by FCP 10.1.1 for YouTube, right-click (Control-click) on the Library icon in your Movies folder/Show Package Contents/"project"/share.

  • Best Practice for Flat File Data Uploaded by Users

    Hi,
    I have the following scenario:
    1.     Users would like to upload data from flat file and subsequently view their reports.
    2.     SAP BW support team would not be involved in data upload process.
    3.     Users would not go to RSA1 and use InfoPackages & DTPs. Hence, another mechanism for data upload is required.
    4.     Users consist of two groups, external and internal. External users would not have access to the SAP system; however, access via a portal is acceptable.
    What best practices should we adopt for this scenario?
    Thanks!

    Hi,
    I can share what we do in our project.
    We get the files from the web to the application server, in a path dedicated to this process. The file placed on the server follows a naming convention based on your project; you can name it. Every day a file with the same name is placed on the server with different data. The path in the InfoPackage is fixed to that location on the server. The process chain then triggers and loads the data from that particular path fixed on the application server. After the load completes, a copy of the file is kept as backup and the file is deleted from that path.
    So this happens everyday.
    Rgds
    SVU123

  • Best Practice for Distributed TREX NFS vs cluster file systems

    Hi,
    We are planning to implement a distributed TREX, using RedHat on x64, but we are wondering what the best practice or approach is to configure the "file server" used in the TREX distributed environment. The guides mention a file server, which seems to be another server connected to a SAN, exporting or sharing the file systems required to be mounted on all the TREX systems (master, backup and slaves), but we know that the BI Accelerator uses OCFS2 (a cluster file system) to access the storage; in the case of RedHat we have GFS or even OCFS.
    Basically we would like to know which is the best practice and how other companies are doing it, for a TREX distributed environment using either network file systems or cluster file systems.
    Thanks in advance,
    Zareh

    I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system with TREX because the BI Accelerator does, but maybe that is not supported; it does not seem to be clear in the TREX guides.
    That should be the initial question:
    Are cluster file system solutions supported on a plain TREX implementation?
    Thanks again,
    Zareh

  • What is the best practice in securing deployed source files

    hi guys,
    Just yesterday, I developed a simple image cropper using Ajax and Flash. After compiling the package, I noticed the package/installer delivers the exact same source files as developed to the installed folder.
    This didn't concern me much at first, but come to think of it, this question keeps coming back to me:
    "What is the best practice in securing deployed source files?"
    How do we secure an application's installed source files from being tampered with, especially after installation? E.g. modifying spraydata.js files can be done easily with an editor.

    Hi,
    You could compute a SHA or MD5 hash of your source files on first run and save these hashes to the EncryptedLocalStore.
    On startup, recompute and verify. (This, of course, fails to address the case where the main app's swf / swc / html itself is decompiled.)
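    The same hash-and-verify idea sketched in Java rather than ActionScript (AIR's EncryptedLocalStore has no snippet-sized equivalent here, so a plain record file stands in for it; file names are illustrative):

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.security.MessageDigest;
        import java.util.HexFormat;

        public class IntegrityCheck {
            /** Returns the SHA-256 digest of a file as a hex string. */
            static String sha256(Path file) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                return HexFormat.of().formatHex(md.digest(Files.readAllBytes(file)));
            }

            public static void main(String[] args) throws Exception {
                Path source = Paths.get("app/spraydata.js");   // deployed file to protect
                Path record = Paths.get("app/.hashes");        // stand-in for EncryptedLocalStore
                if (!Files.exists(record)) {
                    Files.writeString(record, sha256(source)); // first run: record the hash
                } else if (!Files.readString(record).equals(sha256(source))) {
                    throw new IllegalStateException("spraydata.js appears to have been tampered with");
                }
            }
        }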

  • Best Practices for File Organization/Project Explorer

    So we are finally getting SCC at my organization to manage our LabVIEW development, and that is good! 
    Now, we are starting in on discussions about how we should organize our files on disk and how we should use the Project Explorer. When I started here about 3 years ago, I wasn't very familiar with the project explorer, so I read the article at http://zone.ni.com/devzone/cda/tut/p/id/7197. Two of the main things I took away from that article are:
    1. Organize Files in a logical manner on disk. Whatever that is, it is not a flat file structure.
    2. The top level VI should be separate from other source code. Preferably, it should reside in the application folder.
    Push Back Against These Recommendations
    Before I was hired, most, if not all, LabVIEW development was done using a flat file structure, and the top-level VI lived with the source code. Since we didn't have proper SCC, each individual organized files as he saw fit. So I started using the Project Explorer (not even its use is totally accepted right now) and I began following recommendations 1 and 2 above. I didn't always follow #1 very strictly, but I have been working towards it, and I have always followed #2 religiously.
    Since we are starting these discussions on how we should organize files on disk I'm starting to get some push back to following these two recommendations.
    The argument I get in favor of using a flat file structure is that you always know where every file is, including the top-level VI. It is also argued that it is a lot of effort to organize and search for VIs when they all reside in different folders. I think the fear is that by getting "clever" and organizing our files in such a manner we'll make things complicated and we will somehow shoot ourselves in the foot.
    The argument I get against separating the top level VI from the rest of the source code is that it:
    (a) It won't be clear where it is (like it is buried within hundreds of VIs). However, it is argued, you can just put a "!" in front of the file name and then it is always at the top of the flat file structure.
    (b) An extension of argument (a) is that things either look or seem messy when VIs (including the top-level VI) don't live in a sub-folder and are just hanging out with the Project Explorer file.
    (c) I think there may be some fear of breaking the VI by moving it and altering the dependencies for the VI. 
    Convincing Others its Good to Follow These Recommendations
    So, if I want to follow NI's recommendations, I need to come up with reasons we should follow them. Also, I should state that I care about following these recommendations because it's what NI recommends. They've been around the block a few times and I'm sure there are good reasons why these are best practices. However, I don't think I've given a very compelling case for why these recommendations should be followed.
    So I'll tell you all what I think good reasons are for these recommendations and perhaps I can get some feedback or additional support? If I'm crazy for wanting to follow these recommendations maybe someone can point out why I'm crazy. 
    (a) Arguments for Following Both
    I. I passed the CLAD a couple of weeks ago, and I have started studying for the CLD. Part of the CLD is following both of these recommendations (see page 6 of http://ftp.ni.com/evaluation/certification/cld/cld_exam_prep_guide_english.pdf). While this isn't a reason in and of itself, it suggests that if it is important for certification, it is important in practice!
    II. If we hire new developers that are familiar with LabVIEW, they will most likely be familiar with these recommendations, especially if they are certified. That will lead to increased productivity out of the door because they won't have to learn our special way of doing things.
    (b) Arguments for Organized File Structure
    I. Unused VIs are easier to identify and remove. Right now we never remove VIs because we don't know if they are used or not. This leads to a lot of VI bloat.
    II. It is hard to know what a specific VI's function is in a flat file structure just by looking at the name.
    (c) Arguments for Separating Top Level VI from Source Code
    I. The application folder is an intuitive place for the top-level VI. As long as the top-level VI is the only VI in the application folder, there is no mistaking it for anything else, especially once you open it. This makes it easy for new developers to find the top-level VI. I'd argue it isn't very intuitive for new developers to know that a VI in the source code folder prefaced with a "!" is the top-level VI.
    Summary
    So that is what I think so far. Is there anything else I am missing to support following those two recommendations or am I just being inflexible?
    Thanks!

    zenthoef,
    As a CLA, I have struggled with file structure over the years.  Here are my recommendations:
    1.  Put the top level VI and the project in the top-level folder.  This makes it very clear where to begin.
    2.  Put the remaining user interface VIs in a separate folder.  Again, it makes it very clear what the functionality of these VIs are.
    3.  If you are using object, put each object in a separate folder.  Place the family of objects in one folder, with each object in a subfolder.
    4.  Keep the remaining VIs in a single folder. This can contain a small number of subfolders if your project is large, but too many folders make it hard to figure out where your VIs are. For example, you might have a DAQ subfolder, an Analysis subfolder, and a Report subfolder. But if you had a Test1 folder and a Test2 folder, and you had a VI that was used by both tests, where would it go? Keep it simple.
    5.  You mentioned that it is hard to figure out what a VI does by its name. That implies that 1) you need better names, and 2) your VIs are too complicated. A VI should do a single function which can be adequately described by its name. That VI might be something like Analyze Data.vi, which would contain a bunch more subVIs (like Get 1st Harmonics.vi), but each VI would perform a single function. You wouldn't save the data to a report in Analyze Data.vi, for example.
    The most compelling reason for following these suggestions is that it is easier to figure out what the code is doing after you haven't looked at it for a while. Once you have an application that is working and bug free, you shouldn't have to touch the code until you want to add features. If that is even 6 months later, you will probably have forgotten how the code works. As a consultant, I have had to update other people's code, and just figuring out where to start can be a challenge.
    Tom Brass
    Certified LabVIEW Architect
    Saint Bernard Engineering, Inc.
    www.saintbernardengineering.com

  • Best practices for file I/O within producer/consumer loops

    I'm looking to add file recording and playback functionality to a pre-existing data collection program. The original program is based on a Moore-style state machine, to which I have added four additional states: Record Start, Record Stop, Playback Start, and Playback Stop.
    What I have done, and what has since been identified as poor programming practice, was to "initialize" (either create or load) the appropriate file within the state machine loop during the "Start" command (for record or playback functionality), and then provide the file reference as an indicator, which is linked to for the appropriate read or write operation (whether I'm playing back or recording). The actual I/O occurs within the consumer loop (screenshots attached).
    This is my first LabVIEW project outside of tutorials or other small examples, so any advice and constructive criticisms are welcomed, specifically with regards to file I/O and refnum routing (it gets a little hairy in the consumer loop)!
    I'm running LabVIEW 8.6 on Vista Business.
    Attachments:
    Playback Start.JPG ‏71 KB
    Consumer_Loop.JPG ‏122 KB
    File Playback subvi.JPG ‏14 KB

    jamoore84 wrote:
    Ben,
    Thanks for the suggestion. I think it's a little outside my ken at this moment, but I'll look it over. Despite any grievous coding transgressions, I have experienced some limited success with the current setup. While the use of Action Engines/Functional Globals may constitute the best practice, I might revise my post to read "acceptable and/or easily absorbable practices" instead.
    Are there any other opinions on this?  Let me start by listing a problem and posing a question:
    Problem:  I am able to playback a file only once.  Subsequent attempts at file playback do not work.
    Thanks in advance,
    jimmy 
    HI Jimmy,
    I don't give up easy.
    Let me try to explain the issue with race conditions with a contrived example, the "Command by Mail box" case.
    Imagine you had a job where you received your orders via an old-fashioned mail box. You never really saw your supervisor but relied on getting orders via the mail box. Now imagine the mail box could only hold one message, and any time a new message was inserted, the old one would fall into the trash.
    So you come to work each day, check your mail box, do what was ordered, and everyone is happy.
    The next day you come in and, without your knowing, you are assigned to do the work of two bosses. So as long as you check your mail box more often than the two of them assign work, everything is fine... until you take a day of vacation! You come back the day after vacation, check your mail box, and work away until you catch hell for ignoring orders!?! Well, it turns out the order from boss 1 was replaced by the order from boss 2 while you were on vacation. Oh bother!
    How can we fix it?
    1) Expand the mail box so it holds more than one order. You just process them in the order they are received.
    2) Change the mail box so it does not accept a new order until you have removed the old one.
    Now back to reality!
    Local variables act like the funky mail box. The last message inserted overrides the previous one.
    Multiple variable writers are like multiple bosses.
    Queues operate like an expanded mail box, letting you handle each message in order.
    Action Engines operate like the "mutexed" mail box.
    Why I don't want to encourage you to "just patch up" what you have:
    All of the less-than-ideal solutions either over-sample (checking e-mail twice as often as the bosses assign work wastes CPU, and is an exercise in futility if you are coding in a non-real-time environment like Windows) or use a mutex to control access to the shared resource (in this case, local variables).
    LV offers mutexes through semaphores (found on the synchronization palette) but...
    WHY WORK SO HARD?
    In my AE nugget I explain that the execution of an AE is automatically protected by LV. So in the long run it will actually be easier to learn how to use the AE programming construct than to learn how to solve the problem without it.
    So GO FOR IT! Take the lazy route and learn how to use the AE construct. Use the synchronization palette for queues.
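    The mail box analogy maps directly onto code. A tiny Java sketch of the same difference - a plain shared variable as the one-slot mail box versus a queue that keeps every order:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class MailBoxDemo {
            static volatile String mailBox;  // one-slot mail box: a new order overwrites the old
            static BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);  // expanded mail box

            public static void main(String[] args) throws InterruptedException {
                mailBox = "order from boss 1";
                mailBox = "order from boss 2";               // boss 1's order fell into the trash
                System.out.println("mail box holds: " + mailBox);

                queue.put("order from boss 1");
                queue.put("order from boss 2");              // both orders preserved, in order
                while (!queue.isEmpty()) {
                    System.out.println("queue delivers: " + queue.take());
                }
            }
        }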
    Just trying to help,
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Best practice - updating figure numbers in a file, possibly to sub-sub-chapters

    Hi,
    I'm a newbie trying to unlearn my InDesign mindset to work in FrameMaker. What is best practice for producing figure numbers to accompany diagrams throughout a document? A quick Ctrl+F in the FrameMaker 12 Help book doesn't seem to point me in a particular direction. Do diagrams need to be inserted into a table, where there is a cell for the image and a cell for the figure details in another? I've read that I should use a letter and colon in the tag to keep it separate from other things that update, e.g. F: (then figure number descriptor). Is there anything else to be aware of, such as when resetting counts for chapters etc.?
    Some details:
    Framemaker12.
    There are currently 116 chapters (aviation subjects) to make.
    Each of these chapters will be its own book in pdf form, some of these chapters run to over 1000 pages.
    Figure numbers ideally take the form: "Figure (a number from one of the 1-116 chapters used) - figure number", e.g. "Figure 34 - 6" would be the 6th image in the book 'chapter 34'.
    The figure number has to cross reference to explaining text, possibly a few pages away.
    These figures are required to update as content is added or removed.
    The (aviation) chapter is an individual book.
    H1 is the equivalent of the sub-chapter.
    H2 is the equivalent of the sub-sub-chapter.
    H3 is used in the body copy styling, but is not a required detail of the figure number.
    I'm thinking of making sub-chapters into individual files. These will be more manageable on their own. They will then be combined in the correct order to form the book for one of these (1 of 116) subject chapters.
    Am I on the right track?
    Many thanks.
    Gary
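    For the "Figure 34 - 6" pattern specifically, FrameMaker's paragraph autonumbering with a series label is the usual mechanism. A sketch of the autonumber string for the figure-caption paragraph format (the exact building blocks depend on how chapter numbering is configured in your book, so treat this as illustrative):

        F:Figure <$chapnum> - <$n+>

    The "F:" series label keeps the figure counter independent of other numbered streams, <$chapnum> picks up the chapter number, and <$n+> increments the figure count; whether the counter restarts per file is controlled in the book's numbering properties.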

    Hi,
    Many thanks for the link you provided. I have implemented your recommendation into my file. I have also read somewhere about sizing anchored frames to an imported graphic using 'esc' + 'm' + 'p'.
    What confuses me, coming from InDesign, is how to import these graphics at the size they were made (W x H in mm at 300 ppi) while keeping them anchored to a point in the text flow.
    I currently have 1 and 2 column master pages built. When I bring in a graphic my process is:
    insert a single-cell table in the next space after the current text > drop the title below the cell > give the title a 'figure' format. When I import a graphic, it tries to fit it in the current 2-column layout, with only part of it showing in a box which is half the width of a single column!
    A current example: on page 1 (a 2-column page) the text flows for 1.5 columns. At the end of the text I inserted a single-cell table, then imported an image into the cell.
    Page 2 (2-column page) has the last line of page 1's text in the top left column.
    Page 3 (2-column page) has the last 3 words of page 1 in its top left column. The right column has the table in it, with part of the image showing. The image has also been distorted, like it's trying to fit. These columns are 14 cm wide; the cell is 2 cm wide at this point. I have tried to give cells for images 'wider' attributes using the object style designer, but with no luck.
    Ideally I'm trying to make 2 versions: 1) an anchored frame that fits in a 1-column width on a 2-column page; 2) an anchored frame that fits the full width of my landscape pages (minus some border dimension), with this full-width frame created on a new following page. I'd like to be able to drop in images to suit these different frames with as much automation as possible.
    I notice many tutorials tell you how to use a given area of the program, but I haven't been able to find one that discusses workflow order. Do you import all text first, then add empty graphic boxes and/or tables throughout, and then import the images? I'm importing text from Word, but the images are separate, having been vectored or cleaned up in Photoshop - they won't be imported from the same Word file.
    many thanks
