File Creation - Best Practice

Hi,
I need to create a file daily based on a single query. There's no logic needed.
Should I just spool the query from a Unix script and create the file that way, or should I use a stored procedure with a cursor, UTL_FILE, etc.?
The first is probably more efficient, but is the latter a cleaner and more maintainable solution?

I'd be in favour of keeping code inside the database as far as possible. I'm not dismissing scripts at all - they have their place - I just prefer to have all code in one place.
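
For what it's worth, the stored-procedure route doesn't have to be much code. Below is a minimal sketch, assuming a directory object named EXTRACT_DIR already exists and using a made-up ORDERS table to stand in for your real query:

    DECLARE
      l_file UTL_FILE.FILE_TYPE;
    BEGIN
      -- open the output file for writing via the EXTRACT_DIR directory object (assumed to exist)
      l_file := UTL_FILE.FOPEN('EXTRACT_DIR', 'daily_extract.csv', 'w', 32767);
      -- write one comma-separated line per row returned by the query
      FOR rec IN (SELECT order_id, order_date, amount FROM orders) LOOP
        UTL_FILE.PUT_LINE(l_file,
          rec.order_id || ',' || TO_CHAR(rec.order_date, 'YYYY-MM-DD') || ',' || rec.amount);
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
    EXCEPTION
      WHEN OTHERS THEN
        -- release the file handle before re-raising so a failed run doesn't leave it open
        IF UTL_FILE.IS_OPEN(l_file) THEN
          UTL_FILE.FCLOSE(l_file);
        END IF;
        RAISE;
    END;
    /

Wrapped in a procedure and called from a DBMS_SCHEDULER job, this keeps the whole daily extract inside the database, which fits the preference above.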

Similar Messages

  • Database creation best practices.

    Hi,
    We are planning to set up a new database, Oracle 10g, on Sun and AIX. It is a data warehouse environment.
    Can anyone please share documents that describe the best practices to follow during database creation/setup? I googled and found some documents but was not satisfied with them, so I thought of posting this query.
    Regards,
    Yoganath.

    YOGANATH wrote:
    Anand,
    Thanks for your quick response. I went through the link, but it seems fairly brief. I need a crisp summary document for my presentation, covering:
    1. Initial parameter settings for a data warehouse to start with, like block_size, db_file_multiblock_read_count, parallel servers, etc.
    2. Memory parameters, SGA and PGA (say, for a server with 10 GB RAM).
    3. How to split tablespaces, e.g. large vs. small.
    If someone has just a crisp outline document covering the points above, I would be grateful.
    Regards,
    Yoganath
    You could fire up dbca, select the 'Data Warehouse' template, walk through the steps, and at the end do not select 'create a database' but simply select 'create scripts'. Then take a look at the results, especially the initialization file. Since you chose a template instead of 'custom database' you won't get a CREATE DATABASE script, but you should still get enough generated to answer a lot of the questions you pose (a sketch of the kind of parameters involved follows below).
    You could even go so far as to let dbca create the database. Nothing commits you to actually using that DB. Just examine it to see what you got, then delete it.
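    As a purely illustrative sketch of the kind of initialization parameters that generated file will answer (the values below are invented placeholders for a 10 GB RAM box, not recommendations from this thread; db_block_size itself is fixed at CREATE DATABASE time and cannot be set this way):

        -- illustrative starting values only; review dbca's generated init file for real guidance
        ALTER SYSTEM SET sga_target = 5G SCOPE = SPFILE;
        ALTER SYSTEM SET pga_aggregate_target = 3G SCOPE = SPFILE;
        ALTER SYSTEM SET db_file_multiblock_read_count = 64 SCOPE = SPFILE;
        ALTER SYSTEM SET parallel_max_servers = 16 SCOPE = SPFILE;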

  • File import best practice

    I need some outside input on a process. We get a file from a bank and I have to take it and move it along to where it needs to go. Pretty straightforward.
    The complexity is the import process from the bank. It's a demand-pull process where an exe needs to be written that pulls the file from the bank and drops it into a folder. My issue is that they want me to kick the exe off from inside SSIS and then use a file watcher to import the file into a database once the download is complete. My opinion is that the SSIS package that imports the file and the exe that gets the file from the bank should be totally divorced from each other.
    Does anybody have an opinion on the best practice of how this should be done?

    Here it is: http://social.msdn.microsoft.com/Forums/sqlserver/en-US/bd08236e-0714-4b8f-995f-f614cda89834/automatic-project-execution?forum=sqlintegrationservices
    Arthur

  • Flat File load best practice

    Hi,
    I'm looking for a Flat File best practice for data loading.
    The need is to load flat file data into BI 7. The flat file structure has been standardized, but contains 4 slightly different flavors of data. Thus, some fields may be empty while others are mandatory. The idea is to have separate cubes at the end of the data flow.
    Onto the loading of said file:
    Is it best to load all data flavors into 1 PSA and then separate into 4 specific DSOs based on data type?
    Or should the data be separated into separate file loads as early as the PSA? That is, have 4 DataSources/PSAs and separate flows from there on up to the cubes?
    I guess pros/cons may come down to where the maintenance falls: separate files vs separate PSA/DSOs...??
    Appreciate any suggestions/advice.
    Thanks,
    Gregg

    I'm not sure if there is any best practice for this scenario (or maybe there is one), as this is data related to a specific customer's needs. But if I were you, I would bring the one file into the PSA and route the data to its respective ODS from there. That would give me more flexibility within BI to manipulate the data as needed without having to involve the business in producing 4 different files (chances are they would get the splitting wrong). So in case of any issue, your troubleshooting would start from the PSA rather than going through the file (very painful and frustrating) to see which records in the file screwed up the report. I'm more comfortable handling BI objects rather than data files, because you know exactly where to look.

  • FDM file format best practice

    All, we are beginning to implement an Oracle GL and I have been asked to provide input on the file format provided from the ledger to process through FDM (I know, processing directly into HFM is out... at least for now).
    Is there a "Best Practice" for file formats to load through FDM into HFM? I'm really looking for efficiency (fastest to load, easiest to maintain, etc.).
    Yes, we will have to use maps in FDM, so that is part of the consideration.
    Questions: fixed-width or delimited, concatenate fields or not, security, minimizing the use of scripts, is it better to have the GL consolidate, etc.?
    Thoughts appreciated

    If possible, a comma- or semicolon-delimited file would be easy to maintain and easy to load.
    The less scripting used on the file, the better the import performance.

  • DW:101 Question - Site folder and file naming - best practices

    OK - my first post! I'm new to DW and fairly new to developing websites (I've done a couple in FrontPage and a couple in SiteGrinder), although not at all new to technical concepts: building PCs, figuring things out, etc.
    For websites, I know I have a lot to learn and I'll do my best to look for answers, RTFM and all that before I post. I even purchased a few months of access to lynda.com for technical reference.
    So, no more introduction. I did some research (and I kind of already knew) that for file names and folder names: no spaces, just dashes or underscores, don't start with a number, keep the names short, no special characters.
    I’ve noticed in some of the example sites in the training I’m looking at that some folders start with an underscore and some don’t. And some start with a capital letter and some don’t.
    So the question is: what is the best practice for naming files, and especially folders? And what's the best way to organize the files in the folders? For example, all the .css files in a folder called 'css' or '_css'.
    While I'm asking, are there any other things, along the lines of just starting out, that I should be looking at? (If this is way too general a question, I understand.)
    Thanks…
    \Dave
    www.beacondigitalvideo.com
    By the way I built this site from a template – (modified quite a bit) in 2004 with FrontPage. I know it needs a re-design but I have to say, we get about 80% of our video conversion business from this site.

    So the question is: what is the best practice for naming files, and especially folders? And what's the best way to organize the files in the folders? For example, all the .css files in a folder called 'css' or '_css'.
    For me, best practice is always the nomenclature and structure that makes most sense to you, your way of thinking and your workflow.
    Logical and hierarchical always helps me.
    Beyond that:
    Some seem to use _css rather than css because (I guess) those file/folder names rise to the top in an alphabetical sort. Or perhaps they're used to that from a programming environment.
    Some use CamelCase, some use all lowercase or special_characters to separate words.
    Some work with CMSes or in team environments which have agreed schemes.

  • Essbase unix file system best practice

    Is there such a thing in Essbase as storing files in different file systems to avoid I/O contention? For example, in Oracle it is best practice to store index files and data files in different locations to avoid I/O contention. If everything on the Essbase server is stored under one directory structure, as it is now, the Unix team is afraid we may run into performance issues. Can you please share your thoughts?
    Thanks

    In an environment with many users (200+) or those with planning apps where users can run large long-running rules I would recommend you separate the application on separate volume groups if possible, each volume group having multiple spindles available.
    The alternative to planning for the load up front would be to analyze the load during peak times, although I've had mixed results in getting the server/disk SMEs to assist in these kinds of efforts.
    A more advanced thing to worry about is journaling filesystems that share a common cache for all disks within a VG.
    Regards,
    -John

  • Mapping creation best practice

    What is the best practice when designing OWB mappings?
    Is it better to have a smaller number of complex mappings or a larger number of simple mappings, particularly when accessing a remote DB to extract the data?
    A simple mapping might have fewer source tables, while a complex mapping would have more source tables and more expressions.

    If you're an experienced PL/SQL (or other language) developer then you should adopt similar practices when designing OWB mappings, i.e. think reusability, modules, efficiency, etc. Generally, a single SQL statement is often more efficient than a PL/SQL procedure; in the same way, a single mapping (that results in a single INSERT or MERGE statement, as sketched below) will be more efficient than several mappings inserting into temp tables. However, it's often a balance between ease of understanding, performance and complexity.
    Pluggable mappings are a very useful tool for splitting complex mappings up; these can be 'wrapped' and tested individually, similar to a unit test, before testing the parent mapping. These components can then also be used in multiple mappings. I'd only recommend them from 10.2.0.3 onwards though, as before that I had a lot of issues with synchronisation etc.
    I tend to have one mapping per target and where possible avoid using a mapping to insert to multiple targets (easier to debug).
    From my experience with OWB 10, the code generated is good and reasonably optimised, the main exception that I've come across is when a dimension has multiple levels, OWB will generate a MERGE for each level which can kill performance.
    Cheers
    Si
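    To make the single-statement point above concrete, here is a minimal sketch of the kind of MERGE a single mapping would ideally generate (the table, column and DB link names are invented for illustration):

        MERGE INTO dim_customer tgt
        USING (SELECT customer_id, customer_name, region
                 FROM stg_customer@remote_db) src
           ON (tgt.customer_id = src.customer_id)
         WHEN MATCHED THEN
           UPDATE SET tgt.customer_name = src.customer_name,
                      tgt.region        = src.region
         WHEN NOT MATCHED THEN
           INSERT (customer_id, customer_name, region)
           VALUES (src.customer_id, src.customer_name, src.region);

    One statement like this reads the remote source and loads the target in a single pass, with no intermediate temp tables to populate and maintain.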

  • Vendor Creation Best Practices?

    Can anyone in the Purchasing department (i.e. someone who is able to issue POs) also have the ability to create vendors via MK01 (local) or XK01 (central)? I'm trying to understand the risks when someone who can issue POs can also set up vendors.
    Thanks in advance!
    Best, Michael

    Hi,
    This depends totally on the organisational set-up. Each company has its own checks for curtailing malpractice. There are also international standards available for controlling this.
    There is nothing wrong with the same person creating a vendor master and also issuing a PO, so long as the requisite checks and controls (e.g. release strategy or authorisation control, to name a few) are in place.
    If the organisation and the responsible/competent authorities are satisfied that a foolproof control mechanism is in place to prevent malpractice, it is fine for the same person to create vendor masters as well as POs (e.g. a small organisation may not have the luxury of keeping 2 different people for this activity).
    I hope you are getting my point. SAP is just an enabler for the business to run. The checks and controls have to be there anyway.
    Regards,
    Rajeev

  • File naming best practice

    Hello
    We are building a CMS system that uses BDB XML to store the individual xhtml pages, editorial content, config files etc. A container may contain tens of thousands of relatively small (<20 Kb) files.
    We're trying to weigh up the benefit of meaningful document names such as "about-us.xml" or "my-profile.xml" versus integer/long file names such as 4382 or 5693 without the .xml suffix, to make filename indexing as efficient as possible.
    In both cases the document names remain unique: appending '_1', '_2', etc. where necessary for the former, and always incrementing the latter by 1.
    There is a 'lookup' document that describes the hierarchy and relationships of these files (rather like a site map), so the name of the required document will be known in advance (as a reference in the lookup doc) and we believe we shouldn't need to index the file names. XQuery will run several lookups, but only based on the internal structure/content of the documents, not on the document names themselves.
    So is there any compelling reason not to use meaningful names in the container, even if there are > 50,000 documents?
    Thanks, David

    George,
    I was interested in finding out whether document names made of integers would be much more efficient - albeit less intuitive - in the name index than something like 'project_12345.xml'.
    We may need to return all documents of type 'project' so putting the word in the document name seemed like a good idea, but on reflection perhaps we're better off putting that info in the metadata, indexing that and leave the document name as a simple integer/long such as '12345'.
    If so, is it worth rolling my own integer-based counter to uniquely name documents, or am I better off just using the built-in method setGenerateName()? Is there likely to be much of a performance difference?
    Regards, David

  • Photo file management - best practice query

    A number of recent posts have discussed file management protocols, but I wondered if some of the experts on this forum would be so kind as to opine on a proposed system set up to better manage my photo files.
    I have an iMac, Time Machine and various external hard drives.
    I run Aperture 3 on my iMac (my main computer), with about 15k referenced images. Currently my photo masters are kept on my iMac, as are my Aperture library file and vault. After editing in Aperture, I then export the edited JPEGs onto another part of my iMac. The iMac is backed up to Time Machine and an off-site drive.
    Following some of the threads, the main message seems to be to take as many of my photo files as possible off my iMac and store them on a separate drive. So does the following setup sound better?
    *Aperture run on the iMac, still using referenced images
    *Master images moved from the iMac to external drive 1
    *Aperture library file moved from the iMac to external drive 1
    *Aperture vault moved to external drive 1
    *External drive 1 backed up to external drive 2. Run iDefrag on both drives regularly.
    *Edited exports from Aperture kept on the iMac, which then feed Apple TV, iPhone, MobileMe, etc. Backed up to Time Machine.
    *If I ever ran Aperture on an additional machine, presumably I could just plug and play / sync with external hard drive 1.
    Is that a "good" setup? Any enhancements? The setup would seem to free up my boot volume, whilst hopefully maintaining the safety/integrity of my files. But I'm happy to be told it is all wrong!!
    Many thanks
    Paul

    Seems to be a good approach. However, depending on how much disk space is on the local drive, the speed of the external drive, and the iMac's specs, you might keep the library on the iMac instead of the external drive, but keep the masters on the external. Assuming referenced masters, you could then keep the vault on the external (as is, I wouldn't put the vault on the same external drive as both the masters and the library).

  • Storing File path - Best Practice

    I have a db that stores the path of images that are loaded into a page. All the files are stored in the same folder. My question: is it better to store the entire file path in my db, or should I just store the file name and define a constant within the webpage calling the picture? I would think it's the second option, as it is less data in the db, but I just wanted to make sure.
    Thanks!

    If the path is always the same, I would store just the filenames. Another option is to create a new field in your table for the path, and assign the current path as the default value. When inserting records, just add the file name; the path field will just take the default value.
    In your SQL:
    SELECT *, CONCAT(path, filename) AS image
    FROM mytable
    If you already have records stored in the table, you can update them very quickly with two SQL queries:
    -- set the path column to the common image folder for every row
    UPDATE mytable SET path = '/path/to/images/folder/';
    -- then strip that same folder prefix from the stored filenames
    UPDATE mytable SET filename = REPLACE(filename, '/path/to/images/folder/', '');
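    For the 'default value' suggestion above, a minimal sketch of adding the column (MySQL-style syntax; the column size is an assumption):
    -- new path column defaults to the current folder, so inserts only need the file name
    ALTER TABLE mytable ADD COLUMN path VARCHAR(255) NOT NULL DEFAULT '/path/to/images/folder/';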

  • IS-Retail: Site Creation Best Practice

    Hi !
    I have 3 basic questions on this topic:
    1. In IS-Retail is a Site considered 'config' or 'master data'?
    2. Do all sites need to exist in the golden config client, or is it sufficient to have only the DCs (and not the Stores) in it?
    3. After go-live, is it ok to create new stores directly in production? (I hope the answer to this q is yes!)
    Thanks,
    Anisha.

    Hi, my answers to your questions are as follows:
    1) In IS-Retail a site is master data, but to create sites/DCs/stores you need to do some config in SPRO, like...
    number ranges for sites, account groups, creating DC and site profiles, and assigning account groups to them.
    Then you create a DC or site using WB01; from that point the site is maintained as master data.
    2) Yes, you should have all sites in the golden client (I am not completely clear on this question).
    3) After go-live it is better to create new stores in development and then export them (using WBTI/WBTE) to production; if any future changes require doing something to the stores, like assigning/deleting a merchandise category etc., you can then do it easily and avoid data mismatches in production.
    regards
    satish

  • BPC 5 - Best practices - Sample data file for Legal Consolidation

    Hi,
    we are following the steps indicated in the SAP BPC Best Practice: http://help.sap.com/bp_bpcv151/html/bpc.htm
    A Legal Consolidation prerequisite is to have the sample data file, which we do not have: "Consolidation Finance Data.xls"
    Does anybody have this file or know where to find it?
    Thanks for your time!
    Regards,
    Santiago

    Hi,
    From this address [https://websmp230.sap-ag.de/sap/bc/bsp/spn/download_basket/download.htm?objid=012002523100012218702007E&action=DL_DIRECT] you can obtain a .zip file for the Best Practice, including all scenarios and the CSV files (under the misc directory) used in these scenarios.
    Consolidation Finance Data.txt is in there also.
    Regards,
    ergin ozturk

  • SAP Best Practice for Water Utilities v 1.600

    Hi All,
    I want to install SAP Best Practice for Water Utilities v 1.600. I have downloaded the package (currently only Mat. No. 50079055 "Docu: SAP BP Water Utilities-V1.600" is available) from the Marketplace, but there is NO transport file included in it. It only contains documentation. Should I use the transport file from Best Practice for Utilities v 1.500?
    Thank you,
    Vladimir

    Hello!
    The file should contain eCATTs with data for the Best Practice preconfigured scenarios and the transactions to install them.
    Some information about the preconfigured scenarios can be found here:
    http://help.sap.com/bp_waterutilv1600/index.htm -> Business Information -> Preconfigured Scenarios
    Under the "Installation" path you can find the "Scenario Installation Guide" for Water Utilities.
    I hope this is helpful.
    Vladimir
