What is the recommended way to truncate tables in ODI?

I want to create a separate step to truncate the result tables, before the start of the actual job. What is the recommended way of doing this?
I am currently putting the truncate statements in an ODI procedure, but that involves a lot of typing. Is there an ODI command in the toolbox that I can use?
Thanks.

Ok,
If the table will be loaded by interfaces, you have the "Truncate" option on the IKMs; just change it to "Yes".
If you need to truncate tables that won't be loaded by interfaces, a possible way is:
- Requirement: the table names must share some common "string".
1) create a procedure
2) create a step
3) on the Source tab, put:
Select table_name from user_tables where table_name like '%THE_STRING%'
4) on the Target tab, put:
Truncate table #table_name
If you don't have a common "string", you can instead create a table holding all the table names you need to truncate, and change the select command in step 3).
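If it helps to see the same select-then-truncate pattern outside ODI, here is a minimal runnable sketch. It uses an in-memory SQLite database as a stand-in for the Oracle schema (SQLite has no TRUNCATE, so DELETE is substituted), and the table names and the "STG" naming convention are made up for the example:

```python
import sqlite3

# In-memory stand-in for the work schema (hypothetical table names).
conn = sqlite3.connect(":memory:")
for name in ("STG_ORDERS", "STG_CUSTOMERS", "DIM_DATE"):
    conn.execute(f"CREATE TABLE {name} (id INTEGER)")
    conn.execute(f"INSERT INTO {name} VALUES (1)")

# Step 3 equivalent: select the table names sharing the common string.
# (In Oracle: SELECT table_name FROM user_tables
#             WHERE table_name LIKE '%THE_STRING%')
rows = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' AND name LIKE '%STG%'"
).fetchall()

# Step 4 equivalent: one truncate per returned row (the #table_name binding).
for (table_name,) in rows:
    conn.execute(f"DELETE FROM {table_name}")  # TRUNCATE TABLE ... in Oracle
```

In ODI the source/target command pairing does the looping for you: the target command runs once per row returned by the source command.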
Does it help you?

Similar Messages

  • What is the recommended way for persisting JMS messages?

What is the recommended way for persisting JMS messages? As per the iMQ admin documentation, the default built-in persistence type, which uses Unix flat files, is much more efficient and faster than database persistence.
I tried setting up the JDBC configuration for database persistence on iAS 6.5, and I am getting the following error:
    [24/Apr/2002:16:09:20 PDT] [B1060]: Loading persistent data...
    [24/Apr/2002:16:09:21 PDT] Using plugged in persistent store: database connection
    url=jdbc:oracle:thin:@dbatool.mygazoo.com:1521:qa1 brokerid=ias01
    [24/Apr/2002:16:09:23 PDT] [B1039]: Broker "jmqbroker" ready.
    [24/Apr/2002:16:11:56 PDT] ERROR [B4012]: Failed to persist interest
    SystemManager%3ASystemManagerEngine%2BiMQ+Destination%0AgetName%28%29%3A%09%09SM_Response%0AClass%3A%09%09%09com.sun.messaging.Topic%0AgetVERSION%28%29%3A%09%092.0%0AisReadonly%28%29%3A%09%09false%0AgetProperties%28%29%3A%09%7BJMQDestinationName%3DSM_Response%2C+JMQDestinationDescription%3DA+Description+for+the+Destination+Object%7D:
    java.sql.SQLException: ORA-01401: inserted value too large for column
    [24/Apr/2002:16:11:56 PDT] WARNING [B2009]: Creation of consumer SM_Response to destination 1
    failed:com.sun.messaging.jmq.jmsserver.util.BrokerException: Failed to persist interest
    SystemManager%3ASystemManagerEngine%2BiMQ+Destination%0AgetName%28%29%3A%09%09SM_Response%0AClass%3A%09%09%09com.sun.messaging.Topic%0AgetVERSION%28%29%3A%09%092.0%0AisReadonly%28%29%3A%09%09false%0AgetProperties%28%29%3A%09%7BJMQDestinationName%3DSM_Response%2C+JMQDestinationDescription%3DA+Description+for+the+Destination+Object%7D:
    java.sql.SQLException: ORA-01401: inserted value too large for column
    Any thoughts?

    From the output, you are using imq 2.0. In that release
    the key used to persist a durable subscriber in the database
    table has a limit of 100 characters. The output shows that
    your value is:
    SystemManager%3ASystemManagerEngine%2BiMQ+Destination%0AgetName%28%29%3A%09%09SM_Res
    ponse%0AClass%3A%09%09%09com.sun.messaging.Topic%0AgetVERSION%28%29%3A%09%092.0%0Ais
    Readonly%28%29%3A%09%09false%0AgetProperties%28%29%3A%09%7BJMQDestinationName%3DSM_R
    esponse%2C+JMQDestinationDescription%3DA+Description+for+the+Destination+Object%7D:
    which is much longer than 100 characters.
    You might want to shorten the string you use for the
    durable name.
And yes, the default file-based persistence store is more efficient than plugged-in persistence through a database.
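A quick check outside imq confirms how far over the limit that key is. The string below is copied from the broker log above, and Python's urllib is used only to make the decoded form readable:

```python
from urllib.parse import unquote_plus

# Encoded durable-subscriber key copied from the broker log above
# (trailing ':' from the log line dropped).
key = (
    "SystemManager%3ASystemManagerEngine%2BiMQ+Destination%0A"
    "getName%28%29%3A%09%09SM_Response%0A"
    "Class%3A%09%09%09com.sun.messaging.Topic%0A"
    "getVERSION%28%29%3A%09%092.0%0A"
    "isReadonly%28%29%3A%09%09false%0A"
    "getProperties%28%29%3A%09%7BJMQDestinationName%3DSM_Response%2C"
    "+JMQDestinationDescription%3DA+Description+for+the+Destination+Object%7D"
)

print(len(key))           # well over the 100-character column limit
print(unquote_plus(key))  # decoded form shows what the key is built from
```

The decoded form shows the key embeds the full destination description, which is why shortening the durable name (or the destination description) brings it under the limit.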

  • What is the recommended way to upgrade system ruby and python binaries and their libraries?

I'd like to upgrade my system Ruby to 1.9.2 from the old 1.8.7. I was also wondering: what's the recommended way of upgrading any scripting languages installed on the system?

    Hi Sogaard,
    see below:
    How does SAP recommend to integrate Webi-report in the SAP Portal? Is it through an URL iview, the iview templates (thumbnail, folder and alert) or through the Master Iview?
The sample iViews are just samples for a particular case. The iView template allows you to use any application on the BusinessObjects system - including the viewing of content. What you are looking for is the creation process for WebI, so you have two options: you can use the KM integration, or you can build a Java application that offers the workflow you're looking for and integrate it with the iView template.
    Depending on what method should be used, I'd like to know which settings to be focused on. For instance, if the Master Iview is to be used, should opendocument be used instead of reports? And what other customizations will have to be done in order to integrate a Webi-report instead of a Crystal-Report?
    OpenDocument is for viewing reports. The Installation Guide outlines the integration of the iView template and the KM part.
    What is the intended use for the BOBJ repositoy manager? Would that be the way to integrate the Info-view?
This is the integration into the KM repository, which provides richer functionality than just an iView.
    Ingo

  • What is the recommended way of connecting to repository out of WebDAV, RMI, JNDI and JCA connector ?

    What is the recommended way of connecting to repository out of WebDAV, RMI, JNDI, and JCA connector possibilities provided by CQ 5.5?

    Hi dp_adusumalli,
    I recognized your list of ~8 questions you posted at around the same time, as I received that same list in our customer implementation from Arif A., from the India team, visiting San Jose. :-)
    I provided him feedback for most of the questions, so please check back with Arif for that info.
    For this particular question, can you provide specifics for the types of interactions you are interested in?
    Understanding the kinds of things you need to achieve will help determine which of the CQ/CRX interfaces is best suited for the task(s).
    I've collated a few points on this subject on this page:
    Manipulating the Adobe WEM/CQ JCR
    Regards,
    Paul

  • HT4796 I want to transfer my itunes from a Windows 7 machine to a new Mac, iTunes on Windows has all content on an external hard drive.  What is the recommended way to set up iTunes on my Mac so I don't lose access to my content

    I want to transfer my itunes from a Windows 7 machine to a new Mac, iTunes on Windows has all content on an external hard drive.  What is the recommended way to set up iTunes on my Mac so I don't lose access to my content

    iTunes: How to move your music to a new computer

  • What is the recommended way to do multiple channel, single point sampling for control with an NI PCI-6255 in RLP?

    Hello,
    I am writing a driver for the M-series NI PCI-6255 for QNX. I have downloaded the MHDDK and have all the examples working. I have also enhanced the examples to do interrupt handling (e.g. on AI_FIFO interrupt or DMA Ring Buffer interrupt). My ultimate goal is to write a driver that I can use for closed-loop control at 500 Hz using all 80 channels of the NI PCI-6255. I may also need to synchronize each scan with a NI PCIe-7841R card for which I've already written a driver. I want an interrupt-driven solution (be it programmed I/O on an interrupt or DMA that generates an interrupt) so that the CPU is available to other threads while the 80 analog inputs are being read (since it takes quite a while). I also want to minimize the number of interrupts. Basically, I will need to collect one sample from all 80 channels every 2 milliseconds.
    There are many different options available to do so, but what is the recommended technique for the NI PCI-6255 card? I tried using the AI FIFO interrupt without DMA, but it seems to interrupt as soon as any data is in the AI FIFO (i.e. not empty condition), rather than when all 80 channels are in the FIFO, so more interrupts are generated than necessary. I tried using DMA in Ring Buffer mode to collect a single sample of 80 channels and interrupting on the DMA Ring Buffer interrupt, which appears to work better except that this technique runs into problems if I cannot copy all the data out of the DMA buffer before the next AI scan begins (because the DMA will start overwriting the buffer as it is in ring buffer mode). If the DMA is not in ring buffer mode or I make the ring buffer larger than one 80-channel sample then I don't have a way to generate an interrupt when one sample has been acquired (which I need, because I'm doing control).
    I saw something in the documentation about a DMA Continue mode in which it looks like you can switch between two different buffers (by programming the Base Count/Address with a different address than the current address) automatically and thereby double-buffer the DMA but there is no real documentation or examples on this capability. However, I think it would work better than the Ring Buffer because I could interrupt on the DMA CONT flag presumably and be copying data out of one buffer while it is filling the other buffer.
    Another option would be DMA chaining, but again, I cannot find any information on these features specific to the NI DAQs.
    I tried interrupting on AI STOP figuring that I could get a single interrupt for each scan, but that doesn't appear to work as expected.
    I know that DAQmx on Windows has the ability to do such single sample, multiple channel tasks at a fixed rate so the hardware must support it.
    Any suggestions would be appreciated.
    Thanks.
    Daniel Madill

    Hello,
    The interrupt that will happen nearest the times that you need is the AI_Start_Interrupt in the Interrupt_A group. This interrupt will occur with each sample clock. By the second time this interrupt fires, the AI FIFO should have the samples from the first conversion. If it is easier to use programmed IO, you can read the samples out of the FIFO until you get all 80.
    Additionally, you can set the DMA to send samples as soon as the FIFO is no longer empty...instead of waiting for half full or full. This change will reduce latency for your control loop. You can set AI_FIFO_Mode in AI_Mode_3_Register to 0. By the second time this interrupt fires, you should be able to check how much data is in the DMA ring buffer and read the 80 samples when they are available. You can make the ring buffer larger than 80 samples if you see data getting overwritten.
    There is no interrupt associated with 80 samples being available in the FIFO or 80 samples being available/transferred by DMA to the host. X Series has much more flexibility with these interrupts.
    I hope this helps!
    Steven T.

  • What is the recommended way to obtain tracking data from carriers post XSI

We currently run an old version of SAP Business Connector. We are in the process of migrating all interfaces off BC onto PI. The one remaining interface we have problems with is the XSI (Express Delivery Interface) interface we have between ECC06 and UPS via the BC server. The interface works but is not stable, and we would like to decommission it if we can.
I'm not 100% clear, but it appears that XSI is no longer the recommended solution for obtaining tracking data from carriers. What is the recommended method today? We'd be happy to use a PI or ABAP solution, but would prefer a standard solution supported by SAP and UPS.


  • What's the best way to reorganize table or tablespace?

    Hi,
I have a database on Oracle 9iR2, and it runs 24*7,
so I can't shut it down to do reorganization.
I know some ways to do it, like exp/imp, CTAS, ALTER TABLE MOVE, etc.
Can anybody tell me the best way to do reorganization online,
and the advantages and disadvantages of those methods?
    Thanks & regards

Every approach has its pros and cons.
For example, with exp/imp you need to take the export, drop the schema, create the schema with all required privileges, and then import.
CTAS generates a lot of redo, and afterwards you need to re-create the integrity constraints and other dependent objects.
ALTER TABLE ... MOVE also generates a lot of redo, and apart from that, you need to rebuild the indexes.
I have the same situation in my office and I use the third option, ALTER TABLE ... MOVE. I wrote my own script along the following lines:
in the outer loop I move a single table, and then, in the inner loop, I rebuild all indexes of that table, so that the other tables remain accessible, and so on. I have had no problems.
If you want this script, mail me at [email protected] and I will send it to you.
    Jaffar
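A sketch of the kind of script Jaffar describes, reduced to just generating the DDL. The table, index, and tablespace names here are hypothetical; in a real script they would be fetched from user_tables and user_indexes:

```python
def reorg_statements(tables, indexes_by_table, tablespace):
    """Generate ALTER TABLE ... MOVE plus index rebuilds, one table at a
    time, so only the table currently being moved is affected."""
    stmts = []
    for table in tables:
        stmts.append(f"ALTER TABLE {table} MOVE TABLESPACE {tablespace}")
        # Moving a table invalidates its indexes, so rebuild them right away.
        for index in indexes_by_table.get(table, []):
            stmts.append(f"ALTER INDEX {index} REBUILD")
    return stmts

# Hypothetical example schema:
ddl = reorg_statements(
    tables=["EMP", "DEPT"],
    indexes_by_table={"EMP": ["EMP_PK", "EMP_NAME_IX"], "DEPT": ["DEPT_PK"]},
    tablespace="USERS",
)
for s in ddl:
    print(s)
```

Rebuilding each table's indexes immediately after its move (rather than moving everything first) is what keeps the window of unusable indexes per table as short as possible.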

  • What's the recommended way to run a WebLogic Server in the background?

I'm new to WebLogic Server and I've been looking at the documentation. There are instructions for starting and stopping servers on Linux, but they all seem to rely on foreground processes. What's the correct way to run these processes in the background in a production environment? Is it just by using nohup somewhere in a script in /etc/init.d/, or is there some other way I should be running it?

    Meatwad,
Of course, running the WLS processes using nohup would place the process in the background. However, the recommended way to run the WLS servers on a production system is to configure Node Manager and use that. It places the servers in the background but also provides some additional useful functionality - for instance, starting and stopping via the admin console, and the ability to auto-restart failed or stuck servers.
    For more information, please consult the documentation.
    http://docs.oracle.com/cd/E17904_01/web.1111/e13740/starting_nodemgr.htm

  • What is the recommended way to migrate a RH8 project to another PC?

I am hoping to get advice on the recommended way to migrate a RH8 project to another PC – the project has been upgraded from earlier versions of RH by other users and appears to be carrying a lot of baggage (strange and exotic files). I looked for an “export” function but failed to find any guidance on this in the forums. Hope you can help.

    Zip the folder on PC1. Copy the zip to PC2. Unzip it.
    Any tidying up has to be done manually. See Reports > Unused Files.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • What is the recommended way to connect my iMac to Fedora

    Hello,
Ever since OS X 10.8.2, NFS has vanished, so I can't connect to Fedora, where I have an NFS server.
So what is the best way to connect to Fedora from the iMac?
I don't want to have to keep connecting via "Finder -> Connect to Server" every single morning.
    Thank you,
    Kris

    Hello Kris,
    Easy way...
    http://www.bresink.com/osx/NFSManager.html
    Other ways...
    http://www.cyberciti.biz/faq/apple-mac-osx-nfs-mount-command-tutorial/
    http://wiki.xbmc.org/index.php?title=NFS

  • What is the recommended way to launch a web-start enabled Java application?

    Hello,
I have a simple web-start enabled Java application, which I can launch from a browser by entering:
https://xx.xx.x.xxx/MyApp/launch.html
This method shows me a page; I then have to click on a link to run my application.
    I noticed that I could also launch my program by entering :
    https://xx.xx.x.xxx/MyApp/launch.jnlp
    This method would run my application right away.
    I wonder if there is a recommended way to launch/run a web-start enabled Java application?
    Thank you,
    Akino

user8708553 wrote:
..to directly launch my application and bypass the HTML page, why is there a need to display the HTML page and make the user do a click?
There are a number of advantages to using the web page, including:
- An explanation to the end-user of what the application does (a 'sales pitch').
- Provision of screen shots of the app (more 'sales pitch').
- A description of what security environment it requires, and why.
- Access to using deployJava.js* to ensure the end-user actually has Java installed and has a suitable minimum version of Java, before they ever get access to the launch button/link.
    * http://download.oracle.com/javase/6/docs/technotes/guides/jweb/deployment_advice.html#deployingApplications

  • What is the recommended way to perform tape verification?

    I currently have 12 protection groups with a total of about 30 protected members.  I have the "Check backup for data integrity (time consuming operation)" option enabled for all jobs.  The problem is with the way that DPM 2012R2 performs
    verification.  Here is the chronology of backing up an SCCM server's SQL databases that I just witnessed:
    The summary of what DPM did is as follows:
    write and verify approximately 5.6 GB of data
    unload and load the same tape 8 times
    elapsed time: 43 minutes, 6 seconds
    average data rate: 2.2 megabytes per second
When doing verification, DPM unloads and loads the same tape once for each protected member. Obviously this doesn't scale. Furthermore, these unnecessary cycles of the tape loading mechanism will reduce the life of the tape library, because the mechanism has a mean time before failure measured in tape load cycles. So the question is: what is the currently recommended practice for achieving tape verification with DPM 2012R2?
I have read that "Tape verify jobs should be scheduled to start after all the tape backup jobs finish." Is the corollary to this statement to "disable Check backup for data integrity" on all protection groups? Also, if this is indeed the recommended practice, then how, exactly, do you "schedule a tape verify job to start after all the tape backup jobs finish"?
    Thanks for your help,
    Alex

    Ugh.  This is looking pretty awkward.  Here are the facts as I see them.  If you want to verify that the entire contents of a tape was correctly written you have three options:
    enable "Check backup for data integrity (time consuming operation)" for each PG that is written to the tape
    run Test-DPMTapeData for each recovery point on the tape
    recover each recovery point on the tape
    Each and every one of these options results in a minimum of one tape load and unload cycle per recovery point.  I am using a Dell TL2000 tape library and DPM 2012R2.  Based on the example in my original post, DPM was able to load, verify, and unload
    6 recovery points in 32 minutes for a rate of approximately 11 recovery points per hour.  If you are doing daily tape backups, then the absolute highest number of recovery points you can ever expect to verify is 11*24=264 recovery points.  This is
    so even if all of those recovery points are on a single tape and is a best-case scenario assuming 100% of the duty cycle of the tape library is dedicated to verification, which of course would never be the case in real life.
    If I have made a factual error here please correct me.  Assuming I have these facts correct, we can conclude the following:
    Using DPM 2012R2 there is no possible method to comprehensively verify the contents of daily tape backups if there are more than approximately 250 recovery points per day.
    Above that limit, the most verification you could hope for is spot-checking.  Furthermore, the life expectancy of a tape library is likely to be reduced to months from years if it is performing 250 tape load cycles every day.  This is rather an
    unacceptable result for an enterprise-class backup system.  The solution is straightforward: DPM should provide a means of verifying, copying, or recovering all recovery points on a single tape in a single load/read/unload cycle.
    Am I missing something here?  I just don't see how any form of substantial tape backup verification can work using DPM in its current form at scale.
    Alex
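The arithmetic behind the roughly-250-per-day ceiling, for anyone who wants to plug in their own library's numbers (the 6-points-in-32-minutes figure is from the test above):

```python
# Observed: 6 recovery points verified in 32 minutes, one load/unload each.
points, minutes = 6, 32
per_hour = points * 60 / minutes  # recovery points verified per hour
per_day = per_hour * 24           # best case: library does nothing else
print(per_hour, per_day)
```

The unrounded rate is 11.25 per hour, or 270 per day; the post rounds down to 11 per hour before multiplying, which gives the 264 figure. Either way, the ceiling is in the same neighborhood of ~250-270 recovery points per day.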

  • What is the best way to create tables in Illustrator cs5

    Hello everyone,
I am currently working on making a project board for a printer. It needs to have rows and columns so that a project can be tracked and assigned on a day-by-day basis. I figured I would just create a table for this. Is there a good way to create tables in Illustrator CS5?

1. Copy the table from Excel, paste it into Illustrator, and release the clipping mask (Clipping Mask -> Release).
2. Ungroup! The group may not be fully released, so ungroup twice.
3. Execute the JavaScript.
* The document mode must be CMYK.
* All layers must be on and unlocked.
* As shown above, the lines of the Excel table must be black.
http://uadream0.blog.me/70144704771 <--- JavaScript download blog

  • What is the recommended way to back up HD movies downloaded from iTunes?

They don't fit on a standard data DVD, and when I try to back them up to my 500GB external hard drive it says it's not formatted for files that size.

    Using Time Machine is the simplest way to back up.
    debasencio wrote:
They don't fit on a standard data DVD, and when I try to back them up to my 500GB external hard drive it says it's not formatted for files that size.
How is the drive formatted?
If it's DOS/FAT32, it can only hold files up to 4GB.
    If you are using it on a Mac only, format it Mac OS X HFS+.
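The 4GB figure comes from FAT32's maximum file size of 2^32 - 1 bytes (just under 4 GiB). A quick sketch of the check, with hypothetical movie sizes:

```python
FAT32_MAX_FILE = 2**32 - 1  # maximum file size on a FAT32 volume, in bytes

def fits_on_fat32(size_bytes):
    """Return True if a file of this size can be stored on FAT32."""
    return size_bytes <= FAT32_MAX_FILE

# A hypothetical 5 GiB HD movie will not fit; a 3 GiB one will.
print(fits_on_fat32(5 * 1024**3))
print(fits_on_fat32(3 * 1024**3))
```

This is a volume-format limit, not a drive limit, which is why reformatting the same 500GB drive as HFS+ makes the large files fit.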
