Best practice on frequent updates 11g

I have some source tables I want to get updates from every 20 minutes. What is the best approach in ODI? The tables have several hundred thousand records, and they do have a last_updated_date column.

ss396s wrote:
That is an option, but there are some sources where I may not be able to set that up.
I was thinking of keeping a last run date or last id in a table, adding a filter on the source with the variable... Anyone using methods like this?

Yep - grab the max last-updated date from the target table and hold it in a variable, then use this variable as a filter in the interfaces to only pull the relevant rows.
Alternatively, use a control table to hold the date: at the end of the ETL run, update this control table with an ODI procedure, writing the variable value to the table, and on the next run read this value back into the variable to start the loop again.
HTH,
Alastair
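The control-table pattern Alastair describes can be sketched outside ODI as well. The following is a minimal illustration using SQLite in place of the real source and target schemas; the table and column names (etl_control, src_orders, last_updated_date) are assumptions for the example, not ODI objects.

```python
import sqlite3
from datetime import datetime, timedelta

# Stand-in schemas: a control table holding the watermark, and a source table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE etl_control (job TEXT PRIMARY KEY, last_run TEXT)")
conn.execute("CREATE TABLE src_orders (id INTEGER, last_updated_date TEXT)")

now = datetime(2024, 1, 1, 12, 0, 0)
conn.execute("INSERT INTO etl_control VALUES ('orders_load', ?)",
             ((now - timedelta(minutes=20)).isoformat(),))
conn.executemany("INSERT INTO src_orders VALUES (?, ?)",
                 [(1, (now - timedelta(minutes=30)).isoformat()),   # already loaded
                  (2, (now - timedelta(minutes=5)).isoformat())])   # new this cycle

def incremental_extract(conn, job, as_of):
    # 1. Refresh the "variable" from the control table (an ODI variable refresh step).
    (last_run,) = conn.execute(
        "SELECT last_run FROM etl_control WHERE job = ?", (job,)).fetchone()
    # 2. Use it as a source filter, exactly like a filter on the interface.
    rows = conn.execute(
        "SELECT id FROM src_orders WHERE last_updated_date > ?",
        (last_run,)).fetchall()
    # 3. At the end of the run, advance the watermark (the ODI procedure step).
    conn.execute("UPDATE etl_control SET last_run = ? WHERE job = ?",
                 (as_of.isoformat(), job))
    return [r[0] for r in rows]

print(incremental_extract(conn, "orders_load", now))  # only row 2 qualifies
```

Each run only scans rows newer than the stored watermark, so a 20-minute cycle touches a small slice of the several hundred thousand records rather than the whole table.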

Similar Messages

  • Best practice for install oracle 11g r2 on Windows Server 2008 r2

    Dear all,
May I know what is the best practice for installing Oracle 11g R2 on Windows Server 2008 R2? Should I create a special Windows account for the Oracle database installation? What permissions should I grant to the folders where Oracle is installed and where the database-related files are located (datafiles, controlfiles, etc.)?
Just grant Full Control to Administrators and System and remove permissions for all other accounts?
Also, how should I configure the Windows firewall to allow clients to connect to the database?
    Thanks for your help.

    Hi Christian,
    Check this on MOS
    *RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows) [ID 811271.1]*
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=811271.1
    DOC Modified: 14-DEC-2010
    Regards,
    Levi Pereira

  • Best practice for auto update flex web applications

    Hi all
is there a best practice for auto-updating Flex web applications, much in the same way AIR applications have an auto-update mechanism?
    can you please point me to the right direction?
    cheers
    Yariv

    Hey drkstr
I'm talking about a more complex mechanism that can handle updates to modules being loaded into the application, etc.
I can always query the server for the version and prevent loading from cache when a module needs to be updated,
but I was hoping for something easy like the AIR auto-update feature.

  • Best practice for installation oracle 11g rac on windows 2008 server x64

    hello!
can somebody tell me a good book or another kind of literature regarding "best practice for installation of oracle 11g rac on windows 2008 server x64"? thx in advance!
    best regards,
    christian

    Hi Christian,
    Check this on MOS
    *RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows) [ID 811271.1]*
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=811271.1
    DOC Modified: 14-DEC-2010
    Regards,
    Levi Pereira

  • Best Practice for Software Update Structure?

    Is there a best practice guide for Software Update Structure?  Thanks.  I would like to keep this neat and organized.  I would also like to have a test folder for updates with test group.  Thanks.

    Hi,
Meanwhile, please refer to the following blog to get more inspiration.
    Managing Software Updates in Configuration Manager 2012
    http://blogs.technet.com/b/server-cloud/archive/2012/02/20/managing-software-updates-in-configuration-manager-2012.aspx

  • Best Practice for CQ Updates in complex installations (clustering, replication)?

    Hi everybody,
we are planning a production setup of CQ 5.5 with an authoring cluster replicating to 4 publisher instances. We were wondering what the best update process looks like in a scenario like this. Let's say we need to install the latest CQ 5 update (which we actually have to):
    Do we need to do this on every single instance, or can replication be utilized to distribute updates?
    If updating a cluster - same question: one instance at a time? Just one, and the cluster does the rest?
    The question is really: can update packages (official or custom) be automatically distributed to multiple instances? If yes, is there a "best practice" way to do this?
    Thanks for any help on this!
    Henning

    Hi Henning,
The CQ 5.5 service packs are distributed as CRX packages. You can replicate these packages, and on the publish instances they are unpacked and installed.
In a cluster the situation is different: you have only one repository. So when you have installed the service pack on one node, the new versions of bundles and other content are unpacked into the repository (most likely to /libs). Then the magic (essentially the JcrInstaller) takes care that the bundles are extracted and started.
I would not recommend activating the service pack in a production environment, because then all publish instances will be updated at the same time. And as a restart is required, you might encounter downtime. Of course you can make it work if you play with the replication agents :-)
    cheers,
    Jörg

  • Best Practice for Expired updates cleanup in SCCM 2012 SP1 R2

    Hello,
    I am looking for assistance in finding a best practice method for dealing with expired updates in SCCM SP1 R2. I have read a blog post: http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    I have been led to believe there may be a better method, or a more up to date best practice process in dealing with expired updates.
On one side I was hoping to keep a software update group intact, to have a history of what was deployed, but I also want to keep things clean and avoid issues down the road, as I used to have in 2007 with expired updates.
    Any assistance would be greatly appreciated!
    Thanks,
    Sean

The best idea is still to remove expired updates from software update groups. The process described in that post is still how it works. That also means that if you don't remove the expired updates from your software update groups, the expired updates will still show...
    To automatically remove the expired updates from a software update group, have a look at this script:
    http://www.scconfigmgr.com/2014/11/18/remove-expired-and-superseded-updates-from-a-software-update-group-with-powershell/
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • Best Practice / Solutions for using 11g DB+x86 or Small Computer to build iaas/paas?

    My customer wants to build their own iaas/paas using Oracle 11g DB, plus x86 or other small computer, running Linux or Solaris or Unix OS.
    Oracle Exadata is not feasible for them to use currently.
    Customer wants to know whether there are other customers have implemented their cloud solution based on these or not?
    If yes, would like to share the experience, presentation slides, best practices etc.
    Is there an Oracle email DL for asking this kind of question?
    Thanks,
    Boris

Like Rick, I'm not aware of a specific "cloud implementors forum". Internally, Oracle has lots of material on implementing cloud on any platform at all, although obviously we feel Engineered Systems are the most cost-effective solution for many customers. Are you interested in IaaS, i.e. virtualised hardware, or PaaS, i.e. DBaaS? They should not be confused, and neither is required for the other; in fact, using IaaS to implement "DBaaS", as the OpenStack Trove API attempts to do, is probably the most counter-productive way to go about it. Define the business-visible services you will be offering, and then design the most efficient means of supporting them. That way you gain from economies of scale, and you set up appropriate management systems that address issues like patching, security, database virtualisation and so on.

  • Best practice deploying additional updates

Hello, what is the best practice concerning monthly Windows updates? We are currently adding additional Windows updates to the existing single package and updating the content on the DPs. However, this seems to work with inconsistent results -
DPs are not finalising content.
At other places I have worked, we would create a separate package each month for additional updates and never had an issue. Any thoughts?
    SCCM Deployment Technician

The documented best practices are all related to the maximum number of patches that are part of one deployment. That number should not pass 1,000.
Remember this is a hard limit of 1,000 updates per Software Update Group (not deployment package). It's quite legitimate to use a single deployment package.
I usually create static historical Software Update Groups at a point in time (e.g. November 2014). In this case it is not possible to have a single SUG for all products (Windows 7 has over 600 updates, for example), so you have to split them. I deploy these updates (to pilot and production) and leave the deployments in place. Then I create an ADR which creates a new SUG each month and deploys it (to pilot and production).
    You can use a single deployment package for all the above.
Gerry Hampson | Blog: www.gerryhampsoncm.blogspot.ie | LinkedIn: Gerry Hampson | Twitter: @gerryhampson

  • Best practice for windows updates

    Hello,
I'm new to ZPM (10.3.2) and have been searching for some kind of best practice on how to install Windows updates with it. If somebody has advice or could point me in the right direction, it would be greatly appreciated.
    thx
    -jarkko-

    leppja,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://forums.novell.com/

  • Best practice on drivers update

How do you update drivers?
There are so many different types and models of computers. Should drivers be updated with .inf files or .exe packages?

You can do it in two ways:
Application model
Package / Program
I don't know if there is really a 'best practice', though. The driver updating itself is done with a neat little utility called devcon.exe (more info on the syntax it uses here:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff544746%28v=vs.85%29.aspx). For the correct version of the utility, you need to download the Windows Driver Kit for the version of Windows you will be deploying the updated drivers to (http://msdn.microsoft.com/en-US/windows/hardware/gg454513).
Devcon is used for the .inf file updates; for the .exe-packaged drivers you simply run the .exe silently. Jörgen has done an excellent sample of this with the application model:
    http://ccmexec.com/2013/10/update-a-device-driver-configuration-manager-2012/
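The two delivery styles above (devcon for .inf files, silent execution for .exe packages) could be dispatched from one small helper. This is a hypothetical sketch: the paths and hardware ID are made up, and the /S silent switch is an assumption you would verify per vendor package.

```python
from pathlib import Path

def build_driver_command(driver, hardware_id=""):
    """Return the command line to install one driver package (not executed here)."""
    path = Path(driver)
    suffix = path.suffix.lower()
    if suffix == ".inf":
        if not hardware_id:
            raise ValueError("devcon update needs the device hardware ID")
        # devcon syntax: devcon update <inf-file> <hardware-id>
        return ["devcon.exe", "update", str(path), hardware_id]
    if suffix == ".exe":
        # Assumed vendor silent switch - check each package's documentation.
        return [str(path), "/S"]
    raise ValueError("unsupported driver package: %s" % driver)

print(build_driver_command(r"C:\drivers\net\netwlan.inf", r"PCI\VEN_8086&DEV_0085"))
```

In a real deployment you would pass the resulting list to subprocess.run() on the target machine; building the command separately keeps the .inf/.exe decision testable.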

  • Best practice for frequently needed config settings?

    I have a command-line tool I wrote to keep track of (primarily) everything I eat and drink in the course of the day.  Obviously, therefore, I run this program many times every day.
    The program reads a keyfile and parses the options defined therein.  It strikes me that this may be awfully inefficient to open the file every time, read it, parse options, etc., before even doing anything with command-line input.  My computer is pretty powerful so it's not actually a problem, per se, but I do always want to become a better programmer, so I'm wondering whether there's a "better" way to do this, for example some way of keeping settings available without having to read them every single time.  A daemon, maybe?  I suspect that simply defining a whole bunch of environment variables would not be a best practice.
    The program is written in Perl, but I have no objection to porting it to something else; Perl just happens to be very easy to use for handling a lot of text, as the program relies heavily on regexes.  I don't think the actual code of the thing is important to my question, but if you're curious, it's at my github.  (Keep in mind I'm strictly an amateur, so there are probably some pretty silly things in my code.)
    Thanks for any input and ideas.

    There are some ways around this, but it really depends on the type of data.
    Options I can think of are the following:
1) Read a file at every startup, as you are already doing. This is extremely common - look around at the tools you have installed; many of them have an rc file. You can always strive to make this reading more efficient, but under most circumstances reading a file at startup is perfectly legitimate.
2) Run in the background or as a daemon, which you also note.
3) Similar to #1, save the data in a file, but instead of parsing it each time, save it as a binary. If your data can all be stored in some nice data structure in the code, in most languages you can just write the block of memory occupied by that data structure to a file; then on startup you just transfer the contents of the file to a block of allocated memory. This is quite doable - but for a vast majority of situations this would be a bad approach (IMHO). The data have to be structured in such a way that they occupy one continuous memory block, and depending on the size of the data block this in itself may be impractical or impossible. Also, you'd need a good amount of error checking, or you'd simply have to "trust" that nothing could ever go wrong in your binary file.
So, all in all, I'd say go with #1, but spend time tuning your file read/write procedures to be efficient. Sometimes a lexer (GNU flex) is good for this, but often it is also overkill, and a well-written series of if(strncmp(...)) statements will be better*.
Bear in mind, though, this is from another amateur. I code for fun - and some of my code has found use - but it is far from my day job.
edit: *note - that is a C example, and the flex library is easily used in C. I'd be surprised if there are not Perl bindings for flex, but I very rarely use Perl. As an afterthought, I'd be surprised if flex is even all that useful in Perl, given Perl's built-in regex abilities. After-afterthought, I would not be surprised if Perl itself were built on some version of flex.
edit2: also, I doubt environment variables would be a good way to go. That seems to require more system calls and more overhead than just reading from a config file. Environment variables are a handy way for several programs to access/change the same setting - but for a single program they don't make much sense to me.
    Last edited by Trilby (2012-07-01 15:34:43)
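A safer variant of option #3 is to serialize the parsed options rather than dump a raw memory block, re-parsing the text config only when it is newer than the cache. This sketch uses Python's pickle for the cache; the "key = value" config grammar and file names are illustrative assumptions, not the poster's actual format.

```python
import os
import pickle

def load_config(rc_path, cache_path):
    """Return parsed options, using a pickled cache when it is up to date."""
    try:
        if os.path.getmtime(cache_path) >= os.path.getmtime(rc_path):
            with open(cache_path, "rb") as f:
                return pickle.load(f)          # fast path: reuse the cached parse
    except (OSError, pickle.PickleError):
        pass                                   # missing or stale cache: re-parse
    options = {}
    with open(rc_path) as f:
        for line in f:                         # "key = value" lines, '#' comments
            line = line.split("#", 1)[0].strip()
            if "=" in line:
                key, _, value = line.partition("=")
                options[key.strip()] = value.strip()
    with open(cache_path, "wb") as f:
        pickle.dump(options, f)                # refresh the cache for next run
    return options
```

For a tool run many times a day this trades one stat() call against a full re-parse on the common path, while editing the rc file still takes effect on the next run.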

  • Best practice for the Update of SAP GRC CC Rule Set

    Hi GRC experts,
We have in a CC production system an SoD matrix that we would like to modify extensively - basically by activating many permissions.
What is the best practice for accomplishing our goal?
    Many thanks in advance. Best regards,
      Imanol

    Hi Simon and Amir
My name is Connie and I work in the Accenture GRC practice (and am a colleague of Imanol's). I have been reading this thread and I would like to ask you a question related to this topic. We have a case with a Global Rule Set "Logic System", and we may also need to create a Specific Rule Set. Is there a document (from SAP or from best practices) that indicates the potential impact (regarding risk analysis, system performance, process execution time, etc.) caused by implementing both types of rule sets in a production environment? Are there any special considerations to be aware of? Have you ever implemented this type of scenario?
    I would really appreciate your help and if you could point me to specific documentation could be of great assistance. Thanks in advance and best regards,
    Connie

  • Redo log best practice for performace - ASM 11g

    Hi All,
I am new to the ASM world... can you please give your valuable suggestions on the topic below?
What is the best practice for online redo log files? I read somewhere that Oracle recommends having only two disk groups: one for all the DB files, control files and online redo log files, and another disk group for recovery files, like multiplexed redo files, archive log files, etc.
Will there be any performance improvement from making a separate disk group for online redo logs (separate from datafiles, control files, etc.)?
I am looking for an Oracle document on best practice for redo logs (performance) on ASM.
    Please share your valuable views on this.
    Regards.

ASM is only a filesystem.
What really counts for I/O performance is the storage design, i.e. the RAID level used, array sharing, hard disk RPM, and so on.
ASM itself is only a layer that reads and writes ASM disks, which means that if your storage design is OK, ASM will be OK. (Of course there are best practices for ASM, but storage must come first.)
E.g., is there any performance improvement from making a separate diskgroup?
It depends... and this leads to another question:
are the LUNs on the same array?
If yes, the performance will end up at the same point, no matter whether you have one or many diskgroups.
Comment added:
Comparing ASM to Filesystem in benchmarks (Doc ID 1153664.1)
Your main concern should be how many IOPS, what latency and what throughput the storage can give you. Based on these values you will know if your redo logs will be fine.

  • Best Practices for Batch Updates, Inserts and Complex Queries

The approach we have taken for our ALDSP architecture is to model our data services (DSes) as business data objects, each DS joining several (sometimes many) tables and lookups. This works OK when we need to access individual records for read and update, but when we need to update multiple tables and rows within the same commit, trying to do this with a single logical DS built on tables or other DSes proves both cumbersome and slow. This is also the case for queries, when we have complex where clauses within a DS built upon two or more multi-table-joined logical DSes.
We tried a DS built on SQL, but that does not allow DML operations. We may have to just use JDBC. Any thoughts on how best to leverage the DS in this respect?

I tried doing this by creating a UO class and using it on a DS built on a SQL statement. What we wanted to do here is first read the DS to get a list of ID values that met the conditions of the query, then call submit() and have the UO update all the necessary tables associated with those IDs.
However, we found that the UO never gets called unless you actually update something, not just call submit() after a read. Did I misunderstand the way this should work?
