Best approach to replicate the data.

Hello everyone,
I want to know the best approach to replicate data.
First, let me clarify the scenario: I have one Oracle 10g Enterprise Edition database and 20 Oracle 10g Standard Edition databases. The Enterprise Edition database will run at the center, and the other 20 Standard Edition databases will run at the sites.
The data will move from the center to the sites and vice versa, but not between sites. There is only one schema to replicate, with more than 500 tables.
What will be best for replication (updatable MVs, Oracle Streams, or anything else)? It's urgent, please.
Thanks in advance.
Edited by: user560669 on Dec 13, 2009 11:01 AM

Hello,
Yes, MVs and Oracle Streams are the common ways to replicate data between databases.
I think that in your case (you have to replicate a whole schema) Oracle Streams is interesting (it's not so easy to maintain 500 MVs).
But you must take care with the edition type.
I'm not sure that Standard Edition allows the Advanced Replication features. It seems to me (but I may be wrong) that updatable MVs are an EE feature.
Streams, on the other hand, seems to be available even in SE.
Please find below some links about it:
[http://www.oracle.com/database/product_editions.html]
[http://www.oracle.com/technology/products/dataint/index.html]
Hope it can help,
Best regards,
Jean-Valentin
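
For reference, a minimal sketch of the updatable-MV approach discussed above, assuming a hypothetical table app.orders at the center and a database link center_link from each site (both names are placeholders; a real updatable MV also needs the Advanced Replication group setup, which is omitted here):

-- On the center (master) database: an MV log enables fast refresh
CREATE MATERIALIZED VIEW LOG ON app.orders WITH PRIMARY KEY;

-- On each site database: the updatable materialized view
CREATE MATERIALIZED VIEW app.orders
    REFRESH FAST FOR UPDATE
    AS SELECT * FROM app.orders@center_link;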

Similar Messages

  • What is the best way to mimic the data from production to another server?

    Hi,
    here we use Streams and Advanced Replication to send the data for 90% of the tables from production to another production database server, so if one goes down we can use the other one. Is there any better option than using Streams and replication? We are having a lot of problems with Streams these days; they keep breaking and we get calls.
    I heard about Data Guard but don't know what it is used for. Please advise on the best way to replicate the data.
    Thanks a lot.....

    RAC, Data Guard. The first one is active-active: you have two or more nodes accessing the same database on shared storage, and you get both HA and load balancing. The second is active-passive (unless you're on 11.2 with Active Data Guard or Snapshot Standby): one database is the primary and the other is a standby, which you normally cannot query or modify, but to which you can quickly switch if the primary fails. There's also Logical Standby - it's based on Streams and generally looks like what you seem to be using now (sort of), but it definitely has issues. You can also take a look at GoldenGate or SharePlex.
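    As a rough illustration of the physical standby variant (a hedged sketch only; 'stby' is a placeholder db_unique_name/service, and the full setup - standby redo logs, FAL parameters, password files - is omitted):

    -- On the primary: ship redo to the standby
    ALTER SYSTEM SET log_archive_dest_2 =
        'SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby';

    -- On the standby: start real-time apply
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;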

  • Best approach to get the source tables into Target

    Hi
    I am new to GoldenGate and I would like to know the best approach to get the source tables replicated into the target (Oracle to Oracle) before performing the initial load, without using exp/expdp. Is there any native GoldenGate utility which I can use during or before the initial load that will create the tables on the target before loading the data?
    Thanks

    I don't think so; for the initial-load replication your structure should already be available on the target machine. Once your machines are in sync, you can use the GoldenGate DDL setup to replicate tables together with data automatically.
    The better approach for you is to create the structure on the target machine using export/import. In the export, use content=metadata_only to copy the structure only, like this:
    expdp <<user>>/<<password>>@connection_string schemas=abc directory=TEST_DIR dumpfile=gg.dmp content=metadata_only logfile=gg.log
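    The matching import on the target would then be something like this (same assumptions as the export above; the TEST_DIR directory object must also exist on the target):
    impdp <<user>>/<<password>>@target_connection_string schemas=abc directory=TEST_DIR dumpfile=gg.dmp logfile=gg_imp.log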

  • Best Approach to Compare Current Data to New Data in a Record.

    Which is the best approach to comparing current data to new data for every single record before making updates?
    What I have is a table with 5,000 records that needs to be updated every week with data from a data feed, but I don't want to update every single record week after week. I want to update only records with different data.
    I can think of these options:
    1) two cursors (one for current data and one for new)
    2) one cursor for new data and one varray for current data
    3) one cursor for new data and a select into for current data.
    Thanks for your help.

    I don't recommend it over MERGE, but in theory you could use a checksum (OWA_OPT_LOCK.checksum()) on the rows and, if they differ, copy the new row into the destination table. Or you might be able to use ORA_ROWSCN to see if things have changed.
    Like I said, I don't know that I'd take either approach over MERGE, but they are options.
    Gaff
    Edited by: Gaff on Feb 11, 2009 3:25 PM
    Actually, comparing ROWSCN between 2 tables may not be an option. I know you can turn a ROWSCN into a time value, so if that ROWSCN is derived from anything but the column values of the row, that would foil the approach.
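    For what it's worth, a minimal sketch of the MERGE alternative mentioned above, assuming hypothetical tables target_t and feed_t keyed on id (note that <> skips NULLs, so a null-safe DECODE comparison is safer for nullable columns):

    MERGE INTO target_t t
    USING feed_t f
    ON (t.id = f.id)
    WHEN MATCHED THEN
        UPDATE SET t.col1 = f.col1, t.col2 = f.col2
        WHERE t.col1 <> f.col1 OR t.col2 <> f.col2
    WHEN NOT MATCHED THEN
        INSERT (id, col1, col2) VALUES (f.id, f.col1, f.col2);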

  • Replicate the data from oracle to sybase

    Hi, I have to replicate the data of some of the tables from Oracle 10g to Sybase.
    The data in these tables is live data (OLTP).
    Is there any mechanism like materialized views so that data is refreshed from Oracle to Sybase periodically?
    Please let me know the process for the same.

    Hello,
    as a starter you may want to read the following Metalink note:
    Note 283700.1: How to replicate Data between Oracle and a Foreign Datasource
    Please let me know whether this is helpful.
    Best regards
    Wolfgang Kobarg-Sachsse
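
    In the meantime, a rough sketch of one common pattern - pushing rows over a heterogeneous database link on a schedule. All names below are hypothetical, and the link itself must first be configured through an Oracle heterogeneous services gateway:

    -- A database link pointing at Sybase through the gateway
    CREATE DATABASE LINK sybase_link
        CONNECT TO "syb_user" IDENTIFIED BY "syb_pwd" USING 'gateway_sid';

    -- An hourly job that pushes recently changed rows
    BEGIN
        DBMS_SCHEDULER.create_job(
            job_name        => 'PUSH_TO_SYBASE',
            job_type        => 'PLSQL_BLOCK',
            job_action      => 'BEGIN INSERT INTO orders@sybase_link SELECT * FROM orders WHERE updated_at > SYSDATE - 1/24; END;',
            repeat_interval => 'FREQ=HOURLY',
            enabled         => TRUE);
    END;
    /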

  • Can anyone recommend an external storage device to supplement my MacBook Air? I already have a Time Capsule but want the new device to replicate the data on it, so ideally I would like a device that I can connect to the Time Capsule with a cable.

    Can anyone recommend an external storage device to supplement my MacBook Air? I already have a Time Capsule but want the new device to replicate the data on it, so ideally I would like a device that I can connect to the Time Capsule with a cable.

    Hi CJR
    I'm not sure how Time Capsule can be replicated onto your external drive, but there is a USB port on the back of the TC that can be hooked up to any recent external drive.
    Another thing to think about, besides a Time Capsule failure, is the theft of your Time Capsule, so it might be a good idea to keep any redundant back-up separate from your Time Capsule, or both may be stolen.
    TC will back up until full and then begin deleting the oldest back-ups.  Mine is a 2TB Time Capsule.
    My back-up plan is as follows:
    1.  MacBook Air back-up on Time Capsule through Time Machine, which backs up everything - I think it's about every 30 min or so.
    2.  Contacts, reminders, appointments, notes, and email are backed up through iCloud.
    3.  iTunes-purchased music is on iTunes Match (and needs no back-up), and any non-iTunes music is backed up on an HP external drive (in case Time Capsule dies).
    4.  Photos are on my MacBook Air and are backed up on my HP external drive.
    5.  Office documents (Excel, Powerpoint, Access etc.) are stored in a SkyDrive app (like iCloud), and SkyDrive syncs these documents with the MS cloud and my old PC, and does not alter their native format like storing in iCloud would do.
    Cheers

  • I am moving from PC to Mac. My PC has two internal drives and I have a 3TB external. What is the best way to move the data from the internal drives to the Mac, and the best way to make the external drive read/write without losing data?

    I am moving from PC to Mac. My PC has two internal drives and I have a 3TB external. What is the best way to move the data from the internal drives to the Mac, and the best way to make the external drive read/write without losing data?

    Paragon even has a non-destructive conversion utility if you do want to convert the drive.
    Hard to imagine a 3TB drive that isn't NTFS. The Mac uses GPT as the default partition type, as well as HFS+.
    www.paragon-software.com
    Some general Apple Help www.apple.com/support/
    Also,
    Mac OS X Help
    http://www.apple.com/support/macbasics/
    Isolating Issues in Mac OS
    http://support.apple.com/kb/TS1388
    https://www.apple.com/support/osx/
    https://www.apple.com/support/quickassist/
    http://www.apple.com/support/mac101/help/
    http://www.apple.com/support/mac101/tour/
    Get Help with your Product
    http://docs.info.apple.com/article.html?artnum=304725
    Apple Mac App Store
    https://discussions.apple.com/community/mac_app_store/using_mac_apple_store
    How to Buy Mac OS X Mountain Lion/Lion
    http://www.apple.com/osx/how-to-upgrade/
    TimeMachine 101
    https://support.apple.com/kb/HT1427
    http://www.apple.com/support/timemachine
    Mac OS X Community
    https://discussions.apple.com/community/mac_os

  • Best Way to port the data from one DB to another DB using Biztalk

    Hi,
    Please suggest the best way to move the data from one DB to another DB using BizTalk.
    Currently I am doing it like this: each transaction (coming from different source tables) arrives through a receive port, I do some mapping (custom logic for data mapping), then insert into the target normalized tables (multiple tables), and go back and update the status of the transaction in the source table in the source DB. It processes transactions one by one.
    How best can we do this with a bulk transfer and then update the status, since there are more than 10,000 transactions per call?
    Thanks,
    Vinoth

    Hi Vinoth,
    For SQL bulk inserts you can always use the SQL Bulk Load adapter:
    http://www.biztalkgurus.com/biztalk_server/biztalk_blogs/b/biztalksyn/archive/2005/10/23/processing-a-large-flat-file-message-with-biztalk-and-the-sqlbulkinsert-adapter.aspx
    However, even though the SQL Bulk Load adapter can efficiently insert a large amount of data into SQL, you are still stuck with the overhead of routing through the MessageBox database and the memory issues of dealing with really large messages.
    I would personally suggest you use SSIS, as you have mentioned that records have to be processed at a specific time of day as opposed to when the records become available.
    Please refer to this link for more information about SSIS: http://msdn.microsoft.com/en-us/library/ms141026.aspx
    If you have any more questions related to SSIS, please ask in the SSIS forum and you will get specific support.
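    For completeness, a minimal T-SQL sketch of the bulk-load-then-update pattern (staging table, file path, and column names are all hypothetical):

    -- Load the feed into a staging table in batches
    BULK INSERT dbo.stg_transactions
    FROM 'C:\feeds\transactions.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 10000);

    -- Then update the source status set-based instead of row by row
    UPDATE s
    SET s.status = 'PROCESSED'
    FROM dbo.source_transactions AS s
    JOIN dbo.stg_transactions AS g ON g.txn_id = s.txn_id;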
    Rachit

  • What is the best way to export the data out of BW into a flat file on the Server?

    Hi All,
    We are on BW 7.01 (EHP 1, Service Pack Level 7).
    As part of our BW project scope for our current release, we will be developing certain reports in BW, and for certain reports, the existing legacy reporting system based out of MS Access and the old version of Business Objects Release 2 would be used, with the needed data supplied from the BW system.
    What is the best way to export the data out of BW into a flat file on the Server at regular intervals using a process chain?
    Thanks in advance,
    - Shashi

    Hello Shashi,
    some comments:
    1) An "open hub license" is required for all processes that extract data from BW to a non-SAP system (including APD). Please check with your SAP Account Executive for details.
    2) The limitation of 16 key fields is only valid when using open hub for extracting to a DB table. There's no such limitation when writing files.
    3) Open hub is the recommended solution since it's the easiest to implement, no programming is required, and you don't have to worry much about scaling with higher data volumes (APD and CRM BAPI are quite different in all of these aspects).
    For completeness, here's the most recent documentation which also lists other options:
    http://help.sap.com/saphelp_nw73/helpdata/en/0a/0212b4335542a5ae2ecf9a51fbfc96/frameset.htm
    Regards,
    Marc
    SAP Customer Solution Adoption (CSA)

  • Best way to check the data

    I have made some changes to an existing view. The change adds a join and a new column from the new table. I have created a version 2 with all these changes. I want to make sure that the same data exists in both views. What is the best way to test the data?

    Bad wording; the query results could be:
    No rows selected: same number of rows in both views and the same data (considering view1's structure, i.e. the view without the new column).
    At least one row selected: if the count(*) on both views differed, the rows returned will be at least the difference. If the count was the same, then every returned row means there's no equivalent row in view2 and a data difference may exist in at least one of the fields, so you have to find the equivalent row in view2 to compare.
    On the query syntax: as view2 has a new column, and I guess it's not a constant value, you have to specify every column of view1 and the same columns for view2, so the new field isn't included in the comparison.
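    A hedged sketch of the comparison being described, assuming hypothetical views view1 and view2 sharing columns col1 and col2 (run the reverse MINUS as well to catch rows that exist only in view2):

    -- Rows in view1 with no matching row in view2 (new column excluded)
    SELECT col1, col2 FROM view1
    MINUS
    SELECT col1, col2 FROM view2;

    -- Quick row-count check
    SELECT (SELECT COUNT(*) FROM view1) AS v1_cnt,
           (SELECT COUNT(*) FROM view2) AS v2_cnt
    FROM dual;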

  • What is the best approach to return Large data from Stored Procedure ?

    No answers to my original post; maybe better luck this time. Thanks!
    We have a stored proc (Oracle 8i) that:
    1) receives some parameters;
    2) performs computations which create a large block of data;
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
    I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples on returning large data?
    If I switch to CLOB, will it speed up the process, be compatible with the callers, etc.? All the references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon
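
    A hedged sketch of the CLOB variant being considered (hypothetical table and column names; a temporary CLOB avoids both the LONG working buffer and the temp-table round trip, though driver support on an 8i-era stack still needs verifying):

    CREATE OR REPLACE PROCEDURE get_big_data (
        p_id  IN  NUMBER,
        p_out OUT CLOB
    ) AS
    BEGIN
        DBMS_LOB.CREATETEMPORARY(p_out, TRUE);
        FOR r IN (SELECT text_col FROM source_rows WHERE id = p_id) LOOP
            IF r.text_col IS NOT NULL THEN
                DBMS_LOB.WRITEAPPEND(p_out, LENGTH(r.text_col), r.text_col);
            END IF;
        END LOOP;
    END;
    /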

    Create a new farm in the secondary Data Center at the same patch level with the desired configuration. Replicate the databases using the method of choice (Mirroring, AlwaysOn, etc.). Create a downtime window where you can then attach the databases to the new farm's Web Application(s)/Service Application(s).
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • What's the best approach to resetting Calendar data on Server?

    I have a database format error in a calendar that I only noticed after the migration to Server on Yosemite.  I'll paste a snippet from the Error Log in at the bottom that shows the error - I've highlighted the description of the problem in red.
    I found a pretty cool writeup from Linc in a different thread, but it's aimed at fixing a similar problem for a local user on their own machine rather than an iCal server like what we're running.  Here's the link to that thread: Re: Calendar crashes on open  For example, does something like Calendar Cleaner work on our server database as well?
    In my case I think I'd basically like to gracefully remove all the Calendar databases from Server and start fresh (all the users' calendars are backed up on their local machines, so they can just import them into fresh/empty calendars once I've cleaned out the old stuff).  Any thoughts on "best approach" would be much appreciated.
    Here's the error log...
    File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py", line 1099, in _inlineCallbacks
    2015-01-31 07:14:41-0600 [-] [caldav-0]         result = g.send(result)
    2015-01-31 07:14:41-0600 [-] [caldav-0]       File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/txdav/caldav/datastore/sql.py", line 3635, in component
    2015-01-31 07:14:41-0600 [-] [caldav-0]         e, self._resourceID
    2015-01-31 07:14:41-0600 [-] [caldav-0]     txdav.common.icommondatastore.InternalDataStoreError: Data corruption detected (Invalid property: GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     VERSION:2.0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CALSCALE:GREGORIAN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     PRODID:-//Apple Inc.//Mac OS X 10.8.2//EN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTART:20121114T215900Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTEND:20121114T232700Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CLASS:PUBLIC
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CREATED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DESCRIPTION:Flight leg 2 of 2 for trip from MSP to LAX\\nhttp://www.google.
    2015-01-31 07:14:41-0600 [-] [caldav-0]      com/search?q=US+29+flight+status\\nBooked on November 8\\, 2012\\n
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTAMP:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LAST-MODIFIED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LOCATION:Sky Harbor International Airport\\, Phoenix\\, AZ
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SEQUENCE:0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     STATUS:CONFIRMED
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SUMMARY:US 29 from PHX to LAX
    2015-01-31 07:14:41-0600 [-] [caldav-0]     URL:http://www.hipmunk.com/flights/MSP-to-LAX#!dates=Nov14,Nov17&group=1&s
    2015-01-31 07:14:41-0600 [-] [caldav-0]      elected_flights=96f6fbfd91,be8b5c748d;kind=flight&locations=MSP,LAX&dates=
    2015-01-31 07:14:41-0600 [-] [caldav-0]      Nov14,Nov16&group=1&selected_flights=96f6fbfd91,
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-CALENDARSERVER-PERUSER-UID:D0737009-CBEE-4251-A288-E6FCE5E00752
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRANSP:OPAQUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACKNOWLEDGED:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACTION:AUDIO
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ATTACH:Basso
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRIGGER:-PT2H
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-APPLE-DEFAULT-ALARM:TRUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-WR-ALARMUID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ) in id: 3405
    2015-01-31 07:14:41-0600 [-] [caldav-0]    
    2015-01-31 07:16:39-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None
    2015-01-31 08:08:40-0600 [-] [caldav-1]  [AMP,client] [calendarserver.tools.purge#warn] Cleaning up future events for principal A95C9DB2-9757-46B2-ADF6-4DECE2728820 since they are no longer in directory
    2015-01-31 08:09:10-0600 [-] [caldav-1]  [-] [twext.enterprise.jobqueue#error] JobItem: 39, WorkItem: 762001 failed: ERROR:  canceling statement due to statement timeout
    2015-01-31 08:09:10-0600 [-] [caldav-1]    
    2015-01-31 08:13:40-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None

    <facepalm>  Well, there you go.  It turns out I was over-thinking this.  The Calendar app on a Mac can manage this database just fine.  Sorry about that.  There may be an easier way to do this, but here's how I did it.
    Use the Calendar.app on a local computer to:
    - Export the corrupted calendar to an ICS file on the local computer (Calendar -> File -> Export -> Export)
    - Create a new local calendar (Calendar -> File -> New Calendar -> On My Mac)
    - Import the corrupted calendar into the new/empty local calendar (Calendar -> File -> Import...)
    - Delete years and years of old events, including the one that was triggering that error message
    - Export the (now much smaller) local calendar to another ICS file on my computer (Calendar -> File -> Export -> Export)
    - Create a new calendar on the server (Calendar -> File -> New Calendar -> in the offending server-based iCal account)
    - Import the edited/fixed/smaller/no-longer-corrupted calendar into the new/empty server calendar (Calendar -> File -> Import...)
    - Make the newly-created iCal calendar the primary calendar (drag it to the top of the list of calendars on the server)
    - Delete the old/corrupted calendar (right-clicking on the bad calendar in the calendar list - you can only delete it once it's NOT the primary calendar any more)

  • Best practice to replicate the IDM (OIM 11.1.1.5.0) environment

    Dear all,
    I need to replicate the IDM production environment to build an IDM test environment. I have OIM 11.1.1.5.0 in production with lots of custom code.
    I have two approaches:
    Approach 1:
    Manually deploy the code through JDeveloper and export and import all the artifacts. The issue is that this will require a lot of time and resolving of dependencies.
    Approach 2:
    Take an export of the OIM, MDS and SOA schemas and import it into the new IDM test DB environment.
    Could you please suggest the best practice, and share any pointers to achieve the same?
    Appreciate your help.
    Thanks,

    Follow your build document, using the same steps you used to build production.
    You should know where all your code is. You can use the Deployment Manager to export your configurations, and export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Best approach to return Large data (> 4K) from Stored Proc

    We have a stored proc (Oracle 8i) that:
    1) receives some parameters;
    2) performs computations which create a large block of data;
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
    I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples on returning large data?
    If I switch to CLOB, will it speed up the process, be compatible with the callers, etc.? All the references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon


  • What's the best way to replicate the contents of servers?

    Here's the situation. I've got a client who wants to take his 5 existing servers - one a general file/app server, one an email (probably Exchange) server, one running MS SQL, and I'm not sure about the roles of servers 4 and 5. Needless to say, he wants to take the existing servers and relocate them to his home. He also wants to replace the servers with new versions. The old servers are running an older version of Windows Server (assuming 2003, but it could be newer) - but he IS willing to spend money upgrading the OS if need be.
    He wants to take the 5 servers and replicate the contents from the old servers to the new ones, then periodically update the old group with the updated data from the new group - so in case of disaster, he can quickly either move the old servers back to his office OR at least recover the data quickly by way of a sync session.
    What is the best way to go about this? Will he need to upgrade the OS on the old servers to make it happen? Money isn't much of an obstacle.

    Here's some info on migrating the file server.
    http://technet.microsoft.com/en-us/library/jj863566.aspx
    For Exchange migration I'd ask them here:
    http://social.technet.microsoft.com/Forums/exchange/en-us/home?forum=exchangesvrdeploy&filter=alltypes&sort=lastpostdesc
    The SQL part should be fairly simple backup/restore, but it's better to ask them over here:
    http://social.technet.microsoft.com/Forums/sqlserver/En-US/home?forum=sqlservermigration
    Regards, Dave Patrick ....
    Microsoft Certified Professional
    Microsoft MVP [Windows]
    Disclaimer: This posting is provided "AS IS" with no warranties or guarantees, and confers no rights.
