What is the best practice when any of the Src/Tgt DBs is restarted in Streams?

We have a live production bidirectional Streams environment (A --> B and B --> A). Due to a corrupt DBF file at source A, instance A was brought down and all traffic was switched to B. All Streams capture, propagation, and apply processes were enabled, and messages were captured at B and propagated toward A (but they could not reach A and be applied there, since it was down). When A was restarted, some of the messages captured at B never got applied to A. What could be the possible reason? And what is the best practice in a Streams environment when any source/target instance is shut down and restarted?

Hi Serge,
A specific data file got corrupted and they restored it. Can you please send me the URL for the MetaLink document about that bug in 9.2? I'd really appreciate your help on this.
Thx,
Amal
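
A note on diagnosing cases like this: a reasonable first step after A comes back is to check the state of every capture, propagation, and apply process, and then look for transactions parked in the apply error queue, since messages that arrive but fail to apply land there rather than disappearing. Below is a minimal sketch of those checks using python-oracledb (connection details are placeholders; on a 9.2-era system you would run the same queries from SQL*Plus):

import oracledb

# placeholders: adjust user/password/dsn for the Streams administrator at site A
conn = oracledb.connect(user="strmadmin", password="secret", dsn="siteA/orcl")
cur = conn.cursor()

# status of each Streams component (each should be ENABLED after the restart)
for view, cols in [("dba_capture", "capture_name, status"),
                   ("dba_propagation", "propagation_name, status"),
                   ("dba_apply", "apply_name, status")]:
    cur.execute("SELECT " + cols + " FROM " + view)
    print(view, cur.fetchall())

# transactions that reached A but failed to apply wait in the error queue
cur.execute("SELECT apply_name, source_transaction_id, error_message FROM dba_apply_error")
for row in cur:
    print(row)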

Similar Messages

  • What is the best practice when distributing a desktop application which uses SMO

    I have a WPF application which installs a web site. One of the installation steps is to execute a number of SQL scripts which install the website's database. The database server is not necessarily the same machine the product is installed on - in most cases it's a different server on the same network. All the scripts are generated by a build of a database project (Visual Studio 2012 DB project) - hence all of them are in sqlcmd mode.
    I can work around SQL variables (the ":variable" type of creature) by making some simple text replacements. The big problem is with the "GO" statements. At first I thought I could split the script into many subscripts using a regex split operation with "GO" as the separator - this did not work; having a word like "Polygon" in the script causes the whole operation to fail. Unfortunately I have no control over the content of the script, so if someone decides to put in a comment like /* If you do not know how to use this script then GO TO HELL */, it would break the installation process.
    I then tried executing the whole script (with "GO" statements) using SMO. This works well on my development machine, but once I try it on a server the application falls over with missing DLLs. For Microsoft.SqlServer.ConnectionInfo, Microsoft.SqlServer.Smo, Microsoft.SqlServer.SqlClrProvider and Microsoft.SqlServer.SqlEnum the solution is simple - I can just set "Copy Local" to true and those DLLs will be distributed with the application.
    The big problem is with Microsoft.SqlServer.SqlClrProvider.dll. OK, I can get this library from my local machine - but I am not sure which SQL version it will be used with. I can't include more than one of those DLLs, as all of them have the same name.
    I know the official MS line is to install the SQL feature pack (http://msdn.microsoft.com/en-us/library/ff713979.aspx). But again - I do not know which version of SQL my application will be working with, and I need to make the installation process as simple as possible.
    My question is: is there a way I can distribute a desktop application (which makes use of SMO) without any prerequisites, for all versions of SQL Server? If there is no such thing, then using SMO is pretty much pointless, as it cannot be distributed - and even then it's not version agnostic... Thanks for any help!

    I agree with Olaf when it comes to ensuring that you can interact with any version of SQL Server.
    The problem you will run into, if you are not controlling the version of SQL Server, is the version of SMO. If you use SMO version 10.5 and need to deploy via SMO to SQL 2012, it will not work. SMO will always be backward compatible but not forward compatible.
    So you would have to have a maximum version of SQL Server your setup would deploy to in order to control errors and failures. If you distribute SQL 2012 SMO, then the maximum version for the deploy would be SQL 2012.
    So using a tool that is version agnostic is the way to go; in .NET you can also use System.Data.SqlClient to execute statements against SQL Server, and that is version agnostic too.
    Ben Miller - SQL Server MVP, SQL MCM 2008 - @DBADuck http://www.dbaduck.com
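
    On the original splitting problem: sqlcmd treats GO as a batch separator only when it stands on a line of its own (optionally followed by a repeat count), so a line-based split avoids false positives like "Polygon" or a GO buried inside a comment on the same line. A minimal sketch of that idea in Python (the helper name is made up; note a GO alone on its own line inside a /* block comment */ can still fool it):

    import re

    def split_batches(script: str) -> list:
        """Split a T-SQL script into batches on standalone GO lines."""
        batches, current = [], []
        for line in script.splitlines():
            # GO is a separator only when alone on the line (optional count)
            if re.match(r"^\s*GO(\s+\d+)?\s*$", line, re.IGNORECASE):
                batch = "\n".join(current).strip()
                if batch:
                    batches.append(batch)
                current = []
            else:
                current.append(line)
        tail = "\n".join(current).strip()
        if tail:
            batches.append(tail)
        return batches

    Each resulting batch can then be executed one at a time over a plain connection (SqlClient in .NET), which sidesteps the SMO versioning problem entirely.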

  • PHP 5.6.8 to 5.5.x. What is the best practice when trying to do so?

    Hi everyone,
    I've been using Arch for a while and I try hard to keep it simple.
    Most of the time I'm able to do so, but right now I'm not sure what I should do.
    I have a LAMP stack installed using PHP 5.6.8.
    What I don't like about it is that most rolling-release Linux distributions just don't care about the fact that most PHP software (unless it is very simple) will break due to the poor backward compatibility of PHP itself.
    Ideally, a distribution would package all currently maintained versions of software like PHP (5.6/5.5/5.4), but this is not the case....
    So, basically, I can't run Magento on it.
    I have been investigating my options and I ended up here:
    https://wiki.archlinux.org/index.php/Ar … ck_Machine
    It seems that it expects me to have the old version in the pacman cache, which I don't have (I was using Vagrant with VirtualBox before).
    Also, it seems that I could install it from the AUR, but I can't find a 5.5.x package.
    So, what can I do?
    It seems that there is no Arch setup capable of running Magento at this point?
    Is it really my only option to do everything from scratch?
    Sorry if I'm missing something obvious.


  • When displaying just one WDiView in the Portal, what is the best practice?

    Hi,
    I'm configuring some roles to display Web Dynpro iViews, and I'm concerned because when a page has to display just one iView, I don't create the page; instead I attach the iView directly to the role, and the iView displays correctly. My concern arose when I checked the SAP standard roles, WD iViews and WD pages: even when there is just one WD iView to be displayed, a WD page is created to display it and that WD page is assigned to the role.
    Does anyone know what the best practice is in this case?
    Thanx in advance and kind regards,
    Gerardo J

    There is no harm in assigning an iView directly to a role, but assigning the iView to a page, the page to a workset, and the workset to a role is what is usually followed.

  • What is the best practice to display info of completed task in process flow

    Hi all,
    I'm starting to study BPM modeling with CE 7.1 EHP1. Thanks to the tutorials and examples on the SDN site, I can easily build my own process in NWDS, deploy it to the server, start it, and finish it.
    I like the new runtime, which can show a BPMN diagram to the processors. However, I can't find a way to let the follow-up processor review the result of a task completed in a previous step. I'm more familiar with Guided Procedures, where there is a "Display Callable Object" which can be used to show some info about a completed task when the processor/owner/admin/overseer clicks on it. Where is that feature in BPM? What is the best practice for showing such task information in a BPM environment?
    For example, in a multi-level approval process, the higher-level approver needs to know the comment written by the previous approver. Can he read this information from the process flow?
    I think it is a very important feature for a BPM platform. In Guided Procedures, such a requirement can be met with a Display Callable Object + View Permission, and you just need some coding for the UI. If BPM is superior to GP, I think there must be a way to achieve this; I just do not know how.
    Can anyone shed some light on it?

    Oliver,
    Thanks for your quick reply.
    Yes, Notes and Attachments CAN BE USED for this purpose, but I'm still looking for a more elegant solution.
    With the Notes/Attachments solution, the processor needs to provide input in two places: the task UI and the note/attachment, with similar or identical data. That is really annoying.
    Are there any real-world SAP BPM deployments? Has no customer had this requirement?

  • What are the best practices for generating an EPS logo from InDesign?

    Our customer is running into technical issues with the logo we sent them, which was exported from InDesign: the images were not embedded and fonts were missing. I was able to embed the images and fonts. However, we DO NOT want them to be able to make any text changes, so after exporting an EPS I opened the file in Adobe Illustrator and converted all the text to outlines. I hope this works, but I just wanted to ask what the best practices are for doing this.
    The client needs the logo with a transparent background, images embedded, and type in outlines. They also need some space around the text; when I exported the EPS, the file was cropped right up to the edge of the type.

    It sounds like you are pretty far from "best practice" with regard to logo design and delivery.
    These days, the very use of the EPS format should be considered bad practice, and some other terms in your post (e.g., "images," "missing fonts") make it sound like there is not a seasoned logo designer involved.
    That said, you probably already got the advice you need to get out of the immediate jam. However, without proper logo design, you and the client will soon be facing other problems. You should be delivering a 100% vector graphic, in single-color (black) and corporate-color(s) versions, with no live font data, that has been test-scaled to very small and very large sizes, ensuring it will work at postage-stamp size and on the side of a truck or building. It should use specific spot color(s) and proportions that allow it to be offset printed, embroidered, screen-printed on apparel, and cut into signage materials and decals.

  • What is the best practice in securing deployed source files

    hi guys,
    Just yesterday I developed a simple image cropper using Ajax and Flash. After compiling the package, I noticed the package/installer delivers the exact same source files to the installed folder as in development.
    This didn't concern me much at first, but come to think of it, this question keeps popping into my head:
    "What is the best practice in securing deployed source files?"
    How do we protect an application's installed source files from being tampered with, especially after installation? For example, spraydata.js can easily be modified with an editor.

    Hi,
    You could compute a SHA or MD5 hash of your source files on first run and save those hashes to the EncryptedLocalStore. On startup, recompute and verify. (This, of course, does not address the case where the main app's swf / swc / html itself is decompiled.)
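
    As a language-agnostic illustration of that idea, here is a small Python sketch (the file list and store location are placeholders; in AIR the baseline would live in the EncryptedLocalStore rather than a JSON file):

    import hashlib
    import json
    import os

    FILES = ["spraydata.js"]   # deployed files to protect (placeholder list)
    STORE = "hashes.json"      # stand-in for EncryptedLocalStore

    def digest(path):
        # SHA-256 of the file contents
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def first_run():
        # record a baseline hash for every protected file
        with open(STORE, "w") as f:
            json.dump({p: digest(p) for p in FILES}, f)

    def verify():
        # recompute and compare against the baseline
        with open(STORE) as f:
            baseline = json.load(f)
        return all(os.path.exists(p) and digest(p) == baseline[p] for p in FILES)

    if not os.path.exists(STORE):
        first_run()
    elif not verify():
        raise SystemExit("source files appear to have been tampered with")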

  • What is the best practice for full browser video to achieve the highest quality?

    I'd like to get your thoughts on the best way to deliver full-browser (scale to the size of the browser window) video. I'm skilled in the creation of the content but learning to make the most out of Flash CS5 and would love to hear what you would suggest.
    Most of the tutorials I can find on full-browser/scalable video are for earlier versions of Flash; what is the best practice today, and what are the best resolution and format for the video?
    If there is an Adobe guide to this I'm happy to eat humble pie if someone can redirect me to it; I'm using CS5 Production Premium.
    I like the full-screen video effect on the "Sounds of Pertussis" website; this is exactly what I'm trying to create, but I'm not sure of the best way to approach it - any hints/tips you can offer would be great.
    Thanks in advance!

    Use the little squares over your video to mask the quality. Sounds of Pertussis is not full-screen video, but rather full-stage, which is easier to work with since all the controls and other assets stay on screen. You set up your HTML file to allow full screen, then bring in your video (NetStream or the FLVPlayback component) and scale it to the full size of your stage (since in this case it's basically the background). I made a quickie demo here. (The video is from a cheapo SD consumer camera, so pretty poor quality to start.)
    In AS3 it would look something like this:
    import flash.display.Loader;
    import flash.net.URLRequest;
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.ui.Mouse;
    import flash.events.Event;
    import flash.events.MouseEvent;
    import flash.display.StageDisplayState;
    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.net.NetConnection;
    import flash.net.NetStream;
    import flash.media.Video;
    stage.align = StageAlign.TOP_LEFT;
    stage.scaleMode = StageScaleMode.NO_SCALE;
    // determine current stage size
    var sw:int = int(stage.stageWidth);
    var sh:int = int(stage.stageHeight);
    // load video
    var nc:NetConnection = new NetConnection();
    nc.connect(null);
    var ns:NetStream = new NetStream(nc);
    var vid:Video = new Video(656, 480); // size of the video
    this.addChildAt(vid, 0);
    vid.attachNetStream(ns);
    // path to your video file
    ns.play("content/GS.f4v");
    var netClient:Object = new Object();
    netClient.onMetaData = function(md:Object):void {}; // stub handler so metadata callbacks don't throw
    ns.client = netClient;
    // add listener for resizing of the stage so we can scale our assets
    stage.addEventListener(Event.RESIZE, resizeHandler);
    stage.dispatchEvent(new Event(Event.RESIZE));
    function resizeHandler(e:Event = null):void
    {
        // determine current stage size
        var sw:int = stage.stageWidth;
        var sh:int = stage.stageHeight;
        // scale video size depending on stage size
        vid.width = sw;
        vid.height = sh;
        // don't scale the video smaller than a certain size
        if (vid.height < 480)
            vid.height = 480;
        if (vid.width < 656)
            vid.width = 656;
        // match both scale factors to the larger one so the video stays proportional
        (vid.scaleX > vid.scaleY) ? vid.scaleY = vid.scaleX : vid.scaleX = vid.scaleY;
    }
    // add event listener for the full screen button
    fullScreenStage_mc.buttonMode = true;
    fullScreenStage_mc.mouseChildren = false;
    fullScreenStage_mc.addEventListener(MouseEvent.CLICK, goFullStage, false, 0, true);
    function goFullStage(event:MouseEvent):void
    {
        //vid.fullScreenTakeOver = false; // keeps the FLVPlayback component from going full screen if you use it instead
        if (stage.displayState == StageDisplayState.NORMAL)
            stage.displayState = StageDisplayState.FULL_SCREEN;
        else
            stage.displayState = StageDisplayState.NORMAL;
    }

  • Database Log File becomes very big, What's the best practice to handle it?

    The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server. Can anybody give me advice on the best practice to handle this issue?
    Should I shrink the database?
    I know increasing the hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log is cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get the transaction log file back into normal shape:
    1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command will shrink the file to 10 GB (a recommended size for highly transactional systems).
    Finke Xie wrote:
    > Should I Shrink the Database?
    "NEVER SHRINK DATA FILES" - shrink only the log file.
    3.) Schedule log backups every 15 minutes.
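
    For reference, here is a minimal sketch of steps 1 and 2 scripted from Python with pyodbc (driver, server, database, logical file, and path names are placeholders; adjust to your environment):

    import pyodbc

    # autocommit is required: BACKUP and DBCC cannot run inside a user transaction
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
        "DATABASE=MyDatabase;Trusted_Connection=yes",
        autocommit=True)
    cur = conn.cursor()

    # 1) back up the transaction log so the inactive portion becomes reusable
    cur.execute("BACKUP LOG MyDatabase TO DISK = N'D:\\Backups\\MyDatabase_log.trn'")
    while cur.nextset():
        pass  # drain the informational messages BACKUP returns

    # 2) shrink only the log file (never the data files); target size is in MB
    cur.execute("DBCC SHRINKFILE ('MyDatabase_log', 10240)")

    Scheduling this (or the equivalent T-SQL in a SQL Server Agent job) every 15 minutes covers step 3.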
    Thanks
    Mush

  • What is the best practice for creating primary key on fact table?

    What is the best practice for the primary key on a fact table?
    1. Using a composite key
    2. Creating a surrogate key
    3. No primary key
    In the documentation, I can only find: "From a modeling standpoint, the primary key of the fact table is usually a composite key that is made up of all of its foreign keys."
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/logical.htm#i1006423
    I also found a relevant thread stating that a primary key on the fact table is necessary:
    Primary Key on Fact Table.
    But if no business requirement demands uniqueness of the records and there is no materialized view, do we still need a primary key? Are there any other bad effects of having no primary key on a fact table? And any benefits from not creating one?

    Well, the natural combination of the dimensions connected to the fact would be a natural primary key, and it would be composite.
    Having an artificial PK might simplify things a bit.
    Having no PK leads to a major mess. A fact should represent a business transaction, or some general event. If you're loading data, you want to be able to identify the records that are processed. Also, without a PK, if you forget to create a unique key, access to this fact table will be slow. Plus, having no PK means that if you want to use different tools, like the Data Modeler in JBuilder or OWB insert/update functionality, they won't function, since there's no PK. Defining a PK for every table is good practice. Not defining a PK is asking for a load of problems, from performance to functionality and data quality.
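
    To make the options concrete, here is a small sketch using Python's sqlite3 (table and column names are invented for illustration; the same DDL ideas carry over to Oracle):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    -- option 1: composite primary key made up of the dimension foreign keys
    CREATE TABLE sales_fact (
        date_key    INTEGER NOT NULL,
        product_key INTEGER NOT NULL,
        store_key   INTEGER NOT NULL,
        units_sold  INTEGER,
        amount      REAL,
        PRIMARY KEY (date_key, product_key, store_key)
    );

    -- option 2: surrogate key, with a unique constraint still protecting the grain
    CREATE TABLE sales_fact_v2 (
        sales_id    INTEGER PRIMARY KEY,
        date_key    INTEGER NOT NULL,
        product_key INTEGER NOT NULL,
        store_key   INTEGER NOT NULL,
        units_sold  INTEGER,
        amount      REAL,
        UNIQUE (date_key, product_key, store_key)
    );
    """)

    Option 3 (no key at all) needs no DDL, which is exactly the problem: nothing stops duplicate loads.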
    Edited by: Cortanamo on 16.12.2010 07:12

  • Terabyte Plus Libraries - What is the Best Practice?

    In my current Aperture library I have 150 GB, and I have only been using it for about a year. Previous to that I used iPhoto for a short time, which still has a 60 GB library, and before that I stored my images in file folders on Windows servers, where I have around 500+ GB. Then there are those non-electronic images which should be scanned in one day.....
    My hope was to import everything into one tool and eventually have better organization and management of my images. At the rate I am shooting now it won't be long before I break the terabyte mark, so as I try to pull all these sources together I am wondering: what is the best practice?
    I know I can have more than one library now with Aperture; do folks manage their libraries by theme? Weddings, family, commercial, etc.? I just picked up a 2 TB drive to start moving stuff off my MacBook, but I am not sure if I should use the archive tool to do this or break my images apart into libraries, store them there, and just keep a working library on my Mac.
    Within Aperture I am using the Project -> Album hierarchy to manage my shoots as well.
    Also, I don't have a ton of video yet, but I have started shooting a little, plus I have been making slideshows and books now, so I need to start planning for that as well. I am just wondering what is the best, most efficient way of managing large amounts of data with Aperture.
    Thanks!

    The solution is to avoid the (unfortunately default) Managed Masters and instead use a Referenced Masters Library kept on an internal drive. Back up originals prior to importing and keep Masters off the Library drive. That way the Aperture Library will remain small enough to live on a standard internal drive without overfilling it.
    Note that working drives (as opposed to backup-only drives) should not be allowed to exceed ~70% full for ideal speed and stability.
    Multiple libraries are almost always poor image management in a digital world, unless all rights to the images, including the right to simply view any image (such as with security work), belong exclusively to the client. Using multiple libraries is a big backward step into film-think and very significantly limits the power of digital image management.
    -Allen

  • What is the best practice to perform DB Backup on Sun Cluster using OSB

    I have a query on OSB 10.4.
    I want to configure OSB 10.4 on a 2-node Sun Cluster where the Oracle database is running.
    When I'm performing a DB backup, the backup job should not fail if node 1 fails. What is the best practice to achieve this?

    Hi,
    Each host that participates in an OSB administrative domain must also have some pre-configured way to resolve a host name to an IP address. Use DNS, NIS, etc. to do this.
    Specify the cluster IP in OSB, so that OSB always looks for the cluster IP instead of the physical IPs of the individual nodes.
    Explanation:
    Whether it is a 2-node or 4-node setup, when the cluster software is installed on these nodes we have to configure a cluster IP, so that when one node fails the cluster IP automatically moves to another node.
    This cluster IP is what we have to specify, whether for an RMAN backup or an application JDBC connection. Failing over to the second/another node is the job of the cluster IP. So wherever we install a cluster configuration, we have to specify the CLUSTER IP in all the failover-sensitive places.
    Hope it helps..
    Thanks
    LaserSoft

  • What's the best practice to manage the page file?

    We have one Hyper-V server running Windows Server 2012 R2 with 128 GB of RAM and two drives (C and D). It is set to "Automatically manage paging file size for all drives". What's the best practice for managing the page file?
    Bob Lin, MCSE & CNE Networking, Internet, Routing, VPN Networking, Internet, Routing, VPN Troubleshooting on http://www.ChicagoTech.net How to Install and Configure Windows, VMware, Virtualization and Cisco on http://www.HowToNetworking.com

    For Hyper-V systems, my general recommendation is to set the page file to 1-4 GB. This allows for a mini-dump should something happen; 99.99% of the time, Microsoft will be able to figure out the cause of the problem from the mini-dump. It does not make sense on a Hyper-V system to set aside enough space to capture all the memory on the system, because only a very small portion of that memory is used by the parent partition. Most of the memory is under the control of the individual VMs.
    Yes, I had one of the Hyper-V product group tell me that I should let Windows manage it. A couple of times I saw space on my system disk disappear because the algorithm decided it wanted all the space for the page file, making it so I couldn't patch my systems. I went back in, set the page file to 1-4 GB, and have not had any issues since.
    . : | : . : | : . tim

  • What are the best practices for CQ5.5 configuration?

    Hello,
    What are the best practices for a CQ5.5 configuration that can handle high availability?
    Last time I had an issue on the server: after I uploaded 2 GB of DAM assets, the server was not able to start and kept giving an error regarding Tar persistence.
    So kindly let me know what the best Apache Felix configuration is.
    Thanks in advance...
    Regards,
    Satish

    Hi,
    A DAM upload, regardless of the size of the assets, should never result in TarPM problems, unless you run into an OOM which leaves the repository in an unclean state. So if you regularly do DAM uploads of that size, you should check the garbage collection logs and adjust the heap size if necessary. You might also want to limit the number of concurrently running workflows to keep the memory consumption a bit lower.
    To your question: HA in the traditional sense cannot be achieved with a single box, even with optimized settings. For an authoring use case you would need clustering.
    Jörg

  • What is the best practice?

    Hi Everybody,
    I would like to know the best practice for the following scenario. Would appreciate any help on it.
    We plan to store XML documents in an Oracle DB and then search them using interMedia.
    The questions I have are:
    1. Should I use a CLOB datatype or VARCHAR2? What is the best practice?
    2. Which would give me the best performance?
    3. Are there performance statistics published somewhere?
    I'd be grateful if you could pass on any other comments or info.
    Thanks
    Manoj

    Hi Manoj,
    We had a similar requirement here for HTML documents.
    You can use a BLOB datatype and then create a text index on it.
    One thing: use CTX_user.LOG and see if it indexes these documents, then use a CONTAINS query and confirm it.
    This should work.
    thanks
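
    For what it's worth, here is a minimal sketch of the confirmation step using python-oracledb (connection details, table, and index names are placeholders; it assumes an Oracle Text CONTEXT index already exists on the document column):

    import oracledb

    conn = oracledb.connect(user="scott", password="secret", dsn="dbhost/orclpdb")
    cur = conn.cursor()

    # assumes: CREATE INDEX docs_ctx ON docs(doc) INDEXTYPE IS CTXSYS.CONTEXT;
    cur.execute(
        "SELECT id, SCORE(1) FROM docs WHERE CONTAINS(doc, :term, 1) > 0 "
        "ORDER BY SCORE(1) DESC",
        term="oracle")
    for row in cur:
        print(row)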
