INCREMENTAL BACKUP_RESTORE

Hi,
I have PROD and PRE-PROD databases on 2 servers.
1) I took a full backup of the PROD database and restored it on the pre-prod server.
2) Now I have an incremental backup of PROD, so I need to know whether I can restore this incremental backup of PROD on the PRE-PROD database.
Is it possible? Please give me some steps to follow or a Doc ID, if possible.
Thank you,

826854 wrote:
I have PROD and PRE-PROD databases on 2 servers. I took a full backup of PROD and restored it on pre-prod; now I have an incremental backup of PROD. Can I restore that incremental backup on PRE-PROD?
I think your question is something like the following. One thing to make sure of: have you opened the database after restoring it from the full backup?
Are you trying something like an SCN-based incremental for a standby, i.e., an SCN-based incremental for pre-prod?
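If that is the goal (rolling the restored pre-prod copy forward with PROD's incremental backup), the key constraint is that the restored database must still be mounted and must not have been opened with RESETLOGS; once it is opened read-write it diverges from PROD and the incrementals can no longer be applied. A rough sketch of the usual RMAN flow on the pre-prod host (the staging path is a placeholder, not from this thread):

```
RMAN> STARTUP MOUNT;
RMAN> CATALOG START WITH '/stage/prod_incr/';  # register the copied incremental backup pieces
RMAN> RECOVER DATABASE NOREDO;                 # apply the incrementals without archived redo
RMAN> ALTER DATABASE OPEN RESETLOGS;           # only once no further increments will be applied
```

This is the same mechanism used to refresh a standby database with an SCN-based incremental backup; My Oracle Support has notes describing that procedure in detail.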

Similar Messages

  • I am trying to restore my catalog, having previously done a backup to an external hard drive and subsequently an incremental backup. I am using Photoshop Elements 11, and the only option given in the restore procedure is to open a .tly file.

    I have done this, but now the restore function is asking for yet another file, which I assume to be the original backup; however, that is the only .tly file, since the only other relevant file appears to be called catalog.buc, and that is simply not visible from the restore function. How do I continue from here with this restoration of my catalog?

    Martin_Had wrote:
    Thank you Andaleeb. I appear to have an old backup from a year ago, plus a more recent full backup and an incremental backup.
    Regrettably, I don't really understand what is going on: firstly, the restore does not complete its cycle, so I cannot see what that backup file contains; and secondly, everything I have read would suggest that the .tly file is the full backup and the catalog.buc file is the incremental backup. At present, the catalog shows the photos for 2014, which makes me think I might have restored from the old backup file.
    I am minded to create another catalog and try again.
    Any views on what I can do?
    A backup (full or incremental) is a folder, not a file. It contains renamed copies of your picture files as well as copies of the files and subfolders of the original catalog. The catalog.buc file is a renamed copy of your original catalog's database, while the backup.tly file contains the information needed to restore the renamed pictures to the location you choose, either the original one or a new custom one. You can't do anything with the backup yourself; only the restore process can do the job, and only if it finds the backup.tly file. In the case of an incremental backup, you have to tell the restore process where to find the incremental backup folder; it finds the backup.tly file in that folder and determines what is to be restored; then it asks you for the previous backup folder (in your case the full backup); you then browse to that full backup folder so that the restore process can find the backup.tly there; the restore then deals with the rest of the files to restore.

  • Regarding REFRESHING of Data in Data warehouse using DAC Incremental approach

    My client is planning to move from Discoverer to OBIA but before that we need some answers.
    1) My client needs the data to be refreshed every hour (incremental load using DAC) because they are using a lot of real-time data.
    We don't have much updated data (e.g., 10 invoices in an hour plus some others). How much time does it usually take to refresh those tables in the data warehouse using DAC?
    2) While a table is being refreshed, can we use it to generate a report? If yes, what is the state of the data: stale or incorrect (undefined)?
    3) How does the refresh of Financial Analytics work? Is it one module at a time, or does it treat all 3 modules (GL, AR and AP) as a single unit of refresh?
    I would really appreciate answers to all of these questions.
    Thank You,

    Here you go for answers:
    1) That shouldn't be much of a problem for such a small amount of data. It all depends on your execution plan in DAC, which can always be created anew and customized to load data only for those tables (star schema). Approximately 15-20 minutes, as it does many things apart from loading the tables.
    2) A report in OBIEE will show the previous data, as the cache will (or should) be turned on. You will get the new data in reports after the refresh is complete and the cache is cleared using one of various methods (event polling preferred).
    3) Again, for Financial Analytics or any other module, you will have out-of-the-box execution plans, but you can create and execute your own. GL, AR and AP are also provided separately.
    Hope this answers your questions. You will learn more by going through the Oracle docs, particularly for DAC.

  • Is there a routine one can use to shift the column of data by one each time the loop index increments? In other words, can the column that the data is being saved to be incremented using the loop index?

    The device, an Ocean Optics spectrometer, returns columns of about 9000 cells. I'm saving this as an .lvm file using the "write to measurement file.vi", but as far as I can tell it doesn't give me the flexibility I need.
    I need to move the column by the index of the for loop, so that when i = n, the data will take up the n+1 column (the 1st column is used for wavelength). How do I use the "write to spreadsheet file.vi" to do this? Also, if I use the "write to spreadsheet file.vi", is there a way to increment the file name so that the data isn't written over? I like what "write to measurement file.vi" does.
    I'd really appreciate any help someone can give me. I'm a novice at this, so the greater the detail, the better. Thanks!!!

    You cannot write one column at a time to a spreadsheet file, because a file is arranged linearly, and inserting a column would require moving (i.e., reading and rewriting elsewhere) almost all existing elements to interlace the new data. You can only append new rows without having to touch the already written data.
    Fields typically don't have a fixed width. An exception would be binary files that are pre-allocated at the final size; in that case you can write columns by setting the file position for each element, but it will still be very inefficient.
    What you could do is append rows until all the data is written, then read, transpose, and write back the final file.
    What you could also do is build the final array in a shift register and write the entire thing to file at once after all the data is present.
    LabVIEW Champion. Do more with less code and in less time.
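The "append rows, then transpose" advice can be sketched outside LabVIEW as well; here is a minimal Java illustration (the array contents are invented stand-ins for spectrometer data):

```java
// Append each acquisition as a ROW, then transpose once at the end so the
// data ends up in columns, instead of trying to insert columns into the file.
import java.util.ArrayList;
import java.util.List;

public class ColumnWriter {
    // transpose a rectangular matrix: result[c][r] = m[r][c]
    public static double[][] transpose(double[][] m) {
        double[][] t = new double[m[0].length][m.length];
        for (int r = 0; r < m.length; r++) {
            for (int c = 0; c < m[r].length; c++) {
                t[c][r] = m[r][c];
            }
        }
        return t;
    }

    public static void main(String[] args) {
        List<double[]> rows = new ArrayList<>();
        // each loop iteration appends one acquired spectrum as a row
        for (int i = 0; i < 3; i++) {
            rows.add(new double[]{1.0 * i, i + 0.5, i + 1.0}); // stand-in data
        }
        double[][] byColumn = transpose(rows.toArray(new double[0][]));
        // byColumn[n] now holds what would have been column n+1 in the file
        System.out.println(byColumn.length + " columns x " + byColumn[0].length + " rows");
        // prints "3 columns x 3 rows"
    }
}
```

The same idea applies to the file-name question: build the whole 2D array in memory (or in a temporary row-wise file), and only at the end write it out once under the final name.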

  • SYSDATE is not getting incremented in the parameter - Scheduler

    Hi All,
    I have used the query 'SELECT SYSDATE FROM DUAL' for a parameter in a concurrent request. If I execute this concurrent program as a single request, it returns the exact SYSDATE (current date) in the parameter.
    If I schedule this program for daily execution using the scheduler, then SYSDATE always returns the date on which I configured the schedule (even though I have enabled the 'Increment date parameters each run' check box).
    For example:
    My requirement: the scheduler is configured in such a way that it should run every day, and whenever it runs my query has to return the current date.
    Current issue: I scheduled it on 23-DEC-2008. If it runs on 24-DEC-2008 it returns '23-DEC-2008', and if it runs on 28-DEC-2008 it also returns '23-DEC-2008'. However, if I execute this as a single request, it works fine.
    Can anyone please help me? It is very urgent.
    Thanks in Advance

    Hi,
    This forum is dedicated to the Oracle Scheduler which uses the dbms_scheduler package.
    It looks like you might be using a different scheduler since the Oracle Scheduler does not allow using a query as a parameter and does not have an "increment date parameters" checkbox.
    If you are using the Oracle Applications Scheduler, you should ask this question on the Applications forum here
    http://forums.oracle.com/forums/category.jspa?categoryID=3
    If you are using the Enterprise Manager Scheduler you should ask on the Grid Control forum here
    Enterprise Manager
    If you are using the Oracle Scheduler, you should post the code being used to create your job, which may be found by using the "Show SQL" button on the create job webpage. This should include a call to dbms_scheduler.create_job.
    Thanks,
    Ravi.
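For contrast, a dbms_scheduler job evaluates SYSDATE inside its PL/SQL block at each run rather than at definition time, so the date stays current; a minimal sketch (the job and table names are invented for illustration):

```
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'DAILY_DATE_JOB',   -- invented name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN INSERT INTO run_log VALUES (SYSDATE); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY',
    enabled         => TRUE);
END;
/
```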

  • InDesign CS6 Text Wrap increments not accurate.

    My work jumped from CS4 to CS6, so I don't know if CS5 had the same issue. Anyway, text wrap (around bounding box) now seems to be very limited. For example, if I have, say, a .25 text wrap on the bottom of an object but I need to change it slightly to .245, the text does not move. The box shows the change, but the text below just doesn't follow suit. In CS4 I was able to make tiny incremental changes to the wrap, but it is not working for me now in CS6. Any ideas or answers?

    Suggest you post product specific questions in the relevant product forum.
    Try posting at http://forums.adobe.com/community/indesign/indesign_general

  • Refresh browser causes to increment results

    Hi,
    I have a servlet that is using a database to query results from a survey. The survey has 3 questions and each one of them has 4 possible answers. The query runs fine and displays in the web browser, but once I do a refresh some of the data increments its value at a constant rate.
    Here is the code:
    import java.io.*;
    import java.sql.*;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class Survey extends HttpServlet {
        private Connection con = null;
        private Statement stmt = null;
        private String url = "jdbc:odbc:survey";
        private String table = "results";
        private int numQues = 3;
        private int[] numAns = {4, 4, 4};
        private int num = 0; // note: instance field, never reset between requests

        public void init() throws ServletException {
            try {
                // load the JDBC-ODBC bridge and make a connection
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                con = DriverManager.getConnection(url, "anonymous", "guest");
            } catch (Exception e) {
                e.printStackTrace();
                con = null;
            }
        }

        public void doPost(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            String[] results = new String[numQues];
            for (int i = 0; i < numQues; i++) {
                results[i] = req.getParameter("q" + i);
            }
            // test if the user has answered all the questions
            String resultsDb = "";
            for (int i = 0; i < numQues; i++) {
                if (i + 1 != numQues) {
                    resultsDb += "'" + results[i] + "',";
                } else {
                    resultsDb += "'" + results[i] + "'";
                }
            }
            boolean success = insertIntoDb(resultsDb);
            // print a thank-you message
            res.setContentType("text/html");
            PrintWriter output = res.getWriter();
            StringBuffer buffer = new StringBuffer();
            buffer.append("<HTML>");
            buffer.append("<HEAD>");
            buffer.append("</HEAD>");
            buffer.append("<BODY BGCOLOR=\"#FFFFFF\">");
            buffer.append("<P>");
            if (success) {
                buffer.append("Thank you for participating!");
            } else {
                buffer.append("An error has occurred. Please press the back button of your browser");
                buffer.append(" and try again.");
            }
            buffer.append("</P>");
            buffer.append("</BODY>");
            buffer.append("</HTML>");
            output.println(buffer.toString());
            output.close();
        }

        public void doGet(HttpServletRequest req, HttpServletResponse res)
                throws IOException {
            res.setContentType("text/html");
            PrintWriter output = res.getWriter();
            StringBuffer buffer = new StringBuffer();
            buffer.append("<HTML>");
            buffer.append("<HEAD>");
            buffer.append("</HEAD>");
            buffer.append("<BODY BGCOLOR=\"#FFFFFF\">");
            buffer.append("<P>");
            try {
                stmt = con.createStatement();
                // find the number of participants
                for (int i = 0; i < 1; i++) {
                    String query = "SELECT q" + i + " FROM " + table;
                    ResultSet rs = stmt.executeQuery(query);
                    while (rs.next()) {
                        rs.getInt("q" + i);
                        num++;
                    }
                }
                // loop through each question
                for (int i = 0; i < numQues; i++) {
                    int[] results = new int[num];
                    String query = "SELECT q" + i + " FROM " + table;
                    ResultSet rs = stmt.executeQuery(query);
                    int j = 0;
                    while (rs.next()) {
                        results[j] = rs.getInt("q" + i);
                        j++;
                    }
                    // call method
                    int[] total = percent(results, 4);
                    buffer.append("Question" + i + ":<BR>");
                    for (int k = 0; k < 4; k++) {
                        buffer.append(" > Answer " + k + ":" + total[k]);
                        buffer.append("<BR>");
                    }
                    buffer.append("\n");
                }
            } catch (SQLException ex) {
                ex.printStackTrace();
            }
            // display the results
            buffer.append("</P>");
            buffer.append("</BODY>");
            buffer.append("</HTML>");
            output.println(buffer.toString());
            output.close();
        }

        public void destroy() {
            try {
                con.close();
            } catch (Exception e) {
                System.err.println("Problem closing the database");
            }
        }

        public boolean insertIntoDb(String results) {
            String query = "INSERT INTO " + table + " VALUES (" + results + ");";
            try {
                stmt = con.createStatement();
                stmt.execute(query);
                stmt.close();
            } catch (Exception e) {
                System.err.println("ERROR: Problems with adding new entry");
                e.printStackTrace();
                return false;
            }
            return true;
        }

        public int[] percent(int[] array, int numOptions) {
            System.out.println("==============================================");
            int[] total = new int[numOptions];
            // initialize array
            for (int i = 0; i < total.length; i++) {
                total[i] = 0;
            }
            for (int j = 0; j < numOptions; j++) {
                for (int i = 0; i < array.length; i++) {
                    System.out.println("j=" + j + "\t" + "i=" + i + "\ttotal[j]=" + total[j]);
                    if (array[i] == j) {
                        total[j]++;
                    }
                }
            }
            System.out.println("==============================================");
            return total;
        }
    }
    Thanks!

    Hi,
    I encountered a similar problem. The root cause was that the URL in the browser's location bar still points to the same servlet (the one that handles the HTTP POST/GET request and updates the database) after the result is returned by the servicing servlet.
    For example, in your case, the "Survey" servlet URL.
    The scenario is like this:
    1. Browser (HTML form) --- HTTP POST ---> Survey Servlet (services request & updates database)
    2. Survey Servlet --- (Result in HTML via HttpServletResponse) ---> Browser (displays the result)
    Note that after step 2, the browser's location bar still points to the Survey Servlet URL (used in the HTML form's action value). So if a refresh is performed here, the same action is repeated; as a result, two identical records are created in your database table instead of one.
    A way to work around this is to split the servlet into two: one performing the database update and one responsible for displaying the result.
    Things then become as follows:
    1. Browser (HTML form) --- HTTP POST ---> Survey Servlet (services request & updates database)
    2. Survey Servlet --- (Redirects the HTTP request) ---> Display Result Servlet
    3. Display Result Servlet --- (Acknowledgement in HTML via HttpServletResponse) ---> Browser (displays the result)
    Note that now the browser's location bar will point to the Display Result Servlet. A refresh will be posted to the Display Result Servlet, which will not duplicate the database update.
    To redirect the request to another servlet, use res.sendRedirect(url), where res is the HttpServletResponse object and url is the effective URL for calling the target servlet.
    Hope that this helps.
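The split described above is essentially the Post/Redirect/Get pattern. A hypothetical, self-contained sketch using the JDK's built-in HttpServer (the /survey and /result paths and the 303 status are illustrative, not from the original post):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class PrgDemo {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);

        // POST target: perform the database update here, then REDIRECT
        // instead of rendering the result directly.
        server.createContext("/survey", exchange -> {
            // ... insertIntoDb(...) would run here ...
            exchange.getResponseHeaders().add("Location", "/result");
            exchange.sendResponseHeaders(303, -1); // 303 See Other: browser follows with a GET
            exchange.close();
        });

        // GET target: only reads and displays, so refreshing it is harmless.
        server.createContext("/result", exchange -> {
            byte[] body = "Thank you for participating!".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8000); // browse to http://localhost:8000/survey
    }
}
```

After the redirect, the location bar points at the GET endpoint, so a refresh repeats only the harmless read, never the database insert.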

  • Can't resume incremental backups

    I just finished moving, and after a few weeks of it sitting in a box I set up my Time Capsule to resume operation as my backup device and wireless router. Unfortunately, I must have made a mistake during setup, because now my MacBook Pro is treating the TC as a completely new device, seeking to make a full backup of its entire hard drive rather than the first incremental backup since before the move. Is there some way of forcing my MBP to recognize the existing backups on the TC and pick up where it left off? I'd really appreciate some help, because I've got some valuable old files trapped in those old backups. Thanks!

    I think I have an idea of why this might be happening now, but I don't know how to fix it. Any help would be appreciated.
    So I checked this directory:
    /Volumes/Time Machine Backup 2/Backups.backupdb
    and there were two directories: a new one called "computer" and the old folder with my backups, "computer 2", so Time Machine must be looking at the "computer" folder. How can I force Time Machine to look at the older folder? I tried removing the "computer" folder, but it won't let me rename "computer 2" to "computer", even from the terminal using sudo and admin privileges. Is there a reason why I'm not allowed to rename files on my own machine when I'm the admin?
    thanks

  • ACS 5.3 Incremental BackUp Issue

    We have ACS 5.3, and the incremental backup setting turns itself back to "Off", so it doesn't perform incremental backups. I have turned it "On" several times, but it keeps turning "Off".
    Version: 5.3.0.40.8 in a VM environment.

    Hi Kumar2000,
    I tried to answer a similar query here; you may want to go through it.
    https://supportforums.cisco.com/discussion/12142716/acs-inceremental-backup-turns
    Regards,
    Jatin Katyal
    *Do rate helpful posts*

  • ACS 5.3 incremental backup error

    Hi ,
    I have ACS 5.3 that recently having problems with the incremental backup.
    The error is: on-demand backup failed
    and the details are: SQL Anywhere backup utility connection error: insufficient system resources - failed to allocate a SYSV semaphore (null).
    I mean, come on... and I did not find this error on Cisco's website.
    The ADE.log file is not showing errors/details related to this. Attached are the files showing the errors.
    Have anyone faced this problem before? Ideas? Anything?
    Regards,
    George

    Hi George:
    With 5.3 I experienced many issues, including the incremental backup not working: whenever I set it to "On", the next time the scheduled backup came it failed and set itself back to "Off". I did not get the same message you got, though.
    I finally did two things:
    - upgraded to the latest patch.
    - moved the log collector from the primary to the secondary.
    Now things have been fine for about a month without issues.
    Regarding your issue, I think it could be related to a resource problem, as mentioned in the message.
    What is the current DB size that you have?
    Note that the message is misleading (the messages I got with my ACS are the same), because the title mentions the incremental backup and then the text says the on-demand full backup failed!
    So you have to determine yourself whether the issue is with the incremental backup or the full backup.
    HTH
    Amjad
    Rating useful replies is more useful than saying "Thank you"

  • User Profile Incremental Sync does not see changes in profiles.

    User Profile Incremental Sync does not see changes in profiles. I run a full sync: all OK. Then I change the phone number, for example, and then start an incremental sync, and nothing moves to AD.
    All the stages of FIM sync just state 0 records to process. It looks like it does not see the changes.
    1. The profile property is set up to sync with an AD attribute (with EXPORT).
    2. No errors in the Windows logs or the FIM UI client, nor in the SharePoint UI.
    3. SharePoint Server 2010 SP1, "CU June 2012", 14.0.6123.5002.
    4. I have just reprovisioned the sync service and set up the AD connections and property-attribute relations. Before that,
    there was this problem: http://social.msdn.microsoft.com/Forums/en-US/cb2b8aeb-d1b6-49a6-a788-2491ca45308a/critical-error-6398-userprofileimportjob-problem-profile-sync-malfunction?forum=sharepointadmin
    which I resolved by clearing the timer service cache.
    Help with advice, URLs or solutions please.

    Hi  VlH,
    According to your description, the issue can be caused by the Profile or Profile Values table.
    You can check the event log and ULS log to see detail error message.
    To check event log, click the Start button and type “Event Viewer” in the Search box.
    For SharePoint 2013, by default, ULS log is at      
    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\LOGS
    For your issue you can refer to the similar thread:
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/bc3cd2ff-dc12-4223-a80b-e4cae616e861/user-profile-sync-doesnt-update-the-profiles-from-ad?forum=sharepointadminlegacy
    Also a good blog for user profile synchronization:
    http://www.harbar.net/articles/sp2010ups.aspx
    Best Regards,
    Eric
    Eric Tao
    TechNet Community Support

  • User Profile Service - User Profile Incremental Synchronization Timer job stuck at 33% Status: Pausing

    User Profile Service - User Profile Incremental Synchronization. Progress: 33%. Status: Pausing.
    It has been almost 15 days.
    Both the User Profile Service and the User Profile Synchronization Service are in the Started state, and the FIM service is also started.
    I tried clearing the SharePoint config cache.
    I also restarted the SharePoint timer service.
    I tried almost everything that is on the Internet, but nothing helped.
    Is there any other way to solve the issue? I am stuck on a production server (ASAP).
    In Synchronization Service Manager, the status of MOSS_DeltaImport has been In Progress for the past 2 days.
    Best Regards.

    Hi,
    Please follow the steps in the link below to clear the configuration cache.
    http://blogs.msdn.com/b/jamesway/archive/2011/05/23/sharepoint-2010-clearing-the-configuration-cache.aspx
    Here is a similar thread for your reference:
    https://social.technet.microsoft.com/Forums/en-US/beaa852c-6f40-428a-b97c-20722864e045/user-profile-service-user-profile-incremental-synchronization-timer-job-stuck-at-88-status?forum=sharepointadminprevious
    Or try to clear the file system cache on all servers in the server farm on which the Windows SharePoint Services Timer service is running. Microsoft has provided a step-by-step procedure for clearing the file system cache from the SharePoint front-end servers
    in this kb article.
    You can also see the ULS logs and check error messages.
    http://sharepointlogviewer.codeplex.com/
    Best Regards
    Dennis Guo
    TechNet Community Support

  • Not able to see IKM Oracle Incremental Update and IKM Oracle Slowly Changing Dimension under the PHYSICAL tab in ODI 12c

    I am not able to see IKM Oracle Incremental Update and IKM Oracle Slowly Changing Dimension under the PHYSICAL tab in ODI 12c,
    but I am able to see other IKMs. Please help me: how can I see them?

    Nope, it has not been altered.
    COMPONENT NAME: LKM Oracle to Oracle (datapump)
    COMPONENT VERSION: 11.1.2.3
    AUTHOR: Oracle
    COMPATIBILITY: ODI 11.1.2 and above
    Description:
    - Loading Knowledge Module
    - Loads data from an Oracle Server to an Oracle Server using external tables in the datapump format.
    - This module is recommended when developing interfaces between two Oracle servers when DBLINK is not an option.
    - An External table definition is created on the source and target servers.
    - When using this module on a journalized source table, the journaling table is first updated to flag the records consumed, and is then cleaned of these records at the end of the interface.

  • Time Machine won't back up incrementally

    Hi there,
    Here's my situation.  Using a Mac Pro, Snow Leopard 10.6.8.  (still SL because I have a solid FCP7 set up and don't want to mess with it at the moment).
    I've used Time Machine successfully with one large 6TB backup external hard drive. But recently, since I don't have enough space on that one drive to back up all my other drives, I decided to try to use a second backup drive that I would dedicate to backing up just one large drive with lots of large media files. So I backed up everything else on the original backup1 except that drive... then connected the other backup2... chose that as my backup disk in TM, excluded all other drives (including my system hard drive) and let it fly. It seemed to work, but now when I try to use backup2 to back up incrementally, it wants to back up that WHOLE drive again and says "there's not enough space on the backup drive", so it won't proceed. When I look on backup2, I do see the backup folders, and the files are in there... I just hope to do an easy incremental backup of this drive from time to time.
    I tried some recommended troubleshooting and deleted the com.apple.TimeMachine.plist file... but after restarting, re-choosing that drive as the backup drive, and excluding the others, it still won't incrementally back up the drive that I need backed up. It still says "Not enough space".
    Now, I'm not sure if switching between drives is even possible; it seems like it should be. You'd think many people would have this problem of their backup drive not being big enough to handle everything and would want to split it up (I understand Mavericks makes this easier). But I really can't update the OS at the moment and am looking for solutions or ideas as to how to work this out. Any ideas would be greatly appreciated, even a third-party backup recommendation if necessary.
    scott

    Time Machine, especially the version on Snow Leopard, isn't made for splitting backup duties across volumes. Even on Mountain Lion, where you can use multiple target drives, they each back up the entire group of source drives. I haven't tried Mavericks yet, but I understand it is similar to Mountain Lion in this respect.
    You could use Time Machine to back up everything but your large media file drive into the 6TB drive and use another utility to back up the large media drive onto another external volume. Carbon Copy Cloner, for example, has an incremental backup mode.

  • Time Machine won't do incremental backups after I restored my computer

    I am using a MacBook (2010 version) running Lion. I had a problem with my MacBook where it wouldn't log on, even in safe mode, so I restored it using Time Machine. Now that everything is back up and running well, I want to keep backing up. But Time Machine now says that there is not enough space on the backup disk, as it wants to back up the whole computer alongside the existing backup. In short, it will no longer do incremental backups on top of what has already been backed up. I don't want to wipe my backup disk for fear that something might go wrong again and I may want to go back to a version earlier than the one I restored. Is there any way of getting around this problem? It seems like a fairly serious problem.
    I would appreciate any help.

    ms364 wrote:
    I am using a MacBook (2010 version) running Lion. I had a problem with my MacBook where it wouldn't log on, even in safe mode, so I restored it using Time Machine.
    When you erased the disk, it got a new UUID (Universally Unique IDentifier), which is treated like a different drive, and Time Machine will back it up in its entirety.
    Did you do a full system restore, starting from the Recovery HD?  If so, that should have left a "trail" so Time Machine should have automatically "associated" the restored disk with the existing backups.
    If you did it "piecemeal," though, via the "Star Wars" display, it won't (that doesn't leave the trail for TM to figure out what happened).
    You might be able to get it to do the association manually.  See #B6 in Time Machine - Troubleshooting.
    But Linc is right;  you apparently need a larger TM drive.  See #1 in Time Machine - Frequently Asked Questions.
