Memory consumption on oms/agent

Hello all, I had posted this in the wrong place. I was wondering if someone can shed some light on this. I did get one reply back saying to look at the Metalink doc, which I did, but it does not seem to be my issue...
Re: agent taking too much memory...
Also, how does the OMS run? Does the OMS start a Java process? Why is the Java process consuming so much memory?
If I grep for the Java processes, I get the output below (the system was restarted yesterday, that's why the PIDs differ):
$ ps -ef|grep 29301
oracle 11192 24846 0 08:42:51 pts/1 0:00 grep 29301
oracle 29301 29284 0 14:19:40 ? 27:53 /opt/java6/bin/IA64W/java -client -Xms256m -Xmx1024m -XX:MaxPermSize=512m -XX:CompileThreshold=8000 -XX:PermSize=128m -Dweblogi
$ ps -ef|grep 29120
oracle 29120 29103 0 14:18:41 ? 12:41 /opt/java6/bin/IA64W/java -client -Xms256m -Xmx1024m -XX:MaxPermSize=512m -XX:CompileThreshold=8000 -XX:PermSize=128m -Dweblogi
oracle 11804 24846 0 08:45:10 pts/1 0:00 grep 29120
$
Why do I have two Java processes, and why do they appear to be running the same command? Is that part of the OMS, or something else?

I assume that you are using Grid Control 11g?
Why are there Java processes? Because that's how the architecture of Grid Control was built :-)
And as a matter of fact, Java processes are usually memory- and CPU-consuming processes. When you check the requirements for Grid Control you will see that a certain amount of CPU and RAM is needed to keep it running.
Please check the documentation "Enterprise Manager Documentation 11g Release 1 (11.1)", which can be found under http://download.oracle.com/docs/cd/E11857_01/index.htm, for a detailed description of the architecture.

Similar Messages

  • How to measure memory consumption during unit tests?

    Hello,
    I'm looking for simple tools to automate measurement of overall memory consumption during some memory-sensitive unit tests.
    I would like to apply this when running a batch of some test suite targeting tests that exercise memory-sensitive operations.
    The intent is, to verify that a modification of code in this area does not introduce regression (raise) of memory consumption.
    I would include it in the nightly build, and monitor evolution of summary figure (a-ah, the "userAccount" test suite consumed 615Mb last night, compared to 500Mb the night before... What did we check-in yesterday?)
    Running on Win32, the system-level info of memory consumed is known not to be accurate.
    Using perfmon is more accurate but it seems overkill - plus it's difficult to automate; you have to attach it to an existing process...
    I've looked at the hprof tool included in Sun's JDK, but it seems to be targeted at investigating problems rather than discovering them. In particular there isn't a "summary line" for the total memory consumed...
    What tools do you use/suggest?

    > However this requires manual code in my unit test classes themselves, e.g. in my setUp/tearDown methods.
    > I was expecting something more orthogonal to the tests, that I could activate or not depending on the purpose of the test.
    Some IDEs display memory usage and execution time for each test/group of tests.
    > If I don't have another option, OK I'll wire my own pre/post memory counting, maybe using AOP, and will activate memory measurement only when needed.
    If you need to check the memory used, I would do this. You can do the same thing with AOP. Unless you are already using an AOP library, I doubt it is worth the additional effort.
    > Have you actually used your suggestion to automate memory consumption measurement as part of daily builds?
    Yes, but I have fewer than a dozen tests which fail if the memory consumption is significantly different. I have more tests which fail if the execution time is significantly different.
    Rather than use the setUp()/tearDown() approach, I use the testMethod() as a wrapper for the real test and add the check inside it. This is useful as different tests will use different amounts of memory.
    > Plus, I did not understand your suggestion, can you elaborate?
    > - I first assumed you meant freeMemory(), which, as you suggest, is not accurate, since it returns "an approximation of [available memory]"
    freeMemory() gives the free memory out of the total. The total can change, so you need to take total - free as the memory used.
    > - I re-read it and now assume you do mean totalMemory(), which unfortunately will grow only when more memory than the initial heap setting is needed.
    More memory is needed when more memory is used. Unless your test uses a significant amount of memory, there is no way to measure it reliably; i.e. if a GC is performed during a test, the test can appear to use less memory than it consumes.
    > - Eventually, I may need to include calls to System.gc(), but I seem to remember it is best-effort only (endless discussion) and may not help accuracy.
    If you do a System.gc() followed by a Thread.yield() at the start, it can improve things marginally.
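    Building on the totalMemory() - freeMemory() and System.gc()/Thread.yield() points above, here is a minimal sketch of such a wrapper. It is illustrative only: the class name MemoryProbe, the Supplier-based wrapper and the 8 MB payload are made up for this example, and the figure it prints is an approximation for the reasons discussed in this thread.
    import java.util.function.Supplier;

    public final class MemoryProbe {
        private static volatile Object sink; // keeps the scenario's result reachable while we measure

        private static long usedHeap() {
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory(); // used = total - free, as suggested above
        }

        // Returns the approximate number of bytes kept reachable by the scenario's result.
        public static long measureRetained(Supplier<Object> scenario) {
            System.gc();            // best-effort only, as noted in the thread
            Thread.yield();
            long before = usedHeap();
            sink = scenario.get();
            System.gc();
            Thread.yield();
            long after = usedHeap();
            sink = null;
            return after - before;  // can still be off if a GC runs at an unlucky moment
        }

        public static void main(String[] args) {
            long bytes = measureRetained(() -> new byte[8 * 1024 * 1024]); // ~8 MB test payload
            System.out.println("Approximate bytes retained: " + bytes);
            // A real unit test could assert that this value stays below an agreed threshold.
        }
    }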

  • Problems updating projects to new versions of Premiere (CS5 to CC and CC to CC 2014) Memory consumption during re-index and Offline MPEG Clips in CC 2014

    I have 24GB of RAM in my 64 bit Windows 7 system running on RAID 5 with an i7 CPU.
    A while ago I updated from Premiere CS5 to CC and then from Premiere CC to CC 2014. I updated all my then current projects to the new version as well.
    Most of the projects contained 1080i 25fps (1080x1440 anamorphic) MPEG clips originally imported (captured from HDV tape) from a Sony HDV camera using Premiere CS5 or CC.
    Memory consumption during re-indexing.
    When updating projects I experienced frequent crashes going from CS5 to CC and later going from CC to CC 2014. Updating projects caused all clips in the project to be re-indexed. The crashes were due to the re-indexing process causing excessive RAM consumption and I had to re-open each project several times before the re-index would eventually complete successfully. This is despite using the setting to limit the RAM consumed by Premiere to much less than the 24GB RAM in my system.
    I checked that clips played; there were no errors generated; no clips showed as Offline.
    Some clips now "Offline: Importer" in CC 2014
    Now, after some months editing one project I found some of the MPEG clips have been flagged as "Offline: Importer" and will not relink. The error reported is "An error occurred decompressing video or audio".
    The same clips play perfectly well in, for example, Windows Media Player.
    I still have the earlier Premiere CC and the project file and the clips that CC 2014 importer rejects are still OK in the Premiere CC version of the project.
    It seems that the importer in CC 2014 has a bug that causes it to reject MPEG clips with which earlier versions of Premiere had no problem.
    It's not the sort of problem expected with a premium product.
    After this experience, I will not be updating premiere mid-project ever again.
    How can I get these clips into CC 2014? I can't go back to the version of the project in Premiere CC without losing hours of work/edits in Premiere CC 2014.
    Any help appreciated. Thanks.

    To answer my own question: I could find no answer to this myself and, with there being no replies in this forum, I have resorted to re-capturing the affected HDV tapes from scratch.
    Luckily, I still had my HDV camera and the source tapes and had not already used any of the clips that became Offline in Premiere Pro CC 2014.
    It seems clear that the MPEG importer in Premiere Pro CC 2014 rejects clips that Premiere Pro CC once accepted. It's a pretty horrible bug that ought to be fixed. Whether Adobe have a workaround or at least know about this issue and are working on it is unknown.
    It also seems clear that the clip re-indexing process that occurs when upgrading a project (from CS5 to CC and also from CC to CC 2014) has a bug which causes memory consumption to grow continuously while it runs. I have 24GB RAM in my system and regardless of the amount of RAM I allocated to Premiere Pro, it would eventually crash. Fortunately, on restarting Premiere Pro and re-loading the project, re-indexing would resume where it left off, and, depending on the size of the project (number of clips to be indexed), after many repeated crashes and restarts re-indexing would eventually complete and the project would be OK after that.
    It also seems clear that Adobe support isn't the greatest at recognising and responding when there are technical issues, publishing "known issues" (I could find no Adobe reference to either of these issues) or publishing workarounds. I logged the re-index issue as a bug and had zero response. Surely I am not the only one who has experienced these particular issues?
    This is very poor support for what is supposed to be a premium product.
    Lesson learned: I won't be upgrading Premiere again mid project after these experiences.

  • Query on memory consumption during SQL

    Hi SAP Gurus,
    Could I kindly request for your inputs concerning the following scenario?
    To put it quite simply, we have a program where we're required to retrieve all the fields from a lengthy custom table, i.e. the select statement uses an asterisk.  Unfortunately, there isn't really a way to avoid this short of a total overhaul of the code, so we had to settle with this (for now).
    The program retrieves from the database table using a where clause filtering only to a single value company code.  Kindly note that company code is not the only key in the table.  In order to help with the memory consumption, the original developer had employed retrieval by packages (also note that the total length of each record is 1803...).
    The problem encountered is as follows:
    - Using company code A, retrieving for 700k entries in packages of 277, the program ran without any issues.
    - However, using company code B, retrieving for 1.8m in packages of 277, the program encountered a TSV_TNEW_PAGE_ALLOC_FAILED short dump.  This error is encountered at the very first time the program goes through the select statement, ergo it has not even been able to pass through any additional internal table processing yet.
    About the biggest difference between the two company codes is the number of corresponding records they have in the table.  I've checked whether company code B had more values in its columns than company code A, but they're just the same.
    What I do not quite understand is why memory consumption changed just by changing the company code in the selection.  I thought that the memory consumed by both company codes should be the same... at least, in the beginning, considering that we're retrieving by packages, so we're not trying to get all of the records all at once.  However, the fact that it failed at the very beginning has shown me that I'm gravely mistaken.
    Could someone please enlighten me on how memory is consumed during database retrieval?
    Thanks!

    Hi,
    with FAE (FOR ALL ENTRIES) the whole query is executed even for a single record in the itab, and all results for
    the company code are transferred from the database to the DBI, since the duplicates are removed by the DBI,
    not by the database.
    If you use package size the resultset is buffered in a system table in the DBI (which allocates memory from your user quota). And from there on the package sizes are built and handed over to your application (into table lt_temp).
    see recent ABAP documentation:
    Since duplicate rows are only removed on the application server, all rows specified using the WHERE condition are sometimes transferred to an internal system table and aggregated here. This system table has the same maximum size as the normal internal tables. The system table is always required if addition PACKAGE SIZE or UP TO n ROWS is used at the same time. These do not affect the amount of rows transferred from the database server to the application server; instead, they are used to transfer the rows from the system table to the actual target area.
    What you should do:
    calculate the size needed for your big company code B: the number of rows multiplied by the line length.
    That is the minimum amount you need for your user memory quota (quotas can be checked with the
    ABAP report RSMEMORY). If the amount of memory is sufficient, then try without PACKAGE SIZE.
    SELECT * FROM <custom table>
    INTO TABLE lt_temp
    FOR ALL ENTRIES IN lt_bukrs
    WHERE bukrs = lt_bukrs-bukrs
    ORDER BY primary key.
    This might actually use less memory than the PACKAGE SIZE option for the FOR ALL ENTRIES.
    Since with FAE it is buffered anyway in the DBI (and subtracted from your quota), you can
    do it right away and avoid storing portions twice (once in the DBI buffer and a portion of that in the
    package in lt_temp).
    If the amount of memory is still too big, you have to either increase the quotas or select
    less data (additional where conditions) or avoid using FAE in this case in order to not read all
    the data in one go.
    Hope this helps,
    Hermann

  • Integration Builder Memory Consumption

    Hello,
    we are experiencing very high memory consumption of the Java IR designer (not the directory). Especially for loading normal graphical idoc to EDI mappings, but also for normal idoc to idoc mappings. examples (RAM on client side):
    - open normal idoc to idoc mapping: + 40 MB
    - idoc to edi orders d93a: + 70 MB
    - a second idoc to edi orders d93a: + 70 MB
    - Execute those mappings: no additional consumption
    - third edi to edi orders d93a: + 100 MB
    (all mappings in the same namespace)
    After three more mappings, RAM on the client side reaches 580 MB and then a Java heap error occurs. Sometimes also OutOfMemory, and then you have to terminate the application.
    Obviously the mapping editor is not well optimized for RAM usage. It seems not to cache the in/out message structures, or it loads a great deal of dedicated functionality for every mapping.
    So we cannot really call that fun. Working is very slow.
    Do you have similar experiences ? Are there workarounds ? I know the JNLP mem setting parameters, but the problem is the high load of each mapping, not only the overall maximum memory.
    And we are using only graphical mappings, no XSLT !
    We are on XI 3.0 SP 21
    CSY

    Hi,
    Apart from raising the tablespace:
    Note 425207 - SAP memory management, current parameter ranges
    You also have to configure operation modes to change work processes dynamically, using RZ03 and RZ04.
    Please see the below link
    http://help.sap.com/saphelp_nw04s/helpdata/en/c4/3a7f53505211d189550000e829fbbd/frameset.htm
    You can Contact your Basis administrator for necessary action

  • Check Process memory consumption and Kill it

    Hello
    I have just installed Orchestrator and have a problem that I think is perfect for Orchestrator to handle.
    I have a process that sometimes hangs, and the only way to spot it is that its memory consumption has stopped increasing.
    The process is started every 15 minutes and scans a folder; if it finds a file, it reads the file into a system. You can see that it is working by the increasing memory consumption. If the read fails, the memory consumption stops. The process is still running
    and responding, but it is hung.
    I'm thinking about doing a runbook that checks the memory consumption every 5 minutes and compares it with the previous value. If the last three values are the same, then I will kill the process and start it again.
    My problem is that I have not found a way to check the memory consumption of a process.
    I have set up a small test, just to verify that I get the correct process, with the activities Monitor Process -> Get Process Status -> Append Line (process name).
    But how do I get the process memory consumption?
    /Anders

    Now that I think about it a bit more, I don't think there will be an easy way to set up a monitor for your situation in SCOM. Not that it couldn't be done, just not easily. Getting back to SCORCH: what you are trying to do isn't an everyday kind of
    scenario. I don't think there is a built-in activity for this.
    The hardest thing to overcome, whether you use SCORCH or SCOM, is likely going to be determining the error condition of three consecutive samples of the same memory usage. You'll need a way to track the samples. I can't think of a good way to do
    this without scripting.
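    Since there is no built-in activity for this, here is a rough, illustrative sketch (written in Java here, though a PowerShell script would work just as well) of the "three identical samples" idea discussed above. It shells out to the standard Windows tasklist command; the 5-minute interval, the PID argument and the restart action are placeholders rather than a tested runbook.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class StalledProcessWatch {
        // Returns the "Mem Usage" column tasklist prints for the given PID, or null if not found (Windows only).
        static String memUsage(int pid) throws Exception {
            Process p = new ProcessBuilder("tasklist", "/FI", "PID eq " + pid, "/FO", "CSV", "/NH")
                    .redirectErrorStream(true).start();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line = r.readLine();                     // e.g. "someimporter.exe","4711",...,"12,345 K"
                if (line == null || !line.startsWith("\"")) return null;
                String[] cols = line.split("\",\"");
                return cols[cols.length - 1].replace("\"", "");
            }
        }

        public static void main(String[] args) throws Exception {
            int pid = Integer.parseInt(args[0]);
            Deque<String> lastThree = new ArrayDeque<>();
            while (true) {
                String sample = memUsage(pid);
                if (sample == null) {
                    System.out.println("Process " + pid + " not found");
                    break;
                }
                lastThree.addLast(sample);
                if (lastThree.size() > 3) lastThree.removeFirst();
                // Three identical consecutive samples -> memory stopped growing, treat the process as hung.
                if (lastThree.size() == 3 && lastThree.stream().distinct().count() == 1) {
                    System.out.println("Memory flat for three samples - kill and restart the process here");
                    break;
                }
                Thread.sleep(5 * 60 * 1000L);                   // sample every 5 minutes, as in the post
            }
        }
    }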

  • High memory consumption in XSL transformations (XSLT)

    Hello colleagues!
    We have the problem of a very high memory consumption when transforming XML
    files with CALL TRANSFORMATION.
    Code example:
    CALL TRANSFORMATION /ipro/wml_translate_cls_ilfo
                SOURCE XML lx_clause_text
                RESULT XML lx_temp.
    lx_clause_text is a WordML xstring (i.e. it is a Microsoft Word file in XML
    format) and can therefore not easily be split into several parts.
    Unfortunately this string can get very large (e.g. 50 MB). The problem is that
    it seems that CALL TRANSFORMATION allocates memory for the source and result
    xstrings but doesn't free them after the transformation.
    So in this example this would mean that the transformation allocates ~100MB
    memory (50MB for source, ~50MB for result) and doesn't free it. Multiply
    this with a couple of transformations and a good amount of users and you see
    we get in trouble.
    I found this note regarding the problem: 1081257
    But we couldn't figure out how this problem could be solved in our case. The
    note proposes to "use several short-running programs". What is meant by
    this? By the way, our application is done with Web Dynpro for ABAP.
    Thank you very much!
    With best regards,
    Mario Düssel

    Hi,
    q1. How come the RAM consumption increased to 99% on all three boxes?
    If we continue with the theory that network connectivity was lost between the hosts, the Coherence servers on the local hosts would form their own clusters. Prior to the "split", each cache server would hold 1/12 of the primary and 1/12 of the backup (assuming you have one backup). Since Coherence avoids selecting a backup on the same host as the primary when possible, the 4 servers on each host would hold 2/3 of the cache. After the split, each server would hold 1/6 of the primary and 1/6 of the backup, i.e., twice the memory it previously consumed for the cache. It is also possible that a substantial portion of the missing 1/3 of the cache may be restored from the near caches, in which case each server would then hold 1/4 of the primary and 1/4 of the backup, i.e., thrice the memory it previously consumed for the cache.
    q2: Where is the cache data stored in the Coherence servers? In which memory?
    The cache data is typically stored in the JVM's heap memory area.
    Have you reviewed the logs?
    Regards,
    Harv
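    To make the fractions above easier to follow, here is a tiny, purely illustrative calculation; it assumes 3 hosts with 4 storage-enabled cache servers each and one backup copy, as in the reply, and none of the numbers come from the actual logs.
    public class CacheShareExample {
        public static void main(String[] args) {
            int servers = 12;                                  // 3 hosts x 4 cache servers
            double before = 1.0 / servers + 1.0 / servers;     // 1/12 primary + 1/12 backup = 1/6 per server
            double perHostShare = 4 * before;                  // the 4 servers on one host hold 2/3 of the cache
            double afterSplit = perHostShare / 4 * 2;          // that 2/3 redistributed: 1/6 primary + 1/6 backup = 1/3
            double afterRestore = 1.0 / 4 + 1.0 / 4;           // full data set again on a 4-node cluster = 1/2
            System.out.printf("before=%.3f afterSplit=%.3f afterRestore=%.3f%n",
                    before, afterSplit, afterRestore);         // prints 0.167, 0.333, 0.500 (2x, then 3x growth)
        }
    }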

  • Query on Memory consumption of an object

    Hi,
    I am able to get information on the number of instances loaded, the memory occupied by those instances using heap histogram.
    Class      Instance Count      Total Size
    class [C      10965      557404
    class [B      2690      379634
    class [S      3780      220838
    class java.lang.String      10807      172912
    Is there a way to get more detailed information, such as which class's String objects consume the most memory?
    In other words, the memory consumption of String is 172912. Can I have a split like:
    String Objects of Class A - 10%
    String Objects of Class B - 90%
    Thanks

    I don't know what profiler you are using but many memory profilers can tell you where the strings are allocated.

  • Dbxml memory consumption

    I have a query that returns about 10MB worth of data when run against my db -- it looks something like the following
    'for $doc in collection("VcObjStore")/doc
    where $doc[@type="Foo"]
    return <item>{$doc}</item>'
    When I run this query in dbxml.exe, I see the memory footprint (of dbxml.exe) increase by 125MB. Once the query finishes, it comes back down.
    I expected memory consumption to be somewhat larger than what the query actually returns but this seems quite extreme.
    Is this behavior expected? What is a general rule of thumb on memory usage with respect to result size (is it really 10x)? Any way to make it less of a hog?
    Thanks

    Hi Ron,
    Thanks for a quick reply!
    - I wasn't actually benchmarking DBXML. We've observed large memory consumption during query execution in our test application and verified the same issue with dbxml.exe. Since dbxml.exe is well understood by everyone familiar with DBXML, I thought it would help to start with that.
    - Yes, an environment was created for this db. Here is the code we used to set it up:
    import com.sleepycat.db.EnvironmentConfig; // import assumed (BDB Java API); not shown in the original post
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setInitializeLocking(true);
    envConfig.setInitializeCache(true);
    envConfig.setAllowCreate(true);
    envConfig.setErrorStream(System.err);
    envConfig.setCacheSize(1024 * 1024 * 100); // 100 MB cache
    - I'd like an explanation of the reasons behind the performance difference between these two queries:
    Query 1:
    dbxml> time query 'for $doc in collection("VcObjStore")/doc
    where $doc[@type="VirtualMachine"]
    return $doc'
    552 objects... <snip>
    Time in seconds for command 'query': 0.031
    Query 2:
    dbxml> time query 'for $doc in collection("VcObjStore")/doc
    where $doc[@type="VirtualMachine"]
    return <val>{$doc}</val>'
    552 objects... <snip>
    Time in seconds for command 'query': 5.797
    - Any way to make the query #2 go as fast as #1?
    Thanks!

  • Memory consumption of queries in workbooks

    We have an issue with the execution of a workbook which contains several queries. The queries require a great deal of memory, which finally leads to a short dump (TSV_TNEW_PAGE_ALLOC_FAILED). We found that during execution of the workbook the memory is not released after a query has been executed, and therefore at some point the dump occurs. However, if the queries are refreshed manually one after the other in the workbook, the memory is released and the workbook can finally be executed with this workaround.
    My question is whether anyone has an idea if it is possible to apply a setting somewhere so that the queries release the memory after execution when they are all refreshed together in the workbook?
    Thanks a lot in advance for any hint & Kind regards,
    Hans-Jörg

    Hi,
    Try this:
    You may be able to work around the problem by increasing the free memory available via the parameter em/initial_size_MB (contact your Basis team or refer to note 835474).
    Also concentrate on the parameter ztta/roll_extension (refer to note 146289).
    Try increasing the parameter abap/heap_area_dia from transaction RZ11.
    Also check the following notes in detail as well,
    649327     Analysis of memory consumption
    425207     SAP memory management, current parameter ranges
    369726     TSV_TNEW_PAGE_ALLOC_FAILED
    185185     Application: Analysis of memory bottlenecks
    If the issue persists, please review SAP Note 779123 and the query design.
    check this,
    http://scn.sap.com/thread/288222
    http://www.sapfans.com/forums/viewtopic.php?f=3&t=109557
    regards,
    anand.

  • Memory Consumption: Start A Petition!

    I am using SQL Developer 4.0.0.13 Build MAIN 13.80.  I was praying that SQL Developer 4.0 would no longer use so much memory and, when doing so, slow to a crawl.  But that is not the case.
    Is there a way to start a "petition" to have the SQL Developer team focus on the product's memory usage? This problem has been there for years now, with many posts and no real answer.
    If there isn't a place to start a "petition" let's do something here that Oracle will respond to.
    Thank you

    Yes, at this point (after restarting) SQL Developer is functioning fine.  Windows reports 1+ GB of free memory.  I have 3 worksheets open, all connected to two different DB connections.  Each worksheet has 1 to 3 pinned query results.  My problem is that after working in SQL Developer for a day or so, with perhaps 10 worksheets open across 3 database connections, and having queried large data sets and performed large exports, it becomes unresponsive even after closing worksheets.  It appears to me that it does not clean up after itself.
    I will use Java VisualVM to compare memory consumption and see if it reports that SQL Developer is releasing memory but in the end I don't care about that.  I just need a responsive SQL Developer and if I need to close some worksheets at times I can understand doing so but at this time that does not help.

  • Query memory consumption

    Hi,
    I need an SQL expert here. May I know how much memory (RAM) a simple query like 'SELECT SUM(Balance) FROM OCRD' consumes?
    What about query like
    select (select sum(doctotal) from ordr) + (select sum(doctotal) from odln) + (select sum(doctotal) from oinv)
    How much memory would it normally take? The reason is that I have a query quite similar to this and it would be run quite often, so I wonder whether it is feasible to use this type of query without slowing the server to a crawl.
    Please note that the real query would include JOINS and such. Thanks
    Any information is appreciated

    Hi Melvin,
    Not sure I'd call myself an expert but I'll have a go at an answer
    I think you are going to need to set up a test environment and then stress test your solution to see what happens. There are so many different variables that affect the memory consumption that no-one is likely to be able to say just what the impact will be on your server. SQL Server, by default will allocate 1024Kb to each query but, of course, quite a number of factors will affect whether SQL needs more memory than this to execute a particular query (e.g. the number of joins, the locks created, whether the data is grouped or sorted, the size of the data etc etc). Also, SQL will release memory as soon as it can (based on its own algorithms) so a query that is run periodically has much less impact on the server than a query that will be run concurrently by multiple users. For these reasons, the impact can only really be assessed if you test it in a real-world scenario.
    If you've ever seen SQL Server memory usage when XL Reporter is running a very large report then you'll know that this is a very memory hungry operation. XL Reporter bombards SQL with a huge number of separate little queries and SQL Server starts grabbing significant amounts of memory to fulfill these queries. As the queries are coming so fast, SQL hasn't yet got around to releasing the memory used by previous queries so SQL instead grabs available memory from the server.
    You'll get better performance and scalability by using stored procedures, but SDK certification does not allow the use of SPs in the SBO databases.
    Hope this helps,
    Owen

  • BW data model and impacts to HANA memory consumption

    Hi All,
    As I consider how to create BW models where HANA is the DB for a BW application, it makes sense moving the reporting target from Cubes to DSOs.  Now the next logical progression of thought is that the DSO should store the lowest granularity of data(document level).  So a consolidated data model that reports on cross functional data would combine sales, inventory and purchasing data all being stored at document level.  In this scenario:
    Will a single report execution that requires data from all 3 DSOs use more memory than the 3 DSOs aggregated, say, at site/day/material? Lower granularity data = higher memory consumption per report execution?
    I'm thinking that more memory is required to aggregate the data in HANA before sending to BW.  Is aggregation still necessary to manage execution memory usage?
    Regards,
    Dae Jin

    Let  me rephrase.
    I got an EarlyWatch that said my dimensions on one of cube were too big.  I ran SAP_INFOCUBE_DESIGNS in SE38 in my development box and that confirmed it.
    So, I redesigned the cube, reactivated it and reloaded it.  I then ran SAP_INFOCUBE_DESIGNS again.  The cube doesn't even show up on it.  I suspect I have to trigger something in BW to make it populate for that cube.  How do I make that happen manually?
    Thanks.
    Dave

  • How to resolve a HeartBeat failure between the existing OMS and a new Agent after adding a new target node in Grid Control

    How to resolve a HeartBeat failure between the existing OMS and a new Agent after adding a new target node in Grid Control
    =========================================================================
    The following describes the solution for the case where, after installing Grid Control and managing nodes for a while,
    a new target node is added and the HeartBeat communication between the OMS on the Grid Control node and the Agent
    on the newly added node fails.
    Problem Description
    After adding a new node in Grid Control, the following error occurs when the
    emctl status agent command is run on that new node.
    Environment :
    mesdev01 : node with the newly added Agent
    mesdev02 : node with the 'Central Grid Agent' and the repository database
    Run the emctl status agent command on the newly added node as shown below.
    You will notice, however, that a HeartBeat failure to the OMS has occurred.
    [mesdev01:/oracle/app/oracle/product/agent10g/bin] emctl status agent
    Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
    Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
    Agent Version : 10.2.0.3.0
    OMS Version : 10.2.0.3.0
    Proto Version : 10.2.0.2.0
    Agent Home : /oracle/app/oracle/product/agent10g
    Agent binaries : /oracle/app/oracle/product/agent10g
    Agent Process ID : 10526
    Parent Process ID : 10511
    Agent URL : http://mesdev01:3872/emd/main/
    Repository URL : http://mesdev02:4889/em/upload/
    Started at : 2007-12-28 10:36:59
    Started by user : oracle
    Last Reload : 2007-12-28 10:36:59
    Last successful upload : (none)
    Last attempted upload : (none)
    Total Megabytes of XML files uploaded so far : 0.00
    Number of XML files pending upload : 287
    Size of XML files pending upload(MB) : 26.40
    Available disk space on upload filesystem : 40.44%
    Last attempted heartbeat to OMS : 2007-12-28 14:13:36
    Last successful heartbeat to OMS : unknown <===
    The notable error here is that the status shows as unknown when checking
    Last successful heartbeat to OMS.
    Error symptoms
    When running emctl status agent: Last successful heartbeat to OMS : unknown
    Or the agent shows the error below, saying that the status of the OMS cannot be determined.
    Note: "The OMS status is Unknown"
    Explanation
    This error mainly occurs when the agent has been newly installed on a node.
    If communication were working normally, the output should look like the following.
    The following is a successful result of running emctl status agent on the mesdev02 node,
    where the repository database resides.
    mesdev02:/u01/app/oracle/OracleHomes/agent10g/bin$ emctl status agent
    Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
    Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
    Agent Version : 10.2.0.3.0
    OMS Version : 10.2.0.3.0
    Proto Version : 10.2.0.2.0
    Agent Home : /u01/app/oracle/OracleHomes/agent10g
    Agent binaries : /u01/app/oracle/OracleHomes/agent10g
    Agent Process ID : 29307
    Parent Process ID : 29300
    Agent URL : https://mesdev02:3872/emd/main/
    Repository URL : https://mesdev02:1159/em/upload
    Started at : 2007-12-27 11:29:22
    Started by user : oracle
    Last Reload : 2007-12-27 17:57:18
    Last successful upload : 2007-12-28 15:02:09
    Total Megabytes of XML files uploaded so far : 48.78
    Number of XML files pending upload : 0
    Size of XML files pending upload(MB) : 0.00
    Available disk space on upload filesystem : 38.29%
    Last successful heartbeat to OMS : 2007-12-28 15:02:59 <====
    Agent is Running and Ready
    Cause
    This error occurs because the Agent process of the newly added node is not connected to the OMS
    on the main server where Grid Control is installed; there is a procedure that must be performed
    after adding a new node like this.
    If you check secure.log under OMS_HOME/sysman/log, you will see the error OMS is "secure locked".
    Because the OMS is "secure locked", the agent also needs to be secured.
    [ Preliminary checks ]
    1. Verify that both nodes are registered in the DNS server.
    2. On both nodes, add each other's IP address and hostname to the /etc/hosts file.
    If the error still occurs even though both of the above checks are satisfied, it is because the newly added node
    has not uploaded its own information to the OMS (i.e., the repository node).
    Uploading can be done simply with the emctl upload agent command shown below.
    emctl upload agent
    When the upload is performed, the information is stored in the central repository in XML form.
    Solution Description
    Of the steps below, only step 1 is performed on the OMS server where Grid Control is installed;
    steps 2 through 7 are performed on the newly added Agent node.
    1. On the OMS server (run on the server where Grid Control is installed):
    <OMS_HOME>/bin/emctl status oms -secure
    First, run emctl status oms -secure on the OMS node, mesdev02.
    mesdev02:/u01/app/oracle$$OMS_HOME/bin/emctl status oms -secure
    Oracle Enterprise Manager 10g Release 3 Grid Control
    Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
    Checking the security status of the OMS at location set in /u01/app/oracle/OracleHomes/oms10g/sysman/config/emoms.properties... Done.
    OMS is secure on HTTPS Port 1159
    2. Stop the Agent (run on the node where the Agent is installed):
    <AGENT_HOME>/bin/emctl stop agent
    [mesdev01:/oracle/app/oracle/product/agent10g/bin]./emctl stop agent
    Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
    Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
    Stopping agent ... stopped.
    3. Verify that no residual emagent processes are running:
    Use a command like the following to check whether any emagent processes remain.
    ps -ef|grep emagent
    [mesdev01:/oracle/app/oracle/product/agent10g/bin]ps -ef|grep emagent
    oracle 4353 3125 0 16:19:34 pts/tb 0:00 grep emagent
    4. If running in secure mode, please re-secure the agent:
    <AGENT_HOME>/bin/emctl secure agent
    Running emctl secure agent is necessary to enable secure communication for all agents.
    [ Caution ]
    The emctl secure agent command relates to the Grid Control installation:
    --> at the 'Specify Security Options' step
    --> under Management Service Security
    --> a password is set for the agents that are to use secure communication with the OMS, and
    --> if 'Require Secure Communication for all agents' was checked there,
    then after the install you must run the emctl secure agent command as shown below.
    (By default, 'Require Secure Communication for all agents' should be enabled.)
    [mesdev01:/oracle/app/oracle/product/agent10g/bin]./emctl secure agent
    Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
    Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
    Enter Agent Registration password :
    Agent is already stopped... Done.
    Securing agent... Started.
    Requesting an HTTPS Upload URL from the OMS... Done.
    Requesting an Oracle Wallet and Agent Key from the OMS... Done.
    Check if HTTPS Upload URL is accessible from the agent... Done.
    Configuring Agent for HTTPS in CENTRAL_AGENT mode... Done.
    EMD_URL set in /oracle/app/oracle/product/agent10g/sysman/config/emd.properties
    Securing agent... Successful.
    5. Once step 4 has completed successfully, start the Agent.
    Start Agent
    <AGENT_HOME>/bin/emctl start agent
    [mesdev01:/oracle/app/oracle/product/agent10g/bin]./emctl start agent
    Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
    Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
    Starting agent ................................. started but not ready.
    6. Then verify upload works
    <AGENT_HOME>/bin/emctl upload agent
    [mesdev01:/oracle/app/oracle/product/agent10g/bin]./emctl upload agent
    Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
    Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
    EMD upload error: Upload timedout before completion
    ==> The "timedout" message is only a warning and does not affect operation.
    7. Run the emctl status agent command again to double-check.
    Then run status of agent
    <AGENT_HOME>/bin/emctl status agent
    [mesdev01:/oracle/app/oracle/product/agent10g/bin]./emctl status agent
    Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
    Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
    Agent Version : 10.2.0.3.0
    OMS Version : 10.2.0.3.0
    Proto Version : 10.2.0.2.0
    Agent Home : /oracle/app/oracle/product/agent10g
    Agent binaries : /oracle/app/oracle/product/agent10g
    Agent Process ID : 4618
    Parent Process ID : 4609
    Agent URL : https://mesdev01:3872/emd/main/
    Repository URL : https://mesdev02:1159/em/upload
    Started at : 2008-01-02 16:21:03
    Started by user : oracle
    Last Reload : 2008-01-02 16:21:03
    Last successful upload : 2008-01-02 16:25:37
    Total Megabytes of XML files uploaded so far : 54.28
    Number of XML files pending upload : 40
    Size of XML files pending upload(MB) : 0.97
    Available disk space on upload filesystem : 37.22%
    Collection Status : Disabled by Upload Manager
    Last successful heartbeat to OMS : 2008-01-02 16:24:49 <== success!
    Agent is Running and Ready
    [mesdev01:/oracle/app/oracle/product/agent10g/bin]
    8. If Last successful heartbeat to OMS does not show success as above,
    collect all the output from steps 1 through 7 and open a support inquiry.
    9. Also, for debugging, please keep the following log files:
    <OMS_HOME>/sysman/log
    secure.log under <OMS_HOME>/sysman/log is especially important.
    <AGENT_HOME>/sysman/log
    References
    Note 458033.1
    Title:Problem: Agent Upload Fails: OMS Is Locked and Agent not Secured

  • Appendbytes with larger memory consumption

    Hi all,
    when I use appendBytes, my memory consumption becomes too large; how can I reduce it?
    I tried closing the NetStream, seek(0), and appendBytesAction, but it's not useful.
    Thanks.

    Yes, the URLStream, but I put the data into a ByteArray in the ProgressEvent, then use appendBytes with a Timer.
