Implementation of Hash Bucket

Hi,
I'm working on a 10.2.0.2 RAC database and have heard that implementing hash buckets can give better database performance.
Can you share some knowledge on that, and also the process of implementing a hash bucket?
Thanks in advance..
Rgds,
Anjan

That job description is more than what most APPS DBAs would do. As the previous poster said, the APPS DBA would install and configure the system. Then the implementation team of technical and functional experts would complete the application setup/configuration and convert any legacy data into the newly created Oracle Financials system. If this Oracle Financials system also includes Manufacturing, Order Management, Human Resources, Advanced Supply Chain Planning, Sales, Service, and Procurement, you are likely to have a large team of specialists implementing Oracle E-Business Suite, as most resources are not functional and technical experts in all modules.

Similar Messages

  • To implement a hash structure for multidimensional data

    hi
    I have to implement a hash structure for multidimensional data, i.e., something like a grid file,
    and then find the data points that fall within a given range of the grid file.
    Could anyone help me ....
    Thanks

    > hello
    > Thanks for your quick reply and sorry for the bad explanation. What I meant was a 2-dimensional graph
    I'm still not sure I know what you're after. I assume what you're saying here is that you have a collection of vertices and edges, and they all lie in the same plane. Is that right?
    > which was divided into some fragments.
    So, does this mean you have a collection of subgraphs of your main graph?
    > We can insert some points dynamically, based on their coordinate axes, into one of the fragments.
    And here do you mean that you have some rule that allows you to add another vertex and associated edge list to a given fragment at runtime?
    > It is similar to the implementation in the links that you had sent me.
    > Could you give me some outline to implement it ...
    I don't have enough information to do that.
    You could define a data structure to represent one Vertex/Edge-List pair. But then, maybe you don't need to work with vertices so much as edges? Then you could define a data structure to represent a Vertex, then another to represent an Edge, which would be two vertices. And you might want to think of your "main" graph as a single entity, in which case you will need a field to show which fragment this vertex (or edge) belongs to. But then it might make more sense to have separate data structures for all fragments, in which case you wouldn't.
    Too many open questions. What, exactly, do you want to do with this thing?
    Good Luck
    Lee
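
    To make the outline above concrete, here is a minimal sketch (a hedged illustration only; all class and field names are made up) of Vertex and Edge data structures where each vertex records which fragment of the grid it belongs to:

        import java.util.ArrayList;
        import java.util.List;

        // One vertex of the planar graph, tagged with the fragment it currently lives in.
        class Vertex {
            final double x, y;                               // coordinates in the plane
            int fragmentId;                                  // which fragment of the grid file this vertex belongs to
            final List<Edge> edges = new ArrayList<Edge>();  // the vertex's edge list

            Vertex(double x, double y, int fragmentId) {
                this.x = x;
                this.y = y;
                this.fragmentId = fragmentId;
            }
        }

        // An edge is simply a pair of vertices.
        class Edge {
            final Vertex from;
            final Vertex to;

            Edge(Vertex from, Vertex to) {
                this.from = from;
                this.to = to;
            }
        }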

  • Fundamental question on etherchannel hashing.

    Folks,
    I had some very basic questions on the way hashing takes place on Cisco switches connected to each other over a port channel.
    Consider that 2 links are used between the switches, namely ports Gi0/1 and Gi0/2.
    1) I understand that whether this is an L2 or an L3 etherchannel, the default mode of hashing is still the source and destination MAC address - am I correct?
    2) So now, if the source IP 1.1.1.1 needs to reach 1.1.1.2, how would the hashing calculation take place? Again, my understanding is that this is done on the basis of the last 3 bits of the IP address field: 001 and 010 come to 011, which is 3, and then, based on the number of ports, it selects which interface to assign.
    I still do not understand what the output below has to do with the selection:
    Core-01#sh interfaces port-channel 40 etherchannel
    Age of the Port-channel   = 204d:07h:40m:10s
    Logical slot/port   = 14/3          Number of ports = 2
    GC                  = 0x00000000      HotStandBy port = null
    Passive port list   = Te1/4 Te2/4
    Port state          = Port-channel L3-Ag Ag-Inuse
    Protocol            =    -
    Port security       = Disabled
    Fast-switchover     = disabled
    Fast-switchover Dampening = disabled
    Load share deferral = disabled
    Ports in the Port-channel:
    Index   Load      Port          EC state       No of bits
    ------+------+------------+------------------+-----------
     0      E4            Te1/4                 On   4
     1      1B            Te2/4                 On   4
    Time since last port bundled:    204d:07h:19m:44s    Te2/4
    Time since last port Un-bundled: 204d:07h:19m:54s    Te1/4
    Core-01#
    What does that load value mean? How is it calculated? And does it have a role to play in the selection of the port?
    Thanks,
    Nik

    So it looks like how to divide the bits is decided by the switch.
    Ah, yes - reread my original post for the answer to your second question.  ;)
    An actual hashing algorithm can be rather complicated.  What it tries to do is "fairly" divide the input values across the number of hash buckets.  For example, if you have two Etherchannel links, it will try to hash the chosen attribute such that 50% of the traffic goes to one link and 50% to the other link.
    For example, assuming you hash on source IP, how do you hash 192.168.1.1, .2, .3 and .4?  A very simple hash algorithm could just go "odd"/"even", but what if your IPs were .2, .4, .6 and .8?  A more complex algorithm would try to still hash these IPs equally across your two links.
    Again, hashing algorithms can get complex, search the Internet on the subject, and you may better appreciate this.
    Because of the possibility of such complexity, one vendor's hashing might be better than another vendor's, so they often will keep their actual algorithm secret.
    No matter how good a hashing algorithm might be, though, it will always hash the same attribute value to the same bucket.  For example, if you're still using source IP, and only source IP, as your hash attribute, all traffic from the same host is going to use the same link.  This is why the choice of hash attributes is important.  Choices vary per Cisco platform.  Normally, you want the hash attributes to differ between every different traffic flow.
    Etherchannel does not analyze actual link loading.  If there are only two active flows, and both hash to the same link, that link might become saturated, while another link is completely idle.
    Short-term Etherchannel usage, unless you're dealing with lots of flows, is often not very balanced.  Longer-term Etherchannel usage should be almost balanced.  If not, most likely you're not using the optimal hash attributes for your traffic.
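
    To make the bucket idea concrete, here is a toy sketch in Java (purely illustrative, and certainly not Cisco's actual, proprietary algorithm): it folds the low-order bits of the source and destination IP addresses into one of the member links.

        // Toy etherchannel-style hash: XOR the addresses, keep the low 3 bits
        // (8 buckets), then fold the buckets onto the available member links.
        public class EtherchannelHashSketch {

            static int pickLink(int srcIp, int dstIp, int links) {
                int bucket = (srcIp ^ dstIp) & 0x7;  // 8 hash buckets
                return bucket % links;               // map a bucket onto a member link
            }

            public static void main(String[] args) {
                int dst = (192 << 24) | (168 << 16) | (1 << 8) | 100;                // 192.168.1.100
                for (int lastOctet = 1; lastOctet <= 4; lastOctet++) {
                    int src = (192 << 24) | (168 << 16) | (1 << 8) | lastOctet;      // 192.168.1.x
                    System.out.println("192.168.1." + lastOctet + " -> link " + pickLink(src, dst, 2));
                }
            }
        }

    With only two links, a scheme this naive degenerates to odd/even on the XOR of the last octets, which is exactly why real implementations use more elaborate hash functions and more attributes (MAC, IP, L4 ports) to spread flows.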

  • Using JHS tables and hashing with salt algorithms for Weblogic security

    We are going to do our first enterprise ADF/JHeadstart application. For security part, we are going to do the following:
    1. We will use JHS tables as authentication for ADF security.
    2. We will use JAAS as authentication and Custom as authorization.
    3. We need to use the JHeadStart security service screen in our application to manage users, roles and permissions, instead of doing user/group management within Weblogic.
    4. We will create a new Weblogic SQL Authentication Provider.
    5. We will store the salt with the password in the database table.
    6. We will use Oracle MDS.
    There are some blogs online giving detailed steps on how to create a Weblogic SQL Authentication Provider and use the JHS tables as authentication for ADF security. I am not sure about the implementation of the hashing-with-salt algorithms, as ideally we'd like to use the JHS security service screen in the application to manage users, roles and permissions, rather than using Weblogic for user/group management. We are going to try a JMX client to interact with the Weblogic API, which looks like a flexible approach. Does anybody have experience working with JMX, the SQL Authentication Provider and hashing-with-salt algorithms? Just want to make sure we are on the right track.
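
    For point 5 on storing the salt with the password, a minimal generic sketch of salted hashing (SHA-256 over salt + password) is shown below. This is an illustration only and not necessarily the exact storage format the Weblogic SQL Authentication Provider expects, so the provider's configured password algorithm/style still has to be matched:

        import java.security.MessageDigest;
        import java.security.SecureRandom;

        public class SaltedHashSketch {

            // Generate a fresh random salt to be stored alongside the password hash.
            public static byte[] newSalt() {
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt);
                return salt;
            }

            // Hash the password together with its salt; store both salt and digest in the table.
            public static byte[] hash(char[] password, byte[] salt) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(salt);                                    // salt first
                md.update(new String(password).getBytes("UTF-8"));  // then the password
                return md.digest();
            }
        }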
    Thanks,
    Sarah

    To be clear, we are planning on using a JMX client at the Entity level, using custom JHS entity classes.
    BradW working with Sarah

  • Parallel Hash Join always swapping to TEMP

    Hi,
    I've experienced some strange behaviour on Oracle 9.2.0.5 recently: a simple query hash-joining two tables - the smaller with 16k records/1 MB and the bigger with 2.5m records/1.5 GB - swaps to TEMP when launched in parallel mode (4 sets of PQ slaves). What is strange is that serial execution runs as expected - an in-memory hash join occurs. It's worth adding that both parallel and serial execution properly select the smaller table as the inner one, but the parallel query always decides to buffer the source data (no matter how big it is).
    To be more precise: all table stats are gathered, I have enough PGA memory assigned to the queries (WORKAREA_SIZE_POLICY=AUTO, PGA_AGGREGATE_TARGET=6GB) and I properly analyze the results. Even the hidden parameter _SMM_PX_MAX_SIZE is properly set to about 2GB; the issue is that parallel execution still decides to swap (even though the inner data size for each slave is about 220KB!).
    I dug into the traces (10104 event) and found a substantial difference between serial and parallel execution. It looks like some internal flag orders the PQ slaves to always buffer the data; here is what I found in a PQ slave trace:
    HASH JOIN STATISTICS (INITIALIZATION)
    Original memory: 4428800
    Memory after all overhead: 4283220
    Memory for slots: 3809280
    Calculated overhead for partitions and row/slot managers: 473940
    Hash-join fanout: 8
    Number of partitions: 9
    Number of slots: 15
    Multiblock IO: 31
    Block size(KB): 8
    Cluster (slot) size(KB): 248
    Hash-join fanout (manual): 8
    Cluster/slot size(KB) (manual): 280
    Minimum number of bytes per block: 8160
    Bit vector memory allocation(KB): 128
    Per partition bit vector length(KB): 16
    Maximum possible row length: 1455
    Estimated build size (KB): 645
    Estimated Row Length (includes overhead): 167
    Immutable Flags:
    BUFFER the output of the join for Parallel Query
    kxhfSetPhase: phase=BUILD
    kxhfAddChunk: add chunk 0 (sz=32) to slot table
    kxhfAddChunk: chunk 0 (lbs=800003ff640ebb50, slotTab=800003ff640ebce8) successfuly added
    kxhfSetPhase: phase=PROBE_1
    The "BUFFER the output of the join for Parallel Query" line is the part that is not present in serial mode. Unfortunately I cannot find anything that could help identify the reason or the setting that drives this behaviour :(
    Best regards
    Bazyli
    Edited by: user10419027 on Oct 13, 2008 3:53 AM

    Jonathan,
    Distribution seems to be as expected (HASH/HASH), please have a look on the query plan:
    PLAN_TABLE_OUTPUT
    | Id  | Operation            |  Name                         | Rows  | Bytes | Cost  |  TQ    |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT     |                               |   456K|    95M|   876 |        |      |            |
    |*  1 |  HASH JOIN           |                               |   456K|    95M|   876 | 43,02  | P->S | QC (RAND)  |
    |   2 |   TABLE ACCESS FULL  | SH30_8700195_9032_0_TMP_TEST  | 16555 |   468K|    16 | 43,00  | P->P | HASH       |
    |   3 |   TABLE ACCESS FULL  | SH30_8700195_9031_0_TMP_TEST  |  2778K|   503M|   860 | 43,01  | P->P | HASH       |
    Predicate Information (identified by operation id):
       1 - access(NVL("A"."PROD_ID",'NULL!')=NVL("B"."PROD_ID",'NULL!') AND
                  NVL("A"."PROD_UNIT_OF_MEASR_ID",'NULL!')=NVL("B"."PROD_UNIT_OF_MEASR_ID",'NULL!'))Let me also share with you trace files from parallel and serial execution.
    First, serial execution (only 10104 event details):
    Dump file /opt/oracle/admin/cdwep4/udump/cdwep401_ora_18729.trc
    Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.5.0 - Production
    ORACLE_HOME = /opt/oracle/product/9.2.0.5
    System name:     HP-UX
    Node name:     ethp1018
    Release:     B.11.11
    Version:     U
    Machine:     9000/800
    Instance name: cdwep401
    Redo thread mounted by this instance: 1
    Oracle process number: 100
    Unix process pid: 18729, image: oracle@ethp1018 (TNS V1-V3)
    kxhfInit(): enter
    kxhfInit(): exit
    *** HASH JOIN STATISTICS (INITIALIZATION) ***
    Original memory: 4341760
    Memory after all overhead: 4163446
    Memory for slots: 3301376
    Calculated overhead for partitions and row/slot managers: 862070
    Hash-join fanout: 8
    Number of partitions: 8
    Number of slots: 13
    Multiblock IO: 31
    Block size(KB): 8
    Cluster (slot) size(KB): 248
    Hash-join fanout (manual): 8
    Cluster/slot size(KB) (manual): 240
    Minimum number of bytes per block: 8160
    Bit vector memory allocation(KB): 128
    Per partition bit vector length(KB): 16
    Maximum possible row length: 1455
    Estimated build size (KB): 1083
    Estimated Row Length (includes overhead): 67
    # Immutable Flags:
    kxhfSetPhase: phase=BUILD
    kxhfAddChunk: add chunk 0 (sz=32) to slot table
    kxhfAddChunk: chunk 0 (lbs=800003ff6c063b20, slotTab=800003ff6c063cb8) successfuly added
    kxhfSetPhase: phase=PROBE_1
    qerhjFetch: max build row length (mbl=110)
    *** END OF HASH JOIN BUILD (PHASE 1) ***
      Revised row length: 68
      Revised row count: 16555
      Revised build size: 1089KB
    kxhfResize(enter): resize to 12 slots (numAlloc=8, max=13)
    kxhfResize(exit): resized to 12 slots (numAlloc=8, max=12)
      Slot table resized: old=13 wanted=12 got=12 unload=0
    *** HASH JOIN BUILD HASH TABLE (PHASE 1) ***
    Total number of partitions: 8
    Number of partitions which could fit in memory: 8
    Number of partitions left in memory: 8
    Total number of slots in in-memory partitions: 8
    Total number of rows in in-memory partitions: 16555
       (used as preliminary number of buckets in hash table)
    Estimated max # of build rows that can fit in avail memory: 55800
    ### Partition Distribution ###
    Partition:0    rows:2131       clusters:1      slots:1      kept=1
    Partition:1    rows:1975       clusters:1      slots:1      kept=1
    Partition:2    rows:1969       clusters:1      slots:1      kept=1
    Partition:3    rows:2174       clusters:1      slots:1      kept=1
    Partition:4    rows:2041       clusters:1      slots:1      kept=1
    Partition:5    rows:2092       clusters:1      slots:1      kept=1
    Partition:6    rows:2048       clusters:1      slots:1      kept=1
    Partition:7    rows:2125       clusters:1      slots:1      kept=1
    *** (continued) HASH JOIN BUILD HASH TABLE (PHASE 1) ***
    Revised number of hash buckets (after flushing): 16555
    Allocating new hash table.
    *** (continued) HASH JOIN BUILD HASH TABLE (PHASE 1) ***
    Requested size of hash table: 4096
    Actual size of hash table: 4096
    Number of buckets: 32768
    kxhfResize(enter): resize to 14 slots (numAlloc=8, max=12)
    kxhfResize(exit): resized to 14 slots (numAlloc=8, max=14)
      freeze work area size to: 4357K (14 slots)
    *** (continued) HASH JOIN BUILD HASH TABLE (PHASE 1) ***
    Total number of rows (may have changed): 16555
    Number of in-memory partitions (may have changed): 8
    Final number of hash buckets: 32768
    Size (in bytes) of hash table: 262144
    kxhfIterate(end_iterate): numAlloc=8, maxSlots=14
    *** (continued) HASH JOIN BUILD HASH TABLE (PHASE 1) ***
    ### Hash table ###
    # NOTE: The calculated number of rows in non-empty buckets may be smaller
    #       than the true number.
    Number of buckets with   0 rows:      21129
    Number of buckets with   1 rows:       8755
    Number of buckets with   2 rows:       2024
    Number of buckets with   3 rows:        433
    Number of buckets with   4 rows:        160
    Number of buckets with   5 rows:         85
    Number of buckets with   6 rows:         69
    Number of buckets with   7 rows:         41
    Number of buckets with   8 rows:         32
    Number of buckets with   9 rows:         18
    Number of buckets with between  10 and  19 rows:         21
    Number of buckets with between  20 and  29 rows:          1
    Number of buckets with between  30 and  39 rows:          0
    Number of buckets with between  40 and  49 rows:          0
    Number of buckets with between  50 and  59 rows:          0
    Number of buckets with between  60 and  69 rows:          0
    Number of buckets with between  70 and  79 rows:          0
    Number of buckets with between  80 and  89 rows:          0
    Number of buckets with between  90 and  99 rows:          0
    Number of buckets with 100 or more rows:          0
    ### Hash table overall statistics ###
    Total buckets: 32768 Empty buckets: 21129 Non-empty buckets: 11639
    Total number of rows: 16555
    Maximum number of rows in a bucket: 24
    Average number of rows in non-empty buckets: 1.422373
    =====================
    .... (lots of fetching) ....
    qerhjFetch: max probe row length (mpl=0)
    qerhjFreeSpace(): free hash-join memory
    kxhfRemoveChunk: remove chunk 0 from slot table
    And finally, PQ slave output (only one trace, please note the Immutable Flag that I believe orders Oracle to buffer to TEMP):
    Dump file /opt/oracle/admin/cdwep4/bdump/cdwep401_p002_4640.trc
    Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.5.0 - Production
    ORACLE_HOME = /opt/oracle/product/9.2.0.5
    System name:     HP-UX
    Node name:     ethp1018
    Release:     B.11.11
    Version:     U
    Machine:     9000/800
    Instance name: cdwep401
    Redo thread mounted by this instance: 1
    Oracle process number: 86
    Unix process pid: 4640, image: oracle@ethp1018 (P002)
    kxhfInit(): enter
    kxhfInit(): exit
    *** HASH JOIN STATISTICS (INITIALIZATION) ***
    Original memory: 4428800
    Memory after all overhead: 4283220
    Memory for slots: 3809280
    Calculated overhead for partitions and row/slot managers: 473940
    Hash-join fanout: 8
    Number of partitions: 9
    Number of slots: 15
    Multiblock IO: 31
    Block size(KB): 8
    Cluster (slot) size(KB): 248
    Hash-join fanout (manual): 8
    Cluster/slot size(KB) (manual): 280
    Minimum number of bytes per block: 8160
    Bit vector memory allocation(KB): 128
    Per partition bit vector length(KB): 16
    Maximum possible row length: 1455
    Estimated build size (KB): 645
    Estimated Row Length (includes overhead): 167
    # Immutable Flags:
      BUFFER the output of the join for Parallel Query
    kxhfSetPhase: phase=BUILD
    kxhfAddChunk: add chunk 0 (sz=32) to slot table
    kxhfAddChunk: chunk 0 (lbs=800003ff640ebb50, slotTab=800003ff640ebce8) successfuly added
    kxhfSetPhase: phase=PROBE_1
    qerhjFetch: max build row length (mbl=96)
    *** END OF HASH JOIN BUILD (PHASE 1) ***
      Revised row length: 54
      Revised row count: 4203
      Revised build size: 221KB
    kxhfResize(enter): resize to 16 slots (numAlloc=8, max=15)
    kxhfResize(exit): resized to 16 slots (numAlloc=8, max=16)
      Slot table resized: old=15 wanted=16 got=16 unload=0
    *** HASH JOIN BUILD HASH TABLE (PHASE 1) ***
    Total number of partitions: 8
    Number of partitions which could fit in memory: 8
    Number of partitions left in memory: 8
    Total number of slots in in-memory partitions: 8
    Total number of rows in in-memory partitions: 4203
       (used as preliminary number of buckets in hash table)
    Estimated max # of build rows that can fit in avail memory: 85312
    ### Partition Distribution ###
    Partition:0    rows:537        clusters:1      slots:1      kept=1
    Partition:1    rows:554        clusters:1      slots:1      kept=1
    Partition:2    rows:497        clusters:1      slots:1      kept=1
    Partition:3    rows:513        clusters:1      slots:1      kept=1
    Partition:4    rows:498        clusters:1      slots:1      kept=1
    Partition:5    rows:543        clusters:1      slots:1      kept=1
    Partition:6    rows:547        clusters:1      slots:1      kept=1
    Partition:7    rows:514        clusters:1      slots:1      kept=1
    *** (continued) HASH JOIN BUILD HASH TABLE (PHASE 1) ***
    Revised number of hash buckets (after flushing): 4203
    Allocating new hash table.
    *** (continued) HASH JOIN BUILD HASH TABLE (PHASE 1) ***
    Requested size of hash table: 1024
    Actual size of hash table: 1024
    Number of buckets: 8192
    kxhfResize(enter): resize to 18 slots (numAlloc=8, max=16)
    kxhfResize(exit): resized to 18 slots (numAlloc=8, max=18)
      freeze work area size to: 5812K (18 slots)
    *** (continued) HASH JOIN BUILD HASH TABLE (PHASE 1) ***
    Total number of rows (may have changed): 4203
    Number of in-memory partitions (may have changed): 8
    Final number of hash buckets: 8192
    Size (in bytes) of hash table: 65536
    kxhfIterate(end_iterate): numAlloc=8, maxSlots=18
    *** (continued) HASH JOIN BUILD HASH TABLE (PHASE 1) ***
    ### Hash table ###
    # NOTE: The calculated number of rows in non-empty buckets may be smaller
    #       than the true number.
    Number of buckets with   0 rows:       5284
    Number of buckets with   1 rows:       2177
    Number of buckets with   2 rows:        510
    Number of buckets with   3 rows:        104
    Number of buckets with   4 rows:         51
    Number of buckets with   5 rows:         14
    Number of buckets with   6 rows:         14
    Number of buckets with   7 rows:         13
    Number of buckets with   8 rows:         12
    Number of buckets with   9 rows:          4
    Number of buckets with between  10 and  19 rows:          9
    Number of buckets with between  20 and  29 rows:          0
    Number of buckets with between  30 and  39 rows:          0
    Number of buckets with between  40 and  49 rows:          0
    Number of buckets with between  50 and  59 rows:          0
    Number of buckets with between  60 and  69 rows:          0
    Number of buckets with between  70 and  79 rows:          0
    Number of buckets with between  80 and  89 rows:          0
    Number of buckets with between  90 and  99 rows:          0
    Number of buckets with 100 or more rows:          0
    ### Hash table overall statistics ###
    Total buckets: 8192 Empty buckets: 5284 Non-empty buckets: 2908
    Total number of rows: 4203
    Maximum number of rows in a bucket: 16
    Average number of rows in non-empty buckets: 1.445323
    kxhfWrite: hash-join is spilling to disk
    kxhfWrite: Writing dba=950281 slot=8 part=8
    kxhfWrite: Writing dba=950312 slot=9 part=8
    kxhfWrite: Writing dba=950343 slot=10 part=8
    kxhfWrite: Writing dba=950374 slot=11 part=8
    .... (lots of writing) ....
    kxhfRead(): Reading dba=950281 into slot=15
    kxhfIsDone: waiting slot=15 lbs=800003ff640ebb50
    kxhfRead(): Reading dba=950312 into slot=16
    kxhfIsDone: waiting slot=16 lbs=800003ff640ebb50
    kxhfRead(): Reading dba=950343 into slot=17
    kxhfFreeSlots(800003ff7c068918): all=0 alloc=18 max=18
      EmptySlots:15 8 9 10 11 12 13
      PendingSlots:
    kxhfIsDone: waiting slot=17 lbs=800003ff640ebb50
    kxhfRead(): Reading dba=950374 into slot=15
    kxhfFreeSlots(800003ff7c068918): all=0 alloc=18 max=18
      EmptySlots:16 8 9 10 11 12 13
      PendingSlots:
    .... (lots of reading) ....
    qerhjFetchPhase2(): building a hash table
    kxhfFreeSlots(800003ff7c068980): all=1 alloc=18 max=18
      EmptySlots:2 4 6 1 0 7 5 3 14 17 16 15 8 9 10 11 12 13
      PendingSlots:
    qerhjFreeSpace(): free hash-join memory
    kxhfRemoveChunk: remove chunk 0 from slot table
    Why do you think it's surprising that Oracle utilizes TEMP? Based on the traces, Oracle seems to be very sure it should spill to disk. I believe the key to the answer is this immutable flag printing "BUFFER the output of the join for Parallel Query" - as I mentioned in one of my previous posts, it's the opposite of "Not BUFFER(execution) output of the join for PQ", which appears in some traces found on the internet.
    Best regards
    Bazyli

  • How to implement hashCode() function

    Hello everybody
    I need to override the hashCode method in my object, which contains 4 attributes of type Long. I need to implement a hash code compatible with my equals function.
    public class MyObject {
        Long attr1;
        Long attr2;
        Long attr3;
        Long attr4;

        public boolean equals(Object obj) {
            if (this == obj)
                return true;
            if (obj == null || this.getClass() != obj.getClass())
                return false;
            MyObject object = (MyObject) obj;
            return (this.attr1 != null && this.attr1.equals(object.attr1) &&
                    this.attr2 != null && this.attr2.equals(object.attr2) &&
                    this.attr3 != null && this.attr3.equals(object.attr3) &&
                    this.attr4 != null && this.attr4.equals(object.attr4));
        }

        public int hachCode() {
            return 0; //??????????????
        }
    }
    Can somebody help me? I've heard something about HashCodeBuilder.
    Thanks

    It is hashCode(), not hachCode().
    You can consider returning the sum of attr1.hashCode() + attr2.hashCode() + etc. You should try to "guarantee" as much as possible that hashCode() returns a unique identifier for the MyObject, and computes it quickly, so that it improves the performance of the hashmaps/sets/tables used to store those objects. If you put a new MyObject into such a hashmap/set/table, it will first check, based on the hash code, whether the object already exists. If it does, it will then invoke equals() to check whether they are really equal (which is generally more expensive).
    Also see http://java.sun.com/javase/6/docs/api/java/lang/Object.html#hashCode()
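
    A hedged sketch of that advice applied to the poster's class, combining the four attributes' hash codes with the conventional 31 multiplier rather than a plain sum, and keeping hashCode() consistent with equals(); the field names follow the original post:

        public class MyObject {
            Long attr1;
            Long attr2;
            Long attr3;
            Long attr4;

            @Override
            public boolean equals(Object obj) {
                if (this == obj)
                    return true;
                if (obj == null || getClass() != obj.getClass())
                    return false;
                MyObject other = (MyObject) obj;
                return eq(attr1, other.attr1) && eq(attr2, other.attr2)
                    && eq(attr3, other.attr3) && eq(attr4, other.attr4);
            }

            @Override
            public int hashCode() {
                int result = 17;
                result = 31 * result + (attr1 == null ? 0 : attr1.hashCode());
                result = 31 * result + (attr2 == null ? 0 : attr2.hashCode());
                result = 31 * result + (attr3 == null ? 0 : attr3.hashCode());
                result = 31 * result + (attr4 == null ? 0 : attr4.hashCode());
                return result;
            }

            // Null-safe comparison helper so equals() and hashCode() agree on null attributes.
            private static boolean eq(Object a, Object b) {
                return a == null ? b == null : a.equals(b);
            }
        }

    Note that this version also treats two null attributes as equal, which differs slightly from the equals() in the original post.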

  • Windows 8.1 fault bucket

    I keep getting this event 1001, a Windows Error Reporting fault bucket.  I have no idea what driver or app is causing this problem. Thank you for any help you can provide!
    Fault bucket -824565334, type 5
    Event Name: AEAPPINVW8
    Response: Not available
    Cab Id: 0
    Problem signature:
    P1: 101
    P2: 2
    P3: 6.3.0.0
    P4: 1033
    P5: 68
    P6:
    P7:
    P8:
    P9:
    P10:
    Attached files:
    These files may be available here:
    Analysis symbol:
    Rechecking for solution: 0
    Report Id: b09aeac4-7cdc-11e3-bea8-d850e605b79c
    Report Status: 0
    Hashed bucket: 6c3ac182050ca55d3e62cebf74de99f8
    <Event xmlns="">
    - <System>
      <Provider Name="Windows Error Reporting" />
      <EventID Qualifiers="0">1001</EventID>
      <Level>4</Level>
      <Task>0</Task>
      <Keywords>0x80000000000000</Keywords>
      <TimeCreated SystemTime="2014-01-14T05:28:26.000000000Z" />
      <EventRecordID>3823</EventRecordID>
      <Channel>Application</Channel>
      <Computer>ANewPC</Computer>
      <Security />
      </System>
    - <EventData>
      <Data>-824565334</Data>
      <Data>5</Data>
      <Data>AEAPPINVW8</Data>
      <Data>Not available</Data>
      <Data>0</Data>
      <Data>101</Data>
      <Data>2</Data>
      <Data>6.3.0.0</Data>
      <Data>1033</Data>
      <Data>68</Data>
      <Data />
      <Data />
      <Data />
      <Data />
      <Data />
      <Data />
      <Data />
      <Data />
      <Data>0</Data>
      <Data>b09aeac4-7cdc-11e3-bea8-d850e605b79c</Data>
      <Data>0</Data>
      <Data>6c3ac182050ca55d3e62cebf74de99f8</Data>
      </EventData>
      </Event>

    When this happens, I am usually away from the computer or don't notice the problem when it happens.  I receive no pop-ups or errors except for what is in the event logs.
    This may help:
    Right after the fault bucket occurs, at exactly the same second, these three events are logged every time:
    1)event 102 ESENT
    svchost (4292) Instance: The database engine (6.03.9600.0000) is starting a new instance (0).
    2)event 105 ESENT
    svchost (4292) Instance: The database engine started a new instance (0). (Time=0 seconds)
     Internal Timing Sequence: [1] 0.000, [2] 0.000, [3] 0.000, [4] 0.016, [5] 0.000, [6] 0.000, [7] 0.000, [8] 0.000, [9] 0.000, [10] 0.000.
    3)event 326 ESENT
    svchost (4292) Instance: The database engine attached a database (1, C:\ProgramData\Microsoft\Windows\AppRepository\PackageRepository.edb). (Time=0 seconds) Internal Timing Sequence: [1] 0.000, [2] 0.000, [3] 0.000, [4] 0.000, [5] 0.000, [6] 0.000, [7] 0.000,
    [8] 0.000, [9] 0.000, [10] 0.000, [11] 0.000, [12] 0.000.
    Saved Cache: 1 0
    Then exactly 5 minutes later, these two events are recorded:
    1)event 327 ESENT
    svchost (4292) Instance: The database engine detached a database (1, C:\ProgramData\Microsoft\Windows\AppRepository\PackageRepository.edb). (Time=0 seconds) Internal Timing Sequence: [1] 0.000, [2] 0.000, [3] 0.000, [4] 0.000, [5] 0.000, [6] 0.015, [7] 0.000,
    [8] 0.000, [9] 0.000, [10] 0.000, [11] 0.000, [12] 0.000.
    Revived Cache: 0 0
    2)event 103 ESENT
    svchost (4292) Instance: The database engine stopped the instance (0).
     Dirty Shutdown: 0
     Internal Timing Sequence: [1] 0.000, [2] 0.000, [3] 0.000, [4] 0.000, [5] 0.000, [6] 0.000, [7] 0.000, [8] 0.000, [9] 0.000, [10] 0.000, [11] 0.000, [12] 0.000, [13] 0.000, [14] 0.000, [15] 0.000.
    I checked scheduled tasks and found these tasks occur at -exactly- the same time:
    Windows > Memory Diagnostics > ProcessMemoryDiagnosticEvents > The operator or administrator has refused the request (0x800710E0)
    Windows  > Memory Diagnostics > RunFullMemoryDiagnostics > The process terminated unexpectedly (0x8007042B)
    Windows > Servicing > StartComponentCleanup The last run was terminated by the user (0x41306)
    Windows > TaskScheduler > Regular Maintenance The process terminated unexpectedly (0x8007042B)
    Windows > Time Synchronization > Synchronize (0x420)
    Windows > WS > License Validation The operation completed successfully (0x0)
    Any assistance is appreciated! Thank you!!

  • MyInteger class - compile error - method doesn't implement Comparable method

    I am trying to test how the code for a hash table works - I have 4 classes:
    Hashable interface
    QuadraticProbableHashTable
    HashEntry
    MyInteger
    Everything compiles except for one error that says "Class must implement the inherited abstract method Comparable.compareTo(Object)."
    I have a compareTo method with the same signature as the one in the Comparable interface, in MyInteger.java, which is where the problem is. However, I still get the same error.
        /**
         * Wrapper class for use with generic data structures.
         * Mimics Integer.
         */
        public final class MyInteger implements Comparable, Hashable
        {
            /**
             * Construct the MyInteger object with initial value 0.
             */
            public MyInteger( )
            {
                this( 0 );
            }

            /**
             * Construct the MyInteger object.
             * @param x the initial value.
             */
            public MyInteger( int x )
            {
                value = x;
            }

            /**
             * Gets the stored int value.
             * @return the stored value.
             */
            public int intValue( )
            {
                return value;
            }

            /**
             * Implements the toString method.
             * @return the String representation.
             */
            public String toString( )
            {
                return Integer.toString( value );
            }

            /**
             * Implements the compareTo method.
             * @param rhs the other MyInteger object.
             * @return 0 if two objects are equal;
             *     less than zero if this object is smaller;
             *     greater than zero if this object is larger.
             * @exception ClassCastException if rhs is not
             *     a MyInteger.
             */
            public int compareTo( Comparable rhs )
            {
                return value < ((MyInteger)rhs).value ? -1 :
                       value == ((MyInteger)rhs).value ? 0 : 1;
            }

            /**
             * Implements the equals method.
             * @param rhs the second MyInteger.
             * @return true if the objects are equal, false otherwise.
             * @exception ClassCastException if rhs is not
             *     a MyInteger.
             */
            public boolean equals( Object rhs )
            {
                return rhs != null && value == ((MyInteger)rhs).value;
            }

            /**
             * Implements the hash method.
             * @param tableSize the hash table size.
             * @return a number between 0 and tableSize-1.
             */
            public int hash( int tableSize )
            {
                if( value < 0 )
                    return -value % tableSize;
                else
                    return value % tableSize;
            }

            private int value;
        }

    > You might want to also allow for cases where the value passed in is null, or the argument to the method is NOT a MyInteger object :-)
    Just a small note - the javadoc for Comparable#compareTo says the following:
    Throws:
    ClassCastException - if the specified object's type prevents it from being compared to this Object.
    So it's perfectly OK to blindly try to cast to the desired type in the sense that you are not violating the Comparable contract if the cast fails.
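
    It may also be worth noting, as a hedged aside, that the compile error quoted in the question usually comes from the method signature rather than from the body: the raw Comparable interface declares compareTo(Object), so compareTo(Comparable rhs) does not implement it. A minimal sketch of a signature that does satisfy it (the class name is made up here to avoid clashing with the posted code):

        public final class MyIntegerSketch implements Comparable
        {
            private final int value;

            public MyIntegerSketch( int x )
            {
                value = x;
            }

            // compareTo(Object) -- not compareTo(Comparable) -- is what the raw
            // Comparable interface declares, so this satisfies the inherited method.
            // The cast may throw ClassCastException, which the contract allows.
            public int compareTo( Object rhs )
            {
                MyIntegerSketch other = (MyIntegerSketch) rhs;
                return value < other.value ? -1 : value == other.value ? 0 : 1;
            }
        }

    On Java 5 and later, implementing Comparable<MyInteger> and declaring compareTo(MyInteger rhs) achieves the same thing without the cast.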

  • BDB read performance problem: lock contention between GC and VM threads

    Problem: BDB read performance is really bad when the size of the BDB crosses 20GB. Once the database crosses 20GB or near there, it takes more than one hour to read/delete/add 200K keys.
    After a point, of these 200K keys about 15-30K are new; this number should eventually come down, and then there should not be any new keys at all.
    Application:
    Transactional Data Store application. A single-threaded process that reads one key's data, deletes the data and adds new data. The keys are really small (20 bytes) and the data is large (it grows from 1KB to 100KB).
    On one machine, I have a total of 3 processes running, with each process accessing its own BDB on a separate RAID 1+0 drive. So, as far as I can tell, there should really be no disk I/O wait slowing down the reads.
    After a point (past 20GB), there are about 4-5 million keys in my BDB and the data associated with each key could be anywhere between 1KB and 100KB. Eventually every key will have 100KB of data associated with it.
    Hardware:
    16 core Intel Xeon, 96GB of RAM, 8 drive, running 2.6.18-194.26.1.0.1.el5 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
    BDB config: BTREE
    bdb version: 4.8.30
    bdb cache size: 4GB
    bdb page size: experimented with 8KB, 64KB.
    3 processes, each process accesses its own BDB on a separate RAIDed(1+0) drive.
    envConfig.setAllowCreate(true);
    envConfig.setTxnNoSync(ourConfig.asynchronous);
    envConfig.setThreaded(true);
    envConfig.setInitializeLocking(true);
    envConfig.setLockDetectMode(LockDetectMode.DEFAULT);
    When writing to BDB (asynchronous transactions):
    TransactionConfig tc = new TransactionConfig();
    tc.setNoSync(true);
    When reading from BDB (Allow reading from Uncommitted pages):
    CursorConfig cc = new CursorConfig();
    cc.setReadUncommitted(true);
    BDB stats: BDB size 49GB
    $ db_stat -m
    3GB 928MB Total cache size
    1 Number of caches
    1 Maximum number of caches
    3GB 928MB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    2127M Requested pages found in the cache (97%)
    57M Requested pages not found in the cache (57565917)
    6371509 Pages created in the cache
    57M Pages read into the cache (57565917)
    75M Pages written from the cache to the backing file (75763673)
    60M Clean pages forced from the cache (60775446)
    2661382 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    500593 Current total page count
    500593 Current clean page count
    0 Current dirty page count
    524287 Number of hash buckets used for page location
    4096 Assumed page size used
    2248M Total number of times hash chains searched for a page (2248788999)
    9 The longest hash chain searched for a page
    2669M Total number of hash chain entries checked for page (2669310818)
    0 The number of hash bucket locks that required waiting (0%)
    0 The maximum number of times any hash bucket lock was waited for (0%)
    0 The number of region locks that required waiting (0%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    63M The number of page allocations (63937431)
    181M The number of hash buckets examined during allocations (181211477)
    16 The maximum number of hash buckets examined for an allocation
    63M The number of pages examined during allocations (63436828)
    1 The max number of pages examined for an allocation
    0 Threads waited on page I/O
    0 The number of times a sync is interrupted
    Pool File: lastPoints
    8192 Page size
    0 Requested pages mapped into the process' address space
    2127M Requested pages found in the cache (97%)
    57M Requested pages not found in the cache (57565917)
    6371509 Pages created in the cache
    57M Pages read into the cache (57565917)
    75M Pages written from the cache to the backing file (75763673)
    $ db_stat -l
    0x40988 Log magic number
    16 Log version number
    31KB 256B Log record cache size
    0 Log file mode
    10Mb Current log file size
    856M Records entered into the log (856697337)
    941GB 371MB 67KB 112B Log bytes written
    2GB 262MB 998KB 478B Log bytes written since last checkpoint
    31M Total log file I/O writes (31624157)
    31M Total log file I/O writes due to overflow (31527047)
    97136 Total log file flushes
    686 Total log file I/O reads
    96414 Current log file number
    4482953 Current log file offset
    96414 On-disk log file number
    4482862 On-disk log file offset
    1 Maximum commits in a log flush
    1 Minimum commits in a log flush
    160KB Log region size
    195 The number of region locks that required waiting (0%)
    $ db_stat -c
    7 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    160 Number of lock object partitions
    0 Number of current locks
    1218 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    8 Maximum number of lockers at any one time
    0 Number of current lock objects
    1218 Maximum number of lock objects at any one time
    5 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    400M Total number of locks requested (400062331)
    400M Total number of locks released (400062331)
    0 Total number of locks upgraded
    1 Total number of locks downgraded
    0 Lock requests not available due to conflicts, for which we waited
    0 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    1MB 544KB The size of the lock region
    0 The number of partition locks that required waiting (0%)
    0 The maximum number of times any partition lock was waited for (0%)
    0 The number of object queue operations that required waiting (0%)
    0 The number of locker allocations that required waiting (0%)
    0 The number of region locks that required waiting (0%)
    5 Maximum hash bucket length
    $ db_stat -CA
    Default locking region information:
    7 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    160 Number of lock object partitions
    0 Number of current locks
    1218 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    8 Maximum number of lockers at any one time
    0 Number of current lock objects
    1218 Maximum number of lock objects at any one time
    5 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    400M Total number of locks requested (400062331)
    400M Total number of locks released (400062331)
    0 Total number of locks upgraded
    1 Total number of locks downgraded
    0 Lock requests not available due to conflicts, for which we waited
    0 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    1MB 544KB The size of the lock region
    0 The number of partition locks that required waiting (0%)
    0 The maximum number of times any partition lock was waited for (0%)
    0 The number of object queue operations that required waiting (0%)
    0 The number of locker allocations that required waiting (0%)
    0 The number of region locks that required waiting (0%)
    5 Maximum hash bucket length
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock REGINFO information:
    Lock Region type
    5 Region ID
    __db.005 Region name
    0x2accda678000 Region address
    0x2accda678138 Region primary address
    0 Region maximum allocation
    0 Region allocated
    Region allocations: 6006 allocations, 0 failures, 0 frees, 1 longest
    Allocations by power-of-two sizes:
    1KB 6002
    2KB 0
    4KB 0
    8KB 0
    16KB 1
    32KB 0
    64KB 2
    128KB 0
    256KB 1
    512KB 0
    1024KB 0
    REGION_JOIN_OK Region flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock region parameters:
    524317 Lock region region mutex [0/9 0% 5091/47054587432128]
    2053 locker table size
    2053 object table size
    944 obj_off
    226120 locker_off
    0 need_dd
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock conflict matrix:
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by lockers:
    Locker Mode Count Status ----------------- Object ---------------
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by object:
    Locker Mode Count Status ----------------- Object ---------------
    Diagnosis:
    I'm seeing way too much lock contention on the Java Garbage Collector threads and also on the VM thread when I strace my java process, and I don't understand the behavior.
    We are spending more than 95% of the time trying to acquire locks and I don't know what these locks are. Any info here would help.
    Earlier I thought the overflow pages were the problem, as the 100KB data size was exceeding all the overflow page limits. So, I implemented the duplicate-keys concept by chunking my data to fit within the overflow page limits.
    Now I don't see any overflow pages in my system, but I still see bad BDB read performance.
    $ strace -c -f -p 5642 --->(607 times the lock timed out, errors)
    Process 5642 attached with 45 threads - interrupt to quit
    % time     seconds  usecs/call     calls    errors syscall
    98.19    7.670403        2257      3398       607 futex
     0.84    0.065886           8      8423           pread
     0.69    0.053980        4498        12           fdatasync
     0.22    0.017094           5      3778           pwrite
     0.05    0.004107           5       808           sched_yield
     0.00    0.000120          10        12           read
     0.00    0.000110           9        12           open
     0.00    0.000089           7        12           close
     0.00    0.000025           0      1431           clock_gettime
     0.00    0.000000           0        46           write
     0.00    0.000000           0         1         1 stat
     0.00    0.000000           0        12           lseek
     0.00    0.000000           0        26           mmap
     0.00    0.000000           0        88           mprotect
     0.00    0.000000           0        24           fcntl
    100.00    7.811814                 18083       608 total
    The above stats show that there is too much time spent locking (futex calls), and I don't understand that because
    the application is really single-threaded. I have turned on asynchronous transactions, so the writes might be
    flushed asynchronously in the background, but spending that much time locking and timing out seems wrong.
    So, there is possibly something I'm not setting, or something weird with the way the JVM is behaving on my box.
    I grepped for futex calls in one of my strace log snippets and I see that there is a VM thread that grabbed the mutex
    the maximum number (223) of times, followed by the Garbage Collector threads. The following are the lock counts and thread pids
    within the process:
    These are the GC threads (each thread has grabbed the lock about 85 times on average):
      86 [8538]
      85 [8539]
      91 [8540]
      91 [8541]
      92 [8542]
      87 [8543]
      90 [8544]
      96 [8545]
      87 [8546]
      97 [8547]
      96 [8548]
      91 [8549]
      91 [8550]
      80 [8552]
    VM Periodic Task Thread" prio=10 tid=0x00002aaaf4065000 nid=0x2180 waiting on condition (Main problem??)
     223 [8576] ==> grabbing a lock 223 times -- not sure why this is happening…
    "pool-2-thread-1" prio=10 tid=0x00002aaaf44b7000 nid=0x21c8 runnable [0x0000000042aa8000] -- main worker thread
       34 [8648] (main thread grabs futex only 34 times when compared to all the other threads)
    The load average seems OK, though my system thinks it has very little memory left, and I think that
    is because it's using up a lot of memory for the file system cache?
    top - 23:52:00 up 6 days, 8:41, 1 user, load average: 3.28, 3.40, 3.44
    Tasks: 229 total, 1 running, 228 sleeping, 0 stopped, 0 zombie
    Cpu(s): 3.2%us, 0.9%sy, 0.0%ni, 87.5%id, 8.3%wa, 0.0%hi, 0.1%si, 0.0%st
    Mem: 98999820k total, 98745988k used, 253832k free, 530372k buffers
    Swap: 18481144k total, 1304k used, 18479840k free, 89854800k cached
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    8424 rchitta 16 0 7053m 6.2g 4.4g S 18.3 6.5 401:01.88 java
    8422 rchitta 15 0 7011m 6.1g 4.4g S 14.6 6.5 528:06.92 java
    8423 rchitta 15 0 6989m 6.1g 4.4g S 5.7 6.5 615:28.21 java
    $ java -version
    java version "1.6.0_21"
    Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
    Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
    Maybe I should make my application a Concurrent Data Store app as there is really only one thread doing the writes and reads. But I would like
    to understand why my process is spending so much time in locking.
    Can I try any other options? How do I prevent such heavy locking from happening? Has anyone seen this kind of behavior? Maybe this is
    all normal. I'm pretty new to using BDB.
    If there is a way to disable locking, that would also work, as there is only one thread that's really doing all the work.
    Should I disable the file system cache? One thing is that my application does not utilize the cache very well: once I visit a key, I don't visit that
    key again for a very long time, so it's very possible that the key has to be read from disk again.
    It is possible that I'm thinking about this completely wrong, focusing too much on locking behavior, and the problem is elsewhere.
    Any thoughts/suggestions etc are welcome. Your help on this is much appreciated.
    Thanks,
    Rama

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark
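
    As a hedged aside on the poster's own idea of switching to a Concurrent Data Store (single writer, no transactional locking or logging), the equivalent environment setup in the com.sleepycat.db Java API looks roughly like the sketch below. The path and cache size are placeholders, and whether CDS is appropriate for this workload is a separate question:

        import com.sleepycat.db.Environment;
        import com.sleepycat.db.EnvironmentConfig;
        import java.io.File;

        public class CdsEnvSketch {
            public static void main(String[] args) throws Exception {
                EnvironmentConfig cfg = new EnvironmentConfig();
                cfg.setAllowCreate(true);
                cfg.setInitializeCache(true);   // memory pool (DB_INIT_MPOOL)
                cfg.setInitializeCDB(true);     // Concurrent Data Store instead of full TDS locking
                cfg.setCacheSize(4L * 1024 * 1024 * 1024);  // 4GB cache, as in the original post
                Environment env = new Environment(new File("/path/to/env"), cfg);
                // ... open Database handles and do the single-writer read/modify/write work ...
                env.close();
            }
        }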

  • Outlook 2013 crashes when user sends email from Adobe Writer XI Pro ver 11.0.07

    I asked Adobe, but they are pointing to an Outlook problem.
    PC: HP, Windows 8.1 Pro x64, connected to a Windows 2008 R2 DC
    Adobe Acrobat Pro ver 11.x
    When the user tries to send email directly from Adobe, Outlook 2013 Pro crashes.
    The only way to send the email is to save the Adobe (PDF) file and attach it to the email.
    All patches are installed.

    Log Name:      Application
    Source:        Application Error
    Date:          7/15/2014 8:05:38 AM
    Event ID:      1000
    Task Category: (100)
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      Liz7PM-HP.armadale.local
    Description:
    Faulting application name: Acrobat.exe, version: 11.0.7.79, time stamp: 0x536b812b
    Faulting module name: SendMail.api, version: 11.0.7.79, time stamp: 0x536b7fd0
    Exception code: 0xc0000005
    Fault offset: 0x0003390b
    Faulting process id: 0x139c
    Faulting application start time: 0x01cfa024deec4218
    Faulting application path: C:\Program Files (x86)\Adobe\Acrobat 11.0\Acrobat\Acrobat.exe
    Faulting module path: C:\Program Files (x86)\Adobe\Acrobat 11.0\Acrobat\plug_ins\SendMail.api
    Report Id: 553a89c7-0c18-11e4-be97-80c16ee27020
    Faulting package full name: 
    Faulting package-relative application ID: 
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Application Error" />
        <EventID Qualifiers="0">1000</EventID>
        <Level>2</Level>
        <Task>100</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2014-07-15T12:05:38.000000000Z" />
        <EventRecordID>66128</EventRecordID>
        <Channel>Application</Channel>
        <Computer>Liz7PM-HP.armadale.local</Computer>
        <Security />
      </System>
      <EventData>
        <Data>Acrobat.exe</Data>
        <Data>11.0.7.79</Data>
        <Data>536b812b</Data>
        <Data>SendMail.api</Data>
        <Data>11.0.7.79</Data>
        <Data>536b7fd0</Data>
        <Data>c0000005</Data>
        <Data>0003390b</Data>
        <Data>139c</Data>
        <Data>01cfa024deec4218</Data>
        <Data>C:\Program Files (x86)\Adobe\Acrobat 11.0\Acrobat\Acrobat.exe</Data>
        <Data>C:\Program Files (x86)\Adobe\Acrobat 11.0\Acrobat\plug_ins\SendMail.api</Data>
        <Data>553a89c7-0c18-11e4-be97-80c16ee27020</Data>
        <Data>
        </Data>
        <Data>
        </Data>
      </EventData>
    </Event>
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    Log Name:      Application
    Source:        Windows Error Reporting
    Date:          7/15/2014 8:05:55 AM
    Event ID:      1001
    Task Category: None
    Level:         Information
    Keywords:      Classic
    User:          N/A
    Computer:      Liz7PM-HP.armadale.local
    Description:
    Fault bucket 73304412188, type 1
    Event Name: APPCRASH
    Response: Not available
    Cab Id: 0
    Problem signature:
    P1: Acrobat.exe
    P2: 11.0.7.79
    P3: 536b812b
    P4: SendMail.api
    P5: 11.0.7.79
    P6: 536b7fd0
    P7: c0000005
    P8: 0003390b
    P9: 
    P10: 
    Attached files:
    C:\Users\ewhitton\Desktop\140.pdf
    C:\Users\ewhitton\AppData\Local\Temp\WER8000.tmp.WERInternalMetadata.xml
    C:\Users\ewhitton\AppData\Local\Temp\WER8BB9.tmp.appcompat.txt
    minidump.mdmp
    These files may be available here:
    C:\Users\ewhitton\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_Acrobat.exe_2a26ba191a528da71b85271ed9509f7e426babf8_0d6f35c4_1735c074
    Analysis symbol: 
    Rechecking for solution: 0
    Report Id: 553a89c7-0c18-11e4-be97-80c16ee27020
    Report Status: 4104
    Hashed bucket: f6a7a946597169e4aea5077b3b7f0848
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Windows Error Reporting" />
        <EventID Qualifiers="0">1001</EventID>
        <Level>4</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2014-07-15T12:05:55.000000000Z" />
        <EventRecordID>66131</EventRecordID>
        <Channel>Application</Channel>
        <Computer>Liz7PM-HP.armadale.local</Computer>
        <Security />
      </System>
      <EventData>
        <Data>73304412188</Data>
        <Data>1</Data>
        <Data>APPCRASH</Data>
        <Data>Not available</Data>
        <Data>0</Data>
        <Data>Acrobat.exe</Data>
        <Data>11.0.7.79</Data>
        <Data>536b812b</Data>
        <Data>SendMail.api</Data>
        <Data>11.0.7.79</Data>
        <Data>536b7fd0</Data>
        <Data>c0000005</Data>
        <Data>0003390b</Data>
        <Data>
        </Data>
        <Data>
        </Data>
        <Data>
    C:\Users\ewhitton\Desktop\140.pdf
    C:\Users\ewhitton\AppData\Local\Temp\WER8000.tmp.WERInternalMetadata.xml
    C:\Users\ewhitton\AppData\Local\Temp\WER8BB9.tmp.appcompat.txt
    minidump.mdmp</Data>
        <Data>C:\Users\ewhitton\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_Acrobat.exe_2a26ba191a528da71b85271ed9509f7e426babf8_0d6f35c4_1735c074</Data>
        <Data>
        </Data>
        <Data>0</Data>
        <Data>553a89c7-0c18-11e4-be97-80c16ee27020</Data>
        <Data>4104</Data>
        <Data>f6a7a946597169e4aea5077b3b7f0848</Data>
      </EventData>
    </Event>
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    Log Name:      Application
    Source:        Outlook
    Date:          7/15/2014 8:06:05 AM
    Event ID:      50
    Task Category: None
    Level:         Information
    Keywords:      Classic
    User:          N/A
    Computer:      Liz7PM-HP.armadale.local
    Description:
    The following providers do not implement fast shutdown APIs, but are being shut down using fast shutdown:
    MDConnector.dll (MAPI Store Provider)
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Outlook" />
        <EventID Qualifiers="16384">50</EventID>
        <Level>4</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2014-07-15T12:06:05.000000000Z" />
        <EventRecordID>66132</EventRecordID>
        <Channel>Application</Channel>
        <Computer>Liz7PM-HP.armadale.local</Computer>
        <Security />
      </System>
      <EventData>
        <Data>MDConnector.dll (MAPI Store Provider)
    </Data>
      </EventData>
    </Event>

  • MPOOL, mmap and linux

    I have two processes that open and use the same database (one reads, one writes). The DBENV is opened with the following flags: DB_CREATE | DB_INIT_CDB | DB_INIT_MPOOL | DB_THREAD. The database itself is currently about 1G in size and contains 5M records. I am setting the DBENV cache size to 2.5G.
    As Linux will mmap() the database file, what does setting a DBENV cache buy me? Can't I just use the OS buffer cache to take care of my DB pages?
    Thanks for any help
    Ashley
    OS: centos5.0 x86_64
    Kernel: 2.6.18
    Ram 4G
    db_stat -m output:
    2GB 512MB Total cache size
    1 Number of caches
    2GB 512MB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    1344M Requested pages found in the cache (99%)
    2 Requested pages not found in the cache
    259317 Pages created in the cache
    2 Pages read into the cache
    2 Pages written from the cache to the backing file
    0 Clean pages forced from the cache
    0 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    259319 Current total page count
    0 Current clean page count
    259319 Current dirty page count
    262147 Number of hash buckets used for page location
    1344M Total number of times hash chains searched for a page (1344270607)
    1 The longest hash chain searched for a page
    1344M Total number of hash buckets examined for page location (1344013832)
    636 The number of hash bucket locks that required waiting (0%)
    529 The maximum number of times any hash bucket lock was waited for
    0 The number of region locks that required waiting (0%)
    259324 The number of page allocations
    0 The number of hash buckets examined during allocations
    0 The maximum number of hash buckets examined for an allocation
    0 The number of pages examined during allocations
    0 The max number of pages examined for an allocation

    Hello,
    The "Configuring the memory pool" section of the Reference Guide at:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/mp/config.html
    has the following:
    "Mapping files into the process address space can result in better performance
    because available virtual memory is often much larger than the local cache,
    and page faults are faster than page copying on many systems. However, in the
    presence of limited virtual memory, it can cause resource starvation; and in
    the presence of large databases, it can result in immense process sizes. In
    addition, because of the requirements of the Berkeley DB transactional
    implementation, only read-only files can be mapped into process memory."
    One consideration here is the maximum size an underlying file can be and still
    be mapped into the process address space (instead of reading the file's pages
    into the cache).
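    If you want to control how large a read-only file may be before Berkeley DB stops mapping it into the process address space, the cache interface exposes DB_ENV->set_mp_mmapsize(). Below is a minimal sketch; the 16MB cap is just an illustrative value, and this is the setting reported as "Maximum memory-mapped file size" in the db_stat -m output above:

        #include <db.h>

        /* Cap the size of read-only files that may be memory-mapped rather
         * than read page-by-page into the cache. */
        int limit_mmap(DB_ENV *env)
        {
            /* Illustrative 16MB limit; the right value depends on the
             * application's virtual-memory budget. */
            return env->set_mp_mmapsize(env, 16 * 1024 * 1024);
        }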
    I am not sure that this answers your question, so if it does not, please
    let me know.
    Thanks,
    Sandra

  • Hotspot on Linux from 32-bit to 64-bit

    I have a natural language processing application that runs fine with the 32-bit JVM (on a WINTEL machine), but fails using the 64-bit JVM (on SuSE 9.3 with AMD64).
    # An unexpected error has been detected by HotSpot Virtual Machine:
    # SIGSEGV (0xb) at pc=0x00002aaaaa5830fd, pid=8429, tid=1076607328
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.5.0_05-b05 mixed mode)
    # Problematic frame:
    # C 0x00002aaaaa5830fd
    In the application, I place objects (they basically wrap a String) into a Hashtable; the objects override the hashCode() and equals() methods. There are cases where objects share the same hash value (in my case, when two objects contain the same word except for differences in capitalization). I have read that this is valid, as a hash bucket can contain more than one object. Also, in my application, the equals() method does not enforce a match to a single object: most often I do lookups in a case-sensitive fashion, but when I do a case-insensitive lookup, for example, "java" would equal "jaVa" and "javA" in the hashtable.
    Could this be the cause of my troubles with 64-bit Java?
    thanks,
    Dan

    It's pretty much impossible for an improper implementation of hashCode() and equals() to do more than yield inconsistent behaviour in your application; it won't crash the JVM with a SIGSEGV.
    Is your SuSE up to date? Latest kernel and all?

  • Snapshot Isolation

    Hi Everyone,
    In the documentation related to the snapshot isolation level, it is mentioned that for better performance with MVCC:
    1) a larger cache size should be used
    2) shorter transactions should be used.
    This question is focused on the second aspect.
    In my application, there are two types of transactions:
    a) The first one is a normal transaction started without any flags. We use this for all write activity to the DB.
    b) The other type is opened as a snapshot transaction. This is used for all read activity.
    I need to implement a couple of use cases where I have to read the full database in a single snapshot transaction. Therefore my question is: when we talk about keeping transactions short, does that mean write transactions or snapshot transactions? I know that ideally I should keep all transactions short, but this would help me in implementing the abovementioned functionality.
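    For concreteness, here is a minimal sketch of the two kinds of transactions described above, using the BDB C API (the database handle is assumed to have been opened with DB_MULTIVERSION so that snapshot reads are available):

        #include <db.h>

        /* Write path: a plain transaction, used for all updates. */
        int do_write(DB_ENV *env, DB *db, DBT *key, DBT *data)
        {
            DB_TXN *txn;
            int ret;

            if ((ret = env->txn_begin(env, NULL, &txn, 0)) != 0)
                return ret;
            if ((ret = db->put(db, txn, key, data, 0)) != 0) {
                (void)txn->abort(txn);
                return ret;
            }
            return txn->commit(txn, 0);
        }

        /* Read path: DB_TXN_SNAPSHOT requests snapshot isolation (MVCC). */
        int do_read(DB_ENV *env, DB *db, DBT *key, DBT *data)
        {
            DB_TXN *txn;
            int ret;

            if ((ret = env->txn_begin(env, NULL, &txn, DB_TXN_SNAPSHOT)) != 0)
                return ret;
            ret = db->get(db, txn, key, data, 0);
            (void)txn->commit(txn, 0);   /* read-only, so commit either way */
            return ret;
        }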
    Thanks,
    Shishir

    Thanks a lot Ashok for this really nice explanation.
    This precisely answers my question.
    I have another question which is on snapshot isolation again. It will be great if you can answer that.
    My application is basically user-driven, so I don't have much control over the number of concurrent operations going on. I am using both snapshot and default transactions.
    A few days back, on two different occasions, freezer files accumulated and following that the database environment became really slow. I increased the cache and restarted the environment, and it is behaving fine now.
    My question is that on both occasions db_stat -e looked fine (98% cache hit ratio, etc.). So what is the best way to detect that this situation is about to happen?
    I know it's possible that there is no simple answer to this question, but it would also help me with application footprint sizing. I can send you the db_stat -e output for one of these incidents; I am pasting the -m part of it here (a small monitoring sketch follows the statistics below).
    Thanks,
    Shishir
    ===============================
    512MB     Total cache size
    1     Number of caches
    1     Maximum number of caches
    512MB     Pool individual cache size
    0     Maximum memory-mapped file size
    0     Maximum open file descriptors
    0     Maximum sequential buffer writes
    0     Sleep after writing maximum sequential buffers
    0     Requested pages mapped into the process' address space
    366M     Requested pages found in the cache (98%)
    5662872     Requested pages not found in the cache
    28885     Pages created in the cache
    5662887     Pages read into the cache
    212793     Pages written from the cache to the backing file
    6115927     Clean pages forced from the cache
    1631     Dirty pages forced from the cache
    0     Dirty pages written by trickle-sync thread
    126503     Current total page count
    126314     Current clean page count
    189     Current dirty page count
    65537     Number of hash buckets used for page location
    377M     Total number of times hash chains searched for a page (377476452)
    10     The longest hash chain searched for a page
    787M     Total number of hash chain entries checked for page (787470619)
    420893     The number of hash bucket locks that required waiting (0%)
    6973     The maximum number of times any hash bucket lock was waited for (5%)
    90M     The number of region locks that required waiting (52%)
    9962     The number of buffers frozen
    1502     The number of buffers thawed
    0     The number of frozen buffers freed
    6387030     The number of page allocations
    2856M     The number of hash buckets examined during allocations (2856267125)
    65873     The maximum number of hash buckets examined for an allocation
    6117798     The number of pages examined during allocations
    13     The max number of pages examined for an allocation
    928725     Threads waited on page I/O
    Pool File: MetadataDB
    4096     Page size
    0     Requested pages mapped into the process' address space
    366M     Requested pages found in the cache (98%)
    5662872     Requested pages not found in the cache
    28885     Pages created in the cache
    5662887     Pages read into the cache
    212793     Pages written from the cache to the backing file
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    22049/8823442     File/offset for last checkpoint LSN
    Fri Sep 11 13:00:36 2009     Checkpoint timestamp
    0x804ed335     Last transaction ID allocated
    500000     Maximum number of active transactions configured
    295     Active transactions
    675     Maximum active transactions
    5165877     Number of transactions begun
    336     Number of transactions aborted
    5165246     Number of transactions committed
    4784     Snapshot transactions
    34220     Maximum snapshot transactions
    0     Number of transactions restored
    185MB 24KB     Transaction region size
    215222     The number of region locks that required waiting (1%)
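    Following up on the monitoring question above: one way to see this situation building up is to poll the cache statistics programmatically and watch the MVCC freezer counters, which correspond to the "buffers frozen/thawed/freed" lines in the output. A minimal sketch using DB_ENV->memp_stat() follows; the alert rule is a hypothetical example:

        #include <stdio.h>
        #include <stdlib.h>
        #include <db.h>

        /* Report memory-pool MVCC counters; growth in the "frozen" count
         * between polls means old page versions no longer fit in the cache. */
        int check_freezer_pressure(DB_ENV *env)
        {
            DB_MPOOL_STAT *sp;
            int ret;

            if ((ret = env->memp_stat(env, &sp, NULL, 0)) != 0)
                return ret;

            printf("frozen=%lu thawed=%lu freed=%lu\n",
                   (unsigned long)sp->st_mvcc_frozen,
                   (unsigned long)sp->st_mvcc_thawed,
                   (unsigned long)sp->st_mvcc_freed);

            /* Hypothetical alert rule: a nonzero, growing frozen count is an
             * early sign that snapshot readers are outliving the cache. */
            if (sp->st_mvcc_frozen > 0)
                fprintf(stderr, "warning: buffers are being frozen to disk\n");

            free(sp);   /* the stat structure is allocated by the library */
            return 0;
        }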

  • Looking for help to increase performance on a DB XML database.

    I'll try to answer all the questions in the Performance Questionnaire from here.
    1) I'm primarily concerned with insertion performance. The best I've seen so far is about 6000 inserts per second. This is running inside a VMware VM with 3GB of RAM. The VM is set up with 2 CPUs, each with 2 cores. The host machine has 8GB of RAM and a dual-core 2.67 GHz i7 (2 logical cores per CPU). The best performance I've seen is by running 2 threads of execution. A single thread only gets me about 2500 inserts per second.
    This is all within a very simple, isolated program. I'm trying to determine how to re-architect a more complicated system, but if I can't hope to hit 10k inserts per second with my sample, I don't see how it's possible to expand this out to something more complicated.
    2) Versions: BDBXML version 2.5.26 no special patches or config options
    3) BDB version 4.8.26, no special patches
    4) 2.67 GHz dual-core, hyperthreaded Intel i7 (4 logical processors)
    5) Host: Windows 7 64-bit, Guest: RHEL5 64-bit
    6) The underlying disk is a 320GB Western Digital Barracuda (SATA). It's a laptop hard drive; I believe it's only 5400 RPM. Although the VM does not have exclusive access to the drive, it is not the same drive as the host system drive (i.e., Windows runs off the C: drive; this is the D: drive). The VM has a 60GB slice of this drive.
    7) The drive is NTFS-formatted for the host; the guest uses ext3.
    8) Host 8GB, guest 3GB (total usage when running tests is low, i.e., no swapping by guest or host)
    9) not currently using any replication
    10) Not using remote filesystem
    11) db_cv_mutex=POSIX/pthreads/library/x86_64/gcc-assembly
    12) Using the C++ API for DBXML, and the C API for BDB
    using gcc/g++ version 4.1.2
    13) not using app server or web server
    14) flags to 'DB_ENV->open()': DB_SYSTEM_MEM | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN | DB_RECOVER | DB_THREAD
    other env flags explicitly set:
    DB_LOG_IN_MEMORY 1
    DB_LOG_ZERO 1
    set_cachesize(env, 1, 0, 1) // 1GB cache in single block
    DB_TXN_NOSYNC 1
    DB_TXN_WRITE_NOSYNC 1
    I am not using a DB_CONFIG file at this time. (A sketch of this environment setup follows the db_stat output at the end of this post.)
    15) For the container config:
    transactional true
    transactionsNotDurable true
    containertype wholedoc
    indexNodes Off
    pagesize 4096
    16) In my little test program, I have a single container.
    16.1) flags are the same as listed above.
    16.2) I've tried with an empty container, and one with documents already inside and haven't noticed much difference at this point. I'm running 1, 2, 3, or 4 threads, each inserting 10k documents in a loop. Each insert is a single transaction.
    16.3) Wholedoc (tried both node & wholedoc, I believe wholedoc was slightly faster).
    16.4) The best performance I've seen is with a smaller document that is about 500 bytes.
    16.5) I'm not currently using any document data.
    17)sample document:
    <?xml version='1.0' encoding='UTF-8' standalone='no'?>
    <Record xmlns='http://someurl.com/test' JID='UUID-f9032e9c-7e9a-4f2c-b40e-621b0e66c47f'>
    <DataType>journal</DataType>
    <RecordID>f9032e9c-7e9a-4f2c-b40e-621b0e66c47f</RecordID>
    <Hostname>test.foo.com</Hostname>
    <HostUUID>34c90268-57ba-4d4c-a602-bdb30251ec77</HostUUID>
    <Timestamp>2011-11-10T04:09:55-05:00</Timestamp>
    <ProcessID>0</ProcessID>
    <User name='root'>0</User>
    <SecurityLabel>unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023</SecurityLabel>
    </Record>
    18. As mentioned, I'm looking to get at least 10k documents per second for insertion. Updates are much more infrequent and can run slower. I am not doing any partial updates or replacing documents. In the actual system, there are minor updates to document metadata, but again, these can be slower.
    19. I'm primarily concerned with insertion rate, not query.
    20. Xquery samples are not applicable at the moment.
    21. I am using transactions, no special flags aside from setting them all to 'not durable'
    22. Log files are currently stored on the same disk as the database.
    23. I'm not using AUTO_COMMIT
    24. I don't believe there are any non-transactional operations
    25. best performance from 2 threads doing insertions
    26. The primary way I've been testing performance is by using the 'clock_gettime(CLOCK_REALTIME)' calls inside my test program. The test program spawns 1 or more threads, each thread inserts 10k documents. The main thread waits for all the threads to complete, then exits. I'm happy to send the source code for this program if that would be helpful.
    27. As mentioned, I'm hoping to get at least 10k inserts per second.
    28. db_stat outputs:
    28.1 db_stat -c:
    93 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    1000 Maximum number of locks possible
    1000 Maximum number of lockers possible
    1000 Maximum number of lock objects possible
    40 Number of lock object partitions
    0 Number of current locks
    166 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    35 Maximum number of lockers at any one time
    0 Number of current lock objects
    95 Maximum number of lock objects at any one time
    3 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    565631 Total number of locks requested
    542450 Total number of locks released
    0 Total number of locks upgraded
    29 Total number of locks downgraded
    22334 Lock requests not available due to conflicts, for which we waited
    23181 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    784KB The size of the lock region
    10098 The number of partition locks that required waiting (0%)
    866 The maximum number of times any partition lock was waited for (0%)
    6 The number of object queue operations that required waiting (0%)
    7220 The number of locker allocations that required waiting (2%)
    0 The number of region locks that required waiting (0%)
    3 Maximum hash bucket length
    ====================
    28.2 db_stat -l:
    0x40988 Log magic number
    16 Log version number
    31KB 256B Log record cache size
    0 Log file mode
    10Mb Current log file size
    0 Records entered into the log
    0 Log bytes written
    0 Log bytes written since last checkpoint
    0 Total log file I/O writes
    0 Total log file I/O writes due to overflow
    0 Total log file flushes
    7 Total log file I/O reads
    1 Current log file number
    28 Current log file offset
    1 On-disk log file number
    28 On-disk log file offset
    0 Maximum commits in a log flush
    0 Minimum commits in a log flush
    160KB Log region size
    0 The number of region locks that required waiting (0%)
    ======================
    28.3 db_stat -m
    1GB Total cache size
    1 Number of caches
    1 Maximum number of caches
    1GB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    1127961 Requested pages found in the cache (99%)
    3622 Requested pages not found in the cache
    7590 Pages created in the cache
    3622 Pages read into the cache
    7663 Pages written from the cache to the backing file
    0 Clean pages forced from the cache
    0 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    11212 Current total page count
    11212 Current clean page count
    0 Current dirty page count
    131071 Number of hash buckets used for page location
    4096 Assumed page size used
    1142798 Total number of times hash chains searched for a page
    1 The longest hash chain searched for a page
    1127988 Total number of hash chain entries checked for page
    0 The number of hash bucket locks that required waiting (0%)
    0 The maximum number of times any hash bucket lock was waited for (0%)
    4 The number of region locks that required waiting (0%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    11218 The number of page allocations
    0 The number of hash buckets examined during allocations
    0 The maximum number of hash buckets examined for an allocation
    0 The number of pages examined during allocations
    0 The max number of pages examined for an allocation
    0 Threads waited on page I/O
    0 The number of times a sync is interrupted
    Pool File: temp.dbxml
    4096 Page size
    0 Requested pages mapped into the process' address space
    1127961 Requested pages found in the cache (99%)
    3622 Requested pages not found in the cache
    7590 Pages created in the cache
    3622 Pages read into the cache
    7663 Pages written from the cache to the backing file
    =================================
    28.4 db_stat -r (n/a, no replication)
    28.5 db_stat -t
    0/0 No checkpoint LSN
    Tue Oct 30 15:05:29 2012 Checkpoint timestamp
    0x8001d4d5 Last transaction ID allocated
    100 Maximum number of active transactions configured
    0 Active transactions
    5 Maximum active transactions
    120021 Number of transactions begun
    0 Number of transactions aborted
    120021 Number of transactions committed
    0 Snapshot transactions
    0 Maximum snapshot transactions
    0 Number of transactions restored
    48KB Transaction region size
    1385 The number of region locks that required waiting (0%)
    Active transactions:
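    As mentioned in item 14, here is a minimal sketch of that environment configuration using the BDB C API. The DB XML container settings from item 15 are applied separately through the DB XML C++ API and are not shown; the home directory argument is hypothetical; DB_CREATE is added because DB_RECOVER requires it; and only DB_TXN_NOSYNC is shown, since with in-memory logging the write-nosync variant is moot anyway:

        #include <db.h>

        int open_insert_test_env(DB_ENV **envp, const char *home)
        {
            DB_ENV *env;
            int ret;

            if ((ret = db_env_create(&env, 0)) != 0)
                return ret;

            /* 1GB cache in a single region, as in item 14. */
            if ((ret = env->set_cachesize(env, 1, 0, 1)) != 0)
                goto err;

            /* Keep the transaction log in memory and skip zero-filling. */
            if ((ret = env->log_set_config(env,
                                           DB_LOG_IN_MEMORY | DB_LOG_ZERO, 1)) != 0)
                goto err;

            /* Trade durability for insert speed; with in-memory logs,
             * commits never touch disk in any case. */
            if ((ret = env->set_flags(env, DB_TXN_NOSYNC, 1)) != 0)
                goto err;

            ret = env->open(env, home,
                            DB_CREATE | DB_SYSTEM_MEM | DB_INIT_MPOOL | DB_INIT_LOCK |
                            DB_INIT_LOG | DB_INIT_TXN | DB_RECOVER | DB_THREAD,
                            0);
            if (ret != 0)
                goto err;

            *envp = env;
            return 0;

        err:
            (void)env->close(env, 0);
            return ret;
        }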

    Replying with the output from iostat & vmstat (including it in the previous post would have exceeded the character limit).
    =============================
    Output of vmstat while running 4 threads, each inserting 10k documents. It took just under 18 seconds to complete. I ran vmstat a few times while it was running:
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    3 0 0 896904 218004 1513268 0 0 14 30 261 83 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    5 0 0 889588 218004 1520500 0 0 14 30 261 84 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 882892 218012 1527124 0 0 14 30 261 84 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    4 0 0 896664 218012 1533284 0 0 14 30 261 85 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    5 0 0 890456 218012 1539748 0 0 14 30 261 85 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 884256 218020 1545800 0 0 14 30 261 86 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    4 0 0 878304 218020 1551520 0 0 14 30 261 86 1 1 98 0 0
    $ sudo vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 871980 218028 1558108 0 0 14 30 261 87 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    5 0 0 865780 218028 1563828 0 0 14 30 261 87 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    3 0 0 859332 218028 1570108 0 0 14 30 261 87 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 586756 218028 1572660 0 0 14 30 261 88 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    3 2 0 788032 218104 1634624 0 0 14 31 261 88 1 1 98 0 0
    ================================
    sda1 is mounted on /boot
    sda2 is mounted on /
    sda3 is swap space
    output for iostat, same scenario, 4 threads inserting 10k documents each:
    $ iostat -x 1
    Linux 2.6.18-308.4.1.el5 (localhost.localdomain) 10/30/2012
    avg-cpu: %user %nice %system %iowait %steal %idle
    27.43 0.00 4.42 1.18 0.00 66.96
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 46.53 0.00 2.97 0.00 396.04 133.33 0.04 14.33 14.33 4.26
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 46.53 0.00 2.97 0.00 396.04 133.33 0.04 14.33 14.33 4.26
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    26.09 0.00 15.94 0.00 0.00 57.97
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    26.95 0.00 29.72 0.00 0.00 43.32
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    29.90 0.00 32.16 0.00 0.00 37.94
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    40.51 0.00 27.85 0.00 0.00 31.65
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    40.50 0.00 26.75 0.50 0.00 32.25
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 3.00 0.00 2.00 0.00 40.00 20.00 0.03 17.00 17.00 3.40
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 3.00 0.00 2.00 0.00 40.00 20.00 0.03 17.00 17.00 3.40
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    30.63 0.00 32.91 0.00 0.00 36.46
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    29.57 0.00 32.83 0.00 0.00 37.59
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    29.65 0.00 32.41 0.00 0.00 37.94
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    46.70 0.00 26.40 0.00 0.00 26.90
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    32.72 0.00 33.25 0.00 0.00 34.04
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 7.00 0.00 57.00 0.00 512.00 8.98 2.25 39.54 0.82 4.70
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 7.00 0.00 57.00 0.00 512.00 8.98 2.25 39.54 0.82 4.70
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    32.08 0.00 31.83 0.00 0.00 36.09
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    33.75 0.00 31.50 0.00 0.00 34.75
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    33.00 0.00 31.99 0.25 0.00 34.76
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 3.00 0.00 2.00 0.00 40.00 20.00 0.05 24.00 24.00 4.80
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 3.00 0.00 2.00 0.00 40.00 20.00 0.05 24.00 24.00 4.80
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    53.62 0.00 21.70 0.00 0.00 24.69
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    33.92 0.00 22.11 0.00 0.00 43.97
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    8.53 0.00 4.44 0.00 0.00 87.03
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    5.58 0.00 2.15 0.00 0.00 92.27
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    0.00 0.00 1.56 12.50 0.00 85.94
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 9.00 0.00 1.00 0.00 80.00 80.00 0.23 86.00 233.00 23.30
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 9.00 0.00 1.00 0.00 80.00 80.00 0.23 86.00 233.00 23.30
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    1.49 0.00 11.90 0.00 0.00 86.61
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 1.00 0.00 8.00 8.00 0.04 182.00 35.00 3.50
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 1.00 0.00 8.00 8.00 0.04 182.00 35.00 3.50
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    0.26 0.00 21.82 0.00 0.00 77.92
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    0.00 0.00 20.48 0.00 0.00 79.52
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    9.49 0.00 13.33 0.00 0.00 77.18
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    20.35 0.00 4.77 0.00 0.00 74.87
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    6.32 0.00 13.22 1.72 0.00 78.74
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 15302.97 0.99 161.39 7.92 34201.98 210.68 65.27 87.75 3.93 63.76
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 15302.97 0.99 161.39 7.92 34201.98 210.68 65.27 87.75 3.93 63.76
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    1.83 0.00 5.49 1.22 0.00 91.46
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 21.00 0.00 95.00 0.00 91336.00 961.43 43.76 1003.00 7.18 68.20
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 21.00 0.00 95.00 0.00 91336.00 961.43 43.76 1003.00 7.18 68.20
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    ===================

  • Management Studio crashing on local SQL Server console

    Unable to start SQL Server Management Studio for SQL Server 2012 running on Windows Server 2012 and 2012 R2.
    We have a number of SQL Servers deployed, and in the last month all instances of SSMS have stopped working with the following error message:
    Problem signature:
      Problem Event Name:       APPCRASH
      Application Name:         Ssms.exe
      Application Version:      2011.110.3000.0
      Application Timestamp:    5081c1cd
      Fault Module Name:        ntdll.dll
      Fault Module Version:     6.3.9600.17476
      Fault Module Timestamp:   54516af9
      Exception Code:           c000007b
      Exception Offset:         000a36e5
      OS Version:               6.3.9600.2.0.0.272.7
      Locale ID:                1033
      Additional Information 1: 1abe
      Additional Information 2: 1abee00edb3fc1158f9ad6f44f0f6be8
      Additional Information 3: 1abe
      Additional Information 4: 1abee00edb3fc1158f9ad6f44f0f6be8
    We have done the following to resolve this, without success:
    Ran a repair from the installation package
    Uninstalled SSMS and reinstalled
    Copied in another ntdll.dll

    This is what is generated in the log when trying to start the application:
    Date,Source,Severity,Message,Category,Event,User,Computer
    01/16/2015 09:39:58,Windows Error Reporting,Information,Fault bucket <c/> type 0<nl/>Event Name: APPCRASH<nl/>Response: Not available<nl/>Cab Id: 0<nl/><nl/>Problem signature:<nl/>P1: Ssms.exe<nl/>P2: 2011.110.3000.0<nl/>P3:
    5081c1cd<nl/>P4: ntdll.dll<nl/>P5: 6.2.9200.17046<nl/>P6: 53b485c4<nl/>P7: c000007b<nl/>P8: 00078c9e<nl/>P9: <nl/>P10: <nl/><nl/>Attached files:<nl/><nl/>These files may be available here:<nl/>C:\Users\felgersma\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_Ssms.exe_89a64aa0912742cdc59239b0e49f3eea1441adb7_93419cf8<nl/><nl/>Analysis
    symbol: <nl/>Rechecking for solution: 0<nl/>Report Id: af322b86-9da6-11e4-9418-ac162d73da9f<nl/>Report Status: 2048<nl/>Hashed bucket:,(0),1001,,cvcnt42
    01/16/2015 09:39:56,Application Error,Error,Windows cannot access the file  for one of the following reasons: there is a problem with the network connection<c/> the disk that the file is stored on<c/> or the storage drivers installed on this
    computer; or the disk is missing. Windows closed the program SQL Server Management Studio because of this error.<nl/><nl/>Program: SQL Server Management Studio<nl/>File: <nl/><nl/>The error value is listed in the Additional Data
    section.<nl/>User Action<nl/>1. Open the file again. This situation might be a temporary problem that corrects itself when the program runs again.<nl/>2. If the file still cannot be accessed and<nl/>
    - It is on the network<c/> your network administrator should verify that there is not a problem with the network and that the server can be contacted.<nl/>
    - It is on a removable disk<c/> for example<c/> a floppy disk or CD-ROM<c/> verify that the disk is fully inserted into the computer.<nl/>3. Check and repair the file system by running CHKDSK. To run CHKDSK<c/> click Start<c/>
    click Run<c/> type CMD<c/> and then click OK. At the command prompt<c/> type CHKDSK /F<c/> and then press ENTER.<nl/>4. If the problem persists<c/> restore the file from a backup copy.<nl/>5. Determine whether other
    files on the same disk can be opened. If not<c/> the disk might be damaged. If it is a hard disk<c/> contact your administrator or computer hardware vendor for further assistance.<nl/><nl/>Additional Data<nl/>Error value: 00000000<nl/>Disk
    type: 0,(100),1005,,cvcnt42
    01/16/2015 09:39:56,Application Error,Error,Faulting application name: Ssms.exe<c/> version: 2011.110.3000.0<c/> time stamp: 0x5081c1cd<nl/>Faulting module name: ntdll.dll<c/> version: 6.2.9200.17046<c/> time stamp: 0x53b485c4<nl/>Exception
    code: 0xc000007b<nl/>Fault offset: 0x00078c9e<nl/>Faulting process id: 0x2384<nl/>Faulting application start time: 0x01d031b36fa19f2f<nl/>Faulting application path: C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Ssms.exe<nl/>Faulting
    module path: C:\Windows\SYSTEM32\ntdll.dll<nl/>Report Id: af322b86-9da6-11e4-9418-ac162d73da9f<nl/>Faulting package full name: <nl/>Faulting package-relative application ID:,(100),1000,,cvcnt42
    01/16/2015 09:35:02,Windows Error Reporting,Information,Fault bucket <c/> type 0<nl/>Event Name: APPCRASH<nl/>Response: Not available<nl/>Cab Id: 0<nl/><nl/>Problem signature:<nl/>P1: Ssms.exe<nl/>P2: 2011.110.3000.0<nl/>P3:
    5081c1cd<nl/>P4: ntdll.dll<nl/>P5: 6.2.9200.17046<nl/>P6: 53b485c4<nl/>P7: c000007b<nl/>P8: 00078c9e<nl/>P9: <nl/>P10: <nl/><nl/>Attached files:<nl/><nl/>These files may be available here:<nl/>C:\Users\felgersma\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_Ssms.exe_89a64aa0912742cdc59239b0e49f3eea1441adb7_9d291751<nl/><nl/>Analysis
    symbol: <nl/>Rechecking for solution: 0<nl/>Report Id: fb97f994-9da5-11e4-9418-ac162d73da9f<nl/>Report Status: 2048<nl/>Hashed bucket:,(0),1001,,cvcnt42
