Uneven distribution

Hi All,
I have a cluster of around 100 storage nodes and 30 proxy nodes. I have taken the available memory and max memory figures from the Node MBeans.
Some of the nodes in the cluster are at 85% memory usage while others are at 60-65%. I am using Coherence 3.3.
Is this the expected behavior of Coherence? Is this the right way of measuring STORAGE node usage?
What are the possible causes of uneven distribution? How can we avoid it?
How do I make the distribution more even?
Node ,     Avail MEM (MB) ,     Max MEM (MB) ,     %Used ,      ROLE
Node 2 ,     532,     1529 ,     65.21(%) ,      STORAGE
Node 3 ,     454,     1529 ,     70.31(%) ,      STORAGE
Node 4 ,     405,     1529 ,     73.51(%) ,      STORAGE
Node 5 ,     447,     1529 ,     70.77(%) ,      STORAGE
Node 6 ,     452,     1529 ,     70.44(%) ,      STORAGE
Node 7 ,     499,     1529 ,     67.36(%) ,      STORAGE
Node 8 ,     440,     1529 ,     71.22(%) ,      STORAGE
Node 9 ,     433,     1529 ,     71.68(%) ,      STORAGE
Node 10 ,     512,     1529 ,     66.51(%) ,      STORAGE
Node 11 ,     338,     1529 ,     77.89(%) ,      STORAGE
Node 12 ,     417,     1529 ,     72.73(%) ,      STORAGE
Node 13 ,     320,     1529 ,     79.07(%) ,      STORAGE
Node 14 ,     432,     1529 ,     71.75(%) ,      STORAGE
Node 15 ,     307,     1529 ,     79.92(%) ,      STORAGE
Node 16 ,     413,     1529 ,     72.99(%) ,      STORAGE
Node 17 ,     465,     1529 ,     69.59(%) ,      STORAGE
Node 18 ,     475,     1529 ,     68.93(%) ,      STORAGE
Node 19 ,     515,     1529 ,     66.32(%) ,      STORAGE
Node 20 ,     425,     1529 ,     72.2(%) ,      STORAGE
Node 21 ,     512,     1529 ,     66.51(%) ,      STORAGE
Node 22 ,     433,     1529 ,     71.68(%) ,      STORAGE
Node 23 ,     448,     1529 ,     70.7(%) ,      STORAGE
Node 24 ,     498,     1529 ,     67.43(%) ,      STORAGE
Node 25 ,     453,     1529 ,     70.37(%) ,      STORAGE
Node 26 ,     491,     1529 ,     67.89(%) ,      STORAGE
Node 27 ,     522,     1529 ,     65.86(%) ,      STORAGE
Node 28 ,     760,     1529 ,     50.29(%) ,      STORAGE
Node 29 ,     396,     1529 ,     74.1(%) ,      STORAGE
Node 30 ,     224,     1529 ,     85.35(%) ,      STORAGE
Node 31 ,     450,     1529 ,     70.57(%) ,      STORAGE
Node 32 ,     467,     1529 ,     69.46(%) ,      STORAGE
Node 33 ,     186,     1529 ,     87.84(%) ,      STORAGE
Node 34 ,     465,     1529 ,     69.59(%) ,      STORAGE
Node 35 ,     454,     1529 ,     70.31(%) ,      STORAGE
Node 36 ,     446,     1529 ,     70.83(%) ,      STORAGE
Node 37 ,     453,     1529 ,     70.37(%) ,      STORAGE
Node 38 ,     519,     1529 ,     66.06(%) ,      STORAGE
Node 39 ,     312,     1529 ,     79.59(%) ,      STORAGE
Node 40 ,     299,     1529 ,     80.44(%) ,      STORAGE
Node 41 ,     391,     1529 ,     74.43(%) ,      STORAGE
Node 42 ,     286,     1529 ,     81.29(%) ,      STORAGE
Node 43 ,     420,     1529 ,     72.53(%) ,      STORAGE
Node 44 ,     413,     1529 ,     72.99(%) ,      STORAGE
Node 45 ,     291,     1529 ,     80.97(%) ,      STORAGE
Node 46 ,     407,     1529 ,     73.38(%) ,      STORAGE
Node 47 ,     390,     1529 ,     74.49(%) ,      STORAGE
Node 48 ,     434,     1529 ,     71.62(%) ,      STORAGE
Node 49 ,     275,     1529 ,     82.01(%) ,      STORAGE
Node 50 ,     459,     1529 ,     69.98(%) ,      STORAGE
Node 51 ,     417,     1529 ,     72.73(%) ,      STORAGE
Node 52 ,     423,     1529 ,     72.33(%) ,      STORAGE
Node 53 ,     256,     1529 ,     83.26(%) ,      STORAGE
Node 54 ,     461,     1529 ,     69.85(%) ,      STORAGE
Node 55 ,     485,     1529 ,     68.28(%) ,      STORAGE
Node 56 ,     427,     1529 ,     72.07(%) ,      STORAGE
Node 57 ,     439,     1529 ,     71.29(%) ,      STORAGE
Node 58 ,     450,     1529 ,     70.57(%) ,      STORAGE
Node 59 ,     487,     1529 ,     68.15(%) ,      STORAGE
Node 60 ,     458,     1529 ,     70.05(%) ,      STORAGE
Node 61 ,     464,     1529 ,     69.65(%) ,      STORAGE
Node 62 ,     441,     1529 ,     71.16(%) ,      STORAGE
Node 63 ,     318,     1529 ,     79.2(%) ,      STORAGE
Node 64 ,     304,     1529 ,     80.12(%) ,      STORAGE
Node 65 ,     276,     1529 ,     81.95(%) ,      STORAGE
Node 66 ,     494,     1529 ,     67.69(%) ,      STORAGE
Node 67 ,     517,     1529 ,     66.19(%) ,      STORAGE
Node 68 ,     311,     1529 ,     79.66(%) ,      STORAGE
Node 69 ,     416,     1529 ,     72.79(%) ,      STORAGE
Node 70 ,     844,     1529 ,     44.8(%) ,      STORAGE
Node 71 ,     362,     1529 ,     76.32(%) ,      STORAGE
Node 72 ,     417,     1529 ,     72.73(%) ,      STORAGE
Node 73 ,     516,     1529 ,     66.25(%) ,      STORAGE
Node 74 ,     529,     1529 ,     65.4(%) ,      STORAGE
Node 75 ,     459,     1529 ,     69.98(%) ,      STORAGE
Node 76 ,     257,     1529 ,     83.19(%) ,      STORAGE
Node 77 ,     190,     1529 ,     87.57(%) ,      STORAGE
Node 78 ,     310,     1529 ,     79.73(%) ,      STORAGE
Node 79 ,     532,     1529 ,     65.21(%) ,      STORAGE
Node 80 ,     262,     1529 ,     82.86(%) ,      STORAGE
Node 81 ,     270,     1529 ,     82.34(%) ,      STORAGE
Node 82 ,     293,     1529 ,     80.84(%) ,      STORAGE
Node 83 ,     418,     1529 ,     72.66(%) ,      STORAGE
Node 84 ,     388,     1529 ,     74.62(%) ,      STORAGE
Node 85 ,     412,     1529 ,     73.05(%) ,      STORAGE
Node 86 ,     314,     1529 ,     79.46(%) ,      STORAGE
Node 87 ,     412,     1529 ,     73.05(%) ,      STORAGE
Node 88 ,     297,     1529 ,     80.58(%) ,      STORAGE
Node 89 ,     517,     1529 ,     66.19(%) ,      STORAGE
Node 90 ,     538,     1529 ,     64.81(%) ,      STORAGE
Node 91 ,     519,     1529 ,     66.06(%) ,      STORAGE
Node 92 ,     475,     1529 ,     68.93(%) ,      STORAGE
Node 93 ,     436,     1529 ,     71.48(%) ,      STORAGE
Node 94 ,     456,     1529 ,     70.18(%) ,      STORAGE
Node 95 ,     424,     1529 ,     72.27(%) ,      STORAGE
Node 96 ,     402,     1529 ,     73.71(%) ,      STORAGE
Node 97 ,     517,     1529 ,     66.19(%) ,      STORAGE
Node 98 ,     284,     1529 ,     81.43(%) ,      STORAGE
Node 99 ,     512,     1529 ,     66.51(%) ,      STORAGE
Node 100 ,     456,     1529 ,     70.18(%) ,      STORAGE
Node 101 ,     442,     1529 ,     71.09(%) ,      STORAGE
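
For reference, a minimal sketch of how such per-node figures can be collected over JMX, assuming the standard Coherence Node MBeans (Coherence:type=Node,*) expose the MemoryAvailableMB and MemoryMaxMB attributes and that JMX management is enabled (e.g. -Dtangosol.coherence.management=all); the JMX service URL below is a placeholder for your management node:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class NodeMemoryReport {
    public static void main(String[] args) throws Exception {
        // Placeholder URL of the member running the Coherence MBean server.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9991/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // One Node MBean is registered per cluster member.
            Set<ObjectName> nodes = mbs.queryNames(new ObjectName("Coherence:type=Node,*"), null);
            for (ObjectName node : nodes) {
                int avail = ((Number) mbs.getAttribute(node, "MemoryAvailableMB")).intValue();
                int max   = ((Number) mbs.getAttribute(node, "MemoryMaxMB")).intValue();
                double usedPct = 100.0 * (max - avail) / max;
                System.out.printf("Node %s , %d , %d , %.2f(%%)%n",
                        node.getKeyProperty("nodeId"), avail, max, usedPct);
            }
        } finally {
            connector.close();
        }
    }
}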

user594809 wrote:
What is the impact of setting <partition-count> larger than the actual data size requires?
I am storing around 40GB primary + 40GB backup + indexes. The total cluster size is 150GB. How many partitions should I divide this into? I am using a partition-count of 8191.
How many nodes do you have?
Does Coherence store backup copies in the same partitions alongside primary data? (i.e., within one partition, can it store primary data plus the backup of some other partition?)
You are approaching it from the wrong side. The unit of data distribution in Coherence distributed cache services is a partition. There is a master copy of each partition. There is a configurable number of backup copies of each partition (set with the backup-count configuration element; by default one copy exists), as long as number-of-nodes >= 1 + backup-count. The backup and the primary copy never reside on the same node, and the ultimate goal is that they never reside on the same physical box either. This latter may not always be achievable, e.g. if all your nodes are on the same box.
What should I do to balance the load more evenly across the whole cluster? Whenever a node reaches 85% memory utilization, the chances of an OOM/JVM crash increase.
In this situation you most likely need to increase the total available memory, preferably by starting more nodes. You may also need to increase the partition-count, but that alone may not reliably solve your problem.
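
For reference, both settings mentioned above live in the cache configuration's distributed-scheme; a sketch of the relevant fragment (the scheme and service names are made up, 8191 is the prime partition-count quoted earlier, and backup-count 1 is the default):

<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- number of partitions the data is split into; should be a prime number -->
  <partition-count>8191</partition-count>
  <!-- number of backup copies of each partition (1 is the default) -->
  <backup-count>1</backup-count>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>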
Best regards,
Robert

Similar Messages

  • How to get uneven distribution of blended lines

    Is there a way to achieve uneven distribution of blended lines in Illustrator CS5? There was one other post on the web asking a similar question, which was solved by changing the blend spline.
    This works great for objects, but not so much for lines.
    Here is an example of what I am looking for where the top is what I would like to achieve and the bottom is the typical blend.

    I will now have to read through to see how to calculate the distance
    If you're trying to achieve a realistic distribution as if viewing the side of uniform spaced lines on a cylinder, construct it by basic orthographic projection.
    JET

  • Query with uneven distribution

    We have to query a 40,000,000-row table by amount_calculated (indexed) and time period (creation_date).
    The distribution of amount_calculated is uneven. Some values have thousands of occurrences
    that, in fact, no one is interested in.
    We have to keep the query time below 15 seconds in any case.
    Which mechanism can we implement to:
    1) predict the execution time and the number of rows to expect
    2) stop the query execution after 15 seconds

    I don't have sample code, but you could measure how much time it takes to retrieve, say, 1, 10, 100 and 1000 rows, and use that information together with the information in user_histograms to predict how much time it will take for a particular value to be retrieved.
    Regards,
    Rob.
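
    Not from the original reply, but a rough Java/JDBC sketch of both points: Statement.setQueryTimeout can enforce the 15-second cap, and timing the fetch of the first N rows (as Rob suggests) gives a crude per-row estimate to extrapolate from. The connection string, credentials and table name are placeholders; only amount_calculated comes from the question.

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class QueryProbe {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");   // placeholders
                 PreparedStatement ps = con.prepareStatement(
                     "select * from big_table where amount_calculated = ?")) {      // placeholder table
                // Point 2: ask the driver to cancel the statement after ~15 seconds;
                // a timeout surfaces as an SQLTimeoutException.
                ps.setQueryTimeout(15);
                ps.setFetchSize(100);
                ps.setBigDecimal(1, new BigDecimal("42"));

                long start = System.nanoTime();
                int rows = 0;
                try (ResultSet rs = ps.executeQuery()) {
                    // Point 1: time the first 1000 rows, then extrapolate using user_histograms data.
                    while (rs.next() && rows < 1000) {
                        rows++;
                    }
                }
                double secPerRow = (System.nanoTime() - start) / 1e9 / Math.max(rows, 1);
                System.out.printf("fetched %d rows, ~%.5f s per row%n", rows, secPerRow);
            }
        }
    }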

  • Uneven distribution in Hash Partitioning

    Version :11.1.0.7.0 - 64bit Production
    OS :RHEL 5.3
    I have range partitioning on the ACCOUNTING_DATE column with 24 monthly partitions.
    To get rid of buffer busy waits on the index, I created a global partitioned index using the DDL below.
    DDL:
    CREATE INDEX IDX_GL_BATCH_ID ON SL_JOURNAL_ENTRY_LINES(GL_BATCH_ID)
    GLOBAL PARTITION BY HASH (GL_BATCH_ID) PARTITIONS 16 TABLESPACE OTC_IDX PARALLEL 8 INITRANS 8 MAXTRANS 8 PCTFREE 0 ONLINE;
    After index creation, I realized that only one index hash partition got all the rows.
    select partition_name,num_rows from dba_ind_partitions where index_name='IDX_GL_BATCH_ID';
    PARTITION_NAME                   NUM_ROWS
    SYS_P77                                 0
    SYS_P79                                 0
    SYS_P80                                 0
    SYS_P81                                 0
    SYS_P83                                 0
    SYS_P84                                 0
    SYS_P85                                 0
    SYS_P87                                 0
    SYS_P88                                 0
    SYS_P89                                 0
    SYS_P91                                 0
    SYS_P92                                 0
    SYS_P78                                 0
    SYS_P82                                 0
    SYS_P86                                 0
    SYS_P90                         256905355
    As far as I understand, HASH partitioning should distribute rows evenly. Looking at the above distribution, I don't think I got the benefit of multiple insert points from HASH partitioning either.
    Here is index column statistics :
    select TABLE_NAME,COLUMN_NAME,NUM_DISTINCT,NUM_NULLS,LAST_ANALYZED,SAMPLE_SIZE,HISTOGRAM,AVG_COL_LEN from dba_tab_col_statistics where table_name='SL_JOURNAL_ENTRY_LINES'  and COLUMN_NAME='GL_BATCH_ID';
    TABLE_NAME                     COLUMN_NAME          NUM_DISTINCT  NUM_NULLS LAST_ANALYZED        SAMPLE_SIZE HISTOGRAM       AVG_COL_LEN
    SL_JOURNAL_ENTRY_LINES         GL_BATCH_ID                     1          0 2010/12/28 22:00:51    259218636 NONE                      4

    It looks like the inserted data always has the same value for the partitioning key; in that case it is expected that the same partition is used, because:
    >
    For optimal data distribution, the following requirements should be satisfied:
    Choose a column or combination of columns that is unique or almost unique.
    Create multiple partitions and subpartitions for each partition that is a power of two. For example, 2, 4, 8, 16, 32, 64, 128, and so on.
    >
    See http://download.oracle.com/docs/cd/E11882_01/server.112/e16541/part_avail.htm#VLDBG1270.
    Edited by: P. Forstmann on 29 déc. 2010 09:06
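
    Not from the original reply, but a tiny illustration of the point using ordinary Java hashing rather than Oracle's internal hash function: with only one distinct key value, every row necessarily lands in the same one of the 16 buckets.

    import java.util.HashMap;
    import java.util.Map;

    public class HashSkewDemo {
        public static void main(String[] args) {
            int partitions = 16;
            Map<Integer, Integer> histogram = new HashMap<>();
            int glBatchId = 12345;   // NUM_DISTINCT = 1: the same value on every row
            for (int row = 0; row < 1000000; row++) {
                int bucket = Math.floorMod(Integer.hashCode(glBatchId), partitions);
                histogram.merge(bucket, 1, Integer::sum);
            }
            // All one million rows end up in a single bucket, just as SYS_P90
            // ended up holding every index entry above.
            System.out.println(histogram);
        }
    }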

  • Uneven Distribution of Objects Around a Circle

    Hi,
    I'm trying to figure out a way to distribute objects in a circle but make them uneven. Well, "uneven" in that the objects begin to space out more as they approach the top of the circle like the sample shown. I know how to distribute object evenly but I'm stumped on how to make this happen. Any ideas as to how this can be achieved? Blend and Spine? Transforms?

  • Uneven distribution of memory in the blades even after reorg

    Hello:
    We have 3 blades in our BI-A system, and we see that one of the blades is always consuming almost twice the memory of the other 2 blades. We have tried reorg without any success. 
    Any suggestions on how to fix this issue?
    Thanks,
    Bindu

    Hello Vitaliy:
    Thanks for your reply.  We do have a stand-alone TREX admin tool on blades, but we couldn’t find the interactive graph you have mentioned, to move them with our mouse. Is it possible for you to give us the path in the TREX Admin to find the graphs?
    But we did find in Index > Landscape, we could right click and move the master indexes between the blades.
    This seems to help, but again we think that the “Total memory used” in the Hosts > Memory tab is not showing the correct number. It is still showing that one blade is almost double the other 2 blades.
    We check the Reorg>usage by service(1), and see that all the blades show equal usage. Does this mean the Hosts > Memory tab is not showing the correct values?
    Thanks,
    Bindu

  • How would you do uneven distribution of objects?

    I wish the Transform Each... tool had a feature allowing a specified number of copies, along with the Skew feature that is available in the Grid tool.
    So far I've been doing this by using the Grid tool as a reference and manually arranging objects, which is a big pain in the a**.
    Does anyone know a better way?
    Or another vector program, plug-in, script, etc.?
    This is a piece of cake in most 3D programs, which are a much more complex task for their programmers, and I can't believe that Illustrator still doesn't have such an essential tool.

    Emil,
    Any brush approach will lead to trouble in a case like this because it will give you either misalignment with the path or distortion.
    A blend approach may do it.
    In addition to what Steve just said (I had to leave while writing the following, and saw his new post #13):
    Your grid approach (upper row) seems to lead to a strange decrease pattern (reduction in distance between pairs of vertical stroked paths):
    the first decrease being 3.449 pt, the next three being 2.821, 2.823, and 2.822 (constant), then 3.507 and back to 2.822 again,
    and the last five decreases being 0.314, 0.626, 0.631, 0.338, and 0.002.
    Your blend approach (lower row) seems quite consistent, with a decrease pattern (reduction in distance between the outer bounds of pairs of squares) that has a constantly decreasing decrease (the change in the reduction to the next pair) of about 0.053 (with slight variations between 0.049 and 0.057), leading to an intriguing end game where the reduction actually increases.
    The blend made by Steve seems to lead to an exponentially decreasing decrease (something like 1/1.1875, corresponding to an increasing increase from right to left by the factor 1.1875). I am afraid we should have to ask Teri what the exact relation between handle lengths and the nature of the decreasing decrease is; I believe it is somewhat complex, with (at least) two different handle lengths.
    As it appears, I believe your grid approach is less useful than it seems.
    Depending on the desired nature of the decrease/increase (depending on which end you start with) you may do it in a few different ways, including the blends mentioned above.
    If you wish to really control the decrease/increase, I am afraid you will have to use a more laborious blend approach, a bit similar to the one in this thread with a fake blend:
    http://forums.adobe.com/thread/808345?tstart=30
    Possible approach:
    Using a blend to fill a full circle, be aware that you need to cut the circle where you wish the blend to start/stop, and that the first/last objects are on top of each other so you need one extra.
    Preparing the contents of the fake blend:
    If you want a specific exponentially decreasing/increasing decrease/increase, you may use the Alt/Option+O+T+M shortcut (or something similar) to copy:
    1) Create the basic object and the first copy, inserting the first distance;
    2) For the next distance insert the percentage distance (110% for a 10% increasing increase, or similar) and copy the percentage distance;
    3) For the following distances, until you have them all, insert the copied percentage distance instead of the current value (in this case you just get 110%);
    If you want a specific constantly decreasing/increasing decrease/increase, you may use the Alt/Option+O+T+M shortcut (or something similar) to copy:
    1) Create the basic object and the first copy, inserting the first distance;
    2)  For the next distance add/subtract the constant change of distance (+2pt for a 2pt  increasing increase, or similar) and copy the constant change of distance;
    3) For the following distances, until you have them all, insert the constant change of distance after the current value (in this case you get the current value +2pt);
    To create the fake blend, you may:
    4) Object>Blend>Options set Spacing to Specified Steps with the value 1 and Orientation to Align to Path;
    5) Select all objects and Object>Blend>Make, now you have surplus objects in between the desired objects, see 8).
    To finish the work, you may:
    6) Apply the blend to the (cut) circle;
    7) Object>Blend>Expand;
    8) Delete the surplus objects.
    I hope this was not too easy (to understand).
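
    Not part of the original replies, but the two spacing schemes described in the numbered steps above boil down to simple arithmetic; a small sketch of the resulting gap sequences (the starting gap, the 110% factor and the +2 pt step are the placeholder values used above):

    public class GapSequences {
        public static void main(String[] args) {
            int copies = 10;
            double firstGap = 20.0;                    // pt, placeholder starting distance

            // Exponential scheme: each gap is 110% of the previous one (a 10% "increasing increase").
            double gap = firstGap, position = 0;
            for (int i = 1; i <= copies; i++) {
                position += gap;
                System.out.printf("exponential copy %2d at %8.3f pt (gap %.3f pt)%n", i, position, gap);
                gap *= 1.10;
            }

            // Arithmetic scheme: each gap grows by a constant +2 pt.
            gap = firstGap;
            position = 0;
            for (int i = 1; i <= copies; i++) {
                position += gap;
                System.out.printf("arithmetic  copy %2d at %8.3f pt (gap %.3f pt)%n", i, position, gap);
                gap += 2.0;
            }
        }
    }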

  • Uneven print results

    When printing using Photoshop CS4, the last part of the print (say 25-30%) turns out lighter than the first part. I use the HP 9180 Photosmart printer. Printing the same file using MS Word gives no problems at all, so it cannot be a problem with the printer. Can anyone help me find where to look for a solution?

    So it cannot be a problem with the printer.
    But most likely it is... Or with the print settings, anyway. I also wouldn't compare printing from Word with printing from PS. I'd wager a good deal that Word uses completely different settings when it prints, especially in terms of resolution and ink separation. Aside from investigating whether you really have enough ink and/or whether one of the ink heads is clogged up, I would check the printer driver. You may simply be printing too fast, resulting in uneven distribution of ink. Also make sure to use the correct paper type setting; this influences drying time and thus the speed at which printing happens. Additionally, of course, check the color handling options for saturation, etc.
    Mylenium

  • CSS Citrix CAG Load Balancing

    Hi,
    I'm looking to get an opinion as to whether we should see even load balancing over two services.  The content rule is configured as follows :-
    content secure_cag
      add service citrix_cag_1
      port 443
      protocol tcp
      vip address 10.80.2.150
      balance srcip
      add service citrix_cag_2
      sticky-inact-timeout 240
      flow-timeout-multiplier 1800
      active
    Services :-
    service citrix_cag_x
      keepalive type tcp
      keepalive port 443
      ip address 10.200.16.18
      active
    At present we only have around 40 users using it but at times we are seeing a very uneven distribution of sessions, as much as 80% on one server.  Do we have too few users to see effective load balancing? Maybe our long timeout settings are breaking load balancing?
    Thanks for any insight anyone can share.

    Hi Chris,
    You might want to try balance leastconn for your balancing method.  Also, note that you are not currently configured for sticky, so the sticky timeout you have configured isn't doing anything.  Do you require sticky?  If you do not require sticky, then leastconn should give you the best distribution across services at any given point in time.  Adding sticky, such as with advanced-balance sticky-srcip, will skew load balancing as clients become stuck to one service.
    Hope this helps,
    Sean

  • Partitioning design question

    I'm investigating partitioning one or more tables in our schema to see if that will speed performance for some of our queries. I'm wondering if the following structure is viable, however.
    Table structure - this is a snippet of relevant info:
    CREATE TABLE ASSET (
    asset NUMBER, -- primary key
    assetType NUMBER,
    company NUMBER,
    created SYSTIMESTAMP,
    modified SYSTIMESTAMP
    lobData CLOB
    ...)
    The current table has ~ 60 million rows. All queries are filtered at least on the company column, and possibly by other criteria (never/rarely by date). The number of rows a company can have in this table can vary greatly - the largest company has about 2.4 million, and the smallest about 1000. This table is joined by several other tables via the primary key, but rarely queried itself by the primary key (no range pkey queries exist).
    I'm thinking of partitioning by company (range) - however, I'm not sure if the uneven distribution of company data makes that an effective partition. The number of companies is relatively small (~6000 ) and does not grow significantly (perhaps 1-2 new companies a day). The data in this table is pretty active - ~200k deletes/inserts a day.
    Does it make sense to range partition by company? I was thinking of partitioning per company (1 partition per company) - but the partitions would be quite different in size. Is there a limit to the number of partitions a table can have (is it 64k?). Does partitioning even make sense for this table structure?
    Any thoughts or insights would be most helpful - thank you.

    kellypw wrote:
    I'm investigating partitioning one or more tables in our schema to see if that will speed performance for some of our queries. I'm wondering if the following structure is viable, however.
    Table structure - this is a snippet of relevant info:
    CREATE TABLE ASSET (
    asset NUMBER, -- primary key
    assetType NUMBER,
    company NUMBER,
    created SYSTIMESTAMP,
    modified SYSTIMESTAMP
    lobData CLOB
    ...)
    The current table has ~ 60 million rows. All queries are filtered at least on the company column, and possibly by other criteria (never/rarely by date). The number of rows a company can have in this table can vary greatly - the largest company has about 2.4 million, and the smallest about 1000. This table is joined by several other tables via the primary key, but rarely queried itself by the primary key (no range pkey queries exist).
    I'm thinking of partitioning by company (range) - however, I'm not sure if the uneven distribution of company data makes that an effective partition. The number of companies is relatively small (~6000 ) and does not grow significantly (perhaps 1-2 new companies a day). The data in this table is pretty active - ~200k deletes/inserts a day.
    The version of Oracle is very important.
    Partitioning on company looks like a sensible option since you ALWAYS filter on company - but list partitioning makes more sense than range partitioning because it is more "truthful"
    Unfortunately it looks, at first sight, as if you have a logical error in the design - I'm wondering if the company should be part of the primary key of the asset. If you partition by company you won't be able to do partition-wise joins to the other tables when joining on primary key (I've interpreted your statement to mean that the asset is the foreign key in other tables) unless you happen to be running 11g and use "ref partitioning".
    It's hard to predict the impact of 6,000 partitions, especially with such extreme variations in size. With list partitioning it's worth thinking about putting each large company into its own partition, but using a small number of partitions (or even just the default partition) for all the rest.
    Regards
    Jonathan Lewis

  • Compressor doesn't work anymore....

    Hello,
    I got a really strange bug,
    I can not use Compressor anymore...
    I can't see the "project" window where all my video sequences are...
    I can only see these windows :
    • "Settings / Destinations"
    • "Inspector"
    • "Preview" (but the preview window looks very buggy, i think...
    • "History"
    So I can't add any new video to compress/export... and I can't add any settings to export the videos...
    I tried twice to erase all the Compressor application files, application support files, and prefs,
    and to reinstall from the original DVD,
    but it still doesn't work....
    PLEASE HELP !!!
    now, I don't know what to do...
    Perhaps I didn't erase & re-install the applications and compressor files properly,
    but, CAN ANYONE HELP ME PLEASE ?!!
    Thanks in advance.

    keyman: Try following the steps Compressor Zealot links to. Read the instructions carefully, and make sure you remove all the Compressor/Qmaster files before reinstalling.
    Eyebite wrote:
    don't plan on using Virtual Clustering. As far as I can determine, no one has that working yet.
    Uhmmm... Virtual Clustering works fine, but you can not use virtual clustering when sending your sequence directly from FCP to Compressor.
    Compressor: Don't export from Final Cut Pro using a Virtual Cluster
    To transcode your FCP-sequence using Virtual Clustering with Compressor 3, you can export your sequence as a self-contained ProRes file, and then bring that file into Compressor.
    Keep in mind that job segmenting is not always good for compression.
    Job Segmenting and Two-Pass (or Multi-Pass) Encoding
    If you choose the two-pass or the multi-pass mode, and you have distributed processing enabled, you may have to make a choice between speedier processing and ensuring the highest possible quality.
    The Apple Qmaster distributed processing system speeds up processing by distributing work to multiple processing nodes (computers). One way it does this is by dividing up the total amount of frames in a job into smaller segments. Each of the processing computers then works on a different segment. Since the nodes are working in parallel, the job is finished sooner than it would have been on a single computer. But with two-pass VBR and multi-pass encoding, each segment is treated individually so the bit-rate allocation generated in the first pass for any one segment does not include information from the segments processed on other computers. First, evaluate the encoding difficulty (complexity) of your source media. Then, decide whether or not to allow job segmenting (with the “Allow Job Segmenting” checkbox at the top of the Encoder pane). If the distribution of simple and complex areas of the media is similar throughout the whole source media file, then you can get the same quality whether segmenting is turned on or not. In that case, it makes sense to allow segmenting to speed up the processing time.
    However, you may have a source media file with an uneven distribution of complex scenes. For example, suppose you have a 2-hour sports program in which the first hour is the pregame show with relatively static talking heads, and the second hour is high-action sports footage. If this source media were evenly split into 2 segments, the bit rate allocation plan for the first segment would not be able to “donate” some of its bits to the second segment because the segments would be processed on separate computers. The quality of the more complex action footage in the second segment would suffer. In this case, if your goal were ensuring the highest possible quality over the entire 2-hour program, it would make sense to not allow job segmenting by deselecting the checkbox at the top of the Encoder pane. This forces the job (and therefore, the bit-rate allocation) to be processed on a single computer.
    Note: The “Allow Job Segmenting” checkbox only affects the segmenting of individual jobs (source files). If you are submitting batches with multiple jobs, the distributed processing system will continue to speed up processing by distributing (unsegmented) jobs, even with job segmenting turned off.
    From the Compressor User Manual

  • Partition Count

    Got a couple of questions on the thread below!
    Capacity: 150 GB (actual data), so ~3 times that = 450 GB total memory needed.
    Out of this,
    150 GB = CacheName1 (application1, 50 GB) + CacheName2 for a specific application (application2, 100 GB)
    Questions -
    a) When I configure the partition count in the distributed scheme, CacheName1 and CacheName2 should have separate partition counts configured in their schemes (correct me if I'm wrong).
    Partition count calculation:
    50 GB actual data → (150 GB x 1024) MB / 50 MB = 3072 ~= 3080 (3079 prime number + 1)
    100 GB actual data → (300 GB x 1024) MB / 50 MB = 6144 ~= 6152 (6151 prime number + 1)
    b) Is there any way to set an operational parameter that reflects and restricts CacheName1 to 150 GB and CacheName2 to 300 GB?
    c) What is the role of high-units? (Currently I have 2 GB for each of CacheName1 and CacheName2 in the local scheme, with all the JVMs running on 6 GB.)
    My understanding is that high-units restricts an individual cache, like CacheName1, to not exceed 2 GB on that node?
    d) Does each cache name have some footprint in each JVM, and does high-units play any role in the allocation of the cache items?

    Hi
    a) The partition count is preferably a prime number, so do not add +1. The caches may have different partition counts, but they aren't required to.
    b) No, not directly, since the capacity is dependent on the number of machines and their respective heap sizes. If you had a stable environment you could use high-units to do something similar, by setting it to 300 (or 150) * 1073741824 / number of storage-enabled JVMs.
    c) Correct, the high-units setting is what defines the local capacity of a certain scheme.
    d) If the hashCode() implementation has an even enough distribution it should generate an equal footprint across the machines in the cluster. However, if you have data affinity or an uneven distribution in the hashCode() implementation, some partitions may become heavier than others, which, if you are unlucky, could end up on the same machine.
    Thanks
    /Charlie Helin - Coherence Dev Team
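
    Not from the thread, but a small sketch of the two calculations Charlie describes: picking a prime partition count near total-data-MB / target-partition-MB (so there is no need to add 1), and deriving a per-JVM high-units figure from the total capacity (assuming a binary unit calculator so that units are bytes). The 50 MB target partition size comes from the calculation above; the JVM count is a placeholder.

    import java.math.BigInteger;

    public class CapacitySizing {
        public static void main(String[] args) {
            // CacheName1: 50 GB of primary data, ~150 GB including backups and overhead.
            long totalMb = 150L * 1024;
            long targetPartitionMb = 50;
            long rawCount = totalMb / targetPartitionMb;                       // 3072
            BigInteger partitionCount = BigInteger.valueOf(rawCount).nextProbablePrime();
            System.out.println("partition-count ~= " + partitionCount);        // prints 3079

            // high-units per storage-enabled JVM: total capacity in bytes / number of JVMs.
            int storageJvms = 100;                                             // placeholder node count
            long highUnitsPerJvm = 150L * 1073741824L / storageJvms;
            System.out.println("high-units per JVM ~= " + highUnitsPerJvm + " bytes");
        }
    }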

  • Multiple indexes on one column

    Hi,
    I have one table, and in that table there are 5 columns. One column has 3 indexes and another column has 2 indexes.
    I want to know: if I create more indexes on one column, will the performance of the database increase?
    Or if I remove indexes, i.e. keep only one index on one column, will the performance of the database decrease?
    Please suggest what I should do about these indexes.
    Thanks in advance.
    Anand

    Performance is not increased just by adding indexes; it could even degrade if indexes are badly planned.
    A candidate column for indexing is a column mentioned in the where clause of a query. Once the candidate columns have been selected, the second filter is how selective each one is. If it is highly selective (the ideal case being unique values) then it can be indexed; otherwise (uneven distribution or low cardinality) it should not be indexed by the regular method: for a column with an uneven distribution, column statistics (a histogram) have to be gathered, and for low cardinality a bitmap index is advisable.
    There are many other circumstances where an index is not advisable. If you query by an expression:
    where upper(column)
    or
    where numeric_col = '213'
    then you may be surprised to find that the index is not being used. There are other tricks like query rewrite or function-based indexes, but before going further, a specific statement has to be made on your side specifying the concrete circumstances in which you want to implement indexes.
    ~ Madrid.

  • Optimizer is not using the right index

    Hi gurus,
    there's something I don't understand. If someone can explain it, it'll be greatly appreciated.
    Env:
    10gR2 on Redhat AS
    The table:
    SQL> desc stock_detail
    Name Null? Type
    NO NOT NULL NUMBER(15)
    BP_CODE NOT NULL VARCHAR2(10)
    STOC_CAT_CODE NOT NULL VARCHAR2(6)
    BUIL_CODE NOT NULL VARCHAR2(8)
    LOCA_CODE NOT NULL VARCHAR2(8)
    LOCA_SUB_CODE NOT NULL VARCHAR2(6)
    ITEM_NO NOT NULL NUMBER(8)
    QTY NOT NULL NUMBER(6)
    DEFAULT_SHELF NOT NULL VARCHAR2(1)
    CREATION_DATE NOT NULL DATE
    CREATION_USER NOT NULL VARCHAR2(8)
    CM_NO NUMBER(15)
    LANDING_COST NUMBER(11,2)
    SUPPLI_COST NUMBER(11,2)
    RMA_DEADLINE DATE
    MOD_USER VARCHAR2(8)
    MOD_DATE DATE
    RECEP_DATE DATE
    NOTE VARCHAR2(2000)
    FLAG VARCHAR2(1)
    REFUS VARCHAR2(1)
    STOC_MOVE_REAS_CODE VARCHAR2(6)
    I have many indexes on this table. (like 5 or 6).
    There's one with item + business_unit (let's say INDEX_A)
    and there's one with item + category + business_unit (let's say INDEX_B).
    The following SQL always uses the wrong index:
    select nvl(sum(sd.qty),0)
    from stock_detail sd, location lo
    where sd.item_no = 419261 <- In INDEX_A & INDEX_B
    and sd.STOC_CAT_CODE='REG' <- In INDEX_B
    and sd.bp_code = 'TECMTL' <- In INDEX_A & INDEX_B
    and sd.buil_code <> 'TRANSIT'
    and sd.buil_code = lo.buil_code
    and sd.loca_code = lo.code
    and sd.loca_sub_code = lo.sub_code
    and nvl(lo.restricted, 'N') = 'Y';
    This SQL always uses INDEX_A; INDEX_B is far better.
    Stats of the index uactually used (INDEX_A):
    Last Analyzed 2007-10-18 22:04:38
    Blevel 1
    Distinct Keys 72124
    Clustering Factor 105368
    Leaf Blocks 339
    Average Leaf Blocks Per Key 1
    Average Data Blocks Per Key 1
    Number of Rows 110285
    Sample Size 110285
    Stats of the index I want to be used (INDEX_B)
    Last Analyzed 2007-10-18 22:04:46
    Blevel 2
    Distinct Keys 77407
    Clustering Factor 103472
    Leaf Blocks 551
    Average Leaf Blocks Per Key 1
    Average Data Blocks Per Key 1
    Number of Rows 110285
    Sample Size 110285
    Is there a way to use the right index without adding a hint?
    Thanks in advance.
    Message was edited by:
    (made a mistake in the stats of index B)

    I assume the execution path is a nested loop driving off the table with the constant inputs.
    The key difference in the stats is that the second index has a blevel of 2. I'd guess that the cost of using the first index is 1, and the cost of using the second index is three.
    The basic cost of accessing a table through an index is:
    blevel +
    index selectivity (ix_sel) * leaf blocks +
    table selectivity (ix_sel_with_filtering) * clustering_factor.
    However, if the blevel is 1, then Oracle ignores it.
    Your index and table selectivities in both cases are 1/distinct_keys (since this is 10.2)
    The numbers involved with the leaf block and clustering factor calculations are so small (and similar) that the difference of 2 in the add-on for the blevel is the deciding factor.
    According to the statistics, though, the choice of index shouldn't make much difference to the performance, since the number of rows (and blocks) visited is likely to be the same. However, if you have an uneven distribution of values for individual columns, you may need a histogram on that column so that the optimizer can see the effect it has on the expected work.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    P.S. I suppose it's probably fair to mention that I wrote a pretty good book about how the optimizer works: http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html
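
    Not from the original reply, but plugging the quoted index statistics into that cost formula (and ignoring the optimizer's exact rounding) gives roughly:
    INDEX_A: 339/72124 + 105368/72124 ≈ 0.005 + 1.461 ≈ 1.5 (the blevel of 1 is ignored)
    INDEX_B: 2 + 551/77407 + 103472/77407 ≈ 2 + 0.007 + 1.337 ≈ 3.3
    which lines up with the guessed costs of about 1 versus 3.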

  • Force Mapping from Modified Load Cell

    I am having some trouble finishing a program that we have been asked to write. Basically, the program is connected to a piece of hardware we fabricated. The hardware (attached below) is basically a wheel with four spokes. Each spoke has a strain gage attached to it on either side. These are connected as two full-bridge circuits. I have been able to get the force output for each of the full-bridges as the maximum value. We have now been asked to create a map of the forces to show where the most force is being placed (assuming uneven distribution). My vision is that it would use colors to show where the forces are, similar to a temperature map. What would I use to go about doing this?
    Attachments:
    Force Measurement v1.vi  106 KB
    Modified Load Cell.png  267 KB

    Look at the Intensity Graph. That will plot data as a range of colors. You will need to do some math to determine the value distributions from your set of strain gauge readings. You may also need to do some geometry/masking to set the force to zero where there is no material.
    Lynn
