High Cohesion and a Dash of Low Coupling

I have two classes. (Well, there are more, but I will use two to explain my problem.)
Items are processed one by one by class1; as soon as an item is processed it is sent to class2 for further processing. Class2 contains queues which store items sent from class1 while they wait to be processed. I have implemented this; however, my problem is as follows:
There is a need for a 'routing system' class which receives items (one at a time) from class1, works out the shortest queue length in class2, and then sends that item to that queue.
The biggest problem is that the class must be able to handle different situations, e.g. be able to handle any number of queues, etc.
(Some may be thinking: why don't you just write this as a method inside class2? The answer is that this 'routing system' is going to be used by the whole program, so it must be made available to all classes without the need to duplicate it.)
Basically, high cohesion and low coupling are the key, and this is the reason why I find it difficult to implement this class.
Can anyone help? Thanks.

My suggestion would be to put a manager on top of the queue set.
So, for a given set of queues, there is a managing thread that 'receives' the input (the output from a prior operation) and also responds to the dequeue operation. There are plenty of articles on multithreading with multiple producers and multiple consumers. Just use a singleton for the manager.
http://www.google.com/search?hl=en&lr=&ie=UTF-8&oe=UTF-8&q=Java+Multithreading+multiple+producers
http://archive.devx.com/sourcebank/details.asp?resourceNum=123662
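
Here is a minimal sketch of what that manager might look like. All names (Item, ItemRouter) are hypothetical, and the shortest-queue scan is just one reasonable policy:

    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.LinkedBlockingQueue;

    /** Hypothetical stand-in for whatever class1 produces. */
    class Item { }

    /** Singleton router: owns any number of queues, sends each item to the shortest. */
    final class ItemRouter {

        private static final ItemRouter INSTANCE = new ItemRouter();

        public static ItemRouter getInstance() { return INSTANCE; }

        // CopyOnWriteArrayList so queues can be registered while items are being routed.
        private final List<BlockingQueue<Item>> queues = new CopyOnWriteArrayList<>();

        private ItemRouter() { }

        /** Each consumer (e.g. a class2 worker) registers and drains its own queue. */
        public BlockingQueue<Item> registerQueue() {
            BlockingQueue<Item> q = new LinkedBlockingQueue<>();
            queues.add(q);
            return q;
        }

        /** Producers (class1) call only this; they never see class2's internals. */
        public void route(Item item) throws InterruptedException {
            if (queues.isEmpty()) {
                throw new IllegalStateException("No queues registered");
            }
            BlockingQueue<Item> shortest = queues.get(0);
            for (BlockingQueue<Item> q : queues) {
                if (q.size() < shortest.size()) {
                    shortest = q;
                }
            }
            // The size check races with concurrent puts, but a stale choice only
            // skews the balance slightly; it never breaks correctness.
            shortest.put(item);
        }
    }

Class1 then just calls ItemRouter.getInstance().route(item): high cohesion because all the routing logic lives in one class, low coupling because neither producer nor consumer knows about the other.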

Similar Messages

  • My DAQ card PCI-6025E's digital ports PA, PB, PC are stuck in a high state and cannot be changed even in the MAX 2.1 test panel; they continuously output 5 volts even when set as outputs.

    My digital ports PA, PB, PC are stuck high and cannot be set low even when they are configured as outputs.
    Thanks

    The MAX utility is the closest to the driver level and will eliminate software configuration as a possible issue. Please disconnect anything that you have connected to the DIO lines.
    Use the MAX test panel for your DIO lines, configure them as outputs, and set them to a low state. Use a multimeter (DMM) to observe the line state. If it is still high then you may have a problem with your hardware. If this is the case, I advise calling National Instruments support and investigating a possible RMA (repair).
    Best Regards,
    Justin Britten
    Applications Engineer
    National Instruments

  • What's the difference between high-density and low-density RAM?

    I bought what I thought was supposed to be a cheap 256 MB chip (PC100 or 133), but the G4 400 read only 128 MB. The fellow in the store that sold it flipped out when I told him I had a Mac and said something about high density and low density. Another guy in the store described it as being one-sided and two-sided.
    I don't know what my machine needs. I got my money back, but he wouldn't let me exchange the chip for another, me thinking it was simply mislabeled.

    The QuickSilvers use PC-133 SDRAM and are compatible with high-density memory chips (256 Megabit), so that 16-chip 512 MB and 8-chip 256 MB non-ECC DIMMs are supported. A low density, 16-chip (128 Megabit) 256 MB DIMM is also supported.
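
    For what it's worth, the arithmetic behind those figures: sixteen 256-Megabit chips give 16 x 256 Mbit = 4096 Mbit = 512 MB, eight such chips give 8 x 256 Mbit = 256 MB, and sixteen low-density 128-Megabit chips give 16 x 128 Mbit = 256 MB. A 256 MB module built from a chip density the machine cannot fully address is typically recognized at only half its capacity, which would explain the G4 reporting 128 MB.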

  • When I have headphones plugged into my iPhone it shows a "high volume" warning, first turning red and then dropping a number of red dots, but this does not happen for someone else I know

    When I have headphones plugged into my iPhone it shows a "high volume" warning, first turning red and then dropping a number of red dots, but this does not happen for someone else I know. My sound is much lower than anyone else's in music, and also when I talk by phone headset.

    Hi! I have the same problem when I use my headphones.
    iPhone 4s England, iOS 6.1.2

  • First 2 Higher Numbers and Last 2 Lower Numbers

    Hi Experts
    I have a table named T containing some numeric values. I want to select the first 2 higher numbers (60, 50) and the last 2 lower numbers (10 and 20) with a query.
    I'm using Oracle 10g R2
    C1
         10
         20
         30
         40
         50
         60
    Please help me in this regard
    Regards,
    Nasir.

    With this solution you scan your table only once.
    Regards, Salim.
    select c1
    from (
                select c1,find
                from t
                model
                dimension by ( row_number()over(partition by 1 order by c1) rn)
                  measures( c1, 0 find, count(1)over(partition by 1) cpt)
                 ( find[any]= case when cv(rn) in (1,2,cpt[1],cpt[1]-1) then 1 else 0 end )
         )
    where find=1
    SQL> with t  as ( select        10 C1 from dual union all
      2   select 20  from dual union all
      3   select 30 from dual union all
      4   select 40 from dual union all
      5   select 50 from dual union all
      6   select 60 from dual )
      7  select c1
      8  from (
      9              select c1,find
    10              from t
    11              model
    12              dimension by ( row_number()over(partition by 1 order by c1) rn)
    13                measures( c1, 0 find, count(1)over(partition by 1) cpt)
    14               ( find[any]= case when cv(rn) in (1,2,cpt[1],cpt[1]-1) then 1 else 0 end )
    15          )
    16  where find=1
    17  /
            C1
            10
            20
            50
            60
    SQL>

  • High bytes and low bytes?

    What are high bytes and low bytes? Thanks for helping.

    This may not be enough information to answer your question. How and where are the terms high byte and low byte used?
    Often, a data type is represented using multiple bytes. For example, a short is 2 bytes. The most significant byte is then called the high byte, while the least significant byte is the low byte.
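
    If it helps, here is a small Java illustration (hypothetical, just for this thread) that splits a 2-byte short into its high and low bytes and puts them back together:

        public class HighLowBytes {
            public static void main(String[] args) {
                short value = (short) 0x1234;   // 2 bytes: 0x12 and 0x34

                // High byte = most significant 8 bits.
                byte high = (byte) ((value >> 8) & 0xFF);   // 0x12
                // Low byte = least significant 8 bits.
                byte low = (byte) (value & 0xFF);           // 0x34

                System.out.printf("high = 0x%02X, low = 0x%02X%n", high, low);

                // Recombining the two bytes restores the original value.
                short restored = (short) ((high << 8) | (low & 0xFF));
                System.out.println(restored == value);      // true
            }
        }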

  • High brightness and low colour of display

    HP Pavilion 15-n208tx Notebook PC (ENERGY STAR): brightness and contrast are high at the normal display setting, so the colour is very weak and some items like folders are not clear. What's the solution?

    Hey harikottayam,
    Thanks for the updated information.
    I understand that you are using the default settings, but can you tell me if you are able to adjust the settings to your personal preference?
    You can find the steps to do this here: Changing the Visual Appearance of Windows 8.
    The default settings for a display are not always perfect for every user, which is why the ability to adjust resolution, brightness, contrast, etcetera, are a standard feature of any display.
    If you are unable to adjust the display, or it is still poor quality, I would suggest contacting HP Phone Support for service options. Your notebook is new, so it should be covered under the factory warranty. You can contact HP Technical Support at 1-800-474-6836 in North America. For all other regions click here.
    Hopefully this helps.
    The Great Deku Tree
    I work on behalf of HP.

  • DMC-L1 RAW high ISO and garbage strips on edge

    I'm seeing some weird RAW decoding issues on some of my DMC-L1 photos that are shot with a high ISO (800 and over). There is a strip of garbage on the right edge (and sometimes on the left). If I use Adobe's DNG converter I do not get the garbage strips, so I know it's something to do with Mac OS X's RAW decoding. No photos with a lower ISO show this problem.
    Looking into this more deeply, it seems both Adobe's DNG Converter and Aperture are doing something odd. Standard photo size from the camera is 3148x2350, but the photos with the garbage strip are 3184x2358. Weirdly, the DNG conversion makes the photos 3164x2358.
    Here are cropped examples showing the width and garbage strip issue (a couple photos also have a magenta band on the left side, but not in the example):
    DMC-L1 RAW: http://www.puppethead.com/misc/dmcl1rawnoise.jpg
    DMC-L1 DNG: http://www.puppethead.com/misc/dmcl1dngnoise.jpg
    MacBook Pro 2GHz Core Duo 15"   Mac OS X (10.4.9)  

    ls -l /var/run/lighttpd/
    And how are you spawning the php instances? I don't see that in the daemons array anywhere.
    EDIT: It looks like the info on that page is no longer using pre-spawned instances, but lighttpd adaptive-spawn. It looks like the documentation has been made inconsistent.
    You will note that with pre-spawned instances, the config looks different[1].
    You need to do one or the other, not both (i.e. choose adaptive-spawn or pre-spawn, not both).
    [1]: http://wiki.archlinux.org/index.php?tit … oldid=8051 "change"

  • High latency and some packet loss (PL), culprit seems to be a router within Comcast

    Hi: I reported this issue to Comcast support but I think I'll post it here too. I'm seeing high latency routing through the Comcast network to a variety of locations. For all endpoints I try, I see high latency except when I hit Comcast endpoints (which makes sense). Here's some output for you: WinMTR output, Traceroute output. Note that in all cases there is a significant slowdown somewhere between hop 9 and 11, always at a Comcast router. This issue has been going on for a few hours. Anyone else seeing it?

    Hi: (Not sure of the correct forum for this type of request so I'm reposting here after first posting over here)  I posted a couple days ago with some issues that seem to have returned. Seeing high latency and occasional PL trying to get from my home to a variety of well-known endpoints. WinMTR output. Anyone else seeing the same? Comcast? This is starting to become a thing. Really annoying for those of us who rely on low latency connectivity when working from home!

  • Performance Degradation - High Fetches and Parses

    Hello,
    My analysis of a particular job trace file drew my attention to:
    1) A high rate of parses instead of bind variable usage.
    2) High fetches and a poor/low number of rows being processed.
    Please let me know how the performance degradation can be minimised. Perhaps the high number of SQL*Net client wait events is due to multiple fetches and transactions with the client.
    EXPLAIN PLAN FOR SELECT /*+ FIRST_ROWS (1)  */ * FROM  SAPNXP.INOB
    WHERE MANDT = :A0
    AND KLART = :A1
    AND OBTAB = :A2
    AND OBJEK LIKE :A3 AND ROWNUM <= :A4;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      119      0.00       0.00          0          0          0           0
    Execute    239      0.16       0.13          0          0          0           0
    Fetch      239   2069.31    2127.88          0   13738804          0           0
    total      597   2069.47    2128.01          0   13738804          0           0
    PLAN_TABLE_OUTPUT
    Plan hash value: 1235313998
    | Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |        |     2 |   268 |     1   (0)| 00:00:01 |
    |*  1 |  COUNT STOPKEY               |        |       |       |            |          |
    |*  2 |   TABLE ACCESS BY INDEX ROWID| INOB   |     2 |   268 |     1   (0)| 00:00:01 |
    |*  3 |    INDEX SKIP SCAN           | INOB~2 |  7514 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=TO_NUMBER(:A4))
       2 - filter("OBJEK" LIKE :A3 AND "KLART"=:A1)
       3 - access("MANDT"=:A0 AND "OBTAB"=:A2)
           filter("OBTAB"=:A2)
    18 rows selected.
    SQL> SELECT INDEX_NAME,TABLE_NAME,COLUMN_NAME FROM DBA_IND_COLUMNS WHERE INDEX_OWNER='SAPNXP' AND INDEX_NAME='INOB~2';
    INDEX_NAME      TABLE_NAME                     COLUMN_NAME
    INOB~2          INOB                           MANDT
    INOB~2          INOB                           CLINT
    INOB~2          INOB                           OBTAB
    Is it possible to maximise the rows per fetch?
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      163      0.03       0.00          0          0          0           0
    Execute    163      0.01       0.03          0          0          0           0
    Fetch   174899     55.26      59.14          0    1387649          0     4718932
    total   175225     55.30      59.19          0    1387649          0     4718932
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 27
    Rows     Row Source Operation
      28952  TABLE ACCESS BY INDEX ROWID EDIDC (cr=8505 pr=0 pw=0 time=202797 us)
      28952   INDEX RANGE SCAN EDIDC~1 (cr=1457 pr=0 pw=0 time=29112 us)(object id 202995)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                  174899        0.00          0.16
      SQL*Net more data to client                155767        0.01          5.69
      SQL*Net message from client                174899        0.11        208.21
      latch: cache buffers chains                     2        0.00          0.00
      latch free                                      4        0.00          0.00
    ********************************************************************************

    user4566776 wrote:
    My analysis of a particular job trace file drew my attention to:
    1) A high rate of parses instead of bind variable usage.
    But if you look at the text you are using bind variables.
    The first query is executed 239 times - which matches the 239 fetches. You cut off some of the useful information from the tkprof output, but the figures show that you're executing more than once per parse call. The time is CPU time spent using a bad execution plan to find no data -- this looks like a bad choice of index, possibly a side effect of the first_rows(1) hint.
    2) High fetches and a poor/low number of rows being processed
    The second query is doing a lot of fetches because in 163 executions it is fetching 4.7 million rows at roughly 25 rows per fetch. You might improve performance a little by increasing the array fetch size - but probably not by more than a factor of 2.
    You'll notice that even though you record 163 parse calls for the second statement the number of " Misses in library cache during parse" is zero - so the parse calls are pretty irrelevant, the cursor is being re-used.
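
    As a concrete (hypothetical) illustration of raising the array fetch size from a Java client, JDBC's setFetchSize() controls how many rows come back per round trip; the connection details are placeholders, and EDIDC is just the table from the trace above:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class FetchSizeDemo {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password")) {
                    try (PreparedStatement ps = conn.prepareStatement(
                            "SELECT * FROM edidc WHERE mandt = ?")) {
                        ps.setString(1, "100");
                        // Oracle's JDBC default is 10 rows per fetch; at ~25 rows
                        // per fetch the trace above needed ~175,000 fetch calls.
                        // A bigger array size cuts the fetch count proportionally.
                        ps.setFetchSize(500);
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                // process the row
                            }
                        }
                    }
                }
            }
        }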
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • Performance issue with high CPU and IO

    Hi guys,
    I am encountering huge user response times on a production system and I don't know how to solve it.
    Doing some extra tests and using the instrumentation that we have in the code, we concluded that the DB is the bottleneck.
    We generated some AWR reports and noticed CPU was among the top wait events. We also noticed that, in a random manner, some simple SQLs take a long time to execute. We activated SQL trace on the system and noticed that for very simple SQLs (unique index access on one table) we have huge exec times: 9 s.
    In the trace file the huge time was in the fetch phase: 9.1 s CPU and 9.2 s elapsed.
    And no, or very small, waits for this specific SQL.
    It seems like the bottleneck is the CPU, but at that point there were very few processes running on the DB. Why can we have such a big CPU wait on a simple select? This is a machine with 128 cores. We have quicker responses on machines smaller/busier than this.
    We noticed that we had a huge db_cache_size (12G), and after we scaled it down we noticed some improvements, but not enough. How can I prove that there is a link between high CPU and a big cache size? (There was no wait involved in the SQL execution.) What can we do in the case where we need a big DB cache size?
    The second issue is that I tried to execute an SQL on a big table (FTS on a big table, no join). Again, on the smaller machine it runs in 30 seconds and on this machine it runs in 1038 seconds.
    Also generated a trace for this SQL on the problematic machine:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1    402.08    1038.31    1842916    6174343          0           1
    total        3    402.08    1038.32    1842916    6174343          0           1
      db file sequential read                     12419        0.21         40.02
      i/o slave wait                             135475        0.51        613.03
      db file scattered read                     135475        0.52        675.15
      log file switch completion                      5        0.06          0.18
      latch: In memory undo latch                     6        0.00          0.00
      latch: object queue header operation            1        0.00          0.00
    ********************************************************************************
    The high CPU is present here also, but here I have a huge wait on db file scattered read.
    Looking at the session with the select, the avg wait for db file scattered read was 0.5; on the other machine it is about 0.07.
    I thought this was an IO issue. I did some IO tests at the OS level and it seems like the read and write operations are very fast… much faster than on the machine that has the smaller avg wait. Why the difference in waits?
    One difference between these two DBs is that the problematic one has a db block size of 16k and the other one has 8k.
    I received some reports done at the OS level on CPU and IO usage on the problematic machine (in normal operations). It seems like the CPU is heavily used and the IO stays very low.
    On the other machine, the smaller and faster one, it is the other way around.
    What is the problem here? How can I test further? Can I link the high CPU to low/slow IO?
    We have 10g on Sun OS with ASM.
    Thanks in advance.

    Yes, there are many things you can and should do to isolate this. But first check that MOS Poor Performance With Oracle9i and 10g Releases When Using Dynamic Intimate Shared Memory (DISM) [ID 1018855.1] isn't messing you up to start.
    Also, be sure and post exact patch levels for both Oracle and OS.
    Be sure and check all your I/O settings and see what MOS has to say about those.
    Are you using ASSM? See Long running update
    Since it got a little better with shrinking the SGA size, that might indicate (wild speculation here, something like) one of the problems is simply too much thrashing within the SGA, as oracle decides "small" objects being full scanned in memory is faster than range scans (or whatever) from disk, overloading the cpu, not allowing the cpu to ask for other full scans from I/O. Possibly made worse by row level locking, or some other app issue that just does too much cpu.
    You probably have more than one thing wrong. High fetch count might mean you need to adjust the array size on the clients.
    Now that that is all out of the way, if you still haven't found the problem, go through http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Edit: Oh, see Solaris 10 memory management conflicts with Automatic PGA Memory Management [ID 460424.1] too.
    Edited by: jgarry on Nov 15, 2011 1:45 PM

  • HT201318 I upgraded my iCloud storage capacity to a higher tier and the device does not actually reflect said upgrade. How can this be resolved, since my credit card was charged?

    I upgraded my iCloud storage capacity to a higher tier and my iPhone does not reflect the change, although my credit card was charged and the device is not properly backed up. How can this be resolved?

    It seems to take up to a couple of days for the upgrade to take hold; at least that's the experience of some users. Give it 24 hours before contacting Apple.
    For customer service issues, go here...
    https://expresslane.apple.com/Issues.action

  • Reg IS-HER (Higher Education and Research)

    Hi all,
    I need to integrate the "Higher Education and Research" module of SAP into Enterprise Portals.
    Can anyone please let me know how to go about doing it.
    I searched for a Business package for the same but could not find one.
    Thanks in advance.
    Regards,
    Vivek

    Hi Vivek,
    I don't know anything about HER, but if there isn't a business package available, you can go a couple of ways. The first way is to use the Webgui in ITS or the WebAS on your HER system to web-enable the transaction.
    The second would be to write code (e.g. Java) to call the function modules and return the data in a web-enabled format; see the sketch after this post.
    I suggest the first way to try it out quickly.
    Hope this is some help
    John
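
    For the second approach, a minimal, hypothetical sketch using SAP's Java Connector (JCo 3). The destination name is a placeholder you would configure yourself, and RFC_SYSTEM_INFO is a stand-in demo RFC for whichever remote-enabled function modules the HER module actually exposes:

        import com.sap.conn.jco.JCoDestination;
        import com.sap.conn.jco.JCoDestinationManager;
        import com.sap.conn.jco.JCoException;
        import com.sap.conn.jco.JCoFunction;

        public class HerRfcCall {
            public static void main(String[] args) throws JCoException {
                // "HER_SYSTEM" is a placeholder destination, configured elsewhere
                // (e.g. in a HER_SYSTEM.jcoDestination properties file).
                JCoDestination dest = JCoDestinationManager.getDestination("HER_SYSTEM");

                // Look up the function module's metadata from the repository.
                JCoFunction fn = dest.getRepository().getFunction("RFC_SYSTEM_INFO");
                if (fn == null) {
                    throw new IllegalStateException("Function not found in repository");
                }
                fn.execute(dest);

                // Pull an export parameter; the portal layer can then render it
                // in whatever web-enabled format is needed.
                String host = fn.getExportParameterList()
                                .getStructure("RFCSI_EXPORT")
                                .getString("RFCHOST");
                System.out.println(host);
            }
        }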

  • Dropped my iPhone and now the home button doesn't work and it will not charge. The lower third of it seems not to work. I assume it's had it. What will I lose off it?

    Dropped my iPhone and now the home button doesn't work and it will not charge. The lower third of it seems not to work. I assume it's had it. What will I lose off it?

    rbrook0113 wrote:
    I know any device could malfunction, but if it costs $40,000 to buy you could at least warn someone of the possible malfunctions. It makes the company look bad when a large amount of your devices act up and you "can't help them" if they don't pay extra. I'm just saying.
    Sorry, but that makes no sense at all. How can they possibly know what things will possibly malfunction on your device? Anything that they think is highly likely to malfunction they would have addressed in designing and manufacturing the device in the first place, during development. They've made and sold many millions of iPhones, and many of them have lasted years without any problems (I know, I've owned such iPhones personally). So how could they possibly know if yours was any more likely to develop any issues than any of those other trouble-free ones?
    It's hardware - stuff sometimes breaks randomly. No amount of money ever guarantees or even reduces the likelihood of random product failures.
    Take it in to Apple and see about getting it fixed.

  • System highly unstable and crashes while using Bluetooth (Plantronics)

    I got the new MacBook Pro with Leopard 10.5.6 and am "somewhat" happy with the performance. The OS looks to be buggy, in particular while using Bluetooth devices or under high load, and I have uploaded crash dumps a couple of times to Apple (support listening??). I observed that when I use the Plantronics Pulsar 590A (http://www.plantronics.com/northamerica/enUS/products/cat1150057/cat5420035/prod29780013) watching video (iTunes, iMovie), the headset behaves strangely, i.e. after all the hassle of going through the frequent connectivity issue (for which there's a question pending with no answer), the sound drops or there's some crackling noise etc.
    The more annoying problem is that the system crashes very frequently. I.e. while using the BT headset, watch video or listen to music for a couple of minutes, close the app or put the system to sleep, then after some time try watching or listening to something else; the system becomes unbelievably slow and finally gives up with an error message: kernel panic, system needs to be restarted (hold down the power button for several seconds or press the Restart button). If I don't use the Bluetooth headset the crashes are not so frequent, but it will crash for sure (I extensively use VirtualBox with one or two guest OSes, and most of the other crashes are from VirtualBox).
    I also observed that the Microsoft NB 5000 mouse behaves oddly, i.e. the connection keeps getting dropped.
    Any ideas/suggestions? I think this is a problem with the Mac OS Bluetooth driver, as both these devices work without any problem on Windows XP/Vista on other hardware (not Mac x86).

    Try reinstalling OS X:
    How to Perform an Archive and Install
    An Archive and Install will NOT erase your hard drive, but you must have sufficient free space for a second OS X installation which could be from 3-9 GBs depending upon the version of OS X and selected installation options. The free space requirement is over and above normal free space requirements which should be at least 6-10 GBs. Read all the linked references carefully before proceeding.
    1. Be sure to use Disk Utility first to repair the disk before performing the Archive and Install.
    Repairing the Hard Drive and Permissions
    Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When the menu bar appears select Disk Utility from the Installer menu (Utilities menu for Tiger.)
    After DU loads select your hard drive entry (mfgr.'s ID and drive size) from the left side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified" then the hard drive is failing or failed. (SMART status is not reported on external Firewire or USB drives.)
    If the drive is "Verified" then select your OS X volume from the list on the left (sub-entry below the drive entry,) click on the First Aid tab, then click on the Repair Disk button. If DU reports any errors that have been fixed, then re-run Repair Disk until no errors are reported. If no errors are reported, then quit DU and return to the installer.
    2. Do not proceed with an Archive and Install if DU reports errors it cannot fix. In that case use Disk Warrior and/or TechTool Pro to repair the hard drive. If neither can repair the drive, then you will have to erase the drive and reinstall from scratch.
    3. Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When you reach the screen to select a destination drive click once on the destination drive then click on the Option button. Select the Archive and Install option. You have an option to preserve users and network preferences. Only select this option if you are sure you have no corrupted files in your user accounts. Otherwise leave this option unchecked. Click on the OK button and continue with the OS X Installation.
    4. Upon completion of the Archive and Install you will have a Previous System Folder in the root directory. You should retain the PSF until you are sure you do not need to manually transfer any items from the PSF to your newly installed system.
    5. After moving any items you want to keep from the PSF you should delete it. You can back it up if you prefer, but you must delete it from the hard drive.
    6. You can now download a Combo Updater directly from Apple's download site to update your new system to the desired version as well as install any security or other updates. You can also do this using Software Update.
    Some third-party peripherals may not be compatible with OS X. This may be especially a problem with peripherals that use proprietary drivers such as mouse software. If you are using Microsoft's driver software with your mouse consider uninstalling the software and driver. If this helps with your mouse, then you might try using SteerMouse - VersionTracker or MacUpdate - instead of the Microsoft software.
