JNI Versus MQ Series

For one of our projects I have been given the task of integrating with an EJB for data access from a C++ client. Since I couldn't use CORBA, I have two options available: JNI or IBM MQ. Since I am new to the Java world, I'm hoping you gurus can resolve my dilemma between JNI and MQ.
Has anyone used JNI for accessing beans for data access? There will be at least 10,000 calls made per hour, and the amount of data is not that big. What would a Java architect do, given the choice of MQ versus JNI?
I have heard anecdotes saying JNI is complex, the JVM takes time to load, and so on. My initial code for accessing a Java class is "manageable", and I load the JVM once at the beginning. I'm wondering which is faster and more reliable.
Any comments are much appreciated.

You said:
So in my C++ class I load the JVM once at the beginning, load a Java proxy class that will pre-load classes and execute methods for the C++ server. Clients requesting data will tell me which class to ask for, and I execute methods on that class. The data is returned to me as an XML string.
Please explain "clients". Who are your clients? Is there such C++ code already in place? And an existing client? Who accesses your C++ code, and how?
Also, please explain the C++ server. In what sense is it a server?
If there is none at the moment, I think you are making things a bit too complex.
If your clients are servlet (JSP) invocations:
There is at least one JVM in which all servlets run.
In one of your Beans, or servlets, you declare native methods to do the data fetching stuff.
You also have instances of classes that do the XML generation.
You first call the native methods, which will do the data fetch for you and return data to your Java-side code as a string, an array of strings, a vector of strings, or some other data structure.
(RecordSet???)
Then, on this data, you use one of your "XML generating class" objects to generate XML.
Return this XML data to your clients, which I guess would be other Servlets or JSPs.
Once you have the basic structure in place, people here can help you with deeper problems like threads, performance etc.
Since each servlet/JSP runs in a different thread, you can make your calls to native code thread-safe by keeping most of your state in native function scope. Global variables should be protected with critical sections.
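To make this concrete, here is a rough sketch of the Java side (all class, library, and method names are made up for illustration; adapt them to your own code):

public class LegacyDataFetcher {

    static {
        // Loads the C++ library that implements the native method below.
        System.loadLibrary("legacydata");
    }

    // Implemented in C++: fetches the records for the given query and
    // returns them as an array of strings (one string per record).
    private native String[] fetchRecords(String query);

    // Called from your servlet/JSP: fetch natively, then build the XML in Java.
    public String fetchAsXml(String query) {
        String[] records = fetchRecords(query);   // native data fetch
        StringBuffer xml = new StringBuffer("<records>");
        for (int i = 0; i < records.length; i++) {
            xml.append("<record>").append(records[i]).append("</record>");
        }
        xml.append("</records>");
        return xml.toString();
    }
}

As noted above, keep any native-side state local to the function where possible; anything global needs its own locking on the C++ side.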
I hope I made some sense.
Please update me with whatever advances you make. I intend to learn a lot from these forums.
cheers
[email protected]

Similar Messages

  • 875P Neo versus 865 series m/b

    Hi
    I am running the MSI 875P Neo FIS2R m/b and, apart from a few problems like the HDD LED, I am fairly happy with the board. I was just wondering which board from the 865 series of m/b's is, apart from the chipset, identical in features and has the HDD LED working with S-ATA drives?
    I really miss having a working HDD LED, and wonder if anyone can recommend a board from the 865 series which (apart from the chipset) is identical in features and performance, will work with the 'spec' shown in my signature, and on which the hard drive LED works with S-ATA drives?
    Exact model please so that I check the 'spec' on the MSI website, and preferably one which I can buy in the UK.
    Regards
    Brave01Heart

    Quote
    Originally posted by REILLY875
    Dont Forget Uruk Hai, that the i875 Chipset was Released (Born) Before the i865, and so were Most of the Motherboards that employ these Chipsets . So I think thats why they caught the S-ATA LED Problem on the i865......MSI i875 Born on 5-19-03, MSI i865 Born on 6-12-03.........Sean REILLY875
    Yes, Sorry all, once again my knowledge is found wanting!!!  
    Thanks Sean for putting me straight.
    I always thought 875's were specially selected 865 chips, but it is in fact the reverse: an 865 is an 875 that doesn't necessarily meet all 875 parameters. Of course, as we all know (Barton 2500+, P4 2.4C), this may be true in the early stages of a chip's development, but as process refinements are made and production volumes increase, these selections go out of the window. So all of you 865 guys most probably have 875-capable chips anyway (more than likely why MSI can now offer MAT on 865 boards!!!)
    Ok, so at least my faith in MSI design process is restored, but I still like the bamboo idea!!!
    I would guess that this should lead us to conclude that we should always recommend 865 variants over 875 to anyone asking. Like Brave01Heart did waaaaay back at the start of this thread.  

  • SQL Server 2012 SE install time

    I installed SQL 2012 SE on a couple of VMs built on Hyper-V servers. The two Hyper-V hosts are different, so when the VMs inherited the properties these are the configurations:
    VM1:
    Memory: 32GB
    OS:Windows Server 2008 R2 EE SP1
    Processor: Intel(R) Xeon(R) CPU L7555 @1.87GHz
    VM2:
    Memory: 32GB
    OS:Windows Server 2008 R2 EE SP1
    Processor:Intel(R) Xeon(R) CPU E5649 @2.53GHz
    Both VMs were freshly built this morning. The SQL Server installation took about 105 minutes on VM1 and 45 minutes on VM2. On both VMs the ISOs were copied locally and installed. My question is, why did the installation take so long on the L-series processor versus the E-series? The E-series is like 4 generations behind.
    Is it because of the processor speed?
    Experts, I need your valuable input. Thanks.

    Hi,
    Based on your description, many factors can affect the performance when you install SQL Server 2012 on a virtual machine, including the processor speed.
    According to the article "Hardware and Software Requirements for Installing SQL Server 2012", it is recommended to use a CPU whose processor speed is 2.0 GHz or faster when installing SQL Server 2012, as a faster CPU can speed up your operating system and optimize the installation performance of SQL Server.
    Thanks,
    Lydia Zhang

  • My layer is moving

    Hi! I have the lovely task of redesigning a website that was
    built using frames. The main page is the same and all the content
    appears through a window. Forgive me, I'm not sure of the
    terminology. Of course this means that search engines can't really
    support something like this and searches for our products only
    direct people to the pages that appear in the window and not the
    site as a whole.
    So...
    I'm trying to fix this. I've laid out all the graphics in a
    table versus a series of layers - is a table the best option? I
    need the option to scroll. Can I have one cell in my table scroll?
    I'm not sure if this can be done, so I drew a layer over the cell
    in the table where the text should be, and the scrolling works
    great, except that as I preview it on different sized screens, the
    layer jumps around and doesn't stay where I want it to, in relation
    to the table. Any thoughts? I also tried to eliminate the table and
    use all layers, but they move around on me too. I'm a one woman
    marketing team, so I appreciate the chance to ask somebody
    questions!
    Thanks!!

    "I'm a one woman marketing team"
    "What is your experience with HTML?"
    That's kinda personal isn't it?
    Dave
    "Murray *ACE*" <[email protected]> wrote
    in message
    news:fodafb$dsu$[email protected]..
    > > I'm a one woman marketing team
    >
    > What is your experience with HTML?
    >
    > --
    > Murray --- ICQ 71997575
    > Adobe Community Expert
    > (If you *MUST* email me, don't LAUGH when you do so!)
    > ==================
    >
    http://www.projectseven.com/go
    - DW FAQs, Tutorials & Resources
    >
    http://www.dwfaq.com - DW FAQs,
    Tutorials & Resources
    > ==================
    >

  • Netbeans versus Eclipse for JNI

    We have a significant amount of code to rehost on Solaris 8, from XView to new Java GUIs interfacing with legacy C code. The current development environment is vi and make. We want to evolve to JUnit, Ant, CVS, and an IDE, either NetBeans or Eclipse (maybe JBuilderX). We have used both for Java but never for JNI. Now exploring Eclipse. Can anyone lead us to a forum on NetBeans versus Eclipse for this type of effort?

    Hi Ivar, thanks for your reply.
    Actually both Netbeans and Eclipse provide a C/C++ plug-in, so you can develop the C side as well. I'm using it in Eclipse. There is a C Ant task also, so you can build the C and Java together. (Ideally, we want to be able to step, in the source level debugger, from the java into the C native method, but I don't believe that has been worked out in any IDE.) I am working with Eclipse now, and you can edit java in the java perspective, click a C file, and automatically switch to the C perspective. That seems to work pretty well. On the other hand, Netbeans is a Sun sponsored project, and we are on Solaris, so maybe Netbeans has some platform dependent goodies that might tip the balance.
    You and I both know that while a tool may have a capability, that does not mean it does the job well. Somebody else must be doing the same thing that I am, so I'm looking to learn from the experience of others who are farther into the process. Maybe I can avoid having to get deeply into one tool, only to find I should have used another.

  • Symbian versus series 40 handsets

    I've done a search for this and come up with nothing, so excuse me if this has been discussed in some way or another already. I've often heard and read that you can do far more with a Symbian/series 60 handset than you can with a series 40 one. What are some of the "applications" or things that you have done or can do with a series 60 phone? What kinds of applications have you got on your phone? Where do you find these applications? How do you install them? Why do the interfaces of series 60 smartphones seem more straightforward than series 40 (for example, no screen saver, no 3D menu icons, etc.), and why do they appear to be less configurable, superficially speaking? Why do you have more camera options (e.g. filming in night mode) on series 60, but not on series 40 handsets? How do you find out how much your particular handset is capable of doing if you own one?
    Would really love to know. Thanks to anyone who responds.

    I've used both in the past and present.
    The series 40 I would say is more of a common phone interface. Its primary function is as a phone, and it's a damn fine interface for it. My experience of the 40 phone was with a 6230. I did find the OS to be much more stable than S60 phones. The features that I really need from a phone (i.e. it needs to make calls) were very slick and fast to operate.
    The apps for a S40 phone pretty much did what I needed at the time. You could find JAR applications galore out there, just a lot of them were games.
    The S60 is a smartphone that has a closer resemblance to a PDA than a straightforward phone. Its whole interface is infinitely more customisable, like your PDA or PC would be. It also seems to have more productivity apps available for it. It also has a collection of games available as well. You also get things like virus scanners for S60 phones because there are nasties out there written to exploit the Symbian OS.
    The phone itself does more than just make calls. I think that's the key for an S60 phone.
    What would I choose? I would like an S40 for ease of calls and an S60 for everything else, thus I have an S60 phone.
    As for camera options, that's merely down to what Nokia want to put into their camera. The Ericsson camera phones may not run either but their camera options are very good.

  • Lion versus Snow Leopard, a new MAC Book Pro series coming?

    My son is looking for a new laptop. He would opt for function over portability. Therefore the larger MacBook Pro makes sense, but it seems as if the MacBook Air has the newer operating system, so the concern is its useful life.

    A MacBook Pro is more likely to support more games.
    A MacBook Air is more portable.  An addon Apple Thunderbolt display gives almost all the same ports as a Pro, but then it loses its portability.
    A MacBook Pro has more connections for digital cameras, and the 17" version also supports faster hard drives via its Express/34 port which can have external SATA added.  The 17" version though only fits on the tray tables of 737/700 and larger tray tables of airlines.  The Pro has a built-in optical drive, where the Air does not. The Air is 2 pounds lighter than the Pro.  On the other hand being more portable, it is easier to get stolen. I would consider Lapcop or Lojack for notebooks if one gets a notebook.  The new operating system as my FAQ* explains still has numerous applications which need upgrading:
    http://www.macmaps.com/upgradefaq.html#LION
    You can through http://www.apple.com/macosx/uptodate get the new operating system later if you get a qualifying Mac.  That would give you a chance to upgrade when you desire, and still migrate any data to newer applications when you need to.  The real question is, what sort of compatibility is needed?
    As for your subject line, Apple does not pre-announce hardware updates except on rare occasions, and we can't speculate as to when they might release a new one.  Watch http://www.apple.com/pr/
    Once they do, it likely won't work with Snow Leopard.

  • Error on nexus 7k series " operation failed.the fabric is already locked"

    I am getting the following error on a Nexus 7k series switch: "operation failed. the fabric is already locked", while removing NTP commands (no ntp server 10.101.1.1) from the switch. Please help.

    I had the same error message, only in my situation Outlook would only open successfully every 4 or 5 attempts. When Outlook would open (versus just hanging at the splash screen), I would get two dialog boxes with the warning/error message. 
    After I clicked okay I could get into Outlook.
    Previously I experienced this problem running my Office 2010 client with my Exchange 2007 mailbox on my old laptop.  The problem followed my mailbox through an Exchange 2010 migration (so new Exchange org), client upgrade to Outlook 2013, and a new
    laptop (so I knew it wasn't a corrupt profile).  This led me to the conclusion that it was a corrupt item(s) in my mailbox.
    To resolve the issue, I archived *everything* in my mailbox to a PST file, ran SCANPST to fix the corruption, and then uploaded everything back into my Exchange mailbox one folder at a time, stopping after each folder to close and restart Outlook so I could
    narrow down which folder had the corrupt item if the problem recurred.  I'm happy to say my issue is now resolved.

  • JNI Com CoCreateInstance VM crash

    I'm trying to use the ActiveHome SDK in a project. The SDK is written in VB and C++. I have set up a main class inside of my JNI wrapper to test the logic. I also set up a main class in the Java class. The test class in C++ works well when I call:
    hresult = CoCreateInstance( __uuidof(ActiveHomeScriptLib::ActiveHome), NULL, CLSCTX_INPROC_SERVER, __uuidof(ActiveHomeScriptLib::IActiveHome), (LPVOID *) &pActiveHome );
    However, this is where Java is dying. I have tried to instead call CoCreateInstance this way:
    hresult = CLSIDFromProgID(OLESTR("ActiveHomeScriptLib.ActiveHome"), &clsid);
    hresult = CoCreateInstance(clsid, NULL, CLSCTX_INPROC_SERVER, __uuidof(ActiveHomeScriptLib::IActiveHome), (LPVOID *) &pActiveHome);
    This doesn't crash the JVM, but it cannot find the OLESTR in the DLL using this method.
    Any help would be greatly appreciated.

    karlmarxxx wrote:
    Perhaps you should have read my question. I have already created a class in C++ testing this out and it works. When I run the same code in Java, the JVM crashes.
    Sorry, the scrolling to see everything on a single line obscured it.
    Presumably when you say crash you mean a windows system error dialog versus just an exit.
    When you created the C++ did you use default settings in VC from the console and for creating the jni dll?
    Presumably you are doing nothing else in terms of JNI except executing that line via a JNI method?

  • JNI FC1063 HBA support on solaris 10

    Hi ,
    I have upgraded Solaris 9 to Solaris 10 using Live Upgrade. The server is a Sun Fire V880 and the HBA is a JNI FC1063.
    After upgrading, I am not able to see the FC LUNs.
    Does the JNI FC1063 HBA have support on Solaris 10?
    Thanks

    I would beg to differ on this response. This is not a service call and the storage vendor is not responsible for making decisions on what the customer is allowed or not allowed to use.
    EG, I have a Sun StorEdge 9990 (actually a HDS USP 1100 and not the 9990 Lightning series which is on the web site) and I made the call to put the Fibre Channel HBA's of my choice into my systems and I also chose the Cisco 9509 switches. Both HDS and Cisco put out documentation regarding compatibility of the HBA's.
    So, I would suggest RPS check out the Brocade website or sales people and ask them if they recommend anything.
    http://www.brocade.com/products/pdf/Fabric_Aware_Compatibility_Matrix-060521.pdf
    states that the option X6768A is supported for Solaris 10.
    Sun 9980 will work on most of the HBA's available in the system handbook.
    They could be either the rebadged QLogic or JNI HBA's. You could even use Emulex if you are that way inclined.
    My systems are full of dual channel HBA's such as SG-XPCI2FC-QF2.
    You should also be aware that the 1280 has only one 66 MHz PCI slot. You can upgrade the I/O boat to all 66 MHz slots if you are after good throughput. The upgrade will also give the opportunity to use the 4 Gb HBA if you can find a system to use them on.
    Stephen

  • Pros/Cons of replicating to files versus staging tables

    I am new to GoldenGate and am trying to figure out the pros/cons of replicating to flat files to be processed by an ETL tool versus replicating directly to staging tables. We are using GoldenGate to source data from multiple transaction systems to flat files and then using Informatica to load thousands of flat files into our ODS staging area. I am trying to figure out if it would be better just to push data directly to staging tables. I am not sure which is better in terms of recovery, reconciliation, etc. Any advice or thoughts on this would be appreciated.

    Hi,
    My Suggestion would be to push the data from multiple source systems directly to staging table and then populate target system using ELT tool like ODI.
    Oracle Data Integrator can be combined with Oracle GoldenGate (OGG), which provides cross-platform data replication and changed data capture. Oracle GoldenGate works in a similar way to Oracle's asynchronous change data capture but handles greater volumes and works across multiple database platforms.
    Source -> Staging -> Target
    ODI-EE supports all leading data warehousing platforms, including Oracle Database, Teradata, Netezza, and IBM DB2. This is complemented by the Oracle GoldenGate architecture, which decouples source and target systems, enabling heterogeneity of databases as well as operating systems and hardware platforms. Oracle GoldenGate supports a wide range of database versions for Oracle Database, SQL Server, DB2 z/Series and LUW, Sybase ASE, Enscribe, SQL/MP and SQL/MX, Teradata running on Linux, Solaris, UNIX, Windows, and HP NonStop platforms as well as many data warehousing appliances including Oracle Exadata, Teradata, Netezza, and Greenplum. Companies can quickly and easily add new or different database sources and target systems to their configurations by simply adding new Capture and Delivery processes.
    ODI-EE and Oracle GoldenGate combined enable you to rapidly move transactional data between enterprise systems:
    Real-time data. - Immediately capture, transform, and deliver transactional data to other systems with subsecond latency. Improve organizational decision-making through enterprise-wide visibility into accurate, up-to-date information.
    Heterogeneous. - Utilize heterogeneous databases, packaged or even custom applications to leverage existing IT infrastructure. Use Knowledge Modules to speed the time of implementation.
    Reliability. - Deliver all committed records to the target, even in the event of network outages. Move data without requiring system interruption or batch windows. Ensure data consistency and referential integrity across multiple masters, back-up systems, and reporting databases.
    High performance with low impact. - Move thousands of transactions per second with negligible impact on source and target systems. Transform data at high performance and efficiency using E-LT. Access critical information in real time without bogging down production systems.
    Please refer to below links for more information on configuration of ODI-OGG.
    http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/odi/odi_11g/odi_gg_integration/odi_gg_integration.htm
    http://www.biblogs.com/2010/03/22/configuring-odi-10136-to-use-oracle-golden-gate-for-changed-data-capture/
    Hope this information helps.
    Thanks & Regards
    SK

  • ASCII character/string processing and performance - char[] versus String?

    Hello everyone
    I am a relative novice to Java; I have a procedural C programming background.
    I am reading many very large (many GB) comma/double-quote separated ASCII CSV text files and performing various kinds of pre-processing on them, prior to loading into the database.
    I am using Java7 (the latest) and using NIO.2.
    The IO performance is fine.
    My question is regarding performance of using char[i] arrays versus Strings and StringBuilder classes using charAt() methods.
    I read a file, one line/record at a time and then I process it. The regex is not an option (too slow and can not handle all cases I need to cover).
    I noticed that accessing a single character of a given String (or StringBuilder too) class using String.charAt(i) methods is several times (5 times+?) slower than referring to a char of an array with index.
    My question: is this correct observation re charAt() versus char[i] performance difference or am I doing something wrong in case of a String class?
    What is the best way (performance-wise) to process character strings inside Java if I need to process them one character at a time?
    Is there another approach that I should consider?
    Many thanks in advance
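    For reference, here is a minimal sketch of the two access patterns I am comparing (illustrative only; timings will of course vary by JVM and data):

    public class CharAccessDemo {
        public static void main(String[] args) {
            // Build a test string of one million characters.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 1000000; i++) {
                sb.append((char) ('a' + (i % 26)));
            }
            String line = sb.toString();

            // Approach 1: String.charAt(i), with length() hoisted out of the loop.
            long t0 = System.nanoTime();
            long sum1 = 0;
            int len = line.length();
            for (int i = 0; i < len; i++) {
                sum1 += line.charAt(i);
            }
            long t1 = System.nanoTime();

            // Approach 2: copy once to a char[] and index it directly.
            long t2 = System.nanoTime();
            long sum2 = 0;
            char[] chars = line.toCharArray();
            for (int i = 0; i < chars.length; i++) {
                sum2 += chars[i];
            }
            long t3 = System.nanoTime();

            System.out.println("charAt(): " + (t1 - t0) / 1000000 + " ms, sum=" + sum1);
            System.out.println("char[]:   " + (t3 - t2) / 1000000 + " ms, sum=" + sum2);
        }
    }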

    >
    Once I took that String.length() method out of the 'for loop' and used an integer length local variable, as you have in your code, the performance is very close between the char array and String charAt() approaches.
    >
    You are still worrying about something that is irrelevant in the greater scheme of things.
    It doesn't matter how fast the CPU processing of the data is if it is faster than you can write the data to the sink. The process is:
    1. read data into memory
    2. manipulate that data
    3. write data to a sink (database, file, network)
    The reading and writing of the data are going to be tens of thousands of times slower than any CPU you will be using. That read/write part of the process is the limiting factor of your throughput; not the CPU manipulation of step #2.
    Step #2 can only go as fast as steps #1 and #3 permit.
    Like I said above:
    >
    The best 'file to database' performance you could hope to achieve would be loading simple, 'known to be clean', record of a file into ONE table column defined, perhaps, as VARCHAR2(1000); that is, with NO processing of the record at all to determine column boundaries.
    That performance would be the standard you would measure all others against and would typically be in the hundreds of thousands or millions of records per minute.
    What you would find is that you can perform one heck of a lot of processing on each record without slowing that 'read and load' process down at all.
    >
    Regardless of the sink (DB, file, network) when you are designing data transport services you need to identify the 'slowest' parts. Those are the 'weak links' in the data chain. Once you have identified and tuned those parts the performance of any other step merely needs to be 'slightly' better to avoid becoming a bottleneck.
    That CPU part for step #2 is only rarely, if ever, the problem. Don't even consider it for specialized tuning until you demonstrate that it is needed.
    Besides, if your code is properly designed and modularized you should be able to 'plug n play' different parse and transform components after the framework is complete and in the performance test stage.
    >
    The only thing that is fixed is that all input files are ASCII (not Unicode) characters in range of 'space' to '~' (decimal 32-126) or common control characters like CR,LF,etc.
    >
    Then you could use byte arrays and byte processing to determine the record boundaries even if you then use String processing for the rest of the manipulation.
    That is what my framework does. You define the character set of the file and a 'set' of allowable record delimiters as Strings in that character set. There can be multiple possible record delimiters and each one can be multi-character (e.g. you can use 'XyZ' if you want).
    The delimiter set is converted to byte arrays and the file is read using RandomAccessFile and double-buffering and a multiple mark/reset functionality. The buffers are then searched for one of the delimiter byte arrays and the location of the delimiter is saved. The resulting byte array is then saved as a 'physical record'.
    Those 'physical records' are then processed to create 'logical records'. The distinction is due to possible embedded record delimiters as you mentioned. One logical record might appear as two physical records if a field has an embedded record delimiter. That is resolved easily since each logical record in the file MUST have the same number of fields.
    So a record with an embedded delimiter will have fewer fields than required, meaning it needs to be combined with one or more of the following records.
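    A very simplified sketch of that delimiter search over a byte buffer (this is just the idea, not my actual framework code):

    // Returns the index of the first occurrence of 'delimiter' in the first
    // 'bufferLen' bytes of 'buffer', starting the search at 'from'; -1 if absent.
    static int indexOfDelimiter(byte[] buffer, int bufferLen, byte[] delimiter, int from) {
        outer:
        for (int i = from; i <= bufferLen - delimiter.length; i++) {
            for (int j = 0; j < delimiter.length; j++) {
                if (buffer[i + j] != delimiter[j]) {
                    continue outer;
                }
            }
            return i;  // start of a delimiter; everything before it is the physical record
        }
        return -1;
    }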
    >
    My files have no metadata, some are comma delimited and some comma and double quote delimited together, to protect the embedded commas inside columns.
    >
    I didn't mean the files themselves needed to contain metadata. I just meant that YOU need to know what metadata to use. For example, you need to know that there should ultimately be 10 fields for each record. The file itself may have fewer physical fields due to TRAILING NULLCOLS, whereby all consecutive NULL fields at the end of a record do not need to be present.
    >
    The number of columns in a file is variable and each line in any one file can have a different number of columns. Ragged columns.
    There may be repeated null columns in any line, like ,,, or "","","" or any combination of the above.
    There may also be spaces between delimiters.
    The files may be UNIX/Linux terminated or Windows Server terminated (CR/LF or CR or LF).
    >
    All of those are basic requirements and none of them present any real issue or problem.
    >
    To make it even harder, there may be embedded LF characters inside the double quoted columns too, which need to be caught and weeded out.
    >
    That only makes it 'harder' in the sense that virtually NONE of the standard software available for processing delimited files takes that into account. There have been some attempts (you can find them on the net) at using various 'escaping' techniques to escape those characters where they occur, but none of them ever caught on and I have never found any in widespread use.
    The main reason for that is that the software used to create the files to begin with isn't written to ADD the escape characters but is written on the assumption that they won't be needed.
    That read/write for 'escaped' files has to be done in pairs. You need a writer that can write escapes and a matching reader to read them.
    Even the latest version of Informatica and DataStage cannot export a simple one-column table that contains an embedded record delimiter and read it back properly. Those tools simply have NO functionality to let you even TRY to detect that embedded delimiters exist, let alone do anything about it by escaping those characters. I gave up back in the '90s trying to convince the Informatica folk to add that functionality to their tool. It would be simple to do.
    >
    Some numeric columns will also need processing to handle currency signs and numeric formats that are not valid for the database input.
    It does not feel like a job for RegEx (I want to be able to maintain the code, and complex RegEx is often 'write-only' code that a 9200bpm modem would be proud of!) and I don't think PL/SQL will be any faster or easier than Java for this sort of character-based work.
    >
    Actually for 'validating' that a string of characters conforms (or not) to a particular format is an excellent application of regular expressions. Though, as you suggest, the actual parsing of a valid string to extract the data is not well-suited for RegEx. That is more appropriate for a custom format class that implements the proper business rules.
    You are correct that PL/SQL is NOT the language to use for such string parsing. However, Oracle does support Java stored procedures, so that could be done in the database. I would only recommend pursuing that approach if you already needed to perform some substantial data validation or processing in the DB to begin with.
    >
    I have no control over format of the incoming files, they are coming from all sorts of legacy systems, many from IBM mainframes or AS/400 series, for example. Others from Solaris and Windows.
    >
    Not a problem. You just need to know what the format is so you can parse it properly.
    >
    Some files will be small, some many GB in size.
    >
    Not really relevant except as it relates to the need to SINK the data at some point. The larger the amount of SOURCE data the sooner you need to SINK it to make room for the rest.
    Unfortunately, the very nature of delimited data with varying record lengths and possible embedded delimiters means that you can't really chunk the file to support parallel read operations effectively.
    You need to focus on designing the proper architecture to create a modular framework of readers, writers, parsers, formatters, etc. Your concern with details about String versus Array are way premature at best.
    My framework has been doing what you are proposing and has been in use for over 20 years by three different major international clients. I have never had any issues with the level of detail you have asked about in this thread.
    Throughput is limited by the performance of the SOURCE and the SINK. The processing in-between has NEVER been an issue.
    A modular framework allows you to fine-tune or even replace a component at any time with just 'plug n play'. That is what Interfaces are all about. Any code you write for a parser should be based on an interface contract. That allows you to write the initial code using the simplest possible method and then later if, and ONLY if, that particular module becomes a bottleneck, replace that module with one that is more performant.
    Your intital code should ONLY use standard well-established constructs until there is a demonstrated need for something else. For your use case that means String processing, not byte arrays (except for detecting record boundaries).

  • J2ME on Windows Mobile 5.0 xda exec and series 60 nokia help!

    Hi guys,
    I am running an application on my Nokia 6680 mobile phone (symbian series 60), and I am able to recieve a bluetooth connection using a hard coded address of the bluetooth device. The bluetooth device I connect to is a GPS device, HOLOX BT-321. All working fine!
    The problem occurs when I use the same application on an XDA Exec from O2 (also called i-mate Jasjar and M5000): the application does not connect to the Bluetooth GPS device. It pairs with it, but when I run the application it doesn't communicate with the GPS receiver, i.e. the Bluetooth light doesn't flash on the GPS receiver to show a connection.
    Why does it work on one and not the other? Both the Nokia 6680 and the XDA Exec use Bluetooth v1.2, as does the GPS device. Both support J2ME and CLDC 1.1 with MIDP 2.0! So why doesn't it work? Any ideas?
    The code I use is on the following website...
    http://www.hcilab.org/documents/tutorials/BT_GPS/BT_GPS.htm
    As I said, you have to hard-code the GPS device you're connecting to. It works on the Nokia 6680 fine!
    Thanks...
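    For reference, the connection side is basically the standard JSR-82 / Generic Connection Framework pattern; a simplified sketch is below (the address and channel are placeholders, not my real values):

    import java.io.InputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.StreamConnection;

    // Simplified sketch of opening an SPP link to a GPS with a hard-coded address.
    // "0123456789AB" and channel 1 are placeholders.
    public class GpsReader {
        public void readNmea() throws Exception {
            String url = "btspp://0123456789AB:1;authenticate=false;encrypt=false;master=false";
            StreamConnection conn = (StreamConnection) Connector.open(url);
            InputStream in = conn.openInputStream();
            StringBuffer sentence = new StringBuffer();
            int c;
            while ((c = in.read()) != -1) {
                if (c == '\n') {
                    // One NMEA sentence per line, e.g. $GPGGA,...
                    System.out.println(sentence.toString());
                    sentence.setLength(0);
                } else {
                    sentence.append((char) c);
                }
            }
            in.close();
            conn.close();
        }
    }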

    Hiya,
    did you find a solution to the problem?
    I've actually tried to access the BT API from the Exec, and it seemed that the KVM (Intent) had not implemented the Bluetooth API. Interrogating the system properties, the Bluetooth-specific ones all returned null.
    Also, when trying to scan for devices, the midlet would just hang.
    The fact that it pairs actually means little - I can access the GPS (and read the NMEA sentences) using C++ and native Windows calls (but that's no good from J2ME, as JNI is not implemented).
    if you have solved the problem, please do let me know ([email protected])
    cheers
    Marco.

  • Compaq Laptop Model: CQ56-115DX versus Toshiba Netbook Model: NB505-N500BL

    I'm looking to see which one of these is best on performance only, since I know the laptop is bigger than the netbook. Both are the same price. It seems like the Compaq would be a no-brainer until you read the reviews on each. Toshiba has better reviews by far, even though the Compaq looks more powerful from the specs. This would mainly be used for web browsing and Facebook gaming. Any help would be appreciated. Thanks.

    Hello,
    Best Buy has brought your posting to my attention, as you would like to purchase from Best Buy the HP CQ56-115DX versus a Toshiba netbook. I am pleased to know that you are interested in purchasing an HP notebook. HP is very proud of its line of products, which its reps are encouraged to promote without criticizing its competitors. The two products are different. Do you want a notebook or a netbook? Redwyv provides a valuable posting of the pros and cons in:
    http://www.forums.bestbuy.com/t5/Computers-Home-Networking/Help-What-laptop-is-right-for-me/m-p/2815...
    You will see in the above link, the similarly priced HP Mini Netbook, Model 210.
    And as Redwyv has stated in this posting, you do have 14 days to return the product to Best Buy. 
    Based on your posting, I probably shouldn't distract you from your two choices as it appears you have already done your own research and have narrowed down your choice based primarily on price.  I am sure you have noticed the significant difference in screen size and memory.  You mentioned doing some Facebook gaming.  The fact that you specified "Facebook gaming" indicates you are aware of the requirements for more "intensive" (for lack of a better word) gaming.
    You do comment that the HP laptop is certainly more powerful, but you have read negative reviews.  There are also some very strong reviews for the CQ56-115DX.  You can find a summary of reviews at:
    http://www.zdnet.com/reviews/product/laptops/compaq-presario-cq56-115dx-v-series-v140-23-ghz-156-tft...
    Let me know if you need additional information.
    Regards,
    Priscilla
    HP Social Media Ambassador
    The views expressed in my contributions are my own and do not necessarily reflect the views and strategy of HP.

  • How to connect weblogic 8.1 to IBM MQ Series from remote machines?

    Hi,
              I am trying to connect WebLogic 8.1 to IBM MQ Series 6.0; they are running on separate machines. Can we do this using JNDI services? Can anyone help me fix this issue?

    I'm cutting/pasting my notes on the topic, including MQ specific notes. Start with the Integrating Remote JMS Providers FAQ (link below). You can also search this newsgroup for answers.
              Tom
              JMS Integration of Foreign Vendors with BEA WebLogic Server
              The following notes are derived mostly from "http://dev2dev.bea.com/technologies/jms/index.jsp".
              For additional questions, a good forum for WebLogic questions in general is "newsgroups.bea.com". These can be mined for information by using Google's newsgroup search function.
              JMS Integration Overview
              - For integration with "non-Java" and/or "non-JMS" platforms, see "Non-Java Integration Options" below.
              - For a foreign JMS vendor to participate in a WL transaction it must support XA. Specifically, it must support the javax.jms.XA* interfaces.
              - In WL versions 6.0 and up it is possible to have synchronous calls to foreign JMS vendors participate in a WL transaction as long as the foreign vendor supports XA.
              - WL 6.0 and 6.1 MDBs can be driven by foreign vendors non-transactionally. They can be driven transactionally by a select few foreign vendors (MQ is not part of the select few)
              - WL 7.0 and later, MDBs can be driven by foreign vendors transactionally and non-transationally.
              - WL 6.1 and later WL provides a messaging bridge feature. Messaging bridges forward messages between any two JMS destinations, including foreign destinations, and can transfer messages transactionally or non-transactionally.
              - WL 8.1 JMS provides additional features that simplify transactional and JNDI integration of foreign vendors. See http://edocs.bea.com/wls/docs81/jms/intro.html#jms_features and http://e-docs.bea.com/wls/docs81/faq/interop.html
              Integration with 8.1 Details
              To start, first read the "Integrating Remote JMS Providers FAQ" (released in Dec 2004) at:
              http://e-docs.bea.com/wls/docs81/faq/interop.html
              A good overview of 8.1 JMS interop capability is the presentation "Integrating Foreign JMS Providers with BEA WebLogic Server" here:
              http://www.bea.com/content/files/eworld/presentations/Wed_03_05_03/Application_Servers/1097-Foreign_JMS_Providers_WLS.pdf
              This document refers to helpful new 8.1 features, which simplify integration. These include:
              http://edocs.bea.com/wls/docs81/ConsoleHelp/jms_config.html#accessing_foreign_providers
              http://edocs.bea.com/wls/docs81/jms/j2ee_components.html#1033768
              And are also summarized here (under interoperability):
              http://edocs.bea.com/wls/docs81/jms/intro.html#jms_features
              Also read the MDB documentation, which extensively covers integrating foreign vendors:
              http://edocs.bea.com/wls/docs81/ejb/message_beans.html
              The 8.1 features are likely sufficient for most 8.1 integration needs, but you may want to refer to the "Using Foreign JMS Providers With WLS" white-paper mentioned below, which is 7.0 specific but contains specific examples of configuring non-WebLogic JMS vendors. See also notes on "MQ" below.
              Integration with 6.1 and 7.0 Details
              Read the "Using Foreign JMS Providers With WLS" white-paper:
              http://dev2dev.bea.com/products/wlserver/whitepapers/jmsproviders.jsp
              Note that this white-paper does not take into account 8.1 features.
              For 7.0 read the extensive 8.1 MDB documentation, which largely also applies to 7.0:
              http://edocs.bea.com/wls/docs81/ejb/message_beans.html
              Non-Java Integration Options
              - WL JMS has a JNI based C client which is available for Windows and some UNIX platforms. This C client supports 7.0 and up, and will be officially packaged with WLS in 9.0 (virtually unchanged). The C API is currently only supported through the jms newsgroup. See "JMS C API", here:
              http://dev2dev.bea.com/technologies/jms/index.jsp
              - WL supports direct Windows COM access through its "JCOM" feature. This doesn't include the JMS API, but one can invoke EJBs which in turn invoke JMS. See
              http://e-docs.bea.com/wls/docs61/jcom.html
              http://e-docs.bea.com/wls/docs70/jcom/
              http://e-docs.bea.com/wls/docs81/jcom/
              - Similar to JCOM, but more advanced and supported on more platforms, WL supports access via the standard IIOP protocol. You can use the BEA Tuxedo C client for this purpose (no license fee). This doesn't include the JMS API, but one can invoke EJBs which in turn invoke JMS. See
              http://e-docs.bea.com/wls/docs81/rmi_iiop/
              http://e-docs.bea.com/wls/docs70/rmi_iiop/
              http://e-docs.bea.com/wls/docs61/rmi_iiop/
              Unlike most other approaches, the IIOP client approach also allows the client to begin and commit user (JTA) transactions (not configured).
              - If you already have a BEA Tuxedo license, one option is to communicate through BEA Tuxedo (which has various APIs on Windows) and configure a WebLogic Server to respond to these requests via the WTC bridge. Search for "WTC" in the BEA docs. Unlike most other approaches, the Tuxedo API approach also allows the client to begin and commit user (JTA) transactions.
              - Another approach is to interop via web-service standards, or even simply to invoke a servlet on the WL server using a basic HTTP call from the client. These operations in turn can invoke the JMS API. There is a white-paper on "Interoperability Study of BEA WebLogic Workshop 8.1 and Microsoft .NET 1.1 Web Services" that demonstrates web-services here:
              http://ftpna2.bea.com/pub/downloads/WebLogic-DotNet-Interop.pdf
              - Yet another approach is to use a third party product that is designed to wrap any JMS vendor. There are even open source versions. In no particular order, here are some examples: Open3 WinJMS, CodeMesh, Active JMS, SpiritSoft
              - Finally, there are .NET/C/C++ integration libraries that are not specific to JMS, some examples are JNBridge, Jace, and CodeMesh.
              Notes on MQ Remote Capable XA Clients
              Until recently, IBM MQ JMS clients could not work transactionally unless they were running on the same host as their MQ server. This is a limitation unique to MQ that was relaxed with the introduction of IBM's new "WebSphere MQ Extended Transactional Client". See:
              http://publibfp.boulder.ibm.com/epubs/pdf/csqzar00.pdf
              The product is new, and for some reason, configuration of this client seems to be tricky, even when WebLogic is not involved at all. Oddly, the main sticking point seems to be simply making sure that class paths refer to the required IBM jars:
              - Required on WLS where MQ objects are bound into JNDI:
              com.ibm.mq.jar, com.ibm.mqjms.jar
              - Required only if MQ objects are bound into JNDI on a different server:
              com.ibm.mq.jar
              If there are problems when using this client, first get it to work using a pure IBM client without any BEA classes involved. Once that is working, search the WL JMS newsgroup for answers and/or contact BEA customer support.
              Notes on Oracle AQ Integration
              If problems are encountered integrating Oracle's built-in queuing (Oracle AQ) JMS client, there is publicly available wrapper code that can aid integrating AQ directly into MDBs, JMS, or the messaging bridge. The solution is titled "Startup class to bind AQ/Referenceable objects to WLS JNDI", is not supported by BEA, and is posted to:
              http://dev2dev.bea.com/codelibrary/code/startupclass.jsp (older version)
              http://xa-compliant-oracleaq.projects.dev2dev.bea.com (newer version)
              Caveats:
              It may be that the solution doesn't directly support concurrent consumers. Perhaps Oracle requires that concurrent consumers each have a unique JMS connection? As a work-around, parallel message processing can be achieved indirectly by forwarding AQ messages into a WL JMS destination - which do support concurrent processing.
              Up-to-date versions of Oracle may be required. For more information, google search the weblogic.developer.interest.jms newsgroup for "Oracle" and "AQ".
              MDB Thread Pool Notes
              WL7.0SP? and WL8.1 and later support the "dispatch-policy" field to specify which thread pool an MDB uses to run its instances. In most cases this field should be configured to help address potential performance issues and/or dead-locks:
              http://edocs.bea.com/wls/docs81/ejb/DDreference-ejb-jar.html#dispatch-policy
              (Note that "dispatch-policy" is ignored for non-transactional foreign vendors; in this case, the MDB "onMessage" callback runs in the foreign vendor's thread.)
              MDB Concurrency Notes
              Queue MDBs driven by foreign providers can run multiple instances concurrently. Topic MDBs driven by foreign providers are limited to one instance (not sure, but transactional foreign driven topic MDBs may not have this limitation). The size of the thread pool that the MDB runs in and the "max-beans-in-free-pool" descriptor limit how many instances run concurrently.
              Design Guide-Lines and Performance Tuning Notes
              The "WebLogic JMS Performance Guide" white-paper contains detailed design, performance, and tuning information for Clustering, Messaging Bridge, JMS, and MDBs.
              http://dev2dev.bea.com/products/wlserver/whitepapers/WL_JMS_Perform_GD.jsp
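              Finally, once the MQ administered objects are bound into JNDI (via IBM's JMSAdmin tool, or via the WebLogic foreign-JNDI configuration described above), the client code itself is just plain JMS. A minimal sketch is below, assuming the objects were bound under the names shown; the JNDI names, context factory, and provider URL are placeholders and must match your own setup:

              import java.util.Hashtable;
              import javax.jms.Queue;
              import javax.jms.QueueConnection;
              import javax.jms.QueueConnectionFactory;
              import javax.jms.QueueSender;
              import javax.jms.QueueSession;
              import javax.jms.Session;
              import javax.jms.TextMessage;
              import javax.naming.Context;
              import javax.naming.InitialContext;

              public class MqSendExample {
                  public static void main(String[] args) throws Exception {
                      // Placeholder JNDI settings; point these at whatever store
                      // JMSAdmin populated, or at WebLogic's JNDI tree if the MQ
                      // objects were mapped there via the foreign JNDI features.
                      Hashtable env = new Hashtable();
                      env.put(Context.INITIAL_CONTEXT_FACTORY,
                              "com.sun.jndi.fscontext.RefFSContextFactory");
                      env.put(Context.PROVIDER_URL, "file:/var/mqm/jndi");
                      Context ctx = new InitialContext(env);

                      // JNDI names are assumptions; they must match what was bound.
                      QueueConnectionFactory qcf =
                              (QueueConnectionFactory) ctx.lookup("myMQConnectionFactory");
                      Queue queue = (Queue) ctx.lookup("myMQQueue");

                      QueueConnection connection = qcf.createQueueConnection();
                      QueueSession session =
                              connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                      QueueSender sender = session.createSender(queue);

                      TextMessage message = session.createTextMessage("hello from WebLogic");
                      sender.send(message);

                      sender.close();
                      session.close();
                      connection.close();
                  }
              }
              Remember the com.ibm.mq.jar and com.ibm.mqjms.jar classpath notes above when running this against a remote queue manager.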
