ASCII-EBCDIC conversion between z/OS and Linux

Hi experts, we are migrating our landscape to z/OS (DB+ASCS) and Linux (PAS). We have our GLOBALHOST on z/OS, but we are experiencing some problems when we try to install our application servers because of the conversion between platforms.
In the planning guide we can see that there is a way to mount NFS file systems exported from z/OS that makes this conversion automatically, but the commands mentioned in the guide are for UNIX, not Linux.
Does any of you have this kind of installation and could help us set these parameters correctly?
Or has any of you faced these problems before?
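For reference, from what we can tell from the z/OS NFS server documentation, the Linux side should just be a plain NFS mount with the conversion attributes appended to the exported path. Below is a rough sketch of what we assume the command should look like (the host name, path and CCSIDs are only placeholders for our landscape, so please correct us if we have misread the guide):

mount -t nfs -o vers=3,nolock zoshost:"/hfs/sapmnt/SID,text,cln_ccsid(819),srv_ccsid(1047)" /sapmnt/SID

As far as we understand, the text attribute makes the z/OS NFS server translate between the client code page (cln_ccsid, 819 = ISO 8859-1 ASCII) and the server code page (srv_ccsid, 1047 = EBCDIC Latin-1) on every read and write, while binary would transfer files unchanged.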
Regards
gustavo

First, yes, we have z/OS systems programmers and DBAs with specific knowledge of DB2 z/OS. One of the reasons we initially went with the Z platform when we implemented SAP was that our legacy systems ran there for many years and our company had a lot of Z knowledge and experience. zSeries was one of our "core competencies".
I also need to give you a little more information about our Z setup. We actually had 2 z9 CECs in a sysplex, one in our primary data center and another close by at our DR site, connected by fiber. This allowed us to run SAP as HA on the Z platform. For heavily used systems like production ERP we actually ran our DB2 instances active/active. This is one of the few advantages of the Z platform unavailable on other platforms (except Oracle RAC, which is also expensive but can at least be implemented on commodity hardware). Another advantage is that the SAP support personnel for DB2 z/OS are extremely knowledgeable and respond to issues very quickly.
We also chose the Z platform because of the touted "near-continuous availability", which sounded very good. Let me assure you, however, that although SAP has been making great strides with things like the enhancement pack installer, at present you will never have zero downtime running SAP on any platform. Specifically, you will still have planned downtime for SAP kernel updates and support packs or enhancement packs, period. The "near-continuous availability" in this context refers to zero unplanned downtime. In my experience this is not the case either. We had several instances of unplanned downtime; the most recent had to do with issues when the CECs got to 100% CPU utilization for a brief period of time and could not free some asinine small memory area, which caused the entire sysplex to pause all LPARs until it was dealt with (yes, this could be dealt with using system automation, but our Z folks would prefer to deal with these manually since each situation can be different). We worked with IBM on a PMR for several months, but our eventual "workaround" was much better: we stopped running our DB2 instances as active/active and never had the problem again. We chose this "workaround" because we knew we were abandoning the platform, and any of the test fixes from IBM required a rolling update of z/OS in all LPARs (10 total at the time), which is a major hassle, especially when you do it several times applying several different fixes until the problem is finally solved.
We also experienced some issues with DB2 z/OS itself. In one case, some data in a table in production got corrupted (yikes!!). SAP support helped us correct the data based on our QA system, and IBM delivered a PTF (or maybe it was a ++APAR) to correct the problem. We also had several instances of strangely poor performance in ERP or BI that were solved with a PTF or by using some special RUNSTATS output by an IBM DB2 tool our DBAs ran when we gave them the "bad" query. Every time we updated DB2 z/OS with an RSU it felt like a crapshoot. Sometimes there were no issues revealed during testing; other times major issues were uncovered. This made us very hesitant when it came to patching DB2 and also made us stay well behind currently available maintenance so we could let other organizations identify problems.
Back to the topic of downtime related to DB2 z/OS itself: we know another company which runs SAP on Z that takes several hours of downtime each week (early Sunday morning, I think) to REORG some large BLOB tables (if you're not in the monthly conference call for SAP on DB2 z/OS organizations, I suggest you join in). The need for RUNSTATS and REORGs to be dealt with explicitly (typically once a day for RUNSTATS and once a week for REORGs, at least for us) is a major negative of the platform, in my opinion. It is amazing what "proper" RUNSTATS can do to a previously poor-performing query (hours reduced to seconds!). Also, due to the way REORGs are handled in DB2 z/OS, you'll need a lot of extra disk space for the image copies which get created. In our experience you need enough temp disk to hold the shadow copy of the largest table being REORGed and the image copies of the largest tables that are REORGed in the same time period. I recall that the image copies can be migrated to tape or virtual tape by a periodic job to free the image copy space back up, but it was a huge amount of trial and error to properly size this temp disk space, especially when the tables requiring a REORG are not the same week-to-week. We heard that with DB2 z/OS v10 RUNSTATS and REORGs will be dealt with automatically by DB2, but I do not know if it has even been certified for SAP yet (based on recent posts in this forum it would appear not). Even when it is, I would not recommend going to it immediately (we made this mistake when DB2 z/OS v9 was certified and suffered for months getting bugs with SAP and DB2 interoperability fixed). Also, due to the way that REORGs work on BLOB tables, there will be a period of table unavailability. This caused us some issues/headaches. There are some extra REORG parameters you can set, but these issues are still always a possibility, and I think that is why the company mentioned previously just took the weekly downtime to finish the REORGs on their large BLOB tables. They are very smart folks who are very experienced with zSeries, and they engaged IBM experts for assistance to try to perform the REORGs online, and yet they still take the downtime to perform the BLOB REORGs offline. In contrast, these periodic database tasks do not require our Basis team to do anything with SQLServer and do not cause our end-users grief when a table is unavailable.
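For anyone who has not run this platform, the daily and weekly maintenance I keep mentioning boils down to DB2 utility jobs whose control statements look roughly like the following (the database and tablespace names here are made up for illustration; check the exact options against your DB2 version):

RUNSTATS TABLESPACE SAPDB.SAPTS TABLE(ALL) INDEX(ALL)
REORG TABLESPACE SAPDB.SAPTS SHRLEVEL CHANGE COPYDDN(SYSCOPY)

SHRLEVEL CHANGE is what builds the shadow copy I described, and COPYDDN points at the inline image copy, which is why sizing that temp disk space was such a trial-and-error exercise.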
Our reasons for moving platforms (which, let me assure you, was a major undertaking and was considered long and hard) were based on 3 things:
1. Complexity
2. Performance
3. Cost
When I speak of complexity, let me give you some data... There was a time when ~50% of all of the OSS messages the Basis team opened with SAP were in the BC-DB-DB2 category. In contrast, I think we've opened 1 or 2 OSS messages in the BC-DB-MSS category ever. Many of the OSS messages for DB2 z/OS resulted in a fix from either SAP or from IBM. We've had several instances of applying a PTF, ++APAR, or RSU to z/OS and/or DB2 which fixed serious "unable to perform a job function" problems with SAP. We've yet to have to apply a single update to Windows or SQLServer to fix an issue with SAP.
To summarize... Comparing our previous and current SAP platforms, the performance was slower, the cost higher, and the complexity much higher. I have no doubt (especially with the newer z10 and zEnterprise 196) that we could certainly have built a zSeries SAP solution which performed on par with what we have now, but.... I could not even fathom a guess as to the cost. I suspect this is why you don't see any data for the standard SAP SD benchmark on zSeries.
I suspect you're already committed to the platform since deploying a Z machine, even in a lab/sandbox environment isn't as easy as going down to your local computer dealer and buying a $500 test server to install on, but... If you wanted to run SAP on DB2 I would suggest looking at DB2 LUW on either X86_64 Linux or on IBM's pSeries platform.
Brian

Similar Messages

  • What are the differences between HP-UX and linux?

    Hi,
Could you please send me the differences between HP-UX and Linux.
    Thanks.

    This is a forum about Oracle SQL and PL/SQL.
    It is not about Unix or Linux.
Your question is inappropriate here and demonstrates an inability to do any research on your own.
    Sybrand Bakker
    Senior Oracle DBA

  • Move encrypted filesystem between OS X and linux?

    I would like, if possible, to copy and use an encrypted filesystem between OS X and linux. I haven't found a way to do this, and would appreciate any assistance (that is, I want to create something like an encrypted sparseimage that can also be copied to, and read and mounted on, a linux system). If I build an ext2 filesystem in a sparseimage, is there any way I can get linux to understand and mount it (presumably this requires linux somehow recognising the sparseimage as a partition)? Or alternatively, is there any way to mount an encrypted linux filesystem on OS X? Cross-mounting isn't a solution to my problem; I really need to be able to put a secure filesystem in a file on a USB drive or CD or..., and then read it on a mac or linux system when I plug it in. Even the simple answer "no, it can't be done" would be a great help in saving my time... I'd like to avoid solutions that involve creating a clear copy of the encrypted system in file space (e.g. tar/gzip/encrypt/delete -> copy encrypted decrypt/gunzip/untar) if at all possible. Mechanisms that decrypt on access are much preferable.
    Thanks and Best Wishes
    Bob

    I believe TrueCrypt can do what you want.

  • Convert between simplified Chinese and traditional Chinese iWork 13

In the latest version of iWork, the services for converting between Simplified Chinese and Traditional Chinese no longer work. Does it happen to me only, or to everyone? If the latter, I truly hope Apple can add the function back to the iWork suite soon.

    You need to go back to Pages 4 for this to work.
    Let Apple know you want it back via
    http://www.apple.com/feedback

  • Help converting between Java Dates and SQL Dates

    Hey all,
    I'm working on a JSP to allow a user to enter information about news articles into a mySQL database. I'm using text fields for most of the data, and most of it is transferred correctly when the user hits "Submit." However, I am having a problem with the date field of the mySQL database. Basically I'm trying to retrieve the current date from the JSP page, and then insert this into the date field of the appropriate table when the user hits submit. The problem is that I can't typecast a java.util.Date or even a java.util.Calendar to a java.sql.Date object. I've tried lots of approaches, including creating a new instance of java.util.Calendar, getting all its fields and concatenating them onto the date variable, but this doesn't work.
    Anyone know how I could easily convert between java.util.Date or java.util.Calendar and java.sql.Date?
    Thanks!
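
    A java.sql.Date just wraps the same millisecond value a java.util.Date holds, so you can construct one directly from it:
    java.util.Date dt = new java.util.Date();
    java.sql.Date sqlDate = new java.sql.Date(dt.getTime());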

    Thanks for the help!
    I can correctly display the current date on the page in java.sql.Date format now, but it's still not being inserted into the database correctly. The specific code is as follows:
    java.util.Date dt = new java.util.Date();
    java.sql.Date sqlDate = new java.sql.Date(dt.getTime());
    (As you wrote)
    Then (after connecting to the database etc.):
    PreparedStatement pstmt = con.prepareStatement("INSERT INTO NEWS(NEWSDATE,DAYOFWEEK,AUTHOR,HEADLINE,CLIP,PUBLICATION,LINK,NEWSLOCATION,DATECREATED,DATEMODIFIED,CATEGORY,KEYWORDS,PHOTOURL,PHOTOGRAPHER,AUDIOURL) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)");
    pstmt.clearParameters();
    pstmt.setString(1,date);
    pstmt.setString(2,dayofweek);
    pstmt.setString(3,author);
    pstmt.setString(4,headline);
    pstmt.setString(5,clip);
    pstmt.setString(6,publication);
    pstmt.setString(7,link);
    pstmt.setString(8,newslocation);
    pstmt.setDate(9,sqlDate);
    pstmt.setString(10,datemodified);
    pstmt.setString(11,category);
    pstmt.setString(12,keywords);
    pstmt.setString(13,photoURL);
    pstmt.setString(14,photographer);
    pstmt.setString(15,audioURL);
    int i = pstmt.executeUpdate();
    pstmt.close();
    All the other fields are retrieved with request.getParameter from text fields. Any idea why the DATECREATED field is being updated to all 0's but the others work fine?
    Thanks again.

  • Sharing Wireless Keyboard Between OS X and Linux

So I just got an Apple wireless keyboard for use with my Mac mini which dual-boots OS X and Linux.  While it works in both, I have to re-pair it each time I switch OSes because essentially, to the keyboard, it's like two different computers. And while I've read that some people have been able to get the keyboard to work between multiple computers, I don't seem to be having much luck...

  • Converting between .mkv .mov and .mp4

    I have been given some h264 video files in .mkv format and am able to convert these to .mov by resaving them out to .mov from QuickTime Player 7.
    I can change the file extension in Finder to .mp4 and they seem to then more or less behave as .mp4 files.
Are they actually .mp4, or does QT not care and just open them anyway?
    What is the best method of handling these files for widest usage on multiple devices?
I have read up as much as I can on the file formats but nowhere does it say what OS X is actually doing when I change the extension.
    Am I right in wanting them to be .mp4 files?
    Peter

I didn't state the audio format(s) because it has rarely been an issue; if it fails, it fails at the first hurdle: the container.
    Non-AAC audio will likely fail to load/play if the MKV Extension is changed at the Finder Level. Using MPEG Streamclip to move the compressed data to a "real" MP4 file container will normally strip the audio from the final file. If you wish to retain the audio, use the QT 7 Pro "Movie to MPEG-4" option, use video passthrough, and transcode the audio to AAC.
    Most of the .mkv files have aac audio, but not all. Chapters are irrelevant, never having found any nor thought them useful.
    This is a user preference. I prefer to keep the original chapter markers for apps that use them rather than falling back on the defaulted markers added by some device players.
    I am playing the files on several Macs, 2 PCs, an iPhone, iPod Touch, iPad 2 and possibly future Android devices. There are even 2 TVs which seem to be stuck with only reading avi.
    Macs, iPhones, iPod Touch, and iPad will play H.264/AAC compressed data natively in MOV, MP4, or M4v file containers. PCs and Android playback will depend on the compatibility of the player app used on the device. Macs will play H.264 with any audio supported by your Mac codec component configuration in the MOV file container.
My main target however is a PS3, which has turned out to be the most flexible media device in the house but does object to some files for indeterminate reasons. I will need to do comprehensive testing to find exactly what it is it doesn't like. Mostly it is good, once you update the system.
Do not own a PS3 but believe it is supposed to be compatible with iTunes and mobile device supported formats. Again, the file container of choice would depend on the content you wish to include in the file. If you want to include AC3 DD5.1 surround sound audio, I would normally recommend the M4V or MOV containers.
    The secondary target is iTunes so that I can get the files onto my iPod Touch and iPhone. I'm sort of used to that level of Apple devices (and I presume AppleTV) "Just not working" unless it suits Apple. So I restrict my viewing to mp4 files of my own creation or mkv files so over the top in size that recompression does little to degrade them.
    Again, the M4V file container with H.264/AAC with or without AC3 surround audio, alternative AAC audio, and/or chapters is usually the preferred norm. If you plan to use a "universal" file format, then the display dimensions, frame rate, profile, and level for encodes may depend on the specific devices involved. I.e., that is why I limit my files to 720p30 Main Profile Level 3.1 to High Profile Level 4.0 compression and rarely use even half (more commonly only about a quarter to a third) of the video data rates allowed with these settings.
XBMC and other Media Server software seem, like VLC, to be pretty tolerant, and whilst I haven't yet built myself a Media Server, it is on my longer-term To-Do list.
    I use iTunes for in-house media server software to TV and mobile devices with Air Video as my primary server software externally via the internet to mobile devices when away from home. Both access the same Promise Pegasus R6 12 TB RAID storage device. (Have moved most of my content from an old Drobo Pro RAID and am in the process of upgrading the current 16 TB unit to 24 TBs as individual drives fail for the storage of raw video footage.)
    The h264 should pass straight through in a simple QT resave? Same if it has AAC audio?
If the source MKV file is playback compatible with your system's current codec component configuration, then the QT 7 Pro or MPEG Streamclip "Save As..." option can copy the data in the MKV file directly to a new MOV file container without transcoding/recompressing any of the data. Unfortunately, the QT X player is a bit more iffy. Basically, Apple has combined the "Save, Export, and Save As..." options into a single menu option. The result is that sometimes the app will recompress the data and at other times it may not, depending on a number of variables.
If the MKV file contains H.264 video and AAC audio, then the MPEG Streamclip "Save As..." option allows you to select either MOV or MP4 as the target file container. If the MKV file contains H.264 video but the audio is not AAC yet is still export-compatible with the QT 7 Pro app, then you can use the QT 7 Pro "Movie to MPEG-4" Export option to pass the H.264 video unchanged to a new MP4 file container while simultaneously converting the audio to AAC. Since you still have not stated what non-AAC formats are included in some of your MKV files, I cannot tell at this point if this is a viable workflow for you.
That covers most cases. I'm still not 100% clear on the real differences between .mov and .mp4 containers and how much it really matters in the scheme of things. .mp4 seems to be the go, does .mov cause a problem? If so, how, and how best to rectify that?
The containers are different. They have different internal identifiers, features, capabilities, and sometimes limitations. CDs, DVDs, and BDs are all different types of optical media, but each has different capacities, ratings, and features that determine how they can be used, what kind of media can be recorded on it, and what kind of device must be used for playback. MP4 containers are very limited. They can only contain MPEG-4 (MPEG-4/H.264) video and MPEG-4 (AAC) audio. M4V file containers are less limited and may typically contain H.264 video, AAC and/or AC3 audio, and chapter tracks. MOV file containers are generic and can hold up to 99 tracks of audio, video, image, text, 'tween, sprite, etc. data that is compatible with the system on which it was created. As to MOV files causing a problem: yes and no, depending on how you use them. Put "muxed" MPEG-2 data in an MOV file container and it will play normally in the QT 7 player (with the QT MPEG-2 component installed), i.e., no problem. But try to play the same file in the QT X player and it will tell you that you are missing a codec component, a definite problem, since you tried to play playable content in a container the media player did not expect to contain that particular form of compression. As to fixing a problem, I would be better able to answer that if I knew what specific problem you were referring to here.
This is why I keep harping about knowing which player is to be used, what audio and video compression format is being used, and what container is to be used. And we have not even gotten around to checking the H.264 settings. QT based players are standards conscious. Each profile and level combination tells the player the max macroblock decoding rate, the number of macroblocks allowed per frame, the maximum video data rate allowed, the highest useable resolution @ the highest frame rate, what features are supported by the profile, etc. Unfortunately, some third-party vendors sometimes hybridize these settings, which can make the files unplayable in QT apps while they may still play on other players that do not check for or trap the use of non-standard settings.
BTW I did a quick hunt around on h265 and can't see what Apple is doing. Giving it a miss like it did with Blu-ray?
Apple is not known for embracing such technology quickly, especially since they are still drafting and reviewing drafted standards. (I believe DivX released a draft version on the 15th of this month.) The current evolution of QT X will probably take another 5-7 years, and Apple will have to design hardware capable of handling 4K and 8K if anyone is actually going to put it to use on future Mac systems. The development of mobile devices has, for the most part, only been supporting 1080p resolutions for a relatively short period, and jumping to 8K would represent something of a quantum leap at the consumer level.

  • Main difference between windows version and linux/mac version?

    Hi,
  Our lab just bought a LabVIEW 2011 package and it comes with several DVDs, including
    For windows
    1) NILabView Core Software DVD1/2
    2) NILabView Core Software DVD2/2
    3) Extended Development Suite
    4) Labview Option DVD1/2
    5) LabView Option DVD 2/2,
    6) Ni device drivers
    For Mac/Linux
1) Professional Development System
Why are there so many DVDs for Windows, but only one disc for Mac/Linux? What's the main difference? I am also curious whether there is any advantage to using Mac/Linux (in terms of efficiency or speed).

    I have been using LV on the Mac almost exclusively since version 1.2.
    <soapbox mode = ON>
It has been a major source of frustration that so many features available in the Windows versions are not available on the Mac.  Some, like Vision, started (as IMAQ) on the Mac, but are now Windows-only. Things like .NET are understandable, just as the Windows version probably does not support Apple Events. But support for all DAQ devices, Vision, the State Chart program and many others is missing with no good explanation from NI.
Just trying to figure out what is supported on which platforms is a major headache. NI's website does not make that obvious, particularly on the Buy page, where it should be extremely clear what you do or do not get for ~$5k. Given the substantial discrepancy between the features on the various platforms, the price should also reflect that.
    Field representatives and phone support people often do not know these things as well.
Nancy Pelosi is quoted as having said that the health care bill had to be passed so we could see what was in it.  Apparently some of the marketing people at NI are the same way.  Buy the Mac/Linux versions at full price so you can find out what is not in them.
</soapbox mode = OFF>
    Lynn

  • Will icloud work between my iphone and linux laptop

Will iCloud work with a Linux laptop?

    Welcome to the Apple Support Communities
iCloud is only compatible with iOS, OS X and Windows, not Linux, so you can't use iCloud in Linux. However, you can access your iCloud account in Linux by opening http://www.icloud.com, but you can't sync data from the PC to your other computers and devices.

  • How to fix "cannot convert between unicode and non-unicode string data types" :/

    Environment: SQL Server 2008 R2
Introduction: Staging_table is a table where data from the source file is stored. Individual and ind_subject_scores are destination tables.
Purpose: To load the data from a source .csv file, where SSIS defines the flat-file fields as varchar(50), while still transferring the data to the entity/destination tables and keeping the table definitions.
    I'm getting validation error "Cannot convert between a unicode and a non-unicode string data types" for all the columns.
    Please help

Hi,
    NVARCHAR = DT_WSTR
    VARCHAR = DT_STR
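If you would rather fix it inside the package than change the table, a cast in a Derived Column (or a Data Conversion transformation) along these lines usually clears the validation error; the column name and code page here are only examples:
(DT_WSTR, 50)[YourColumn]
(DT_STR, 50, 1252)[YourColumn]
The first form casts a non-unicode column to unicode (DT_WSTR); the second casts unicode back to non-unicode with an explicit code page.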
    Try below links:
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/ed1caf36-7a62-44c8-9b67-127cb4a7b747/error-on-package-can-not-convert-from-unicode-to-non-unicode-string-type?forum=sqlintegrationservices
    http://social.msdn.microsoft.com/Forums/en-US/eb0d1519-4be3-427d-bd30-ae4004ea9e8d/data-conversion-error-how-to-fix-this
    http://technet.microsoft.com/en-us/library/aa337316(v=sql.105).aspx
    http://social.technet.microsoft.com/wiki/contents/articles/19612.ssis-import-excel-to-table-cannot-convert-between-unicode-and-non-unicode-string-data-types.aspx
    sathya - www.allaboutmssql.com ** Mark as answered if my post solved your problem and Vote as helpful if my post was useful **.

  • Java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv

    Hi all,
    I am writing a servlet that connects to Oracle 8.0.6 through jdbc for jdk1.2 on NT 4.0
    English version and it works fine.
    But when the servlet is deployed to a solaris with Oracle 8.0.5 (not a typo, the oracle on
    NT is 8.0.6 and oracle on solaris is 8.0.5) and jdbc for jdk1.2 (of course, for Solaris),
    the servlet failed with the Exception:
    java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv
    (I am using JRun 3.0 as the application and web server for both NT and Solaris)
    (The database in both the NT and solaris platform are using UTF8 charset)
    My servlet looks like this: (dbConn is a Connection object proved to be connected to Oracle
    in previous segment of the same method):
    String strSQL = "SELECT * FROM test";
try {
    Statement stmt = dbConn.createStatement();
    // Note: execute() returns a boolean; executeQuery() returns the ResultSet
    ResultSet rs = stmt.executeQuery(strSQL);
    while (rs.next()) {
        out.println("id = " + rs.getInt("id"));
        System.out.println("id written");
        out.println("name = " + rs.getString("name")); // <-- this is the line where the exception is thrown
        System.out.println("name written");
    }
} catch (java.sql.SQLException e) {
    System.out.println("SQL Exception");
    System.out.println(e);
}
    The definition of the "test" table is:
    create table test(
    id number(10,0),
    name varchar2(30));
There are about 10 rows in the table "test", all of which contain ONLY Chinese characters in the "name" field.
    And when I view the System log, the string "id written" is shown EXACTLY ONCE and then there
    is:
    SQL Exception
    java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv
That means the result set is fetched back from the database correctly. The problem arises only during the getString("name") call.
    Again, this problem only happens when the servlet is run on the solaris platform.
At first I expected to see some strange characters shown on the web page rather than getting an exception. I know that I should use getBytes to convert between different encodings, but that's another story.
One more piece of information: when all the rows contain ASCII characters in their "name" field, the servlet works perfectly even on Solaris.
    If anyone knows why and how to tackle the problem please let me know. You can feel free to
    send email to me at [email protected]
    Many thanks,
    Ben

    Hi all,
For the problem I previously posted, I found that Oracle has had such a bug filed before, in Oracle 7.3.2 (something like that), and it is classified as NOT A BUG.
    A further research leads me to the document of Oracle that the error message:
    "java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv"
    is a JDBC driver error message of error number ORA-17037.
I'm still wondering why this behaviour happens only on the Solaris platform. The servlet on the NT machine I am using (which has Oracle 8.0.6 and jdbc for jdk 1.2 running) is working just fine. I also suspect that this may be some sort of mistake in the JDBC driver.
    Nevertheless, I have found a way to work around the problem that I cannot get non-English string from Oracle in Solaris and I would like to share it with you all here.
Before I go on, I found that there are many people out there on the web who encounter the same problem. (Some of them said they have been working on this problem for a month.) As a result, if you find this way of working around the problem does help you, please tell those who have the same problem but don't know how to tackle it. Thanks very much.
Here's the way I worked it out. It's kinda simple, but it does work:
    Instead of using:
    String abc = rs.getString("SomeColumnContainsNonEnglishCharacters");
    I used this:
    String abc = new String(rs.getBytes("SomeColumnContainsNonEnglishCharacters"));
    This will give you a string WITH YOUR DEFAULT CHARSET (or ENCODING) from your system.
    If you want to convert the string read to some other encoding type, say Big5, you can do it like this:
    String abc = new String(rs.getBytes("SomeColumneContainsNonEnglishCharacters"), "BIG5");
    Again, it's simple, but it works.
    Finally, if anyone knows why the fail to convert problem happens, please kindly let me know by leaving a word in [email protected]
    Again, thanks to those of you who had tried to help me out.
    Creambun

  • ASCII to EBCDIC converter

    Hello everybody!
I'm looking for a function module which converts ASCII text to EBCDIC. Unfortunately I couldn't find anything about this so far. It would be great if somebody could give me some information about it.
Thank you!
    Best regards,
    Markus

Hi Markus,
    You don't need a Function Module; there is a ready ABAP command 'EBCDIC(x)' available.
    You will find more information here: [Converting ASCII to EBCDIC and vice-versa|http://www.sapdb.org/7.4/htmhelp/5e/0a41edba9d11d2a97100a0c9449261/frameset.htm]
    Also, check this link : [Standard include for converting ASCII / EBCDIC|http://sap.ittoolbox.com/groups/technical-functional/sap-dev/converting-data-from-ascii-to-ebcdic-in-unicode-environment-420663]
    Hope this helps! Please let me know if you need anything else!!
    Cheers,
    Shailesh.

  • Fail to convert between UTF8 and UCS2

java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv
    (BC4J throws that exception)
I got that error message when I try to show the table's content in a uix page.
I use varchar2(40). The error occurs when I set special characters like õ, ü, Ü and so on.
This affects only the UTF-encoded database.
We use: 9.0.2.5, JDev 9.0.3.3, UIX, iAS 9.4.2?? (I don't know)
As far as I know, the problem is that when there are 40 letters in the column and it contains a "special character", it can't convert it, because the space required to store the values is much more.
If I use nvarchar, I don't think that fixes this problem.
And as far as I know, all SQL constants must then use the where column=N'constant value' format.
    Q:
How can I set up the UTF database and BC4J to run correctly?
I mean... what type to use, and what column size to set.
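For example, is switching the column to character length semantics the right direction? Something like this hypothetical table, just to sketch what I mean:
create table test_utf8 (
    id number(10,0),
    name varchar2(40 char)  -- 40 characters instead of 40 bytes
);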
Thanks in advance,
    Viktor

I had the same problem using Oracle 9i. The problem lay within the Oracle JDBC driver itself! --;
If you're using the JDBC driver from Oracle 9i, stop using it!
You should download the JDBC driver of Oracle 10g from the Oracle site and use that driver instead.
    After I changed the driver, I now have no problem of getting Korean characters from the database.
    Hope this would solve your problem too.

  • Fail to convert between UTF8 and UCS2: failUTF8Conv

We need to store possibly all UTF8 characters in this database,
especially ¤ ¦ ¨ ´ ¸ ¼ ½ ¾ from WE8ISO8859P1 and from WE8ISO8859P15.
So we installed a UTF8 instance and set NLS_LANG to UTF8.
We have to do select/update from a java client and sqlplus(-like) tools.
When we insert with the java client it's unreadable from sqlplus, and
when we insert from sqlplus we get 'Fail to convert between UTF8 and UCS2: failUTF8Conv'
    here the code made in sqlplus
update CPW_TEST set lb_comportement='¤ ¦ ¨ ´ ¸ ¼ ½ ¾ ' WHERE ID_TEST=14805;
    here the code made in java
update CPW_TEST set lb_comportement='¤ ¦ ¨ ´ ¸ ¼ ½ ¾ ' WHERE ID_TEST=14804;
    and then the result in database
    SELECT id_test,LB_COMPORTEMENT FROM CPW_TEST WHERE ID_TEST=14804 or ID_TEST=14805
    ID_TEST LB_COMPORTEMENT
14804 Â¤ Â¦ Â¨ Â´ Â¸ Â¼ Â½ Â¾ Â
14805 ¤ ¦ ¨ ´ ¸ ¼ ½ ¾
    2 rows selected
    and the dump
    SELECT id_test,dump(LB_COMPORTEMENT) FROM CPW_TEST WHERE ID_TEST=14804 or ID_TEST=14805
    ID_TEST DUMP(LB_COMPORTEMENT)
    14804 Typ=1 Len=26: 194,164,32,194,166,32,194,168,32,194,180,32,194,184,32,194,188,32,194,189,32,194,190,32,194,128
    14805 Typ=1 Len=17: 164,32,166,32,168,32,180,32,184,32,188,32,189,32,190,32,128
    2 rows selected
    I'm not sure, but it seems that sqlplus uses true UTF8 (variable length codes) and java client uses UCS-2 (2 bytes)
    How can I solve my problem?
    Our configuration
    javaclient (both thin and oci jdbc driver 8.1.7), sqlplus client and database Oracle 8.1.7.0.0 on the same computer (W2000 or NT4)
Thank you for your attention.

Hi Luis, thanks for your suggestions. You're right that the problem was in JServ and its JVM.
There was a conflict between different versions of Java. While iFS was using JRE 1.3.1, JServ was configured to use JRE 1.1.8. As soon as I corrected the link to the right Java, the problem disappeared.
Radek

  • Creating network between Mac and Linux

Hello guys! I need to set up a network between my iMac and Ubuntu Linux. The iMac is connected to the internet by AirPort and a Fon WiFi router, which is connected to a router, which is connected to the ADSL modem. My PC is connected directly to the router. Both computers have access to the internet. Are there any easy solutions to share files between the Mac and Linux? I don't need any webservers etc., I just want to share files.
Another question related to this: why is my Mac's IP address like 192.168... while the Linux IP address is 84.250...?
Thanks already!

Though I'm not a newbie with computers, I'm really confused because I haven't ever set up a network like this before.
    Not a problem, no worry... we all know something somebody else doesn't and vice versa.
    My linux says that my DHCP -IP is 193.210.18.18, is that related to this in any way?
    Yes, if the Mac had an IP of 193.210.18.x, (but not 18 for x), then connection would be simple, but I think we must have two devices passing out IPs. What is the Mac's IP again?
    http://www.helpdesk.umd.edu/topics/communication/ethernet/office/winxp/2882/
Do you have any advice on where I could start looking for the right IP of my linux?
http://www.webmasterforums.com/networks-clusters/1804-find-ip-address-my-linux-box.html
I'm not sure if it's even configurable.
    http://tinyurl.com/2gug9s
