Handling Unicode and multi-byte/ANSI strings in the same application

I'm creating my environment handle using OCIEnvNlsCreate, so all strings passed to/from Oracle are supposed to be in wide-string format.
This is fine until I want to bind a variable that contains an ANSI character string; my application can use mixed string types.
Which SQLT_ type do I use to bind my ANSI character string? What's the difference between SQLT_STR and SQLT_AVC?
Or do I have to convert each ANSI string to a wide string before I bind?
The SQL Server ODBC API handles this problem without any trouble by letting you specify the data type when binding, as SQL_VARCHAR or SQL_WVARCHAR.
Any help greatly appreciated as I'm totally stuck!
Thanks,
John
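
For what it's worth, the approach usually suggested for this situation is to keep the UTF-16 environment and override the character set on the individual bind handle instead of converting every string. A hedged sketch only: the helper name bind_ansi_param is mine, and you'd want to confirm that your client version honors OCI_ATTR_CHARSET_ID on bind handles:

    #include <oci.h>
    #include <string.h>

    /* Sketch: bind a single-byte (ANSI) string in a UTF-16 environment by
       tagging the bind handle with the buffer's actual character set. */
    void bind_ansi_param(OCIStmt *stmthp, OCIError *errhp,
                         const char *ansi_buf, ub2 ansi_csid)
    {
        OCIBind *bindhp = NULL;

        /* SQLT_STR = null-terminated string; the length includes the
           terminator. */
        OCIBindByPos(stmthp, &bindhp, errhp, 1,
                     (void *)ansi_buf, (sb4)strlen(ansi_buf) + 1,
                     SQLT_STR, NULL, NULL, NULL, 0, NULL, OCI_DEFAULT);

        /* Tell OCI that this particular buffer is not UTF-16: */
        OCIAttrSet(bindhp, OCI_HTYPE_BIND, &ansi_csid, 0,
                   OCI_ATTR_CHARSET_ID, errhp);
    }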

Here's the relevant para from the documentation:
Specifying Character Sets in OCI
Use the OCIEnvNlsCreate function to specify client-side database and national character sets when the OCI environment is created. This function allows users to set character set information dynamically in applications, independent of the NLS_LANG and NLS_NCHAR initialization parameter settings. In addition, one application can initialize several environment handles for different client environments in the same server environment.
Any Oracle character set ID except AL16UTF16 can be specified through the OCIEnvNlsCreate function to specify the encoding of metadata, SQL CHAR data, and SQL NCHAR data. Use OCI_UTF16ID in the OCIEnvNlsCreate function to specify UTF-16 data.
Can somebody please tell me what I can set charset or ncharset to, apart from OCI_UTF16ID or zero, so that the call to OCIEnvNlsCreate returns OCI_SUCCESS?
Thanks,
John
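
For reference, a hedged sketch of the environment call itself. OCINlsCharSetNameToId can turn a character set name into a valid Oracle charset ID, but it needs an environment handle, so one pattern is to create a default environment first just for the lookup; the name WE8MSWIN1252 below is only an example:

    #include <oci.h>

    /* Sketch: create a UTF-16 environment, then look up another Oracle
       charset ID by name for use in a subsequent OCIEnvNlsCreate call. */
    ub2 create_utf16_env(OCIEnv **envhpp)
    {
        /* charset = ncharset = OCI_UTF16ID: CHAR and NCHAR data cross the
           OCI boundary as UTF-16; 0, 0 would fall back to NLS_LANG and
           NLS_NCHAR. */
        sword rc = OCIEnvNlsCreate(envhpp, OCI_DEFAULT, NULL, NULL, NULL,
                                   NULL, 0, NULL, OCI_UTF16ID, OCI_UTF16ID);
        if (rc != OCI_SUCCESS)
            return 0;

        /* Any other value passed as charset/ncharset must be a valid
           Oracle charset ID; the lookup returns 0 if the name is not
           recognized. */
        return OCINlsCharSetNameToId(*envhpp, (const oratext *)"WE8MSWIN1252");
    }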

Similar Messages

  • Handling Tab Delimited File generation in non-Unicode for multi-byte lang

    Hi,
    Requirement:
    We are generating a tab-delimited file in different languages (single-byte and multi-byte) and placing the files on the application server.
    Problem:
    Our system is a non-Unicode system, so we are facing problems generating the tab-delimited file for multi-byte languages like Russian, Japanese, and Chinese.
    I am currently using data: d_tab TYPE X VALUE '09', but it doesn't work for multi-byte; I can't see the tab-delimited file at the application server path.
    Any thoughts about how to proceed on this issue? Please let me know.
    Thanks & Regards,
    Pavan

    >
    Pavan Ravikanti wrote:
    > Thanks for your answer, but do you reckon cl_abap_char_utilities will be a workaround for data: d_tab TYPE X VALUE '09'?
    > Pavan.
    On a non-Unicode system the TYPE X variant works, but on a Unicode system it does not; there you must use the class. You can also use the class on a non-Unicode system, and your character variable will always be correct (one byte or two bytes, depending on the system your report is running on).
    What you are planning to do is put a file with a larger repertoire of characters into a system that supports fewer characters. That cannot work.
    What you can do is build a multi-code-page system where the code page is bound to the user or to the logon language. There you can read and process text files in several code pages, but not a text file in Unicode; you have to convert the Unicode text file into a non-Unicode text file before processing it.
    Remember that SAP no longer supports multi-code-page systems, and they will result in much more work when converting the system to Unicode. Even non-Unicode systems will not be maintained by SAP in the near future.
    What you have run into are exactly the problems Unicode was developed to solve. A Unicode system can handle non-Unicode text files, but the other way round will always lead to problems that cannot be solved.

  • I bought an application 10 months ago. When I did an update on my iPad, I needed to re-download it, and Apple charged again for the same application.


    Did you follow the directions:
    Downloading past purchases from the App Store, iBookstore, and iTunes Store

  • Single and multi byte settings

    Hello,
    We are trying to implement multibyte char loading and I have a few questions:
    1) Our current character encoding is UTF-8. What encoding should we use for multi-byte loading?
    2) In DDL, a column can be declared with BYTE or CHAR semantics, such as varchar2(20 CHAR). For multi-byte data, we can either increase the size of the column or change from BYTE to CHAR in the column definition. Which is the better way to implement this?
    3) Are there any other setting changes we need to be aware of when going from single-byte to multi-byte?
    Regards

    First off, I'm a bit confused. If your database's character set is UTF-8, you already have a multi-byte character set, so I'm not sure what it is that you're converting in this case.
    As to changing the table definition-- that depends primarily on your application(s). Generally, I find it easier to declare a field with character length semantics, which gives users in every language certainty about the number of characters a field can support. There are probably people that think the other way because they're allocating memory in a client application based on bytes and want to ensure that the definitions on the client and the server match.
    Since I don't quite understand what it is that you're converting, I'm hard pressed to come up with what "other setting changes" might be appropriate.
    Justin
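
    Since byte vs. character semantics trips people up, here is a small illustration of why character-length semantics is the safer declaration for multi-byte data (a hedged sketch in C; the five-character Japanese string is a hypothetical example, and the byte/character arithmetic is the point, not Oracle's API):

        #include <stdio.h>
        #include <string.h>

        /* Count UTF-8 code points by skipping continuation bytes
           (those of the form 10xxxxxx). */
        static size_t utf8_char_count(const char *s)
        {
            size_t n = 0;
            for (; *s; ++s)
                if (((unsigned char)*s & 0xC0) != 0x80)
                    ++n;
            return n;
        }

        int main(void)
        {
            /* Five Japanese characters, three bytes each in UTF-8: */
            const char *jp =
                "\xE3\x81\x93\xE3\x82\x93\xE3\x81\xAB\xE3\x81\xA1\xE3\x81\xAF";
            printf("bytes=%zu chars=%zu\n", strlen(jp), utf8_char_count(jp));
            /* bytes=15, chars=5: a varchar2(20 BYTE) column holds only six
               such characters, while varchar2(20 CHAR) holds twenty. */
            return 0;
        }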

  • JDBC2.0 API and Multi-Bytes Characters

    I use the JDBC 2.0 API with the thin Driver816 for jdk1.2.X. It works well with English characters, but I get wrong results with multi-byte characters.
    Does anyone know the reason?
    Thanks in advance.

    I have the same problem!

  • Using both Enterprise library and Entity framework as DAL for same application

    We have been using EF for retrieving large amounts of data in our current application, and we faced performance issues using EF for large data retrieval and manipulation.
    We need to extend the same project with some additional functionality similar to what currently exists in the application and uses EF.
    For the new functionality we do not want to use EF; we want to use Enterprise Library for data access.
    My question is: if we use Entity Framework for parts of the application's data access and Enterprise Library for other parts, are there any known issues?
    If there are any best practices to be followed, please share.

    We have been using EF for large dataset retrieval in our current application. We faced performance related issues with using EF for large data sets.
    Dataset? What are you talking about? If you are using the salad bowl (the DataSet with DataTables), then here is one reason not to use them:
    http://lauteikkehn.blogspot.com/2012/03/datatable-vs-list.html
    My question is if we use both entity framework for parts of the application data access mechanism and enterprise library for other parts of application data access, are there any known issues?
    What is EntLib going to buy you in performance? Nothing. You'll be better off going through the EF backdoor: SQL command objects, inline T-SQL, sprocs, a datareader, and custom objects or objects off of the virtual model returning a single object or objects in a collection, if you are concerned about performance.
    http://blogs.msdn.com/b/alexj/archive/2009/11/07/tip-41-how-to-execute-t-sql-directly-against-the-database.aspx
    You'll probably be better off going to Entity SQL, using a datareader, a collection, and custom objects or objects off of the model, if you are concerned about query performance.
    https://msdn.microsoft.com/en-us/library/vstudio/bb738684(v=vs.100).aspx
    https://msdn.microsoft.com/en-us/library/vstudio/bb387145(v=vs.100).aspx
    https://msdn.microsoft.com/en-us/library/vstudio/bb399560(v=vs.100).aspx
    My question is if we use both entity framework for parts of the application data access mechanism and enterprise library for other parts of application data access,
    are there any known issues?
    A nightmare: no consistency and complete Helter Skelter is what I see. Been there and seen it in action, with different technologies doing the same thing in one solution.

  • Should i create a client and a server socket in a same application

    Greetings,
    In which situation should I have a server socket? In which situation should I have a client socket? And in which situation should I have both?
    I'm making an app that receives info (like alarms) from an automation machine, and the same app also has a scheduler that sends commands to the automation machine according to dates.
    Right now the automation machine is programmed as a TCP/IP socket server that is always on, and the app is a client.
    Every time there's an alarm, the machine sends me the info and I put it in a MySQL database.
    Every time there's a programmed event, the app sends a string to the machine.
    The problem is that I can't keep that socket always connected; sometimes it disconnects.
    I was thinking of creating a server and a client on both sides, so that, for example, in the app the client would handle the event messages and the server would accept connections with alarms from the automation machine.
    Since I'm a newbie in Java, could somebody give me some tips, please?
    Thanks

    Thanks Peter....
    But I already do that....
    I have a thread that handles the connection management. If for some reason the connection is lost, the thread reconnects it.
    My problem is that sometimes it reconnects every second, and I lose info provided by the automation machine.
    The best thing to have would be a socket listener, but Java doesn't have one.
    Is there any API that provides a socket listener?

  • ADF – Could I have HttpSession and EJB 3 stateful session in the same application

    I am using JDeveloper 10.3.2. First I created a single demo application with an EJB session bean (stateful), and it works fine.
    When I try to use the same EJB in my large application I get a runtime error,
    without any exception (losing the information of the EJB).
    And the question is: can I use HttpSession and an EJB 3 stateful session in the same application?

    Hi,
    if it is a web application you need the HTTP session anyway, and the two are different kinds of beasts: one is handled by the EJB container, the other by the web container.
    The question is why you need stateful session beans, which seems to be a rare use case. Usually state is persisted and tracked in the business service.
    However, without an error message it's hard to tell what is going wrong here.
    Frank

  • Forms 9 f90servlet and Forms 10 frmservlet on same application server?

    Is it possible to install the f90servlet and frmservlet OC4J components on the same application server, so that I can provide one URL to one group of users to run some existing Forms 9 applications, and another URL to a different group of users to run a new Forms 10 application?
    Or do I need 2 application server instances on the app server box?
    If it IS possible, how can I go about installing just the forms90 component into my existing 10.1.2.0.2 app server?
    Cheers

    Yes, they are separate applications: some built in Forms 9, one built in Forms 10.
    How is it possible to do this just using a separate formsweb.cfg?
    I have tried this, but when trying to use the Forms 10 OC4J to run forms built in Forms 9, you get the error:
    FRM-40011: Form was created by an old version of Form Builder
    spilgrim
    Separate machines are not an option, largely due to licensing costs.
    There is one application server box; we need to use it to run some Forms 9 apps and Forms 10 apps.
    So we either install two versions of the application server software, or one version of the application server software that contains a Forms 9 and a Forms 10 engine. At least this is my understanding of it.
    The second option sounds preferable because we don't really need two whole application servers.

  • Solman 7.1 and Netweaver 7.3 on the same server

    Hi,
    I am trying to install a new test environment for SAP Manufacturing execution 6.0.
    That software requires Netweaver 7.3, which in turn requires Solution Manager for applying service packs.
    My question is:
    Can I install Solution Manager 7.1 and Netweaver 7.3 on the same application server running Windows 2008 R2 64-bit?
    The DB will be MS SQL server 2008 64-bit.
    I Have noticed the following so far:
    - Solman requires much lower versions of JCE policy files than Netweaver. Will there be a conflict?
    - Solman installation asks for Kernel NW 7.20. Why not Kernel NW 7.3?
    Hope someone can answer this.
    Br,
    Johan

    Hi,
    As rightly said above, Solman is a standalone installation on a web application server.
    You can refer to this FAQ for further guidance: [Solman FAQ|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/b442d4ea-0c01-0010-a8ad-a4c2eaaf6b76]
    You can find more SAP Solman 7.1 installation guides here:
    https://websmp210.sap-ag.de/~form/sapnet?_SHORTKEY=01100035870000735220&
    Thanks,
    Jansi

  • Handling Multi-byte/Unicode (Japanese) characters in Oracle Database

    Hello,
    How do I handle Japanese characters with an Oracle database?
    I have a Java application which retrieves some values from the database, makes some changes to them (e.g. changes the value of a status column, adds comments to a VARCHAR2 column), and then performs an UPDATE back to the database.
    Everything works fine for English, but NOT for Japanese, which uses multi-byte/Unicode characters: the Japanese characters are garbled after performing the database UPDATE.
    I verified that Java by default uses UTF-16 encoding, so there shouldn't be any problem with Java/JDBC.
    What do I need to change at #1, the Oracle (database) side, or #2, the OS (Linux) side?
    (I tried changing the NLS_LANG value in the OS and the NLS_SESSION_PARAMETERS settings in the database and tried a test insert from SQL*Plus, but SQL*Plus converts all Japanese characters to a question mark (?), so I could not test it via SQL*Plus on my XP (English) edition.)
    Any help will be really appreciated.
    Thanks

    Hello Sergiusz,
    Here are the values before & after Update:
    --BEFORE update:
    select tar_sid, DUMP(col_name, 1016) from table_name where tar_sid in ('6997593.880');
    /* Output copied from SQL-Developer: */
    6997593.88 Typ=1 Len=144 CharacterSet=UTF8: 54,45,53,54,5f,41,42,53,54,52,41,43,54,e3,81,ab,e3,81,a6,4f,52,41,2d,30,31,34,32,32,e7,99,ba,e7,94,9f,29,a,4d,65,74,61,6c,69,6e,6b,20,e3,81,a7,e7,a2,ba,e8,aa,8d,e3,81,84,e3,81,9f,e3,81,97,e3,81,be,e3,81,97,e3,81,9f,e3,81,8c,e3,80,81,52,31,30,2e,32,2e,30,2e,34,20,a,e3,81,a7,e3,81,af,e4,bf,ae,e6,ad,a3,e6,b8,88,e3,81,bf,e3,81,ae,e4,ba,8b,e4,be,8b,e3,81,97,e3,81,8b,e7,a2,ba,e8,aa,8d,e3,81,a7,e3,81,8d,e3,81,be,e3,81,9b,e3,82,93,2a
    --AFTER Update:
    select tar_sid, DUMP(col_name, 1016) from table_name where tar_sid in ('6997593.880');
    /* Output copied from SQL-Developer: */
    6997593.88 Typ=1 Len=144 CharacterSet=UTF8: 54,45,53,54,5f,41,42,53,54,52,41,43,54,e3,81,ab,e3,81,a6,4f,52,41,2d,30,31,34,32,32,e7,99,ba,e7,94,9f,29,a,4d,45,54,41,4c,49,4e,4b,20,e3,81,a7,e7,a2,ba,e8,aa,8d,e3,81,84,e3,81,9f,e3,81,97,e3,81,be,e3,81,97,e3,81,9f,e3,81,8c,e3,80,81,52,31,30,2e,32,2e,30,2e,34,20,a,e3,81,a7,e3,81,af,e4,bf,ae,e6,ad,a3,e6,b8,88,e3,81,bf,e3,81,ae,e4,ba,8b,e4,be,8b,e3,81,97,e3,81,8b,e7,a2,ba,e8,aa,8d,e3,81,a7,e3,81,8d,e3,81,be,e3,81,9b,e3,82,93,2a
    So the values BEFORE and AFTER the UPDATE are identical!
    The problem is that sometimes the Japanese data in the VARCHAR2 (abstract) column gets corrupted. What could be the problem here? Any clues?

  • How to set Multi Byte Character Set ( MBCS ) to Particular String In MFC VC++

    I use the Unicode character set in my MFC application (VC++).
    Now I get output like ठ桔湡潹⁵潦⁲獵 and I want to convert these characters to English (i.e. MBCS).
    But I need Unicode in my application: when I change to the multi-byte character set I get correct output in English, but other objects (TreeCtrl selection) behave wrongly. So I need to convert only that particular string to MBCS.
    How can I do that in MFC?

    I assume the string read from your hardware device is a plain C (ANSI) string. This type of string has one byte per character; Unicode has two bytes per character.
    From the situation you explained, I'd convert the string returned by the hardware to a Unicode string using, e.g., MultiByteToWideChar with CP_ACP. You may also use mbstowcs or a similar function to convert your string to a Unicode string.
    Best regards
    Bordon
    Note: Posted code pieces may not have good programming style and may not be perfect. It is also possible that they do not work in all situations. Code pieces are only intended to explain something particular.
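
    A minimal sketch of the conversion Bordon describes, assuming the device really returns a CP_ACP (system ANSI code page) string; the helper name ansi_to_wide is mine and error handling is trimmed:

        #include <windows.h>
        #include <stdlib.h>

        /* Convert an ANSI (CP_ACP) string to a newly allocated wide string.
           The caller frees the result. */
        wchar_t *ansi_to_wide(const char *ansi)
        {
            int wlen = MultiByteToWideChar(CP_ACP, 0, ansi, -1, NULL, 0);
            if (wlen == 0) return NULL;            /* conversion failed */
            wchar_t *wide = (wchar_t *)malloc(wlen * sizeof(wchar_t));
            if (wide) MultiByteToWideChar(CP_ACP, 0, ansi, -1, wide, wlen);
            return wide;
        }

    Going the other way (converting the particular Unicode string to MBCS, as the question asks) would use WideCharToMultiByte with the same two-call pattern.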

  • Unicode and non-unicode string data types Issue with 2008 SSIS Package

    Hi All,
    I am converting a 2005 SSIS package to 2008. I have a task which has SQL Server as the source and Oracle as the destination. I copy the data from a SQL Server view with an nvarchar(10) field to a varchar(10) field of an Oracle table. The package executes fine on my local machine when I use the data conversion task to convert to DT_STR. But when I deploy the dtsx file on the server and try to run it from a SQL Agent job, it gives me the Unicode and non-Unicode string data types error for the field. I have checked the registry settings and they are the same on my local machine and the server. I tried both the data conversion task and the derived column task, but with no luck. Please suggest what changes are required in my package to run it from the SQL Agent job.
    Thanks.

    What are Unicode and non-Unicode data formats?
    Unicode:
    A Unicode character takes more bytes to store in the database. Many global businesses want to grow worldwide, and to do so they must serve customers in different languages such as Chinese, Japanese, Korean, and Arabic. Many websites these days support international languages to do their business and attract more customers, which makes life easier for both parties.
    To store such customer data, the database must support a mechanism for storing international characters. Storing these characters is not easy, and many database vendors had to revise their strategies and come up with new mechanisms to support them. Big vendors like Oracle, Microsoft, and IBM started providing international character support so that data can be stored and retrieved without hiccups while doing business with international customers.
    The difference in storing character data between Unicode and non-Unicode depends on whether the non-Unicode data is stored using double-byte character sets. All non-East Asian languages and Thai store non-Unicode characters in single bytes, so storing these languages as Unicode uses twice the space used by a non-Unicode code page. On the other hand, the non-Unicode code pages of many other Asian languages specify character storage in double-byte character sets (DBCS), so for these languages there is almost no difference in storage between non-Unicode and Unicode.
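    To make the storage arithmetic concrete, a small sketch (hypothetical strings; each of the three Japanese characters takes 3 bytes in UTF-8 and 2 bytes in UCS-2, while English doubles going from a single-byte code page to UCS-2):

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            /* 5 ASCII characters: 5 bytes in a single-byte code page or UTF-8 */
            const char *en = "hello";
            /* "Japanese" (3 kanji) encoded as UTF-8, 3 bytes per character: */
            const char *jp = "\xE6\x97\xA5\xE6\x9C\xAC\xE8\xAA\x9E";

            printf("UTF-8 bytes: en=%zu jp=%zu\n", strlen(en), strlen(jp)); /* 5 and 9 */
            /* UCS-2 stores every BMP character in exactly 2 bytes: */
            printf("UCS-2 bytes: en=%d jp=%d\n", 5 * 2, 3 * 2);             /* 10 and 6 */
            return 0;
        }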
    Encoding formats:
    Some of the common encoding forms for Unicode (UCS-2, UTF-8, UTF-16, UTF-32) have been made available by database vendors to their customers. For SQL Server 7.0 and higher versions, Microsoft uses UCS-2 to store Unicode data; under this mechanism, all Unicode characters are stored using 2 bytes.
    Unicode data can be encoded in many different ways. UCS-2 and UTF-8 are two common ways to store the bit patterns that represent Unicode characters. Microsoft Windows NT, SQL Server, Java, COM, and the SQL Server ODBC driver and OLE DB provider all internally represent Unicode data as UCS-2.
    The options for using SQL Server 7.0 or SQL Server 2000 as a backend server for an application that sends and receives Unicode data encoded as UTF-8 include the following. For example, if your business runs a website with ASP pages:
    If your application uses Active Server Pages (ASP) and you are using Internet Information Server (IIS) 5.0 and Microsoft Windows 2000, you can add "<% Session.Codepage=65001 %>" to your server-side ASP script.
    This instructs IIS to convert all dynamically generated strings (example: Response.Write) from UCS-2 to UTF-8 automatically before sending them to the client.
    If you do not want to enable sessions, you can alternatively use the server-side directive "<%@ CodePage=65001 %>".
    Any UTF-8 data sent from the client to the server via GET or POST is also converted to UCS-2 automatically. The Session.Codepage property is the recommended method to handle UTF-8 data within a web application. This code page setting is not available on IIS 4.0 and Windows NT 4.0.
    Sorting and other operations :
    The effect of Unicode data on performance is complicated by a variety of factors that include the following:
    1. The difference between Unicode sorting rules and non-Unicode sorting rules 
    2. The difference between sorting double-byte and single-byte characters 
    3. Code page conversion between client and server
    Operations like >, <, and ORDER BY are resource intensive, and it is difficult to get correct results if code page conversion between client and server is not available.
    Sorting lots of Unicode data can be slower than non-Unicode data, because the data is stored in double bytes. On the other hand, sorting Asian characters in Unicode is faster than sorting Asian DBCS data in a specific code page,
    because DBCS data is actually a mixture of single-byte and double-byte widths, while Unicode characters are fixed-width.
    Non-Unicode:
    Non-Unicode is exactly the opposite of Unicode. With non-Unicode it is easy to store languages like English, but not other Asian languages that need more bits to be stored correctly; otherwise truncation occurs.
    Now, let's see some of the advantages of not storing the data in Unicode format:
    1. It takes less space to store the data in the database, so a lot of hard disk space is saved.
    2. Moving database files from one server to another takes less time.
    3. Backup and restore of the database take less time, which is good for DBAs.
    Non-Unicode vs. Unicode Data Types: Comparison Chart
    The primary difference between Unicode and non-Unicode data types is the ability of Unicode to easily handle the storage of foreign-language characters, which also requires more storage space.
    - Types: non-Unicode uses char, varchar, and text; Unicode uses nchar, nvarchar, and ntext.
    - Storage length: both store data in fixed or variable length.
    - Padding: char data is padded with blanks to fill the field size (for example, if a char(10) field contains 5 characters, the system pads it with 5 blanks); nchar behaves the same. varchar stores the actual value and does not pad with blanks; nvarchar behaves the same.
    - Storage per character: non-Unicode requires 1 byte; Unicode requires 2 bytes.
    - Maximum length: char and varchar can store up to 8000 characters; nchar and nvarchar up to 4000.
    - Best suited for: non-Unicode is best suited for US English: "One problem with data types that use 1 byte to encode each character is that the data type can only represent 256 different characters. This forces multiple encoding specifications (or code pages) for different alphabets such as European alphabets, which are relatively small. It is also impossible to handle systems such as the Japanese Kanji or Korean Hangul alphabets that have thousands of characters." Unicode is best suited for systems that need to support at least one foreign language: "The Unicode specification defines a single encoding scheme for most characters widely used in businesses around the world. All computers consistently translate the bit patterns in Unicode data into characters using the single Unicode specification. This ensures that the same bit pattern is always converted to the same character on all computers. Data can be freely transferred from one database or computer to another without concern that the receiving system will translate the bit patterns into characters incorrectly."
    https://irfansworld.wordpress.com/2011/01/25/what-is-unicode-and-non-unicode-data-formats/
    Thanks, Shiven

  • Connecting to EMS fails with No mapping for the Unicode character exists in the target multi-byte code page

    I am getting the following error when trying to connect to both my exchange servers.
    New-PSSession : [ex2013-002.nafa.ca] Connecting to remote server ex2013-002.nafa.ca failed with the following error
    message : No mapping for the Unicode character exists in the target multi-byte code page. For more information, see
    the about_Remote_Troubleshooting Help topic.
    At line:1 char:12
    + $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri ht ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [New-PSSession], PSRemotingTransportException
        + FullyQualifiedErrorId : 1113,PSSessionOpenFailed
    EMS used to connect OK. I am not sure if there is a connection, but Outlook was recently installed on the Exchange server to enable mailbox-level backups.
    Any help would be appreciated.
    Steve Hurst

    Hello Steve,
    Firstly, you cannot install Outlook together with Exchange because they share certain DLL files.
    About the EMS question, I suggest rebuilding the PowerShell virtual directory. If it still does not work, check the application log for more reference.
    Thanks,
    Simon Wu
    TechNet Community Support

  • No mapping for the Unicode character exists in the target multi-byte code page

    hi,
    I have an issue with SharePoint 2013 and IE 10.
    I'm using the SharePoint REST web service and making an AJAX call to retrieve data from SharePoint lists. The call fails and returns a server error: "No mapping for the Unicode character exists in the target multi-byte code page".
    I have to say that everything works fine with Chrome and Firefox.
    What can I do to fix it?
    Thanks a lot
    alon

    Hi,
    From your description, I understand you have an issue with IE 10 in SharePoint 2013 when you use the SharePoint REST API to retrieve data from a SharePoint list.
    I am not quite sure what causes your issue. Could you provide your code, so I can test it in my environment and troubleshoot for you?
    In addition, you could test the issue on another computer or with another version of IE.
    Best Regards
    Vincent Han
    TechNet Community Support
