HAL mounts USB flash devices without support for non-Latin characters

Hey folks!
How can I make HAL mount USB flash devices with support for German characters such as ä, ö, ü, ß, etc.? At the moment there's only a silly question mark instead: http://www12.file-upload.net/20.08.08/3cpcpx.png
Can anyone help me?
Thanks.
Henrik

I think it's a bug in KDE4, because no other window manager or desktop environment has this problem. I saw it as well when I used KDE 4.0.* for testing, and it seems it has remained. Do you have a completely German KDE4? Check the System Settings!
Another thing comes to mind: did you check it with Konqueror? I guess this bug appears in Dolphin, right? I'd guess so. Try it with Konqueror; if it doesn't appear there, it's a Dolphin thing.

Similar Messages

  • Support for non-English characters in Safari?

    When I browse Japanese and Chinese websites, Safari only shows blank lines and empty blocks. It seems Safari doesn't support non-English encodings or UTF-8?
    (Windows XP Pro)

    It (usually) works if you use an English version of Windows. Not if you use a Chinese or Japanese version.

  • Form2xml generates XML files with "??????????" for non-Latin characters

    I used form2xml in the Oracle 10g suite to convert Forms 5 .fmb files to .xml, using the command:
    frmf2xml.bat OVERWRITE=YES myform.fmb
    The forms contain an Arabic character set, but the XML file is generated with "?????????" characters, and the XML file is unusable.
    What can I do to keep Arabic characters in the generated XML files?
    Edit:
    I ran form2xml on Windows XP SP3 with Arabic support (codepage 1256).
    The XML file is generated as UTF-8.

    I resolved the problem.
    Step 1: Using regedit, I searched the registry for every NLS_LANG key and changed its value to codepage 1256.
    Step 2: In Control Panel > Region and Language, I changed the language to Arabic.
    Now everything is good.

  • Problems when searching for non-Latin characters

    In my studies I often need to search for Japanese terms. I usually do this from the Smart Search Field to go straight to Google. However, Safari seems not to be able to handle the Japanese characters. For example, I might type 日本 into the Smart Search Field. I would then expect to see results about the country of Japan (for that is what I searched). Instead, I see this: http://puu.sh/1MwvG
    This is pretty much a dealbreaker for me re: Safari. Is there something that will fix it?

    I can't seem to duplicate this.  Are you typing in the same place I am? I see I only have 6.0.1...

  • DVDPro3- "Device Not Supported for requested burn operation"

    Hello All,
    In build and format I get "device not supported for requested burn operation."
    On my iMac G4
    Using DVD Pro 3.0.2
    Mac OSX 10.3.9
    And a LaCie Superwriter external burner.
    Thanks for any suggestions,
    Al

    in case anyone else has a similar problem... SOLUTIONS:
    Formatting to hard disc as an .img and then burning with Toast works.
    Also, I found that downloading and installing the device profile patches from www.patchburn.de got the burner recognised by both DVDSP and Disk Utility. Lots of people recommend this source.
    Another tip was to zap the PRAM, but that did nothing. Nor did deleting the preferences file.
    No idea why things stopped working in the first place, but I hope this helps others.

  • Pagination support for non-Oracle databases?

    Hi,
    I just read this thread (Pagination Support) on pagination support. Is there any way to get pagination with non-Oracle databases? We are using an IBM iSeries / AS/400 DB2 database right now, and we're planning to use some local lightweight database in the near future as well (probably Cloudscape/Derby or "IBM Everyplace database").
    We currently use code like this:
    String sql = "SELECT art FROM Artikel art "
                /* dynamically generated where clause is added here */
                + " ORDER BY art.artikelNummer";
    Query q = em.createQuery(sql);
    q.setFirstResult(firstResult);
    q.setMaxResults(maxResults);
    If I look in the TopLink logs, I see queries like this:
    SELECT ARTNR, ARALT, ARAFJ, ARXII, ARAVJ, ARXIV, ARANJ, AHGCD, ARNVJ, ARCRJ, ARARK, ARFKJ, ARTNK, ARGP1, ASGCD, ARGP2, ARPR1, ARGP3, ARPR2, AREX1, ARPR3, AREX2, ARPR4, AREX3, ARASA, ARINA, ASSCD, ARIA1, ARBAN, ARIN1, ARBAV, ARIA2, ARBAK, ARIN2, ARCES, ARIA3, ARCDT, ARIN3, ARCRE, ARIA4, ARCWK, ARIN4, ARHBH, ARIA5, ARDFA, ARIN5, ARDFG, ARIA6, ARDOS, ARIN6, AREPW, ARINN, ARFOD, ARIAS, ARFOE, ARINS, ARFOF, ARNAB, ARFOI, ARNIB, ARFON, ARNIA, ARFOS, ARNN1, ARFTA, ARNA2, ARVIV, ARNO2, ARGAP, ARNN3, ARGPT, ARNA4, ARGPD, ARNO4, ARGPA, ARNN5, ARGPO, ARNA6, ARHIS, ARNN6, ARISP, ARNIO, ARKHM, ARNNS, MAGCD, AROVJ, MTGCD, ARPL1, ARMXM, ARPL2, MRKCD, ARPL3, ARMVR, ARVKJ, ARMIM, ARV12, ARMDT, ARVVJ, ARMTE, AR#VR, ARMTU, ARZLS, ARMTM, ARIAT, ARMWK, ARAVS, MPCCD, ARNVS, ARBTW, ARFJS, ARXI2, ARG2S, ARXI3, ARE1S, ARXI4, ARE3S, ARXI6, ARIB1, ARXI1, ARIB2, ARXI5, ARIB3, AROPI, ARIB4, ARPRV, ARIB5, SZGCD, ARIB6, ARSPC, ARINO, ARSMF, ARIOS, VEAAN, ARNIS, ARSYN, ARNO1, ARVR1, ARNA3, ARV1S, ARNN4, ARVR2, ARNO5, ARV2S, ARNIN, ARVR3, ARNOS, ARV3S, ARP1S, ARTFA, ARP3S, ARTFG, ARS12, ARUVC, ARZLD, ARUCW, ARAJS, ARBKV, ARCJS, ARVVI, ARG3S, ARVVP, ARINB, VPOCD, ARIO2, VPECD, ARIO4, ARVIH, ARIO6, ARVHG, ARNBS, ARVRW, ARNN2, ARVPR, ARNA5, ARVVR, ARNAS, ARVVS, ARP2S, ARVV1, ARSVV, ARZK1, ARNJS, ARNA1, ARNO3, ARIO1, ARNO6, ARIO5, AROJS, ARE2S, ARVJS, ARIBS, ARIAD, ARIO3, ARG1S FROM ART WHERE ((((ARUVC = 'N') AND (ARHIS = 'N')) AND (ASGCD = 7)) AND (AHGCD = 15)) ORDER BY ARTNR ASC
    (Yeah, I know we have too many columns in the table...)
    So, no pagination in the query. As you can see, we have a mechanism in place to dynamically generate a where clause, because the user can set filters. The problem is, if a user sets a filter that makes the result set significantly smaller, performance is far better than when no filter is set at all. We suspect this is because the whole result set is sent to TopLink, regardless of the values of firstResult and maxResults.
    We are using TopLink Essentials 2.1-10, by the way.
    Message was edited by:
    Bart Kummel
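    If that suspicion is correct, the slowdown is what driver-side paging produces. The sketch below is illustrative only, not TopLink source code: when the platform cannot rewrite the SQL, firstResult and maxResults degenerate to skipping fetched rows on the client, so the cost grows with the size of the unfiltered result set.
        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import java.util.ArrayList;
        import java.util.List;

        // Illustrative sketch only -- not TopLink source code.
        static List<Object> fetchPage(Connection con, String sql,
                                      int firstResult, int maxResults) throws Exception {
            List<Object> page = new ArrayList<Object>();
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery(sql); // the entire result set is produced
            for (int i = 0; i < firstResult && rs.next(); i++) {
                // skip rows before the requested page, one fetched row at a time
            }
            while (page.size() < maxResults && rs.next()) {
                page.add(rs.getObject(1)); // map columns to entities as needed
            }
            rs.close();
            stmt.close();
            return page;
        }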

    Hi all,
    I'm trying to subclass DatabasePlatform to add pagination support for the AS/400 DB2 database of my customer. To be fair, it is not going very well so far.
    The first problem is that the query Chris found by googling (Re: Pagination support for non-Oracle databases?) does not work for the AS/400's version of DB2. In fact, although it is called "DB2", the database on the AS/400 system is a whole other database than the "normal" DB2 that runs on Windows and *nix. The AS/400 DB2 simply does not have a "ROW_NEXT" function.
    Another option would be to use the row_number() over() method. But, as can be read here, this function is only available from version V5R4 of OS/400. And guess what? We're stuck on V5R3 at this client. (We cannot upgrade, because there's an application in use that's written in Delphi, and IBM dropped the Delphi binding from V5R4...)
    So I pretty much ran out of options. On the mailing list I linked to above, someone mentions the option to make a sort of stored procedure that generates a row count number. An example of how to do this can be found here. I implemented it, and ended up with this code:
    package com.myclientsname.persistence;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import org.eclipse.persistence.expressions.ExpressionBuilder;
    import org.eclipse.persistence.internal.databaseaccess.DatabaseCall;
    import org.eclipse.persistence.internal.expressions.ExpressionSQLPrinter;
    import org.eclipse.persistence.internal.expressions.SQLSelectStatement;
    import org.eclipse.persistence.internal.sessions.AbstractSession;
    import org.eclipse.persistence.logging.SessionLog;
    import org.eclipse.persistence.platform.database.DatabasePlatform;
    import org.eclipse.persistence.sessions.SessionProfiler;
    public class AS400Platform extends DatabasePlatform {
        private static final long serialVersionUID = 0L;

        public AS400Platform() {
            super();
            super.setShouldBindAllParameters(false);
        }

        public void printSQLSelectStatement(DatabaseCall call, ExpressionSQLPrinter printer, SQLSelectStatement statement) {
            int max = 0;
            int firstRow = 0;
            if (statement.getQuery() != null) {
                max = statement.getQuery().getMaxRows();
                firstRow = statement.getQuery().getFirstResult();
            }
            if (!(max > 0) && !(firstRow > 0)) {
                super.printSQLSelectStatement(call, printer, statement);
                return;
            } else {
                statement.setUseUniqueFieldAliases(true);
                ExpressionBuilder builder = new ExpressionBuilder();
                statement.addField(builder.getField("COUNTER() AS CNTR"));
                printer.printString("SELECT * FROM (");
                call.setFields(statement.printSQL(printer));
                printer.printString(") AS R WHERE R.CNTR >= ");
                printer.printParameter(DatabaseCall.FIRSTRESULT_FIELD);
                if (max > 0) {
                    // Use of binding parameters is not allowed here, so use
                    // String concatenation instead...
                    printer.printString(" FETCH FIRST " + max + " ROWS ONLY");
                }
            }
            call.setIgnoreFirstRowMaxResultsSettings(true);
        }

        public boolean wasFailureCommunicationBased(SQLException exception, Connection connection, AbstractSession sessionForProfile) {
            if (connection == null || this.pingSQL == null) {
                // Without a connection we are unable to determine what caused the
                // error, so return false. The only case where connection will be
                // null should be external connection pooling, so returning false
                // is OK as there is no connection management requirement.
                // If there is no ping SQL, we cannot perform the ping either.
                return false;
            }
            PreparedStatement statement = null;
            try {
                sessionForProfile.startOperationProfile(SessionProfiler.ConnectionPing);
                if (sessionForProfile.shouldLog(SessionLog.FINE, SessionLog.SQL)) { // avoid printing if no logging required
                    sessionForProfile.log(SessionLog.FINE, SessionLog.SQL, getPingSQL(), (Object[]) null, null, false);
                }
                statement = connection.prepareStatement(getPingSQL());
                ResultSet result = statement.executeQuery();
                result.close();
                statement.close();
            } catch (SQLException ex) {
                try {
                    // Had to add this check because of NullPointerExceptions
                    // (maybe a bug?)
                    if (statement != null) {
                        // try to close the statement again in case the query
                        // or result.close() caused an exception
                        statement.close();
                    }
                } catch (SQLException exception2) {
                    // ignore
                }
                return true;
            } finally {
                sessionForProfile.endOperationProfile(SessionProfiler.ConnectionPing);
            }
            return false;
        }
    }
    (As you can see, I had to override the wasFailureCommunicationBased() method as well, due to some unexpected NPEs. A bug, perhaps?)
    This code does work. However, the performance is not very good: the first page comes back relatively fast, but as you browse further into the table, each page comes back more slowly. I assume this is because the counter() method has to be evaluated for each row in the table.
    I need the performance to be better, and constant. Does anyone have an idea how to optimize this further?
    Best regards,
    Bart Kummel
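    For what it's worth, on OS/400 V5R4 or later the same override could emit ROW_NUMBER() OVER() instead of COUNTER(), which avoids the per-row COUNTER() call suspected above to be the bottleneck. A sketch of the changed branch of printSQLSelectStatement() (hypothetical, untested on an AS/400):
        statement.setUseUniqueFieldAliases(true);
        // ROW_NUMBER() OVER() is only available from OS/400 V5R4 (see above)
        statement.addField(new ExpressionBuilder().getField("ROW_NUMBER() OVER() AS RN"));
        printer.printString("SELECT * FROM (");
        call.setFields(statement.printSQL(printer));
        printer.printString(") AS R WHERE R.RN >= ");
        printer.printParameter(DatabaseCall.FIRSTRESULT_FIELD);
        if (max > 0) {
            // still no parameter binding allowed here, so concatenate
            printer.printString(" FETCH FIRST " + max + " ROWS ONLY");
        }
        call.setIgnoreFirstRowMaxResultsSettings(true);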

  • What is current CommSuite support for non-ASCII passwords?

    Hello all,
    Some of our users managed to change their passwords to non-ASCII strings (via replication from MSAD by ISW) and no longer have access to their communications services.
    While replicating the problem, I set a (UTF-8, non-ASCII) string as my password directly in DSEE, and *can* log in to Convergence with this password. However, if I change the working password to a non-ASCII string from Convergence itself, it is accepted during the secondary password check, no error is returned, and SOME password is apparently saved into the LDAP directory, but neither of the original non-ASCII plaintext strings can be used to log back in to Convergence. At this point access can only be restored by an admin.
    Checking email over IMAP from Thunderbird no longer works with a changed non-ASCII password (including the state in which the password still works for Convergence).
    Delegated Admin has an explicit check for non-ASCII characters in the password and refuses to set a misbehaving one.
    I see that among the standards supported by CommSuite there is IMAP4rev1, and RFC 5255 refers to it as the reason that non-ASCII passwords and usernames are not supported for now, though this is expected to be a temporary state of affairs, and software can prepare for the future by implementing checks for valid UTF-8 strings as well.
    https://wikis.oracle.com/display/CommSuite/Messaging+Server+Supported+Standards
    http://tools.ietf.org/html/rfc5255
    5.1.  Unicode Userids and Passwords
       IMAP4rev1 currently restricts the userid and password fields of the
       LOGIN command to US-ASCII.  The "userid" and "password" fields of the
       IMAP LOGIN command are restricted to US-ASCII only until a future
       standards track RFC states otherwise.  Servers are encouraged to
       validate both fields to make sure they conform to the formal syntax
       of UTF-8 and to reject the LOGIN command if that syntax is violated.
       Servers MAY reject the LOGIN command if either the "userid" or
       "password" field contains an octet with the highest bit set.
       When AUTHENTICATE is used, some servers may support userids and
       passwords in Unicode [RFC3490] since SASL (see [RFC4422]) allows
       that.  However, such userids cannot be used as part of email
       addresses.
    So, the main question at this point is: does the whole CommSuite stack support non-ASCII passwords, or not?
    If not, please confirm, so we can instruct the users not to create problems for themselves (and maybe manage to set up a policy not to accept non-ASCII passwords into MSAD/DSEE in the first place).
    If yes, what should be done to enable support in the Convergence/IMAP/SMTP/XMPP/WCAP/WABP/... services? Perhaps setting the LANG/LC_ALL locale environment variables, or equivalent JVM flags for UTF-8, in the server startup scripts, etc.? (I know that DSEE ldapsearch requires either envvars or a command-line flag for the charset encoding of values, so I figure similar quirks may be relevant for some other software.)
    Thanks in advance for either response,
    //Jim Klimov

    I can't respond for the suite, but the Messaging Server product should work with UTF-8 usernames and passwords as long as the standard SASL authentication mechanisms that are documented to use UTF-8 are used (e.g. SASL PLAIN). IMAP LOGIN may work fine with UTF-8 as well even though that's non-standard. We do not implement SASLprep, however, so the strings provided by the client to the server must be identical UTF-8 strings for authentication to succeed. If they are provided in a different decomposition, different canonical form or non-standard charset, that's not supported and will fail. We don't test this scenario extensively, so you may encounter bugs (that we'd have to prioritize and fix as with other bugs). Messaging Server recently implemented a restricted option (broken_client_login_charset) for a customer who was stuck with broken clients that sent ISO-8859-1 for the IMAP login command arguments.
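    To illustrate the byte-identity requirement: RFC 4616 defines the SASL PLAIN initial client response as authzid, authcid and password joined by NUL bytes and encoded as UTF-8. A minimal client-side sketch (a hypothetical helper, not CommSuite code) that normalizes both strings to NFC before encoding, so that client and server can agree on one canonical form:
        import java.nio.charset.StandardCharsets;
        import java.text.Normalizer;
        import java.util.Base64;

        // Build the SASL PLAIN initial response per RFC 4616:
        // message = [authzid] NUL authcid NUL passwd, encoded as UTF-8.
        // The server does not run SASLprep, so normalizing to NFC here
        // avoids the "different decomposition" mismatch described above.
        static String plainInitialResponse(String authcid, String password) {
            String user = Normalizer.normalize(authcid, Normalizer.Form.NFC);
            String pass = Normalizer.normalize(password, Normalizer.Form.NFC);
            byte[] message = ("\0" + user + "\0" + pass).getBytes(StandardCharsets.UTF_8);
            // base64, as sent in an IMAP "AUTHENTICATE PLAIN" exchange
            return Base64.getEncoder().encodeToString(message);
        }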

  • Where is Adobe support for non-working software??

    Where is Adobe support for non-working software??
    Thanks,
    Jerry

    Bill,
    Thank you for your reply!
    You may have guessed I am a bit exasperated. (See this discussion from yesterday: Re: How do I get past the Error: 16 problem?)
    I am trying to move Adobe Acrobat IX, Photoshop CS6, Photoshop CC and Bridge to a new computer (Mac Pro (late 2013), Mavericks 10.9.3) from an older Mac Pro running the same OS X.
    I keep getting the error 16 message whether I try to open Acrobat or Photoshop (I have a paid-for DVD of CS6). I have uninstalled and re-installed. I have reset permissions on two folders in the system Library. I have either deactivated or signed out of ALL the Adobe products on my old computer. I signed in as root and tried to open the software. I have re-downloaded all the products except CS6, which I own. I have restarted the computer. I have repaired the drive using Disk Utility and DiskWarrior. In short, I have tried everything I can think of, and everything that has been suggested by others on this forum.
    Thanks again,
    Jerry

  • Support for double-byte characters

    Does RH6 have support for double-byte characters for
    localization/translation to Asian languages? This feature was in
    X3, removed from X5, but did it make it into 6?
    Thanks,
    Mike

    Sorry but no.
    Here's a link to what did go into RH6.
    http://www.adobe.com/devnet/logged_in/mhu_rh_whatsnew.html

  • Beryl: support for non-power-of-two textures missing

    Is anyone else seeing Beryl crash X with a message about missing support for non-power-of-two textures, and something about no manageable screens being found?
    Also, I've gotten something about GL_EXT_texture_from_pixmap not being available at least once, when it is definitely available:
    [proteus@chameleon ~]$ glxinfo | grep -i texture_from_pixmap
    libGL warning: 3D driver claims to not support visual 0x46
    GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_OML_swap_method,
    GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap
    [proteus@chameleon ~]$
    I do wonder if the "3D driver claims to not support visual 0x46" thing has anything to do with this...

    Unichrome here. Neither is powerful hardware, but both should have full support for AIGLX, barring some kind of weird driver bug (which wouldn't be very surprising in view of the EXA bug).
    Is anyone getting this with Intel or ATI hardware?

  • SetMnemonic for non-English characters

    Does anybody know how to set a JButton's mnemonic for non-English characters?
    My mnemonic is loaded from a resource bundle, and in the documentation setMnemonic(char) is limited to English only, and it is written that the user should call setMnemonic(int) instead.
    So what value should this int contain in order to display the non-English char which is loaded from the resource bundle?
    Thanks in advance,
    Hanoch

    It seems that this is an issue that has popped up in various forums before, here's one example from last year:
    http://forum.java.sun.com/thread.jspa?forumID=16&threadID=490722
    This entry has some suggestions for handling mnemonics in resource bundles, and they would take care of translated mnemonics - as long as the translated values are restricted to the values contained in the VK_XXX keycodes.
    And since those values are basically the English (ASCII) character set + a bunch of function keys, it doesn't solve the original problem - how to specify mnemonics that are not part of the English character set. The more I look at this I don't really understand the reason for making setMnemonic (char mnemonic) obsolete and making setMnemonic (int mnemonic) the default. If anything this has made the method more difficult to use.
    I also don't understand the statement in the API about setMnemonic (char mnemonic):
    "This method is only designed to handle character values which fall between 'a' and 'z' or 'A' and 'Z'."
    If the type is "char", why would the character values be restricted to values between 'a' and 'z' or 'A' and 'Z'? I understand the need for the value to be restricted to one keystroke (eliminating the possibility of using ideographic characters), but why make it impossible to use all the Latin-1 and Latin-2 characters, for instance? (And is that in fact the case?) It is established practice on other platforms to be able to use accented characters such as 'å', 'ä' and 'ö', for instance.
    And if changes were made, why not enable the simple way of specifying a mnemonic that other platforms have implemented, by adding an '&' in front of the character?
    Sorry if this disintegrated into a rant - didn't mean to... :-) I'm sure there must be good reasons for the changes, would love to understand them.
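    For completeness: on Java 7 and later, one workaround is to resolve the localized mnemonic character to an extended key code at runtime. A minimal sketch, where the bundle name "Labels" and the key "save.mnemonic" are hypothetical:
        import java.awt.event.KeyEvent;
        import java.util.ResourceBundle;
        import javax.swing.AbstractButton;

        // Resolve a mnemonic char from a resource bundle to an int key code.
        static void applyMnemonic(AbstractButton button) {
            ResourceBundle bundle = ResourceBundle.getBundle("Labels");
            char c = bundle.getString("save.mnemonic").charAt(0); // e.g. 'ä'
            // getExtendedKeyCodeForChar (Java 7+) handles characters
            // outside the VK_A..VK_Z range
            int keyCode = KeyEvent.getExtendedKeyCodeForChar(c);
            if (keyCode != KeyEvent.VK_UNDEFINED) {
                button.setMnemonic(keyCode);
            }
        }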

  • PDF generation for Non English Characters from ADF

    Hi
    We are using the piece of code below to generate a PDF from an ADF managed bean. It works fine. However, for non-English characters (e.g. Japanese, Vietnamese, Arabic) it puts '?' in the output.
    I found a few blogs:
    https://blogs.oracle.com/BIDeveloper/entry/non-english_characters_appears
    However, we are not using the BI Publisher product; we are using its APIs.
    Can anyone tell me where we need to set up fonts: within ADF, WebLogic, or the server?
    The input parameters are:
    a) XML data
    b) an InputStream, i.e. the RTF template
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import oracle.apps.xdo.XDOException;
    import oracle.apps.xdo.template.FOProcessor;
    import oracle.apps.xdo.template.RTFProcessor;

    public static byte[] genPdfRep(String pOutFileType, byte[] pXmlOut, InputStream pTemplate) {
        byte[] dataBytes = null;
        try {
            // Process the RTF template to convert it to XSL-FO format
            RTFProcessor rtfp = new RTFProcessor(pTemplate);
            ByteArrayOutputStream xslOutStream = new ByteArrayOutputStream();
            rtfp.setOutput(xslOutStream);
            rtfp.process();
            // Use the XSL template and the data from the VO to generate the
            // report, and return the report's bytes
            ByteArrayInputStream xslInStream = new ByteArrayInputStream(xslOutStream.toByteArray());
            FOProcessor processor = new FOProcessor();
            ByteArrayInputStream dataStream = new ByteArrayInputStream(pXmlOut);
            processor.setData(dataStream);
            processor.setTemplate(xslInStream);
            ByteArrayOutputStream pdfOutStream = new ByteArrayOutputStream();
            processor.setOutput(pdfOutStream);
            byte outFileTypeByte = FOProcessor.FORMAT_PDF;
            processor.setOutputFormat(outFileTypeByte); // or FOProcessor.FORMAT_HTML
            processor.generate();
            dataBytes = pdfOutStream.toByteArray();
        } catch (XDOException e) {
            e.printStackTrace();
        }
        return dataBytes;
    }
    Appreciate your help.
    Thanks,
    Abhijit

    Fonts are defined in the template you use to generate the PDF. Your application adds the data, and both are processed by the FOP processor. Now there are two possible causes of the '???':
    1. the data you send to the template already contains the '???', or
    2. the template can't digest the data (the special characters) and puts '???' in the PDF.
    Before going on, you have to find out which one is your problem. If the 2nd is your problem, you had better ask in a FOP forum, as you have to solve it by changing the template.
    Timo
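    One quick way to find out which case applies is to call the genPdfRep() helper above with a hand-written UTF-8 payload: if the resulting PDF still shows '?', the template/font side is at fault (case 2); if it renders, the real data is arriving corrupted (case 1). A sketch with made-up file names:
        import java.io.FileInputStream;
        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        // Hypothetical smoke test; report.rtf must reference a font that
        // covers the target script (Japanese, in this example).
        static void smokeTest() throws Exception {
            byte[] xml = "<ROWSET><ROW><NAME>日本語</NAME></ROW></ROWSET>".getBytes("UTF-8");
            InputStream template = new FileInputStream("report.rtf");
            Files.write(Paths.get("out.pdf"), genPdfRep("pdf", xml, template));
            template.close();
        }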

  • Word Replacements for Non-English Characters

    Hi
    Does anyone have an idea on implementing word replacements for non-English characters in TCA-DQM 11i?
    We are trying to identify, capture and cleanse common accented characters like à, â, ê.
    However, the default language for replacement is American English, so even if we add these to the existing lists it will not take any effect.
    Is creating a new word replacement list for every language the solution? Any patch recommendations?
    Thanks in advance


  • Waiting for the best solution to strange characters for non-Latin songs?

    I've tried to find a solution to the "strange characters" problem when importing songs with non-Latin characters (I'm a fan of Greek music)... I've changed the ID3 tags to 2.4 and nothing happened... Should I wait for the upgrade?

    On re-reading and following your directions, I really should have given you 10 pts. for that answer, Tom, and not 5.  I'd somehow misunderstood the glyph view, thinking it was only a matrix of a certain standard, limited set of symbols, like arrow dingbats and such. I see the way it works now, and it's quite practical.
    Thanks very much.
    Rob

  • How to mount USB Flash Drive

    Hello again Arch-Linux users,
    I always search Google very hard, and for months (off and on) I searched for how to mount a USB flash drive. Nothing ever worked, so I would give up, then try again months later. I don't want to give up anymore, and that is why I joined this forum.
    Here are some of the commands I found on the net. Some of them only re-mount the entire running Arch Linux system in /mnt/usbstick.
    sudo mount -o rw,noauto,async,user,umask=1000 /dev/sda1 /mnt/usbstick
    ... doesn't work, it only re-mounts the entire system
    /dev/sda15 /mnt/usbstick  vfat   user,noauto,unhide   0      0
    ... doesn't work... I found nothing inside /mnt/usbstick
    mount -t vfat -o rw,nosuid,nodev,quiet,shortname=mixed,uid=1001,gid=100,umask=077,iocharset=utf8 /dev/sda1 /mnt/usbstick
    ... doesn't work... here is the error I get:
    mount: /dev/sda1 already mounted or /mnt/usbstick busy
    mount: according to mtab, /dev/sda1 is mounted on /
    It's strange that most USB mount how-tos use the operating system's own partition (for example /dev/sda1) as the device for the USB drive, while others use /dev/cdrom for CD, /dev/dvd for DVD and /dev/fd0 for floppy.
    Could someone post the commands that will easily mount a USB device under Arch Linux?
    I'm using Arch-Linux core-64 (08-2009)
    Thanks in advance
    Last edited by sharris (2010-06-17 21:58:43)

    Thanks fsckd,
    I needed a rapid reply because I had been wasting too much time not getting anything done; I needed to get past this flash-drive thing. I can now have a secure backup in my pocket on USB to go. Arch Linux does it better, from what I've seen while just dd'ing on disk.
    ...lsusb
    Found it
    http://gd.tuwien.ac.at/linuxcommand.org … susb8.html
    Thanks demian,
    sdb is the location for my single hard-drive machine. I saw it before in my list above, but I changed it to sda1 because I knew no better. I had no clue it represents a second hard drive for Linux if one is not already present.
    This did it for me
    Thanks again
