Datapump export to /dev/null

Using the original export we can export the data to /dev/null like this:
exp full=y file=/dev/null log=/home/abc.log
but can this be done with Data Pump, something like this?
expdp DIRECTORY=dev TABLES=HDHILLON.T1 DUMPFILE=null
SQL> select * from dba_directories where DIRECTORY_NAME='DEV';
OWNER DIRECTORY_NAME DIRECTORY_PATH
SYS DEV /dev/
Any help will be appreciated. I want to export the database to /dev/null
--Harvey
Edited by: Harvey on Jul 8, 2011 10:47 AM
Edited by: Harvey on Jul 8, 2011 10:48 AM

Hi,
Your solution will only create a file called datafile.dmp, but I want to export the data to /dev/null.
I think you know what /dev/null is.
--Harvey
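For reference, here is the attempt spelled out end to end. This is only a sketch of what is being tried, not a confirmed working method: unlike exp, which streams the dump through the client process, expdp writes its dump files from server processes and normally refuses to write over an existing file, so pointing DUMPFILE at the null device may well fail. The user and table names follow the example above.
SQL> create or replace directory dev as '/dev/';
SQL> grant read, write on directory dev to hdhillon;
expdp hdhillon DIRECTORY=dev TABLES=HDHILLON.T1 DUMPFILE=null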

Similar Messages

  • /dev/null link destroyed. Need help

    I have Forms 6i applications deployed on the internet under Oracle 9iAS 1.0.2.2 on Solaris 2.9 (64-bit).
    I start 9iAS by running "apachectl startssl" as root, because that is the only way I am able to grab port 443. However, as soon as any output is redirected to /dev/null, the /dev/null link is broken and replaced by a regular /dev/null file writable only by root.
    My question is: what can I do to prevent that?

    Steve Walter (guest) wrote:
    : The "Deploying Applications on the WEB" docs are good for
    : release 2.1, but lack all detail in 6.0. Does anyone have a
    : guideline for building a forms cartridge in 6.0. I've followed
    : the 2.1 documentation, knowing it is a bit different, but I
    : still can't get it to work.
    : Thanks for any and all help.
    : Steve
    The error I receive is "Can not service this request, Please try again later".
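    For reference, a hedged sketch of restoring a clobbered /dev/null on Solaris, where it is normally a symlink into /devices. The link target below is the typical one; verify it on a healthy system before relying on it.
    # run as root
    rm /dev/null
    ln -s ../devices/pseudo/mm@0:null /dev/null
    ls -lL /dev/null    # should show a character device again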

  • [SOLVED] /dev/null: Permission Denied

    I'm just trying to run makechrootpkg; it was working fine until something happened recently. Now I get this:
    /usr/bin/makepkg: line 390: /dev/null: Permission denied
    ==> Retrieving Sources...
    /usr/bin/makepkg: line 461: /dev/null: Permission denied
    Can anyone make sense of it?
    EDIT: never mind - solved with help from #archlinux.
    Last edited by Stythys (2010-06-05 17:33:33)

    I know it's solved, but I had the same issue today and fixed it like this:
    Add the following to the "mem" section in /usr/lib/udev/rules.d/50-udev-default.rules:
    SUBSYSTEM=="mem", KERNEL=="null|zero|full|random|urandom", MODE="0666"
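    A hedged sketch of applying that fix without editing the packaged rules file: drop an override into /etc/udev/rules.d (the file name below is illustrative) and reload the rules.
    echo 'SUBSYSTEM=="mem", KERNEL=="null|zero|full|random|urandom", MODE="0666"' > /etc/udev/rules.d/99-dev-null-mode.rules
    udevadm control --reload-rules
    udevadm trigger --name-match=null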

  • /dev/null issues

    Hello All,
    I have had an issue since the last couple of Leopard updates that I can't seem to get a handle on, and I think I am out of my element here, so any advice is appreciated.
    I have run two scripts every 10 minutes for the past three years and have always had good results with them.
    In Cronnix I have them setup as "pathToScript" >/dev/null 2>&1
    One sends an email and the other runs a php cron. Both work well even now.
    The only issue is that I now get an email saying:
    /bin/sh: dev/null: No such file or directory
    When the scripts run there is an entry in Console:
    5/17/09 3:16:23 PM MyMac Ambiguous output redirect.
    Thinking a directory was missing, I listed /dev/ in Terminal and did receive the list of /dev contents... but no null entry in the list.
    I viewed the hard drive with invisibles shown and saw that /dev existed, but it appears as an alias when viewed that way; I don't know if that is normal or not. I cannot move or delete the alias due to not enough permissions (might be a good thing lol).
    I chowned /dev/null to root:wheel following Googled advice, and got no errors either.
    My problem is that I don't want these emails, and I wonder what is wrong with the way /dev/null is working, or not working, now, and how to fix it. Can I rebuild /dev/null or fix it otherwise? Is there a fix-all command for this phenomenon?
    Any advice is greatly appreciated.
    Jamy
    Message was edited by: Jamy

    Was I supposed to get something?
    It depends what your script outputs. The idea was to have bash handle the output redirection, to see if you could suppress the ambiguous output redirect message. If you included the >/dev/null 2>&1 part, then I would expect to see no output. My guess was that maybe CronniX is using tcsh. The following tests seem to support that theory:
    tcsh -c 'echo hello >/dev/null 2>&1'
    tcsh: Ambiguous output redirect.
    sh -c 'echo hello >/dev/null 2>&1'
    (no output)
    Cole
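    Building on that theory, a hedged workaround sketch: wrap the command so the redirection is parsed by sh rather than tcsh. The schedule and path below are illustrative.
    */10 * * * * /bin/sh -c '/path/to/script >/dev/null 2>&1'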

  • /dev/null changes result, why? ( How to know if svn repository exists )

    Hi,
    I have been writing scripts for a project over the last few weeks and want to put them under svn (Subversion). As a scripting exercise I want to be able to create repositories and project folders in one go and add changes to the repos' svn conf files.
    I want to check whether there is already an existing repository and whether the name=passw line has already been added to the passw db.
    I used:
    svnlook youngest /Volumes/Development/_svnRepo/SSHToolNew 2>&1 | grep "No such file or directory" -c
    to check for the error message returned by svnlook
    svnlook: Can't open file '/Volumes/Development/_svnRepo/SSHToolNew/format': No such file or directory
    and want to continue doing my stuff when the result is bigger than 0, and otherwise create the repo first with some options.
    With the following info
    /Volumes/Development/_svnRepo/SSHTools exists
    /Volumes/Development/_svnRepo/SSHToolNew doesn't exist
    using (typed in the shell)
    svnlook youngest /Volumes/Development/_svnRepo/SSHToolNew 2>&1 | grep "No such file or directory" -c
    1
    echo "rv:" $?
    1
    gives 1 as the result in the shell and also when using echo "rv:" $?,
    however
    svnlook youngest /Volumes/Development/_svnRepo/SSHToolNew 2>&1 | grep "No such file or directory" -c &> /dev/null
    echo "rv:" $?
    0
    gives 0 as the result.
    And when the folder does exist it returns the opposite!?
    Just to understand what is happening: why the difference with &> /dev/null?
    What am I missing here?

    Hi, thank you for your reply.
    I want the result to go into a variable so that I can compare it in an 'if' statement.
    If I leave the & out then I get the same result:
    bash-3.2$ svnlook youngest /Volumes/Development/_svnRepo/SSHToolNew 2>&1 | grep "No such file or directory" -c > /dev/null
    bash-3.2$ echo "rv: "$?
    rv: 0
    It should be one (1) because the result count is one. Or am I wrong?
    So basically, why is it changing the result? Or is $? not the result of my line?
    What is the best way to check whether a command like the one I use has failed, for use in an if statement? Can I move the result directly into a variable, something like this?
    bash-3.2$ svnlook youngest /Volumes/Development/_svnRepo/SSHToolNew 2>&1 | grep "No such file or directory" -c | MyVar=$? ( or whatever needed )
    Thanks again
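    A hedged sketch of doing that with command substitution (paths follow the examples above). Note that $? holds the exit status of the last command in the pipeline (grep), not the count that grep -c prints, so the count has to be captured explicitly:
    repo=/Volumes/Development/_svnRepo/SSHToolNew
    missing=$(svnlook youngest "$repo" 2>&1 | grep -c "No such file or directory")
    if [ "$missing" -gt 0 ]; then
        echo "repository does not exist yet; create it first"
        # svnadmin create "$repo"    # plus whatever options you need
    else
        echo "repository already exists"
    fi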

  • Parallel Sessions on Datapump Export  (10.2.0.4)

    Hi,
    We are using Oracle 10.2.0.4 on Solaris and I'm exporting a table using Datapump export.
    The export includes a query which selects from three tables based on relevant conditions. The parfile specifies 'parallel=4' and the dumpfile setting uses %U so that it creates an appropriate number of files.
    When I run the export using my own (DBA) account (i.e. expdp mr_dba parfile=exp_xyz.par) the export completes in 15 minutes and creates four dumpfiles. When I run the export as the schema owner using the exact same parfile (i.e. expdp schema_own parfile=exp_xyz.par) the export takes over two hours and only creates two dumpfiles.
    Could anyone suggest things that I could look at to find out why there is such a difference in the elapsed time? The exports have been run a number of times as both users with the box having similar loads and the results are fairly consistent i.e. 15 mins for my user and two hours for the schema owner.
    The schema owner does have a different profile and a different Resource Consumer Group but both my profile and the schema owners profile have 'sessions_per_user' set to Unlimited. In Resource Manager the Parallel_Degree_Limit_P1 value is set to 16 for my consumer group and is not set at all for the schema owners consumer group.
    I have observed that when exporting under the schema owner the DBA_DATAPUMP_SESSIONS showed a DBMS_DATAPUMP session, a MASTER session and two WORKER sessions. When I run it under my user id it shows these four sessions but also shows three EXTERNAL TABLE sessions. This suggests that it is using a different approach but I'm not sure what would cause this.
    Any advice would be very welcome. I haven't posted any specific information about the parameter file or the tables as I'm not sure what info people might require - so if you need specific details of anything please let me know.
    Many thanks.

    Sorry for the delay in responding - it took a couple of days for our security people to give me the go-ahead to make the changes (red tape is ridiculous here!)
    The tweak to the consumer groups in Resource Manager didn't seem to make much difference and it continued to use the same plan (but it was worth trying it). I then granted the EXP_FULL_DATABASE role and it did indeed result in much better performance (and it created the four dumpfiles instead of two).
    I'm still not sure why it makes such a difference - the export is only exporting a table from the user's own schema, but it does query a table in someone else's schema to identify appropriate candidates. You would assume that provided it can access all the necessary information it would run at the optimum speed, but obviously the EXP_FULL_DATABASE role makes a considerable difference.
    Thanks again for both replies, much appreciated. Well done Dean for identifying the solution - great call.
    Edited by: user2480656 on 21-Aug-2012 08:35
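    A hedged sketch of the fix described above; the schema owner and parfile names follow the thread and are illustrative. While the export runs, dba_datapump_sessions shows whether the extra external-table sessions now appear.
    SQL> grant exp_full_database to schema_own;
    expdp schema_own parfile=exp_xyz.par
    SQL> select * from dba_datapump_sessions;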

  • Which background process involves in datapump export/import?

    Hi guys,
    Could anyone please tell me which background processes are involved in Data Pump export and import activity? Any information, please.
    /mR

    Data Pump export and import are done by foreground server processes (master and workers), not background processes.
    http://www.acs.ilstu.edu/docs/Oracle/server.101/b10825/dp_overview.htm#sthref22
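    A hedged way to see those foreground sessions for yourself while a Data Pump job is running: the master (DM00) and worker (DWnn) processes show up here and in v$session, not in the fixed background process list.
    SQL> select * from dba_datapump_sessions;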

  • Datapump Export stops at "Estimate in progress...."

    Hi,
    I am facing an issue while doing a schema-level Data Pump export in Oracle 10g. The export for one particular schema stops at "Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA", and moreover it only spawns one worker (DW01) irrespective of the PARALLEL parameter value. For other schemas the export works fine, and even a table-level export of the problematic schema works.
    I am clueless, because the alert log does not show anything. Can anyone please advise...
    Here is how my Parfile looks like:
    userid=id/password
    directory=impdir
    parallel=2
    schemas=prod11sep12
    dumpfile=expC2P_20120925_%U.dmp
    logfile=expC2P_20120925.log
    job_name=expC2P_20120925
    tail -f expC2P_20120925.log
    bash-3.00$ expdp parfile=expC2P.par ESTIMATE=STATISTICS
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 26 September, 2012 16:44:30
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."EXPC2P_20120925": parfile=expC2P.par ESTIMATE=STATISTICS
    Estimate in progress using STATISTICS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Alert log:
    kupprdp: master process DM00 started with pid=38, OS id=15156
    to execute - SYS.KUPM$MCP.MAIN('EXPC2P_20120925', 'SYSTEM', 'KUPC$C_1_20120926164430', 'KUPC$S_1_20120926164430', 0);
    kupprdp: worker process DW01 started with worker id=1, pid=46, OS id=15201
    to execute - SYS.KUPW$WORKER.MAIN('EXPC2P_20120925', 'SYSTEM');
    Thanks in Advance...

    Please enable trace as per this MOS doc to see if additional debug information can be gathered:
    Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump [ID 286496.1]
    HTH
    Srini
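    A hedged sketch of what that MOS note describes: add the TRACE parameter to the same run. The value 480300 is the commonly cited setting for tracing the master and worker processes; check the note for the exact value you need.
    expdp parfile=expC2P.par ESTIMATE=STATISTICS TRACE=480300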

  • Getting Datapump Export Dump file to the local machine

    I apologize to everyone as this is a duplicate post.
    Re: Getting Datapump Export Dump file to the local machine
    My initial thread (started yesterday) was in 'Database General' and didn't get much response. Where do I post questions on the EXPORT/IMPORT utilities?
    Anyway, here is my problem:
    I want to take an export dump of the itemrep schema in the orcl database (on a remote machine). I have an Oracle server (10g Release 2) running on my local Windows machine. I have created a user john with the necessary EXPORT/IMPORT privileges in my local db. Then I created a directory object, i.e. a folder named datapump on my local hard drive, and granted READ, WRITE privileges to john.
    So john, who is a user in my local machine's Oracle db, is going to run the expdp utility.
    expdp john/jhendrix@my_local_db_alias SCHEMAS=itemrep directory=datapump logfile=itemrepexp.log
    The above command will fail because it will look for the itemrep schema inside my local db, not the remote db where itemrep is actually located. And you can't qualify the schema name with its db in the SCHEMAS parameter (like SCHEMAS=itemrep@orcl).
    Can anyone provide me a solution for this?

    I think you can initiate the Data Pump export utility from your client machine to export a schema in a remote database, but upon execution Oracle looks for the directory in the remote database and not on your local machine.
    You're invoking expdp from a client (the local DB) to export data from a remote DB.
    So with this method you can create the dump files only on the remote server and not on the local machine.
    You can perform a direct import instead of an export using the NETWORK_LINK option:
    Create a DB link from your local DB to the remote DB and verify the connection.
    Then initiate impdp from your local machine's DB using the parameter network_link=<db_link of the remote DB> to import the schema.
    The advantage of this option is that it eliminates dump file creation on the server side.
    There are no dump files during the import process; the data is imported directly into the target schema.
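    A hedged sketch of that NETWORK_LINK approach, using the names from the question; the link name and TNS alias are illustrative, and importing into a different local schema would additionally need REMAP_SCHEMA and suitable privileges.
    SQL> create database link orcl_link connect to itemrep identified by <password> using 'orcl';
    impdp john/jhendrix@my_local_db_alias SCHEMAS=itemrep NETWORK_LINK=orcl_link DIRECTORY=datapump LOGFILE=itemrep_net_imp.log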

  • Problem with /dev/null

    Hi,
    I faced this problem twice and I do not know how to reproduce it.
    There are 2 log files ERRORFILE and EVENTFILE.
    I have a script, shown below, which takes a backup of the log files every hour. Processes are continuously writing to the log files. Occasionally (twice so far), I see that the inode of the ERRORFILE and EVENTFILE gets changed; because of this no processes can write to the log files and they need to be restarted to resume logging. Is there something wrong with the command "cp /dev/null"?
    TIME=`date '+%Y%m%d_%H%M'`
    cp $ERRORFILE $ERRORFILE:$TIME:BACKUP
    cp $EVENTFILE $EVENTFILE:$TIME:BACKUP
    /usr/bin/gzip $ERRORFILE:$TIME:BACKUP
    /usr/bin/gzip $EVENTFILE:$TIME:BACKUP
    cp /dev/null $ERRORFILE
    cp /dev/null $EVENTFILE

    Hi,
    Just to make sure we do not divert the topic to "/dev/null" getting corrupted:
    The actual concern is that the inode of the log files got changed, even though there was no "mv" or "rm" operation performed on them.
    The problem for which I need your assistance and expertise is to know whether in any scenario a "cp" will change the inode number, for example when we are copying from a character device file (/dev/null). I have pasted the snippet of the script below. Could there be some erroneous condition which could lead to such a problem, for example gzip returning an error or something like that?
    TIME=`date '+%Y%m%d_%H%M'`
    cp $ERRORFILE $ERRORFILE:$TIME:BACKUP
    cp $EVENTFILE $EVENTFILE:$TIME:BACKUP
    /usr/bin/gzip $ERRORFILE:$TIME:BACKUP
    /usr/bin/gzip $EVENTFILE:$TIME:BACKUP
    cp /dev/null $ERRORFILE
    cp /dev/null $EVENTFILE
    Thanks
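    For comparison, a hedged variant of the same snippet that truncates the live logs with shell redirection instead of cp; redirection opens the existing file with O_TRUNC, so it cannot replace the inode.
    TIME=`date '+%Y%m%d_%H%M'`
    cp "$ERRORFILE" "$ERRORFILE:$TIME:BACKUP" && /usr/bin/gzip "$ERRORFILE:$TIME:BACKUP"
    cp "$EVENTFILE" "$EVENTFILE:$TIME:BACKUP" && /usr/bin/gzip "$EVENTFILE:$TIME:BACKUP"
    : > "$ERRORFILE"
    : > "$EVENTFILE"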

  • Attach datapump export job

    Hi Guys,
    I am using Oracle 10g Release 2 on Solaris.
    I have database that is 1.5 TB and I am doing datapump export of this database of which datapump estimate is 500GB.
    Now, after about 300GB had been exported, the server crashed.
    Will I be able to attach to the Data Pump export job and continue from that point after database startup?
    NB I am using the parameter flashback_time for data consistency.
    Please help!
    Thanks.

    Thanks for the reply...
    I tried to attach the job after the database startup and here is what I get:
    expdp \"/ as sysdba\" attach=SYS_EXPORT_FULL_01
    Export: Release 10.2.0.2.0 - 64bit Production on Saturday, 30 July, 2011 17:50:31
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORA-39002: invalid operation
    ORA-39068: invalid master table data in row with PROCESS_ORDER=-59
    ORA-39150: bad flashback time
    ORA-00907: missing right parenthesis
    I guess I just have to restart the job, as I cannot attach to that one...
    Thanks...
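    For reference, a hedged sketch of the attach-and-decide flow when the master table is intact (the job name follows the output above):
    expdp \"/ as sysdba\" ATTACH=SYS_EXPORT_FULL_01
    At the Export> prompt, START_JOB resumes a stopped job, and KILL_JOB drops the job and its master table so a fresh export can be started.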

  • Can I /dev/null my access log while running?

    I have a Directory Server 5.2 system running on Red Hat Linux.
    My access log is huge - 1.7 GB in 2 days, using default logging.
    Can I manage this by copying the access log to another directory, then
    cat /dev/null > /sunusr/slapd-xxxx/access
    without negative effects?
    If I can, this will save me much grief.
    JYard
    UCLA

    You can cat /dev/null, but you should not need to. You should be able to configure the size, the rotation time, and the max file system utilization. You can actually have logs that do not exceed 10MB or whatever you want. Go to the Configuration/Logs tab in the console and set it any way you want.
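    A hedged command-line alternative to the console, using the standard access-log attributes (verify the attribute names against your 5.2 documentation; the value is illustrative):
    printf 'dn: cn=config\nchangetype: modify\nreplace: nsslapd-accesslog-maxlogsize\nnsslapd-accesslog-maxlogsize: 10\n' | ldapmodify -D "cn=Directory Manager" -w password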

  • S10u 7 - smpatch in BE fails - patchadd: /dev/null: cannot create

    Solaris 10 5/09 s10x_u7wos_08 X86 with ZFS Root + Zones
    smpatch on a non-active BE with zones fails:
    # smpatch update -b s10x_u7wos_08p1
    Checking the currently running boot environment ...
    Currently running boot environment name is [s10x_u7wos_08p].
    Checking the destination boot environment [s10x_u7wos_08p1] ...
    Installing patches from /var/sadm/spool...
    Copying the currently running BE into inactive BE [s10x_u7wos_08p1] ...
    (This process will take some time, please wait a moment.)
    Installing update(s) onto the inactive boot environment [s10x_u7wos_08p1] ...
    Failed to install patch 119901-07.
    Utility used to install the update failed with exit code 1.
    System has findroot enabled GRUB
    No entry for BE <s10x_u7wos_08p1> in GRUB menu
    Validating the contents of the media </var/sadm/spool/119901-07.jar.dir>.
    The media contains 1 software patches that can be added.
    All 1 patches will be added because you did not specify any specific patches to add.
    Mounting the BE <s10x_u7wos_08p1>.
    Adding patches to the BE <s10x_u7wos_08p1>.
    Validating patches...
    Loading patches installed on the system... Done!
    Loading patches requested to install. Done!
    Checking patches that you specified for installation. Done!
    Approved patches will be installed in this order: 119901-07
    Preparing checklist for non-global zone check...
    Checking non-global zones...
    This patch passes the non-global zone check. 119901-07
    Summary for zones:
    Zone master-template
    Rejected patches: None.
    Patches that passed the dependency check: 119901-07
    Zone master-template-clone
    Rejected patches: None.
    Patches that passed the dependency check: 119901-07
    Patching global zone
    Adding patches... Checking installed patches... Verifying sufficient filesystem capacity (dry run method)... Installing patch packages...
    Patch 119901-07 has been successfully installed.
    See /a/var/sadm/patch/119901-07/log for details
    Patch packages installed: SUNWPython SUNWTiff SUNWTiff-devel SUNWgnome-img-viewer-share
    Done!
    Patching non-global zones...
    Patching zone master-template
    Adding patches... Checking installed patches... Patchadd is terminating. Done!
    Unmounting the BE <s10x_u7wos_08p1>.
    The patch add to the BE <s10x_u7wos_08p1> failed (with result code <8>).
    /usr/lib/patch/patchadd[4]: /dev/null: cannot create
    /usr/lib/patch/patchadd[6]: /dev/null: cannot create
    sort: insufficient available file descriptors
    Patch 119901-07 failed in non-global zone SUNWlu-master-template.
    Patch 119901-07 wasn't installed in zones: master-template-clone
    Failed to install patch 119901-07.
    ALERT: Failed to install patch 119901-07.
    Any ideas?
    TIA.
    /phs
    Peter H. Seybold

    Post your question in the Developer Forums:
    http://discussions.apple.com/category.jspa?categoryID=164
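    Coming back to the patchadd error itself, a hedged diagnostic sketch: mount the inactive BE with Live Upgrade and check that /dev/null exists inside it and inside the zone roots (the mountpoint /a matches the patch log above; the zone path is illustrative).
    lumount s10x_u7wos_08p1 /a
    ls -lL /a/dev/null /a/zones/*/root/dev/null
    luumount s10x_u7wos_08p1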

  • Advantage of datapump export and import over original export and import

    Hi,
    Please let me know the advantages of Data Pump export (expdp) and import (impdp) over the original export (exp) and import (imp).

    Hello,
    > let me know the advantages of Data Pump export (expdp) and import (impdp) over the original export (exp) and import (imp).
    There are many advantages to using Data Pump.
    For instance, with the INCLUDE / EXCLUDE parameters you can filter exactly which object and/or object type you intend to export or import, which is not easy with the original export/import (except for tables, indexes, constraints, ...).
    You can import directly over a NETWORK_LINK without using a dump file.
    You have many interesting features such as COMPRESSION, FLASHBACK_SCN / FLASHBACK_TIME, ... .
    You can use the PL/SQL API to perform your export/import rather than the command-line interface.
    Moreover, Data Pump is much more optimized than the original export/import and uses direct path or external tables, ... and what to say about the REMAP_% parameters, which let you rename datafiles, schemas, tablespaces, ...
    http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
    http://www.oracle-base.com/articles/11g/DataPumpEnhancements_11gR1.php
    Hope this helps.
    Best regards,
    Jean-Valentin
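    A hedged sketch of two of the features mentioned above, written parfile-style to avoid shell quoting issues; the directory, file, and schema names are illustrative.
    exp_scott.par:
    directory=dp_dir
    dumpfile=scott_tabs.dmp
    logfile=exp_scott.log
    include=TABLE:"IN ('EMP','DEPT')"
    expdp scott parfile=exp_scott.par
    impdp system directory=dp_dir dumpfile=scott_tabs.dmp remap_schema=scott:scott_copy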

  • [solved] systemd times out waiting for dev-null.device on encrypted fs

    I have a relatively new install of Arch on my laptop. The relevant sections of the drive setup are a luks encrypted root device (with associated unencrypted /boot), and a large ntfs device encrypted with truecrypt. That contains my home directory, which is shared with the Windows 8 dual-boot on the machine.
    I have all this mounted on boot, so my typical usage is to enter my password for the encrypted root, followed by the password for the truecrypt-encrypted data drive, then normal login.
    This has been working fine for a couple of weeks. This afternoon I rebooted my machine from Windows to Linux, at which point the secondary encrypted drive failed to mount. The root device mounts fine.
    On further examination, in the form of journalctl -xb, I'm getting the following errors after mounting the encrypted root device:
    Feb 12 21:29:54 kafka systemd[1]: Job dev-null.device/start timed out.
    Feb 12 21:29:54 kafka systemd[1]: Timed out waiting for device dev-null.device.
    -- Subject: Unit dev-null.device has failed
    -- Defined-By: systemd
    -- Support: http://lists.freedesktop.org/mailman/li … temd-devel
    -- Documentation: http://www.freedesktop.org/wiki/Softwar … e9d022f03d
    -- Unit dev-null.device has failed.
    -- The result is timeout.
    Feb 12 21:29:54 kafka systemd[1]: Dependency failed for Cryptography Setup for cryptdata.
    -- Subject: Unit systemd-cryptsetup@cryptdata.service has failed
    -- Defined-By: systemd
    -- Support: http://lists.freedesktop.org/mailman/li … temd-devel
    -- Documentation: http://www.freedesktop.org/wiki/Softwar … e9d022f03d
    -- Unit systemd-cryptsetup@cryptdata.service has failed.
    As my root home directory is on the root filesystem, I can still log in as root. If I try to mount or unmount the truecrypt device (located at /data) the command hangs. After removing "auto,x-systemd.automount" from that device's entry in /etc/fstab and rebooting, I have a significant delay at boot, but I can then mount the /data device as normal.
    I can't find anything related to this either here or on Google. Any ideas?
    Last edited by tealeaf (2014-02-13 12:01:55)
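    A hedged diagnostic sketch for the dev-null.device timeout: see what is pulling the device unit in and whether anything in crypttab or fstab references /dev/null (the unit name is reconstructed from the journal above).
    grep -n '/dev/null' /etc/crypttab /etc/fstab
    systemctl list-dependencies --reverse dev-null.device
    journalctl -b -u 'systemd-cryptsetup@cryptdata.service'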

    WonderWoofy wrote: A bit OT, but I just want to mention that having your $HOME on NTFS is probably not the best idea either. It may work, but because it is not a POSIX-compliant filesystem, there is a good chance you might run into some issues.
    Thanks for the warning. Sadly, as there are applications I need for work that only run in Windows, I need to dual boot this machine with Windows 8.1. This is the best option I can find for sharing my home directory, which is also a necessity due to the amount of data I have to share between the systems. I would much rather trust to ntfs-3g-ar and its UserMapping (and all the fiddling with ACLs that I had to do) than to the Windows ext2 drivers I can find. They all seem to be several years out of date. Linux is much better at talking NTFS than Windows is at talking EXT. (To be honest, I'm actually quite impressed with Windows 8 since the upgrade to 8.1. As a long term Arch user and a fan of tiling window managers it's interesting to see Windows moving in the right direction. )
    With 'permissions' in the /etc/fstab and the .NTFS-3G/UserMapping file in place it works almost seamlessly. (It took a lot of tinkering with ACLs in Windows and Linux, but it's working very well now. One tip that I'll write here in case it's of use is that you want the last line of the UserMapping file to be a 'generic' mapping. When I didn't have that there were very strange things going on.)
    The only two minor problems I have now are:
    1) A few applications don't like FUSE filesystems. Steam worked for a while and then broke; reinstalling it failed at every stage. When I checked, it seems that FUSE is a known problem for Steam. (I don't think it's restricted to NTFS.) My response to that was to create /home/.local/$HOME on my root (ext4) filesystem and symlink out to it for troublesome applications.
    2) There are a few characters for filenames that Windows doesn't like, making those files inaccessible in Windows. (Colons are the major culprit.) They're usually quite easily renamed. (The exception being my .maildir folder, which I have had to duplicate natively in cygwin.)
    Neither of these are anything more than minor niggles. I appreciate the warning, though.
    Having said that, if you have an alternative that lets me share a truecrypt-encrypted drive between Linux and Windows that is better than the NTFS-3G approach, I'd love to hear it for future reference.
