Using tar to extract the Hadoop stable release

I keep getting the following error regardless of which version of the stable Hadoop archive I download. Has anyone faced this issue before? I am on the latest, fully updated Mavericks OS.
tar: Unrecognized archive format
Any help is appreciated.

Thanks for the quick reply. You were right. I compared the byte counts and it looks like my "curl" did not download the entire file. I downloaded via the web browser instead and was able to "untar" with no issues.
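A minimal sketch of how to verify the download before untarring (the mirror URL and file name are illustrative; Apache mirrors often redirect, so curl needs -L or it can save an HTML error page instead of the tarball):

    # follow redirects and keep the remote file name
    curl -L -O https://archive.apache.org/dist/hadoop/common/stable/hadoop-2.2.0.tar.gz
    # should report gzip compressed data, not HTML or ASCII text
    file hadoop-2.2.0.tar.gz
    # compare size and checksum against the values published alongside the download
    ls -l hadoop-2.2.0.tar.gz
    shasum hadoop-2.2.0.tar.gz
    # only then extract
    tar -xzf hadoop-2.2.0.tar.gz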

Similar Messages

  • When I use SSIS to extract from an OLAP database, a random error occurs: Error Code = 0x80040E05

    I have tried everything on this!
    When I use SSIS to extract data from SSAS (that is, via an MDX query), a random error occurs.
    Hope someone can understand my poor English...
    The error info is shown below.
    Code Snippet
    Error: 0xC0202009 at Data Flow Task - For Individual User Tech Points, OLE DB Source 1 1 [31]: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80040E05.
    An OLE DB record is available.  Source: "Microsoft OLE DB Provider for Analysis Services 2005"  Hresult: 0x00000001  Description: "Error Code = 0x80040E05, External Code = 0x00000000:.".
    Error: 0xC004701A at Data Flow Task - For Individual User Tech Points, DTS.Pipeline: component "OLE DB Source 1 1" (31) failed the pre-execute phase and returned error code 0xC0202009.

    I have had the same error on SQL2008 and now on SQL2012 SSIS, but have been able to eliminate / work around it.
    We have a Loop Container in our Control Flow that contains a data-flow task with an MDX source. The MDX query for the data-flow source is dynamically built (via an expression) on each iteration of the Loop Container (however, it always returns the "same shaped" results - only the filters in the WHERE clause are different).
    We've found the error to be somewhat intermittent - sometimes the package will complete successfully, other times it will fail with the 0x80040E05 error at varying iterations through the container loop.
    To alleviate the problem we set up the SQL Agent job step for this package to retry on failure, for up to 5 retries - not an ideal workaround, but it helped to improve the success rate of the job.
    We have no idea why this error occurs or what causes it; however, it appears to be timing-related in some way, and I have only seen the issue when using an SSAS OLE DB data source with a dynamically generated MDX query. I have managed to virtually eliminate the error with a (not ideal) workaround in the SSIS package - no idea why it works/helps (hopefully Microsoft will be able to work it out and resolve the issue, as it's been plaguing us since SQL2008 and is still here in SQL2012 SP1...).
    Workaround for MDX causing the 0x80040E05 error:
    Within our Loop Container we have added a Script Task with an OnSuccess precedence constraint from the data-flow task that contains the dynamically generated MDX source query. The Script Task simply introduces a WAIT of about 5 seconds immediately after the data-flow task completes, before allowing SSIS to continue with the next iteration (e.g. System.Threading.Thread.Sleep(5000)).
    With this delay in place we have had much more stable SSIS package executions - don't know why, but that's what we have observed. Also note that when we migrated to SQL2012 SSIS packages the 0x80040E05 error returned; however, we were able to eliminate it once more by increasing the WAIT time to 10 seconds in this Script Task.
    Now, waiting for 10 seconds is not an ideal solution / workaround to this problem - particularly when it is inside a Loop Container (in our case it has added nearly 30 minutes of "WAIT time" to the package execution duration) - however this workaround is better than having the package fail 80%+ of the time...
    regards,
    Piquet

  • Use LINQ to extract the data from a file...

    Hi,
    I have created a Subprocedure CreateEventList
    which populates an EventsComboBox
    with a current day's events (if any).
    I need to store the events in a generic List communityEvents
    which is a collection of
    communityEvent
    objects. This List needs to be created and assigned to the instance variable
    communityEvents.
    This method should call helper method ExtractData
    which will use LINQ to extract the data from my file.
    The specified day is the date selected on the calendar control. This method will be called from the CreateEventList.
    This method should clear all data from List communityEvents.  
    A LINQ
    query that creates CommunityEvent
    objects should select the events scheduled for selected
    day from the file. The selected events should be added to List
    communityEvents.
    See code below.
    Thanks,
    public class CommunityEvent
    {
        private int day;
        public int Day
        {
            get { return day; }
            set { day = value; }
        }

        private string time;
        public string Time
        {
            get { return time; }
            set { time = value; }
        }

        private decimal price;
        public decimal Price
        {
            get { return price; }
            set { price = value; }
        }

        private string name;
        public string Name
        {
            get { return name; }
            set { name = value; }
        }

        private string description;
        public string Description
        {
            get { return description; }
            set { description = value; }
        }
    }

    private void eventComboBox_SelectedIndexChanged(object sender, EventArgs e)
    {
        if (eventComboBox.SelectedIndex == 0)
            descriptionTextBox.Text = "2.30PM. Price 12.50. Take part in creating various types of Arts & Crafts at this fair.";
        if (eventComboBox.SelectedIndex == 1)
            descriptionTextBox.Text = "4.30PM. Price 00.00. Take part in cleaning the local Park.";
        if (eventComboBox.SelectedIndex == 2)
            descriptionTextBox.Text = "1.30PM. Price 10.00. Take part in selling goods.";
        if (eventComboBox.SelectedIndex == 3)
            descriptionTextBox.Text = "12.30PM. Price 10.00. Take part in a game of rounders in the local Park.";
        if (eventComboBox.SelectedIndex == 4)
            descriptionTextBox.Text = "11.30PM. Price 15.00. Take part in an Egg & Spoon Race in the local Park.";
        if (eventComboBox.SelectedIndex == 5)
            descriptionTextBox.Text = "No Events today.";
    }

    Any help here would be great.
    Look, you have to make the file an XML file type - Somefilename.xml.
    http://www.xmlfiles.com/xml/xml_intro.asp
    You can use XML Notepad to create the XML and save the file.
    http://support.microsoft.com/kb/296560
    Or you can just use standard Notepad, if you know the basics of how to create XML, which is just text data, created and saved in a text file, that represents data.
    http://www.codeproject.com/Tips/522456/Reading-XML-using-LINQ
    You can do a (select new CommunityEvent) just like the example does a (select new FileToWatch) and load the XML data into the CommunityEvent properties.
    So you need to learn how to make a manual XML text file with XML data in it, and you need to learn how to use LINQ to read the XML. LINQ is not going to work against some flat text file you created. There are plenty of examples out on Bing and Google on how to use LINQ-to-XML.
    http://en.wikipedia.org/wiki/Language_Integrated_Query
    <copied>
    LINQ extends the language by the addition of query expressions, which are akin to SQL statements, and can be used to conveniently extract and process data from arrays, enumerable classes, XML documents, relational databases, and third-party data sources. Other uses, which utilize query expressions as a general framework for readably composing arbitrary computations, include the construction of event handlers or monadic parsers.
    <end>

  • How can I use Automator to extract specific data from a text file?

    I have several hundred text files that contain a bunch of information. I only need six values from each file, and ideally I need them as columns in an Excel file.
    How can I use Automator to extract specific data from the text files and either create a new text file or an Excel file with the info? I have looked all over but can't find a solution. If anyone could please help, I would be eternally grateful! If there is another, better solution than Automator, please let me know!
    Example of file contents:
    Link Time = DD/MMM/YYYY
    Random Text
    161 179 bytes of CODE    memory (+ 68 range fill )
    16 789 bytes of DATA    memory (+ 59 absolute )
    1 875 bytes of XDATA   memory (+ 1 855 absolute )
    90 783 bytes of FARCODE memory
    What I would like to have in the final Excel file (one row per input file):
    Column1      Column2     Column3   Column4   Column5   Column6
    MM/DD/YYYY   filename1   161179    16789     1875      90783
    MM/DD/YYYY   filename2   xxxxxx    xxxxx     xxxx      xxxxx
    MM/DD/YYYY   filename3   xxxxxx    xxxxx     xxxx      xxxxx
    Is this possible? I can't imagine having to go through each and every file one by one. Please help!!!

    Hello
    You may try the following AppleScript script. It will ask you to choose a root folder where to start searching for *.map files and then create a CSV file named "out.csv" on desktop which you may import to Excel.
    set f to (choose folder with prompt "Choose the root folder to start searching")'s POSIX path
    if f ends with "/" then set f to f's text 1 thru -2
    do shell script "/usr/bin/perl -CSDA -w <<'EOF' - " & f's quoted form & " > ~/Desktop/out.csv
    use strict;
    use open IN => ':crlf';
    chdir $ARGV[0] or die qq($!);
    local $/ = qq(\\0);
    my @ff = map {chomp; $_} qx(find . -type f -iname '*.map' -print0);
    local $/ = qq(\\n);
    #     CSV spec
    #     - record separator is CRLF
    #     - field separator is comma
    #     - every field is quoted
    #     - text encoding is UTF-8
    local $\\ = qq(\\015\\012);    # CRLF
    local $, = qq(,);            # COMMA
    # print column header row
    my @dd = ('column 1', 'column 2', 'column 3', 'column 4', 'column 5', 'column 6');
    print map { s/\"/\"\"/og; qq(\").$_.qq(\"); } @dd;
    # print data row per each file
    while (@ff) {
        my $f = shift @ff;    # file path
        if ( ! open(IN, '<', $f) ) {
            warn qq(Failed to open $f: $!);
            next;
        }
        $f =~ s%^.*/%%og;    # file name
        @dd = ('', $f, '', '', '', '');
        while (<IN>) {
            chomp;
            $dd[0] = \"$2/$1/$3\" if m%Link Time\\s+=\\s+([0-9]{2})/([0-9]{2})/([0-9]{4})%o;
            ($dd[2] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of CODE\\s/o;
            ($dd[3] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of DATA\\s/o;
            ($dd[4] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of XDATA\\s/o;
            ($dd[5] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of FARCODE\\s/o;
            last unless grep { /^$/ } @dd;
        }
        close IN;
        print map { s/\"/\"\"/og; qq(\").$_.qq(\"); } @dd;
    }
    EOF"
    Hope this may help,
    H
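    A note on running it: the script can be pasted into Script Editor, or (hypothetically, if saved to a file named extract_map.applescript) run from Terminal:

        # run the saved script; it writes ~/Desktop/out.csv for import into Excel
        osascript extract_map.applescript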

  • Reg. can we display an ALV grid using field groups (extracts)

    Hi,
    Can we display an ALV grid using field groups (extracts)? Is this possible? I have to develop a blocked ALV.
    Thanks
    Yerukala Setty

    No, you will need the data in an internal table to use ALV.
    Cheers
    Allan

  • Using XSLT to extract the value of an XML node with a namespace

    I have an XML source document here.
    <?xml version="1.0" encoding="utf-8" ?>
    <rss version="2.0" xmlns:job="http://www.pageuppeople.com">
      <channel>
        <title>SMH Jobs</title>
        <link>internalrecruitment.smhgroup.com.au/jobsrss.ashx?stp=di</link>
        <description>A listing of jobs available here</description>
        <item>
          <title>eCommerce Optimisation Advisor</title>
          <description>A new and exciting opportunity exists for an experienced eCommerce Advisor to join</description>
      <job:location PUReferenceID="3711">Sydney - Inner Suburbs &amp; CBD</job:location>
        </item>
      </channel>
    </rss>
    I want to use XSLT to extract the value of the namespaced XML node <job:location>; the returned value should be the string 'Sydney - Inner Suburbs & CBD'. I tried the XSL snippets below, but each failed with an error or returned nothing.
    <xsl:value-of select="job:location" disable-output-escaping="yes"/>
    <xsl:value-of select="job/location" disable-output-escaping="yes"/>
    <xsl:value-of select="job\location" disable-output-escaping="yes"/>
    <xsl:value-of select="location" disable-output-escaping="yes"/>
    This might be an easy question for you, but I would appreciate if anyone can help.

    Hi Suncorp IT Learner,
    We need to tell the XSLT processor that some elements are in another namespace. Copy the xmlns declarations for the prefixes you need into the stylesheet's root element. Then use xsl:value-of, for example:
    <xsl:value-of select="job:location/@PUReferenceID"/>
    (or select="job:location" to get the element's text value, which is what you want here).
    Chriztian gives a good explanation in the following thread:
    http://our.umbraco.org/forum/developers/xslt/33353-XSLT-reading-XML-attribute-value
    Thanks,
    Qiao Wei
    TechNet Community Support
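    For concreteness, a minimal sketch of the whole round trip (file names are illustrative, and this assumes the common xsltproc command-line tool; the key point is the xmlns:job declaration matching the source document):

        cat > extract-location.xsl <<'EOF'
        <?xml version="1.0"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:job="http://www.pageuppeople.com">
          <xsl:output method="text"/>
          <xsl:template match="/">
            <xsl:value-of select="/rss/channel/item/job:location"/>
          </xsl:template>
        </xsl:stylesheet>
        EOF
        xsltproc extract-location.xsl jobs.xml   # prints: Sydney - Inner Suburbs & CBD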

  • Using tar with zip for datafiles backup on a Windows machine

    Dear all,
    We have an R12 E-Business Suite instance on Windows.
    I was implementing a cold backup for this. Unfortunately I am not a Windows guy.
    I do have MKS tools installed on my Windows machine, which include the tar command.
    Can I use tar -cvzf <bkp>.tgz <location of datafiles> for this? My main concern is: is it going to harm the source, e.g. corruption?
    Thanks

    If MKS tools are supported under Windows, tar should not corrupt input files, whether the input files are Oracle database files or any other Windows files.
    You could instead use RMAN for the cold backup if you already have RMAN experience:
    - RMAN does not back up empty blocks
    - RMAN can also compress backups
    - RMAN is fully supported on Windows if the corresponding Oracle version is certified on your Windows server.
    See an example of a cold backup for a database running in NOARCHIVELOG mode in http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmquick.htm#BRADV90059.
    If your database runs in ARCHIVELOG mode, just back up your database online: http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmquick.htm#i766544.
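    As a hedged illustration of the RMAN alternative (adapted from the quick-start pattern in the docs linked above; the connection and shell are illustrative, e.g. the MKS shell on Windows), a cold backup of a NOARCHIVELOG database might look like:

        rman target / <<'EOF'
        # mount without opening so the datafiles are consistent
        shutdown immediate;
        startup mount;
        # skips never-used blocks and compresses the backup pieces
        backup as compressed backupset database;
        alter database open;
        EOF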

  • What are the delta mechanisms and tables used for LO extraction & COPA

    Hi all
    What are the delta mechanisms and tables used for LO extraction & COPA?
    Please explain clearly.
    Thanks & Regards,
    James

    James,
    Please go through Roberto's weblog :
    /people/sap.user72/blog/2005/02/14/logistic-cockpit--when-you-need-more--first-option-enhance-it
    Anyway, as you know, the LO Cockpit consists of different modules (MM, PP, SD, etc.).
    They are called application components. Each of them has a number (e.g. MM = 02); for each application component there may be different DataSources, and for each DataSource different tables. So, unless you are more specific, we can't name a specific table for a DataSource.
    Coming to the delta mechanisms, there are "direct delta, queued delta and (un)serialized V3 update".
    COPA is based on the operating concern. It can be created as "accounting based" or "costing based".
    Assign points if helpful
    Kalyan

  • Using "Channels" to extract

    How does one use "Channels" to extract a portion of a photo for use in another piece? Thanks!

    It all depends upon your image. When you use channels, you want to pick the channel that has the most contrast where you want to make the cut-out. Drag that channel down to the new-channel icon to duplicate it. Then you can use Levels or Curves to increase the contrast. Painting on the channel with a brush set to Overlay, with either black or white, can also help refine the edge. Once you're happy with what you have, Cmd/Ctrl-click on the channel icon to turn it into a selection, which you can then use to create a layer mask.

  • Using tar command to copy faster

    Hi,
    I want to copy the apps directories and subdirectories to another location on the same machine for cloning purposes.
    I know this can be achieved using the tar command.
    Will you please send me the tar command to archive and copy at the same time using a pipe "|"?
    Thanks,
    Dnyanesh

    Hi Funky,
    Permissions are assigned based on uid, not username. When you unpack the tar file with the 'p' option as root, the files will have the same uid on the target system as they did on the source system. In this case, preserving permissions for 'root' should not be a problem. The uid for root (0) does not usually change. The complication is with the uid values for non-root users.
    As far as preserving the uid for non-root users (like oracle or applmgr)...mostly I just try to keep these in sync across my systems as much as possible. :) For example, although the applmgr user on my production system is 'applprod', and on my test system is 'appltest', they both have the same uid (1003). That way, when I unpack a tarball from my production system on my test system, all the files that were owned by 'applprod' show as being owned by 'appltest'.
    In your example, if user 'xxx' on production and user 'yyy' on the clone server had different uids (for example, 1003 and 1004), the easiest thing to do is to try changing the uid of user 'yyy' on the clone system:
    usermod -u 1003 yyy
    This will only work, of course, if there is no other user on your clone/target system with uid 1003. :-)
    If you do have another user on your target system with uid 1003, then when you unpack the tar file on the target, your files will definitely have the wrong owner. You can still change that without touching root-owned files by using chown to change just the files owned by the "wrong" user:
    chown -R --from wronguser correctuser $ORACLE_HOME
    If your Linux doesn't support the --from syntax for chown (I primarily use SUSE/SLES and don't have my Redhat VM handy to check), you could also use find:
    find $ORACLE_HOME -user wronguser -exec chown correctuser {} \;
    Again, the point of the chown and find commands is that they'll leave the files that are supposed to be owned by root untouched.
    Sorry for the long post, I tried to go for comprehensive instead of brief this time. :-)
    Regards,
    John P.
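    For the pipe command the original question asked about, a common pattern (paths illustrative) is a tar-to-tar pipe, which archives and unpacks in one pass without creating an intermediate file:

        # 'p' preserves permissions; run as root to preserve ownership as discussed above
        cd /u01/apps
        tar cf - . | ( cd /u02/apps_clone && tar xpf - )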

  • Problem using the IMAQ Extract function: not getting two different images out

    Hi, I am trying to use multiple IMAQ Extract functions to get certain parts of the webcam image, and then use the Color Extract function to get the average RGB values. The problem is that both IMAQ Extract functions give the same image on their output ports. The block diagram snippet and VI are attached below. Also, please let me know if there is a better way of doing this. I need to expand this later to extract RGB values of about 40-50 different parts of the image instead of just the two shown below.
    Attachments:
    Extract RGB Data.vi (104 KB)

    You did not create another image to wire to the "Image Dst" input of "IMAQ Extract". That's why you always operate on the original image (which you should not do if you intend to operate on several regions of the image).
    Solution:
    (1) Create a new image and wire it to "Image Dst".
    (2) Apply the histogram operation sequentially (e.g. in a loop).
    That way you will notice that Image Dst actually(!) contains the region you have specified. (Put a probe on the image wire that goes from the Extract VI to the Histogram VI.)

  • What is the keyword used to avoid extracting duplicate data

    What is the keyword used to avoid extracting duplicate data?

    After the SELECT, write:
    SORT itab BY <fields>.
    DELETE ADJACENT DUPLICATES FROM itab COMPARING <fields>.
    Sort the internal table by the comparison fields first, because DELETE ADJACENT DUPLICATES only removes neighbouring rows.
    Thanks

  • Getting Error "too large to archive" while using Tar

    Dear All,
    I am getting the below error while trying to use tar to archive multiple files into one file:
    <FileName> too large to archive
    The file is 2 GB in size (there are other files of the same size, but I am not getting any error for them). Here is what I am trying to achieve:
    (1) Create one tar file from all files in a folder (using tar)
    (2) Compress the tar file (using compress)
    (3) Copy the compressed file to the tape (using tar)
    One more question: when I use the compress (or gzip) command, it creates compressed files but the original files are not preserved. For example, if I use compress on files a.txt & b.txt, it creates the compressed output but removes the files a.txt and b.txt. Is there any option (or any other command) with which I can compress the files without having them removed?
    Thanks in Advance.

    Dear Robert,
    Thanks for your help. While trying to create the tar file using gtar I am now getting the error below:
    No space left on device
    So apparently I don't have enough free space on the file system on which I am trying to create the archive. Two questions:
    (1) Is it possible to compress the tar file resulting from gtar in one command (tar file creation + compression)? I don't want the files to get removed after compression.
    (2) Can we write directly to the tape drive using gtar? Will the size of the file resulting from gtar be the same as the total size of all files, or will it be less?
    regards,
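    A hedged sketch of both ideas (device and file names are illustrative; gtar is GNU tar, which handles files over 2 GB and has built-in gzip support, and neither command removes the source files):

        # create and gzip-compress the archive in one step
        gtar cvzf /backup/apps.tgz /path/to/folder
        # gzip a single file while keeping the original
        gzip -c a.txt > a.txt.gz
        # write the compressed archive directly to a tape device
        gtar cvzf /dev/rmt/0 /path/to/folder

    The compressed archive will normally be smaller than the total size of the input files, though how much smaller depends on the data.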

  • Backup tech_st using tar??

    We have EBS R12.1 on a Linux system. I remember an Oracle instructor told us that when backing up TECH_ST using tar, the command must include "h", as in "tar cvfh ...".
    I have tried "tar cvfh" and untarred the result using "tar xvf" or "tar xvfh". I found the link is gone and has changed to a physical file.
    My question: is using "tar cvfh" the right way to perform a backup or clone?
    Thanks.

    The topic of backup of apps tier has been discussed before. Pl use the search option to find threads like these - Re: should i backup just the db or the apps too?
    You can use either tar or copy to perform a backup. Always remember to test recovery using the backup files - this is the only way to verify that your backup is good and what steps you need to perform in case of a real recovery scenario.
    HTH
    Srini
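    For reference, a small demonstration of what the "h" flag does (paths illustrative): with "h", tar follows symbolic links and stores the files they point to, which is why links come back as physical files; without it, tar stores the links themselves.

        # build a tree containing a symlink
        mkdir -p demo && echo data > demo/real.txt
        ln -s real.txt demo/link.txt
        tar cvf  without-h.tar demo   # stores link.txt as a symlink
        tar cvfh with-h.tar    demo   # stores link.txt as a copy of real.txt
        # the listings differ: the first shows 'link.txt -> real.txt'
        tar tvf without-h.tar
        tar tvf with-h.tar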

  • Activating and using 0CUSTOMER Hierarchy extraction in R3

    Dear Expert,
    Could anybody explain to me how to activate and use 0CUSTOMER hierarchy extraction in R/3, and then extract to the BW system from R/3, step by step?
    Points will be granted.
    Regards
    Sanjiv

    You can load hierarchies (sets) from R/3 in the same way as other master data. You just need to create a hierarchy DataSource using transaction BW07, where you give the name of the table and field on which you created the hierarchy in R/3.
    There are also many standard DataSources available for hierarchies, with which you can load a hierarchy directly.
    Here is the step-by-step procedure:
    If you are using a standard InfoObject and you want to load an R/3 hierarchy, you don't have to configure anything additional in order to load it into BW.
    The steps are:
    1) Activate the hierarchy DataSource in R/3
    2) Replicate it in BW
    3) Assign the DataSource to an InfoSource, and make sure you select the hierarchy DataSource
    4) Activate the InfoSource
    5) Create an InfoPackage
    6) Select the desired hierarchy in the InfoPackage (if you don't see it, click on "Available Hierarchies from OLTP" and select it)
    Regards
    Assign points if useful, please
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6729e07211d2acb80000e829fbfe/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/3d/320e3d89195c59e10000000a114084/frameset.htm
    Re: Hierarchies
