How to extract a column out of a large ASCII file?

Hi all.
After searching the board and applying several solution approaches, my problem still remains. Maybe you can help me.
The data sources I have to deal with are large ASCII files (~540 MB) with 14 tab-delimited columns. Each column represents one channel, and the number of characters in each field is variable. I have to read user-defined columns (= channels) out of each data set. Needless to say, reading the whole file at once runs into memory problems.
If anyone has an idea, I would be happy.
Thanks in advance.
Greets
Kane

I hate to defocus you, but there is a more efficient way to do this.  My apologies that I do not have the time to write code, but here is the pseudo code.
Create an array for your output greater than or equal to what you think you will need.
Read a 65,000 character chunk from the file (or the rest of the file, whichever is smaller).
Use the string search functions to find successive line ends and the appropriate tab character delimiters for your column.
Convert and replace the element in your output array.
When done, trim your output array to the right size.
If you drop an LVM read, convert it to a regular VI, and dive in, you will see an example of this type of process.  The idea is to keep disk reads, which are very inefficient, to a minimum.  It also minimizes your memory allocations, because you do not need to resize your input buffer for every line.  Problems you will need to deal with (which are handled by the LVM read) are such things as:
Your line crosses a chunk boundary.
The end-of-file creates a smaller chunk than 65,000 characters (the optimum chunk size for Win32 systems).
The end-of-line character is not well defined (in your case, this is probably not an issue).
Searching for a character can produce memory allocations.
You may want to try reading the data as a U8 array instead of a string and doing your searches on that instead.
I have always wanted to write this piece of code, but never had the time or reason to do so.  Good luck.  I will try to help if I can.
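For reference, here is a minimal sketch of the same chunk-and-search idea in a text-based language (Java). The file name "data.txt", the channel index 4, and the 64 KB buffer size are illustrative assumptions, and the sketch assumes purely numeric tab-delimited fields; it is not the LVM read implementation.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ColumnExtractor {

    // Reads one tab-delimited column (0-based index) from a large text file
    // without ever holding the whole file in memory.
    static List<Double> extractColumn(String path, int column) throws IOException {
        List<Double> values = new ArrayList<>();
        // 64 KB buffer: the reader pulls big chunks from disk but hands us one line at a time.
        try (BufferedReader in = new BufferedReader(new FileReader(path), 64 * 1024)) {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.isEmpty()) continue;
                // Walk the line counting tabs instead of splitting all 14 fields.
                int start = 0;
                for (int field = 0; field < column && start >= 0; field++) {
                    start = line.indexOf('\t', start) + 1;
                    if (start == 0) start = -1;          // requested column missing on this line
                }
                if (start < 0) continue;
                int end = line.indexOf('\t', start);
                if (end < 0) end = line.length();
                try {
                    values.add(Double.parseDouble(line.substring(start, end).trim()));
                } catch (NumberFormatException skip) {
                    // non-numeric field (e.g. a header line): ignore it
                }
            }
        }
        return values;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical file name and channel number, purely for illustration.
        List<Double> channel = extractColumn("data.txt", 4);
        System.out.println("Read " + channel.size() + " samples");
    }
}

The buffered reader pulls large chunks from disk while the loop only ever touches the current line, so memory use stays proportional to one channel's output rather than to the whole 540 MB file.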
This account is no longer active. Contact ShadesOfGray for current posts and information.

Similar Messages

  • How to extract the column width in an ALV report if it is executed in background

    I am executing an ALV report in background. In the foreground I get the data properly, but in background some of the digits are missing for some columns. For example, if a PO number has 10 digits, only 8 are displayed because of the column size. How do I get the columns extracted correctly in background?
    I have executed the report in background and checked the spool; for some of the columns the width is not sufficient to display the complete data. Please suggest how to set the column sizes when an ALV report is executed in background.

    Hi Deepthi,
    You can try the suggestions mentioned above; if that works, fine.
    If not, use a docking container instead of a custom container. For ALV in background jobs it is suggested to use a docking container. Below you can find the declaration for the docking container and code that uses a docking container and a custom container in your program for foreground and background,
    or you can use the docking container alone for both cases.
    DATA: G_DOCK1 TYPE REF TO CL_GUI_DOCKING_CONTAINER,
          CCON    TYPE REF TO CL_GUI_CUSTOM_CONTAINER,  "custom container
          GRID1   TYPE REF TO CL_GUI_ALV_GRID.

    IF CCON IS INITIAL.
    *Check whether the program is run in batch or foreground
      IF CL_GUI_ALV_GRID=>OFFLINE( ) IS INITIAL.
    *Run in foreground
        CREATE OBJECT CCON
          EXPORTING
            CONTAINER_NAME = 'CON1'.
        CREATE OBJECT GRID1
          EXPORTING
            I_PARENT = CCON.
      ELSE.
    *Run in background
        CREATE OBJECT GRID1
          EXPORTING
            I_PARENT = G_DOCK1.
      ENDIF.
    ENDIF.
    B&R,
    Saravana.S

  • How to extract the signal out of a waveform at a designated power level?

    Dear all,
         How can I extract the signal from the waveform according to the power level? I looked at Trigger&Gate.vi, but that VI extracts the signal according to duration. I want to extract the signal according to power level.
         As shown in the following figures, the signal I want to process is between 130000 and 140000; if I zoom in, I can see the useful signal is between 135400 and 138200. The question is how to extract the signal in that zone.
        I tried sub_NoiseEst_And_Chop_Shell.vi from the Packet_based_link example too, but that subVI seemed to be a little slow. Can anybody give me better advice? Thanks in advance!

    I was working on something similar but haven't had time to fully develop it.
    My idea was to use an envelope detector (low pass filter) and then use an energy detection VI on the envelope.
    Here's where I left off
    Anthony F.
    Product Marketing Engineer
    National Instruments
    Attachments:
    test.vi (331 KB)
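    This is not the attached VI, just a rough text-language (Java) illustration of the envelope-plus-threshold idea: rectify and smooth the signal into an envelope, then keep the index range where the envelope stays above the chosen power level. The moving-average window, the threshold, and the fabricated test waveform are assumptions for the sketch.

    import java.util.Arrays;

    public class BurstExtractor {

        // Returns the {first, last} sample indices where a smoothed envelope of the
        // signal stays above the given power threshold, or null if it never does.
        static int[] findBurst(double[] x, int window, double threshold) {
            // 1. Envelope: moving average of |x|, a crude low-pass filter.
            double[] env = new double[x.length];
            double sum = 0;
            for (int i = 0; i < x.length; i++) {
                sum += Math.abs(x[i]);
                if (i >= window) sum -= Math.abs(x[i - window]);
                env[i] = sum / Math.min(i + 1, window);
            }
            // 2. First and last samples where the envelope exceeds the threshold.
            int start = -1, end = -1;
            for (int i = 0; i < env.length; i++) {
                if (env[i] > threshold) {
                    if (start < 0) start = i;
                    end = i;
                }
            }
            return (start < 0) ? null : new int[] { start, end };
        }

        public static void main(String[] args) {
            double[] signal = new double[200000];        // placeholder waveform
            Arrays.fill(signal, 135400, 138200, 1.0);    // fake burst, for the demo only
            int[] range = findBurst(signal, 64, 0.5);
            double[] burst = Arrays.copyOfRange(signal, range[0], range[1] + 1);
            System.out.println("Burst runs from " + range[0] + " to " + range[1]
                    + " (" + burst.length + " samples)");
        }
    }

    Trigger & Gate keys on duration; making the start/stop decision from the envelope level instead is what gives the power-based cut.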

  • Open Hub: How-to doc "How to Extract data with Open Hub to a Logical File"

    Hi all,
    We are using open hub to download transaction files from InfoCubes to the application server, and would like the filename to be dynamic based on period and year, i.e. the period and year of the transaction data being downloaded.
    I understand we could use a logical file for this purpose. However, we are not sure how to have the period and year dynamically derived in the filename.
    I have read a number of messages posted on SDN on a similar topic, and many have suggested a 'How-to' paper titled "How to Extract data with Open Hub to a Logical Filename". However, I could not seem to get the document from the link given.
    Just wondering if anyone has the correct or latest link to the document; I would also appreciate it if you could share the document with everyone on SDN if you have a copy.
    Many thanks and best regards,
    Victoria

    Hi,
    After creating the open hub destination, press F1 in the 'Application server file name' text box. In the help window, click 'Maintain client independent file names and file paths'; you will be taken to the Implementation Guide screen. Click 'Cross-client maintenance of file names and paths', then:
    1. Create a logical file path via 'New Entries'.
    2. Go to 'Logical file name definition' and enter your logical file, its name (description), the physical file name (your file name followed by month or year, whichever is applicable; press F1 for more info), the data format (ASC), the application area (BW), and the logical path (choose via F4 the path you created in step 1).
    3. Go to 'Assignment of physical path to logical path' and enter the syntax group; the physical path is the path you gave in the logical file name definition.
    We created a logical file name that identifies the file by the system date, but your requirement seems to be the dynamic date of the transaction data; you may be able to achieve this by creating a variable. The F1 help will tell you more. The steps above will help you create a dynamic logical file.
    Hope this helps to some extent.
    Regards

  • How to extract one dimension out of a two-dimensional array

    Hello,
    Maybe this question is too naive and simple. I have a two-dimensional array (two columns and 256 rows). All I want is to extract one of these columns as a separate one-dimensional array. It seems like a very basic task that any programming language should address. However, I could not find the right function in the array functions list. I am sure I am missing something very obvious. I tried Index Array, Array Subset, Array To Cluster and Reshape Array; none of them seemed to be the right function for this purpose. I have used LabVIEW for quite a while now, but I still cannot find a solution to this basic problem. Can someone help me out?
    Thanks

    Hi -
    With the Index Array function, have you tried wiring a number to the input labeled index (col)?
    See the attached vi for an example.
    Attachments:
    array index1.vi (6 KB)
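    Outside LabVIEW the same operation is a one-line loop; a minimal Java sketch for comparison (the 256 x 2 array matches the sizes mentioned in the question, everything else is illustrative):

    public class ColumnSlice {

        // Returns column `col` of a 2-D array as a new 1-D array.
        static double[] column(double[][] data, int col) {
            double[] out = new double[data.length];
            for (int row = 0; row < data.length; row++) {
                out[row] = data[row][col];
            }
            return out;
        }

        public static void main(String[] args) {
            double[][] data = new double[256][2];   // 256 rows, 2 columns
            double[] first = column(data, 0);       // same idea as wiring 0 to "index (col)"
            System.out.println(first.length);       // prints 256
        }
    }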

  • How to show the checked-out symbol in list of files in a folder

    Hi,
    When a folder is selected it shows all the files in that folder, but the user would not know which files have been checked out. In general it should show a lock symbol (similar to Documentum Webtop) when a file has been checked out by another user.
    How do we show the lock/key column in the list of files? Is there any other way to find out whether a file has been checked out, other than going into the content information?
    Also, how do we add additional columns like author etc. to the list-of-files screen in UCM?
    Please help

    You should double check, but I don't believe that the check-out info is in the resultset. Because that information isn't available, the state of each row in the search results table cannot be defined. So, the first hurdle would be to add that info to the dataset.
    The second hurdle would be to alter the UI to leverage that info (e.g., the key icon when checked out).
    If you're looking to alter the folder views, then look to the COLLECTION_DISPLAY service & related template. Other views (e.g., search results) have different templates and backing services.
    -ryan

  • How to extract tables from Oracle into delimited flat files

    Hi, I have the following requirement. I tried a dump procedure, but I could only extract tables one by one. I need to do the extract on a regular basis using a PL/SQL procedure.
    Data will be extracted from production tables in Oracle into pipe-delimited flat files that will be sent by SFTP. The list below shows the tables that will be used for the extract, along with a notation of whether the entire table is extracted or only incremental transactional data.
    Table name       Extraction type         No. of records
    EXPIRE           All Records             157 - One Time     All Records     17
    ACE              All Records             7,970
    DATA             All Records             5,868
    MEMBER           All Records             24,794,879
    MEMBER           Incremental & Update    13,893,587 (Initial Load)
    MEMBERRED        All Records             25,108,606
    MEMBERPOINT      All Records             42,487,640
    MEMBERCOM        Incremental & Update    14,337,561 (Initial Load)
    MEMBERCODE       Incremental Only        14,985,568 (Initial Load)
    MEMBERDETAIL     Incremental Only        14,341,890 (Initial Load)
    MEMBERHISTORY    Incremental Only        70,021,067 (Initial Load)
    Please suggest how I can extract these tables using a PL/SQL procedure. For some of the tables, only a selected list of columns has to be extracted.

    Saubhik wrote:
    This may help you.
    Re: Dynamic Fetch on dynamic Sql
    Well I was going to post my standard response, but I see I don't have to. ;)
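    If the extract does not strictly have to live inside a PL/SQL procedure, any JDBC client can spool a table to a pipe-delimited file. Here is a minimal Java sketch of that alternative; the connection URL, credentials, table name, and output file name are placeholders, and the Oracle JDBC driver is assumed to be on the classpath. Inside the database itself, UTL_FILE combined with dynamic SQL (as in the linked thread) achieves the same result.

    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    public class PipeDelimitedExtract {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details, table, and file name, for illustration only.
            String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT * FROM member");
                 PrintWriter out = new PrintWriter("member.dat")) {

                ResultSetMetaData md = rs.getMetaData();
                int cols = md.getColumnCount();
                while (rs.next()) {
                    StringBuilder line = new StringBuilder();
                    for (int c = 1; c <= cols; c++) {
                        if (c > 1) line.append('|');      // pipe delimiter
                        String v = rs.getString(c);
                        line.append(v == null ? "" : v);
                    }
                    out.println(line);
                }
            }
        }
    }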

  • How to extract individual projects from an Aperture 3 vault file in Finder?

    I want to extract a single project (or multiple projects) from a vault on an external HD. My main library resides on my iMac, and sometimes I want to take my MacBook and the external HD with the vault and just work on a single project. I have gone into the package contents in Finder, but I'm not sure how to get a project with all its folders/albums out of it, as everything is broken up into "masters", "previews", etc. Is there an easy way? Also, is this process any faster than just exporting the project to a new library on my iMac and then copying that to the external drive?
    Thanks for any help.

    I would personally avoid playing with the vault, as it is a vault. I'd instead export the relevant projects to your external drive, then import them into a separate library on your MacBook. When you're done, reverse the procedure and back up to your vault.
    HTH
    M.

  • How to extract the size and date from a given file

    Hi,
    I want to extract the size and date of a file (it can be a video, audio or text file) whose location the user points to, but I am not sure how. Does Java have an API that can do this? If not, is there some other way of doing it? Can anyone help? Thanks in advance.

    Have a look at java.io.File, specifically
    public long lastModified()
    The format returned is (I find) nasty, so use java.util.Date (or java.sql.Date; they look the same on the surface to me) to format it.
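    Putting that together, a minimal sketch (the file path is a placeholder):

    import java.io.File;
    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class FileInfo {
        public static void main(String[] args) {
            File f = new File("movie.avi");               // placeholder path
            long sizeBytes = f.length();                  // file size in bytes
            Date modified = new Date(f.lastModified());   // raw milliseconds -> Date
            String pretty = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(modified);
            System.out.println(sizeBytes + " bytes, last modified " + pretty);
        }
    }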
    Cheers,
    Radish21

  • How to make a stationery out of a .doc or .docx file?

    Hi,
    I stumbled upon a website that has some nice stationery in .doc format, and since there is little free pre-made stationery available for Mail.app, at least to my knowledge (I think I have downloaded almost every free stationery out there that could easily be found via Google), I decided to download some of these .doc stationeries and try to put them in Mail.app. First I had to convert them into .html format, so here is what I already tried that did not work:
    First I tried opening them in TextEdit, which supports .doc out of the box, only to find that the pictures in the documents had vanished into thin air. I tried saving the file and then opening it with something else like OpenOffice, but the pictures were permanently gone.
    Then I tried opening the original documents directly in OpenOffice. You guessed right if you said they would look messed up, and they were; when I tried to save them as HTML, they looked even worse.
    So I did a bit of research and found that I could use Google Docs to convert the documents. I uploaded them to Google Docs and tried to open them there, but found that Google Docs can't handle the pictures (especially background pictures) either. As a result I did more searching for a document converter, and here is what I found:
    I found a few online services to convert these documents:
    http://www.freefileconvert.com/ was pretty good at converting the files into .odt format. At first I thought I had the solution and could simply convert to HTML using OpenOffice, but unfortunately, when I saved the document in HTML format and checked it in Safari, it looked just as messed up as if I had done the same in MS Office with the original format.
    So I continued to look and found a few other online services. One of them asked me to pay a fee, so I skipped to the next one, http://docx-converter.com/. Unfortunately the website didn't seem to be functioning properly, at least for me (maybe someone out there has succeeded using it, I don't know), and it never sent me anything.
    Finally I found http://www.zamzar.com/, which can convert .doc or .docx documents into HTML, or at least that is what it claims. I tried it, and after I received the file via email I found it only contained a BMP image of the document's background. So much for the online conversion services!
    But I decided to give it one last shot using MS Office online, so I opened up my old Windows Live mail account and went into my SkyDrive. I uploaded the documents and tried saving them as HTML, but there was no 'save as HTML' option (after all, an online copy of MS Office has to have some limitations, otherwise few people would buy the desktop version). That didn't disappoint me, since I already knew that Word couldn't make a clean conversion either, so I tried something different: while I was in Safari looking at the document in the online MS Office, I saved the page as a web archive, then opened the file with TextEdit, and after deleting useless links and pictures I finally managed to get a clean document with everything that was supposed to be in it. At this point I only need to know how to use this raw file, which is in web archive format, to create a stationery. Any ideas?

    I'm a total idiot! How did I overlook the functions at http://www.zamzar.com/? I think I must have mistakenly chosen convert to BMP instead of HTML. I tried it again and the result was very good.
    OK, I finally found a good solution for this problem, so for everyone out there who has the same problem, here is what I did:
    1. If you have some .doc and .docx files and you want to convert them into something like .html or .odt without having to re-edit the code or getting a messed-up document, use http://www.freefileconvert.com/ to convert them into .odt so they look as they do in MS Office. Just go to the website, upload your documents, choose .odt, and click convert; after a few moments you will get a link to download your .odt files. You can then play around with your documents in OpenOffice. I have tested it and it's really good.
    2. Or, if you want to convert them into .html with clean code and all the elements in them, simply go to http://www.zamzar.com/, upload your documents, choose .html and enter your email address. You will get your .html files zipped and sent right to your inbox; then you can use KompoZer like me, or any other HTML editor, to make a template out of them. I tried it and the result was nice; now I have a bunch of nice HTML files that I can use to make templates and email stationery.

  • How to extract the content of a user-uploaded txt file in Web Dynpro?

    Hi,
    I'm working on a Java Web Dynpro component. The component contains a document upload field where users should be able to upload .txt documents. These uploaded text documents should then be read and their content displayed. I am already able to upload documents using the upload field and store them in the context, but I am still not able to extract the content of these text documents for display.
    Does anyone have any suggestions of how I could do this?
    Any help will be greatly appreciated!
    Thanks!

    Hi Alain,
    You can go through this document on how to upload/download files in Web Dynpro.
    [https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/202a850a-58e0-2910-eeb3-bfc3e081257f]
    Once you have the uploaded file in your context: if you are storing it as a byte array, convert it to a string using the String constructor String(byte[] bytes), and then store this string in an attribute of type String bound to a UI element (TextArea) to display the contents.
    If you are using an IWDResource, you will get an InputStream from which you can read the data and convert it to a string for display, as mentioned above.
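    A minimal sketch of both conversions, assuming the uploaded file is plain text; the UTF-8 encoding and the buffer size are assumptions to adjust to the actual files:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class UploadText {

        // Byte-array case: one constructor call.
        static String fromBytes(byte[] bytes) throws IOException {
            return new String(bytes, "UTF-8");           // encoding assumed
        }

        // InputStream case (e.g. from an IWDResource): read fully, then convert.
        static String fromStream(InputStream in) throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            return buf.toString("UTF-8");                // encoding assumed
        }
    }

    The resulting string can then be stored in the context attribute bound to the TextArea.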
    Hope this helps.
    Sanyev

  • How to have sticky column headers in a large list?

    I have a list in SharePoint 2013. It was created from a Microsoft Access database, so it displays like a spreadsheet-type list. It is quite large, and it would be nice to make the column headers "sticky" so that when you scroll down the page/list you can tell what the header is for the data you are looking at.
    Kind of like what a "split" does in Excel; that is what I am looking for.
    Any help greatly appreciated.
    Ray

    Hi Ray,
    Please see this thread, which covers exactly what you are looking for:
    http://social.msdn.microsoft.com/Forums/sharepoint/en-US/95018474-2a4e-4bb5-9786-f3895ff17eee/how-to-fix-custom-list-header-when-i-scroll-down-the-list?forum=sharepointcustomization
    Please let us know in case you face any issues while implementing this.
    Regards
    Krishana
    Krishana Kumar http://www.mosstechnet-kk.com

  • How to get context parameters out of the Tomcat web.xml file

    The Documentation at Tomcat seems to suggest:
    Context initialization parameters that define shared
    String constants used within your application, which
    can be customized by the system administrator who is
    installing your application. The values actually
    assigned to these parameters can be retrieved in a
    servlet or JSP page by calling:
    String value =
    getServletContext().getInitParameter("name");
    Now I want to write a JSP where the user enters his username and password, and this contacts a JDBC connection bean. I want to store the driver name in the web.xml file. Can someone please suggest a method for extracting it? It isn't as straightforward as the Tomcat documentation suggests: you cannot call the above method without servlet initialization (actually, without an HTTP connection). I succeeded when I wrote a servlet, but can someone please suggest an alternative (because otherwise I am writing a servlet just to extract that value)?
    Hope I am clear.
    Thanks in advance.

    Edit your web.xml file and add these entries,
    <context-param>
    <param-name>dbUrl</param-name>
    <param-value>jdbc:oracle:thin:@server5:1521:pl2java</param-value>
    </context-param>
    <context-param>
    <param-name>dbDriver</param-name>
    <param-value>oracle.jdbc.driver.OracleDriver</param-value>
    </context-param>
    Place the above entries in between your already existing <web-app> and </web-app> tags.
    Now in your JSP you can use <% application.getInitParameter("dbDriver"); %>, and in a servlet, getServletContext().getInitParameter("dbDriver");
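    If the concern is reading the parameter without an incoming HTTP request, another option is a ServletContextListener that reads it once at application startup. This is a minimal sketch, not the only way to do it: the class name is illustrative, the parameter name matches the dbDriver entry above, and the listener still has to be registered in web.xml with a <listener><listener-class>...</listener-class></listener> element.

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class DbConfigListener implements ServletContextListener {
        // Holds the driver name so non-servlet code (e.g. a connection bean) can read it.
        private static String dbDriver;

        public void contextInitialized(ServletContextEvent event) {
            dbDriver = event.getServletContext().getInitParameter("dbDriver");
        }

        public void contextDestroyed(ServletContextEvent event) {
            dbDriver = null;
        }

        public static String getDbDriver() {
            return dbDriver;
        }
    }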
    Hope this helps.
    Sudha

  • How to refresh/apply column value default setting on current files or folders

    Hi All
    I have set up default column data per folder in my library (via
    Library Settings > Column default settings) and it works great for new documents or folders that are added to the library.
    But what do I do if I have an existing Library with folders and files and need to apply default column data to each? Is there a way of "refreshing" the default columns so that the data is populated through a specific folder and/or its sub-folders?
    (I really hope this is an easy fix or just a setting that I over-looked somewhere!)
    Thank you!

    I had to do this as well recently, and remembered your post.
    Here is the function I wrote; it worked for text, choice, and metadata columns.
    It is pretty slow and could be optimized and broken up into more functions, but I had to do several things:
    1. I mass-updated the content types in a library
    2. On Library settings : set default values and also different defaults per folder
    3. For each file I then needed to:
    3.a. either copy the value from a column in the old content type to the new, or
    3.b. set the column to the default
    So this function does step 3. Like I said, it works for certain types of columns and can be sped up (I used it to update 700 files in a couple of minutes), and it makes some assumptions about the environment, but it is at least a starting point.
    As Alex said you may want to change SystemUpdate($true) to just Update(), depending on your requirements.
    <#
    .SYNOPSIS
    Resets columns in a document library to defaults for blank columns. Use this
    after changing the content types or adding columns to a doc lib with existing files
    .DESCRIPTION
    Resets columns in a doc lib to their defaults. Will only set them if the columns are blank (unless overridden)
    Will also copy some values from one column to another while you are there.
    Can restrict the update to a subset of columns, or have it look for all columns with defaults.
    Will use the list defaults as well as folder defaults.
    All names of columns passed in should use InternalName.
    This has ONLY been tested on Text, Choice, Metadata, and mult-Choice and Mult-Metadata columns
    Pass in a list and it will recursively travel down the list to update all items with the defaults for the items in that folder.
    If you call it on a folder, it will travel up the tree of folders to find the proper defaults
    Author:
    Chris Buchholz
    [email protected]
    @plutosdad
    .PARAMETER list
    The document library to update. Using this parameter it will update all files in the doc lib
    .PARAMETER folder
    The folder containing files to update. Function will update all files in this folder and subfolders.
    .PARAMETER ParentFolderDefaults
    Hashtable of internal field names as KEY, and value VALUE, summing up all the parent folders or list defaults.
    If not supplied, then the function will travel up the tree of folders to the parent doclib to determine
    the correct defaults to apply.
    If the field is managed metadata, then the value is a string
    Currently only tested for string and metadata values, not lookup or date
    .PARAMETER termstore
    The termstore to use if you are going to update managed metadata columns, this assumes we are only using the one termstore for all columns to update
    If you are using the site collection specific termstore for some columns you want to update, and
    the central termstore for others, then you should call this method twice, once with each termstore,
    and specify the respective columns in fieldsToUpdate
    .PARAMETER fieldsToCopy
    Hashtable of internal field names, where KEY is the "to" field, and VALUE is the "from" field
    Use this to copy values from one field to another for the item.
    These override the defaults, and also cause the "from" (Value) fields to NOT be overwritten with defaults even if
    they are in the fieldsToUpdate array.
    Example: @{"MyNewColumn" = "My_x0020_Old_x0020_Column"}
    .PARAMETER fieldsToUpdate
    If supplied then the method will update only the fields in this array to their default values, if null then it will update
    all fields that have defaults.
    If you pass in an empty array, then this method will only copy fields in the fieldtocopy and not
    apply any defaults
    Example: @() - to only copy and not set any fields to default
    Example2: @('UpdateField1','UpdateField2') will
    .EXAMPLE
    Set-SPListItemValuesToDefaults -list $list -fieldsToCopy @{"MyNewColumn" = "My_x0020_Old_x0020_Column"} -fieldsToUpdate @() -overwrite -termStore $termStore
    This will not set any defaults, but instead only set MyNewColumn to non null values of My_x0020_Old_x0020_Column
    It will overwrite any values of MyNewColumn
    .EXAMPLE
    Set-SPListItemValuesToDefaults -list $list -overwrite
    This will set all columns to their default values even if they are filled in already
    .EXAMPLE
    Set-SPListItemValuesToDefaults -folder $list.RootFolder.SubFolder[3].SubFolder[5]
    This will set all columns to their defaults in the given subfolder of a library
    .EXAMPLE
    Set-SPListItemValuesToDefaults -list $list -fieldsToUpdate @('ColumnOneInternalName','ColumnTwoInternalName')
    This will set columns ColumnOneInternalName and ColumnTwoInternalName to their defaults for all items where they are currently null
    .EXAMPLE
    Set-SPListItemValuesToDefaults -list $list -fieldsToCopy @{"MyNewColumn" = "My_x0020_Old_x0020_Column"} -fieldsToUpdate @("MyNewColumn") -termStore $termStore
    This will set all MyNewColumn values to their default, and then also copy the values of My_x0020_Old_x0020_Column to MyNewColumn where the old column is not null,
    but both of these will only happen for items where MyNewColumn is null
    .EXAMPLE
    Set-SPListItemValuesToDefaults -list $list -fieldsToCopy @{"MyNewColumn" = "My_x0020_Old_x0020_Column"} -termStore $termStore
    This will set ALL columns with defaults to the default value (if the item's value is null),
    except for My_x0020_Old_x0020_Column which will not be modified even if it has a default value, and will also set MyNewColumn to the
    value of My_x0020_Old_x0020_Column if the old value is not null
    #>
    function Set-SPListItemValuesToDefaults {
    [CmdletBinding(SupportsShouldProcess=$true)]
    param(
    [Parameter(Mandatory=$true,ValueFromPipeline=$true,ParameterSetName="List")][Microsoft.SharePoint.SPList]$list,
    [Parameter(Mandatory=$true,ValueFromPipeline=$true,ParameterSetName="Folder")][Microsoft.SharePoint.SPFolder]$folder,
    [Parameter(Mandatory=$false,ParameterSetName="Folder")][HashTable]$ParentFolderDefaults,
    [Parameter(Mandatory=$false)][HashTable]$fieldsToCopy,
    [Parameter(Mandatory=$false)][Array]$fieldsToUpdate,
    [Parameter(Mandatory=$false)][Microsoft.SharePoint.Taxonomy.TermStore]$termStore,
    [Switch]$overwrite,
    [Switch]$overwriteFromFields
    )
    begin {
    #one or both can be null, but if both empty, then nothing to do
    if ($null -ne $fieldsToUpdate -and $fieldsToUpdate.Count -eq 0 -and
    ( $null -eq $fieldsToCopy -or $fieldsToCopy.Count -eq 0)) {
    Write-Warning "No fields to update OR copy"
    return
    if ($PSCmdlet.ParameterSetName -eq "Folder") {
    $list = $folder.DocumentLibrary
    if ($null -eq $termStore ) {
    $taxonomySession = Get-SPTaxonomySession -site $list.ParentWeb.Site
    $termStores = $taxonomySession.TermStores
    $termStore = $termStores[0]
    #if we did not pass in the parent folder defaults then we must go backward up tree
    if ($PSCmdlet.ParameterSetName -eq "Folder" -and $null -eq $ParentFolderDefaults ) {
    $ParentFolderDefaults = @{}
    if ($null -eq $fieldsToUpdate -or $fieldsToUpdate.Count -gt 0) {
    write-Debug "ParentFolderDefaults is null"
    $tempfolder=$folder.ParentFolder
    while ($tempfolder.ParentListId -ne [Guid]::Empty) {
    Write-Debug "at folder $($tempfolder.Url)"
    $pairs = $columnDefaults.GetDefaultMetadata($tempfolder)
    foreach ($pair in $pairs) {
    if (!$ParentFolderDefaults.ContainsKey($pair.First)) {
    Write-Debug "Folder $($tempfolder.Name) default: $($pair.First) = $($pair.Second)"
    $ParentFolderDefaults.Add($pair.First,$pair.Second)
    $tempfolder = $tempfolder.ParentFolder
    #listdefaults
    Write-Debug "at list"
    foreach ($field in $folder.DocumentLibrary.Fields) {
    if ($field.InternalName -eq "_ModerationStatus") { continue }
    #$field = $list.Fields[$name]
    if (![String]::IsNullOrEmpty($field.DefaultValue)) {
    #Write-Verbose "List default found key $($field.InternalName)"
    if (!$ParentFolderDefaults.ContainsKey($field.InternalName)) {
    Write-Debug "List Default $($field.InternalName) = $($field.DefaultValue)"
    $ParentFolderDefaults.Add($field.InternalName,$field.DefaultValue)
    process {
    Write-Debug "Calling with $($PSCmdlet.ParameterSetName)"
    Write-Debug "Parent folder hash has $($ParentFolderDefaults.Count) items"
    if ($PSCmdlet.ParameterSetName -eq "List" ) {
    $folder = $list.RootFolder
    $ParentFolderDefaults=@{}
    if ($null -eq $fieldsToUpdate -or $fieldsToUpdate.Count -gt 0) {
    foreach ($field in $list.Fields) {
    if ($field.InternalName -eq "_ModerationStatus") { continue }
    if (![String]::IsNullOrEmpty($field.DefaultValue)) {
    Write-Debug "List Default $($field.InternalName) = $($field.DefaultValue)"
    $ParentFolderDefaults.Add($field.InternalName,$field.DefaultValue)
    Write-Verbose "At folder $($folder.Url)"
    $FolderDefaults=@{}
    $FolderDefaults += $ParentFolderDefaults
    if ($null -eq $fieldsToUpdate -or $fieldsToUpdate.Count -gt 0) {
    $pairs = $columnDefaults.GetDefaultMetadata($folder)
    foreach ($pair in $pairs) {
    if ($FolderDefaults.ContainsKey($pair.First)) {
    $FolderDefaults.Remove($pair.First)
    Write-Debug "Folder $($folder.Name) default: $($pair.First) = $($pair.Second)"
    $FolderDefaults.Add($pair.First,$pair.Second)
    #set values
    foreach ($file in $folder.Files) {
    if ($file.CheckOutType -ne [Microsoft.SharePoint.SPFile+SPCheckOutType]::None) {
    Write-Warning "File $($file.Url).CheckOutType = $($file.CheckOutType)) ... skipping"
    continue
    $item = $file.Item
    $ItemDefaults=@{}
    $ItemDefaults+= $FolderDefaults
    #if we only want certain fields then remove the others
    #Move this to every time we add values to the defaults
    if ($null -ne $fieldsToUpdate ) {
    $ItemDefaults2=@{}
    foreach ($fieldInternalName in $fieldsToUpdate) {
    try {
    $ItemDefaults2.Add($fieldInternalName,$ItemDefaults[$fieldInternalName])
    } catch { } #who cares if not in list
    $ItemDefaults = $ItemDefaults2
    #do not overwrite already filled in values unless specified
    if (!$overwrite) {
    $keys = $itemDefaults.Keys
    for ($i=$keys.Count - 1; $i -ge 0; $i-- ) {
    $key=$keys[$i]
    try {
    $val =$item[$item.Fields.GetFieldByInternalName($key)]
    if ($val -ne $null) {
    $ItemDefaults.Remove($key)
    } catch {} #if fieldname does not exist then ignore, we should check for this earlier
    #do not overwrite FROM fields in copy list unless specified
    if (!$overwriteFromFields) {
    if ($null -ne $fieldsToCopy -and $fieldsToCopy.Count -gt 0) {
    foreach ($value in $fieldsToCopy.Values) {
    try {
    $ItemDefaults.Remove($value)
    } catch {} #who cares if not in list
    #do not overwrite TO fields in copy list if we're going to copy instead
    if (!$overwriteFromFields) {
    if ($null -ne $fieldsToCopy -and $fieldsToCopy.Count -gt 0) {
    foreach ($key in $fieldsToCopy.Keys) {
    $fromfield = $item.Fields.GetFieldByInternalName($fieldsToCopy[$key])
    try {
    if ($null -ne $item[$fromfield]) {
    $ItemDefaults.Remove($key)
    } catch {} #who cares if not in list
    Write-Verbose $item.Url
    $namestr = [String]::Empty
    if ($ItemDefaults.Count -eq 0) {
    write-Verbose "No defaults, copy only"
    } else {
    $str = $ItemDefaults | Out-String
    $namestr += $str
    Write-Verbose $str
    if ($null -ne $fieldsToCopy -and $fieldsToCopy.Count -gt 0) {
    $str = $fieldsToCopy | Out-String
    $namestr +=$str
    if ($PSCmdlet.ShouldProcess($item.Url,"Set Values: $namestr"))
    #defaults
    if ($null -ne $ItemDefaults -and $ItemDefaults.Count -gt 0) {
    foreach ($key in $ItemDefaults.Keys) {
    $tofield = $item.Fields.GetFieldByInternalName($key)
    if ($tofield.TypeAsString -like "TaxonomyFieldType*") {
    $taxfield =[Microsoft.SharePoint.Taxonomy.TaxonomyField]$tofield
    $taxfieldValue = New-Object Microsoft.SharePoint.Taxonomy.TaxonomyFieldValue($tofield)
    $lookupval=$ItemDefaults[$key]
    $termval=$lookupval.Substring( $lookupval.IndexOf('#')+1)
    $taxfieldValue.PopulateFromLabelGuidPair($termval)
    if ($tofield.TypeAsString -eq "TaxonomyFieldType") {
    $taxfield.SetFieldValue($item,$taxfieldValue)
    } else {
    #multi
    $taxfieldValues = New-Object Microsoft.SharePoint.Taxonomy.TaxonomyFieldValueCollection $tofield
    $taxfieldValues.Add($taxfieldValue)
    $taxfield.SetFieldValue($item,$taxfieldValues)
    } else {
    $item[$tofield]=$ItemDefaults[$key]
    #copyfields
    if ($null -ne $fieldsToCopy -and $fieldsToCopy.Count -gt 0) {
    #$fieldsToCopy | Out-String | Write-Verbose
    foreach ($key in $fieldsToCopy.Keys) {
    $tofield = $item.Fields.GetFieldByInternalName($key)
    $fromfield = $item.Fields.GetFieldByInternalName($fieldsToCopy[$key])
    if ($null -eq $item[$fromfield] -or ( !$overwrite -and $null -ne $item[$tofield] )) {
    continue
    if ($tofield.TypeAsString -eq "TaxonomyFieldType" -and
    $fromfield.TypeAsString -notlike "TaxonomyFieldType*" ) {
    #non taxonomy to taxonomy
    $taxfield =[Microsoft.SharePoint.Taxonomy.TaxonomyField]$tofield
    $termSet = $termStore.GetTermSet($taxfield.TermSetId)
    [String]$fromval = $item[$fromfield]
    $vals = $fromval -split ';#' | where {![String]::IsNullOrEmpty($_)}
    if ($null -ne $vals -and $vals.Count -ge 0 ) {
    $val = $vals[0]
    if ($vals.Count -gt 1) {
    write-Warning "$($item.Url) Found more than one value in $($fromfield.InternalName)"
    continue
    $terms =$termSet.GetTerms($val,$true)
    if ($null -ne $terms -and $terms.Count -gt 0) {
    $term = $terms[0]
    $taxfield.SetFieldValue($item,$term)
    Write-Verbose "$($tofield.InternalName) = $($term.Name)"
    } else {
    Write-Warning "Could not determine term for $($fromfield.InternalName) for $($item.Url)"
    continue
    } elseif ($tofield.TypeAsString -eq "TaxonomyFieldTypeMulti" -and
    $fromfield.TypeAsString -notlike "TaxonomyFieldType*" ) {
    Write-Debug "we are here: $($item.Name): $($fromfield.TypeAsString) to $($tofield.TypeAsString )"
    #non taxonomy to taxonomy
    $taxfield =[Microsoft.SharePoint.Taxonomy.TaxonomyField]$tofield
    $termSet = $termStore.GetTermSet($taxfield.TermSetId)
    $taxfieldValues = New-Object Microsoft.SharePoint.Taxonomy.TaxonomyFieldValueCollection $tofield
    [String]$fromval = $item[$fromfield]
    $vals = $fromval -split ';#' | where {![String]::IsNullOrEmpty($_)}
    foreach ($val in $vals){
    $terms =$termSet.GetTerms($val,$true)
    if ($null -ne $terms -and $terms.Count -gt 0) {
    $term=$terms[0]
    $taxfieldValue = New-Object Microsoft.SharePoint.Taxonomy.TaxonomyFieldValue($tofield)
    $taxfieldValue.TermGuid = $term.Id.ToString()
    $taxfieldValue.Label = $term.Name
    $taxfieldValues.Add($taxfieldValue)
    } else {
    Write-Warning "Could not determine term for $($fromfield.InternalName) for $($item.Url)"
    continue
    #,[Microsoft.SharePoint.Taxonomy.StringMatchOption]::ExactMatch,
    $taxfield.SetFieldValue($item,$taxfieldValues)
    $valsAsString = $taxfieldValues | Out-String
    Write-Debug "$($tofield.InternalName) = $valsAsString"
    } elseif ($tofield.TypeAsString -eq "TaxonomyFieldTypeMulti" -and
    $fromfield.TypeAsString -eq "TaxonomyFieldType" ) {
    #single taxonomy to multi
    $taxfieldValues = New-Object Microsoft.SharePoint.Taxonomy.TaxonomyFieldValueCollection $tofield
    $taxfield =[Microsoft.SharePoint.Taxonomy.TaxonomyField]$tofield
    $taxfieldValues.Add($item[$fromfield])
    $taxfield.SetFieldValue($item,$taxFieldValues)
    Write-Verbose "$($tofield.InternalName) = $valsAsString"
    } elseif ($tofield.TypeAsString -eq "TaxonomyFieldType" -and
    $fromfield.TypeAsString -eq "TaxonomyFieldTypeMulti" ) {
    #multi taxonomy to single taxonomy
    Write-Warning "multi to non multi - what to do here"
    continue
    } elseif ($tofield.TypeAsString -eq "Lookup" -and
    $fromfield.TypeAsString -ne "Lookup" ) {
    #non lookup to lookup
    Write-Warning "non lookup to lookup - still todo"
    continue
    } else {
    #straight copy
    $item[$tofield] = $item[$fromfield]
    $item.SystemUpdate($false)
    $folders = $folder.SubFolders | where name -ne "Forms"
    $folders | Set-SPListItemValuesToDefaults -ParentFolderDefaults $FolderDefaults -fieldsToCopy $fieldsToCopy -fieldsToUpdate $fieldsToUpdate -overwrite:$overwrite -overwriteFromFields:$overwriteFromFields -termStore $termStore

  • How to extract/export SD billing invoices to flat data file

    Hi,
    We are using SAP R/3 4.5b and we have to extract billing invoices to a flat data file like CSV or XML. How do we do it?
    Thanks

    Hi Gyan Der,
    There is another simple method to extract the SD Billing values from SAP.
    You can find the SD Billing data in the following tables:
    VBRK - Header Data
    VBRP - Item Details
    You can create a report using these two tables in transaction SQVI.
    Check the following link for help with creating QuickViews:
    http://help.sap.com/saphelp_nw04/helpdata/en/d1/44f2b5c7f411d296080000e82de14a/frameset.htm
    In SQVI, you can create table joins for the above 2 tables & extract the required output.
    The report output can be transferred to an excel file or text file.
    hope this helps!
    best regards,
    Thangesh
