Where to import a CSV file using HANA Studio version 1.0.26

Hi Experts,
In this version of HANA Studio I have no idea how to import a CSV file, as it looks different from the HANA Academy video. I hope someone can show me the method. Thanks!
(Screenshot attached)

Hi Krishna Tangudu,
How do I use the IMPORT command you mentioned?
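For reference, server-side CSV import in HANA is a single SQL statement that can be run from the SQL console. The following is a minimal sketch, assuming a hypothetical table "MYSCHEMA"."MYTABLE" and a CSV file already copied onto the HANA server (the statement cannot see client-side files); check the IMPORT FROM documentation for your revision, as option keywords vary between releases:

IMPORT FROM CSV FILE '/tmp/mydata.csv' INTO "MYSCHEMA"."MYTABLE"
WITH RECORD DELIMITED BY '\n'   -- one table row per line
FIELD DELIMITED BY ','          -- comma-separated columns
OPTIONALLY ENCLOSED BY '"'      -- text fields may be quoted
ERROR LOG '/tmp/mydata.err';    -- rejected rows are written here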

Similar Messages

  • Re: How to Import .CSV file using Mac Desktop Manager 1.0.4?

I am at a total loss. I have exported my existing address book of 3,100 contacts and it can be in any format - CSV or whatever.
It is from an app called Now Contact and is up to date; you probably would not know it.
Anyway, all I want to do is import it to my BB Bold 9000 running the latest upgrades.
Can anyone help? I have tried to read all of the posts but there does not seem to be anything in the manual or otherwise.

    Hey kempyau,
    To clarify your issue, are you looking to take the csv file of 3100 contacts and synchronize it with your BlackBerry?
    You can import the contacts in Mac Address Book and then use BlackBerry Desktop Software to synchronize the contacts.
    -ViciousFerret

Error when using HANA Studio to import a Delivery Unit

HANA version: 1.00.80.00.391861
HANA Studio version: 1.80.3
Platform: SUSE Linux Enterprise Server 11.2
I hit an error when using HANA Studio and following these steps to import a Delivery Unit:
    Launch HANA Studio 
    Select your HANA instance
    On the Quick launch page, choose Content -> Import
    Now Select HANA Content -> Delivery unit.
    Choose Next
    Select the server, browse the Service DU (Service DU on server: SYS/global/hdb/content): HCO_INA_SERVICE.tgz
Can anyone tell me what I should do?
Thanks.

I have resolved it.
If the OS directory privileges are 775 or anything else, you must set the directories listed in the screenshot to 777 with the 'chmod' command.

  • Error importing CSV files with 'hidden' characters using External Table

    Hi Folks
    Bit of a strange one here.
    We're well used to using the External Table method of loading data from CSV files into the database but a recent event has presented us with a problem.
    We have received some CSV files that 'look' like regular CSV files but Oracle will not load them.
When we examined the CSV file using VIM on a UNIX box, we saw the following 'hidden' character between every regular character in the file:
^@
So a string that looks like this when opened in Excel/WordPad etc.:
"TEST","TEXT"
looks like this when examined with VIM:
^@"^@T^@E^@S^@T^@"^@,^@"^@T^@E^@X^@T^@"
Has anyone come across this before?
    Many thanks
    Simon Gadd
    Oracle 11g 11.2.0.1.0

    Hi Simon,
    ^@ represents the NUL character (0x00).
    So, most likely, you've got a Unicode-encoded file.
You'll have to specify the character set in the record specification (and if necessary the byte order mark), for instance:
CREATE TABLE ext_table (
  col1 VARCHAR2(10),
  col2 VARCHAR2(10)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY dump_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    CHARACTERSET 'UTF16'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('dump.csv')
)
REJECT LIMIT UNLIMITED;
http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/et_params.htm#i1009499
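Once the external table is in place, the file can be read like any other table, which also makes it easy to sanity-check the character-set fix before loading anything (my_table below is a hypothetical permanent target):

-- Inspect the decoded rows straight from the CSV file
SELECT col1, col2 FROM ext_table;

-- Load them into a permanent table in one pass
INSERT INTO my_table (col1, col2)
SELECT col1, col2 FROM ext_table;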

  • Looking for a Notes app that I can import CSV files into?

    As the title says, I'm looking for a good notes application for my iPhone that I can import CSV files into. I have tried both Appigo and Notespark, but I can't easily scroll through them, as I have 2,000+ notes. Is there any app that I can import my notes into and also scroll through quickly? I'd like something that works similarly to how you scroll through songs on the iPhone/iTouch, with the column on the right where you can skip to songs (notes, in this case) that start with a certain letter.
    Thanks!

    Hi Tx Tar Heel,
    I've been using Office2HD: https://itunes.apple.com/us/app/office2-hd/id364361728?mt=8
It's cheaper than Numbers and it works for Word and PowerPoint files too. I like the Dropbox integration. I can start on my Office docs (Word, Excel, PowerPoint) in the office and then edit those files with Office2HD when I'm out of the office. Files save right back to Dropbox so that when I get back to the office the files are already updated. Not bad for a $7.99 app!
    Hope this helps!
    ~Joe

How to import a CSV file in Numbers 3.2.2

I started using Numbers instead of Excel. I would like to import CSV files from my bank, but when I open a CSV file in Numbers, everything is imported into the same cell. I composed a test file: 01/08/2014,"text","more text","even more text" in Pages, exported it to a text file and changed the extension from .txt to .csv. It did not help; everything was still in the same cell. What must be changed to import CSV files successfully? I am using Numbers 3.2.2 on an iMac with a 2.8 GHz Intel Core i7 processor and 8 GB of 1067 MHz DDR3 memory, running OS X 10.9.4.
    Thanks, Joan Voormolen

You can do this using Pages, without outside scripts or functions. The Pages Find/Replace function will let you change the delimiter on the data in your file.
    Open the file in Pages. Click Show Invisibles. (this will show you the delimiter used in the file)
    If you see a * as the delimiter, that is a space. Some data files are space delimited. This is a really poor way to delimit numerical data files.
    If you see a fat arrow to the right, the file is Tab delimited
    Obviously, a comma is not a hidden character. Some files are comma delimited
Whatever else might have been used as a delimiter (for example, a semicolon is sometimes used) will be apparent.
The delimiter should be something that is not used anywhere else in the "data" (text, numbers, etc.) you want to delimit. Numbers considers a comma a valid delimiter for files with the suffix .csv. It considers a tab a valid delimiter for files with the suffix .txt. It does not consider spaces a valid delimiter with any file suffix. But some programs use odd delimiters (semicolon, colon, double spaces, etc.).
Use the Find command, then Find/Replace, as needed to create a delimiter Numbers recognizes. Let's say a semicolon was used as a delimiter. Enter the current delimiter (semicolon) into the Find box. Pages should highlight all the instances of your entry. Enter a comma (to create a comma-delimited data file) in the Replace box. You should now see a comma as the delimiter.
Important: don't forget that any other comma used in the file will also be considered a delimiter (a comma in 1,000, for example). So check the data. If you see a comma used another way, you will want to eliminate that BEFORE you do the "comma as delimiter" replacement. If you have 1,000, first do a find/replace with a comma as the find and nothing as the replace; THEN do the replacement of the semicolon.
Now comes the "tricky" part, from what I could see. You want to save this new file with a suffix of .csv (export the file). Numbers will only open a comma-delimited file with separated data (by comma) if its suffix is .csv. Pages only gives you limited export options and puts the file suffix on for you automatically. CSV is not one of the options!
Choose Text. Pages will name the file .txt. Quit Pages. Go to the file on your desktop (or wherever you saved it). Change the file suffix from .txt to .csv.
That's it. Open the file with Numbers. Numbers will create a separate column for everything between the commas.
You can use this same method to alter your data file before you import it into Numbers. For example, one file I wanted to import had time=xxx. I only wanted the actual time, not the text attached to it, in my spreadsheet. I did a find/replace with "time=" as the find and a comma as the replace. Even though "time=xxx" is one "word", Pages identified the "time=" within the word to allow the replacement.
Numbers does not provide a "choose delimiter" function when opening a file. Instead it automatically uses the standard delimiter based on the file suffix. CSV means comma, so if the file is named .csv it will only look for and use a comma as the delimiter to put the data into separate columns. I believe .txt uses only a tab as the delimiter. In the above example you could find/replace to a tab, then export to Text, and Numbers will open the data into columns the way you want, without the extra step of renaming the file on your desktop.
While some files use a second space (i.e. two in a row) as a delimiter, that's a nasty way to delimit. You always want a specific delimiter that is not used within the data element.
The above is for importing numerical data into separate columns. You can use the same method to manipulate a file that contains text. Let's say you had a file with the suffix .txt. In the file are names and addresses: John Smith 246 Rose Road. You want name in one column, address in another. Look at all the spaces: which ones should be delimiters and which not? Are there any delimiters in the file?
If you open the file with Pages and choose Show Invisibles, you can see. You might see John Smith --> 246 Rose Road (the --> will look like a fat arrow in Pages). Numbers will open this file, IF it has .txt as the suffix, based on the tab, with name in one column and address in another.
Or you might see John*Smith**246*Rose*Road. Even though the creator of this intended two spaces to be a delimiter, Numbers does not recognize that; Numbers will put everything into one column. The fix? In Pages, put a tab between name and address: find/replace two spaces with a tab, then export as Text.
Based on what you see (with Show Invisibles active) in Pages, you can use the Find/Replace function to create the specific delimiter you want (tab or comma). You can use that function to manipulate the file easily so the data you want shows up in separate columns. You may need to get clever to accomplish the unique delimiters. You might even need to do two passes with Find/Replace.
In the instance above, if there was only one space between each element (not two as a pseudo-delimiter), you could replace all spaces with a tab in Pages and export as Text. Numbers will open that file with a column for each word (one for John, one for Smith). Then "Merge" the two cells (columns) you want to put back together.

  • Issue while loading a csv file using sql*loader...

    Hi,
    I am loading a csv file using sql*loader.
On the number columns that have data populated in them (decimal numbers/integers), the rows error out with -
ORA-01722: invalid number
I tried checking the value picked from the Excel source,
and found chr(13), chr(32) and chr(10) characters in the value.
ex: select length('0.21') from dual is giving a value of 7 (the pasted literal contains hidden characters that do not display).
When I checked each character, e.g.
select ascii(substr('0.21',5,1)) from dual is returning a value of 9 (a tab), etc.
I tried the following expression,
"to_number(trim(replace(replace(replace(replace(:std_cost_price_scala,chr(9),''),chr(32),''),chr(13),''),chr(10),'')))",
to remove all the non-numeric special characters, but I am still facing the error.
Please let me know any solution for this error.
    Thanks in advance.
    Kiran

    control file:
    OPTIONS (ROWS=1, ERRORS=10000)
    LOAD DATA
    CHARACTERSET WE8ISO8859P1
    INFILE '$Xx_TOP/bin/ITEMS.csv'
    APPEND INTO TABLE XXINF.ITEMS_STAGE
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
    ItemNum                    "trim(replace(replace(:ItemNum,chr(9),''),chr(13),''))",
    cross_ref_old_item_num               "trim(replace(replace(:cross_ref_old_item_num,chr(9),''),chr(13),''))",
    Mas_description               "trim(replace(replace(:Mas_description,chr(9),''),chr(13),''))",
    Mas_long_description               "trim(replace(replace(:Mas_long_description,chr(9),''),chr(13),''))",
    Org_description               "trim(replace(replace(:Org_description,chr(9),''),chr(13),''))",
    Org_long_description               "trim(replace(replace(:Org_long_description,chr(9),''),chr(13),''))",
    user_item_type                    "trim(replace(replace(:user_item_type,chr(9),''),chr(13),''))",
    organization_code               "trim(replace(replace(:organization_code,chr(9),''),chr(13),''))",
    primary_uom_code               "trim(replace(replace(:primary_uom_code,chr(9),''),chr(13),''))",
    inv_default_item_status          "trim(replace(replace(:inv_default_item_status,chr(9),''),chr(13),''))",
    inventory_item_flag               "trim(replace(replace(:inventory_item_flag,chr(9),''),chr(13),''))",
    stock_enabled_flag               "trim(replace(replace(:stock_enabled_flag,chr(9),''),chr(13),''))",
    mtl_transactions_enabled_flag          "trim(replace(replace(:mtl_transactions_enabled_flag,chr(9),''),chr(13),''))",
    revision_qty_control_code          "trim(replace(replace(:revision_qty_control_code,chr(9),''),chr(13),''))",
    reservable_type               "trim(replace(replace(:reservable_type,chr(9),''),chr(13),''))",
    check_shortages_flag               "trim(replace(replace(:check_shortages_flag,chr(9),''),chr(13),''))",
    shelf_life_code               "trim(replace(replace(replace(replace(:shelf_life_code,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    shelf_life_days               "trim(replace(replace(replace(replace(:shelf_life_days,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    lot_control_code               "trim(replace(replace(:lot_control_code,chr(9),''),chr(13),''))",
    auto_lot_alpha_prefix               "trim(replace(replace(:auto_lot_alpha_prefix,chr(9),''),chr(13),''))",
    start_auto_lot_number               "trim(replace(replace(:start_auto_lot_number,chr(9),''),chr(13),''))",
    negative_measurement_error          "trim(replace(replace(replace(replace(:negative_measurement_error,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    positive_measurement_error          "trim(replace(replace(replace(replace(:positive_measurement_error,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    serial_number_control_code          "trim(replace(replace(:serial_number_control_code,chr(9),''),chr(13),''))",
    auto_serial_alpha_prefix          "trim(replace(replace(:auto_serial_alpha_prefix,chr(9),''),chr(13),''))",
    start_auto_serial_number          "trim(replace(replace(:start_auto_serial_number,chr(9),''),chr(13),''))",
    location_control_code               "trim(replace(replace(:location_control_code,chr(9),''),chr(13),''))",
    restrict_subinventories_code          "trim(replace(replace(:restrict_subinventories_code,chr(9),''),chr(13),''))",
    restrict_locators_code               "trim(replace(replace(:restrict_locators_code,chr(9),''),chr(13),''))",
    bom_enabled_flag               "trim(replace(replace(:bom_enabled_flag,chr(9),''),chr(13),''))",
    costing_enabled_flag               "trim(replace(replace(:costing_enabled_flag,chr(9),''),chr(13),''))",
    inventory_asset_flag               "trim(replace(replace(:inventory_asset_flag,chr(9),''),chr(13),''))",
    default_include_in_rollup_flag          "trim(replace(replace(:default_include_in_rollup_flag,chr(9),''),chr(13),''))",
    cost_of_goods_sold_account          "trim(replace(replace(:cost_of_goods_sold_account,chr(9),''),chr(13),''))",
    std_lot_size                    "trim(replace(replace(replace(replace(:std_lot_size,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    sales_account                    "trim(replace(replace(:sales_account,chr(9),''),chr(13),''))",
    purchasing_item_flag               "trim(replace(replace(:purchasing_item_flag,chr(9),''),chr(13),''))",
    purchasing_enabled_flag          "trim(replace(replace(:purchasing_enabled_flag,chr(9),''),chr(13),''))",
    must_use_approved_vendor_flag          "trim(replace(replace(:must_use_approved_vendor_flag,chr(9),''),chr(13),''))",
    allow_item_desc_update_flag          "trim(replace(replace(:allow_item_desc_update_flag,chr(9),''),chr(13),''))",
    rfq_required_flag               "trim(replace(replace(:rfq_required_flag,chr(9),''),chr(13),''))",
    buyer_name                    "trim(replace(replace(:buyer_name,chr(9),''),chr(13),''))",
    list_price_per_unit               "trim(replace(replace(replace(replace(:list_price_per_unit,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    taxable_flag                    "trim(replace(replace(:taxable_flag,chr(9),''),chr(13),''))",
    purchasing_tax_code               "trim(replace(replace(:purchasing_tax_code,chr(9),''),chr(13),''))",
    receipt_required_flag               "trim(replace(replace(:receipt_required_flag,chr(9),''),chr(13),''))",
    inspection_required_flag          "trim(replace(replace(:inspection_required_flag,chr(9),''),chr(13),''))",
    price_tolerance_percent          "trim(replace(replace(replace(replace(:price_tolerance_percent,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    expense_account               "trim(replace(replace(:expense_account,chr(9),''),chr(13),''))",
    allow_substitute_receipts_flag          "trim(replace(replace(:allow_substitute_receipts_flag,chr(9),''),chr(13),''))",
    allow_unordered_receipts_flag          "trim(replace(replace(:allow_unordered_receipts_flag,chr(9),''),chr(13),''))",
    receiving_routing_code               "trim(replace(replace(:receiving_routing_code,chr(9),''),chr(13),''))",
    inventory_planning_code          "trim(replace(replace(:inventory_planning_code,chr(9),''),chr(13),''))",
    min_minmax_quantity               "trim(replace(replace(replace(replace(:min_minmax_quantity,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    max_minmax_quantity               "trim(replace(replace(replace(replace(:max_minmax_quantity,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    planning_make_buy_code               "trim(replace(replace(:planning_make_buy_code,chr(9),''),chr(13),''))",
    source_type                    "trim(replace(replace(:source_type,chr(9),''),chr(13),''))",
    mrp_safety_stock_code               "trim(replace(replace(:mrp_safety_stock_code,chr(9),''),chr(13),''))",
    material_cost                    "trim(replace(replace(:material_cost,chr(9),''),chr(13),''))",
    mrp_planning_code               "trim(replace(replace(:mrp_planning_code,chr(9),''),chr(13),''))",
    customer_order_enabled_flag          "trim(replace(replace(:customer_order_enabled_flag,chr(9),''),chr(13),''))",
    customer_order_flag               "trim(replace(replace(:customer_order_flag,chr(9),''),chr(13),''))",
    shippable_item_flag               "trim(replace(replace(:shippable_item_flag,chr(9),''),chr(13),''))",
    internal_order_flag               "trim(replace(replace(:internal_order_flag,chr(9),''),chr(13),''))",
    internal_order_enabled_flag          "trim(replace(replace(:internal_order_enabled_flag,chr(9),''),chr(13),''))",
    invoice_enabled_flag               "trim(replace(replace(:invoice_enabled_flag,chr(9),''),chr(13),''))",
    invoiceable_item_flag               "trim(replace(replace(:invoiceable_item_flag,chr(9),''),chr(13),''))",
    cross_ref_ean_code               "trim(replace(replace(:cross_ref_ean_code,chr(9),''),chr(13),''))",
    category_set_intrastat               "trim(replace(replace(:category_set_intrastat,chr(9),''),chr(13),''))",
    CustomCode                    "trim(replace(replace(:CustomCode,chr(9),''),chr(13),''))",
    net_weight                    "trim(replace(replace(replace(replace(:net_weight,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    production_speed               "trim(replace(replace(:production_speed,chr(9),''),chr(13),''))",
    LABEL                         "trim(replace(replace(:LABEL,chr(9),''),chr(13),''))",
    comment1_org_level               "trim(replace(replace(:comment1_org_level,chr(9),''),chr(13),''))",
    comment2_org_level               "trim(replace(replace(:comment2_org_level,chr(9),''),chr(13),''))",
    std_cost_price_scala               "to_number(trim(replace(replace(replace(replace(:std_cost_price_scala,chr(9),''),chr(32),''),chr(13),''),chr(10),'')))",
    supply_type                    "trim(replace(replace(:supply_type,chr(9),''),chr(13),''))",
    subinventory_code               "trim(replace(replace(:subinventory_code,chr(9),''),chr(13),''))",
    preprocessing_lead_time          "trim(replace(replace(replace(replace(:preprocessing_lead_time,chr(9),''),chr(32),''),chr(13),''),chr(10),''))",
    processing_lead_time                "trim(replace(replace(replace(replace(:processing_lead_time,chr(9),''),chr(32),''),chr(13),''),chr(10),''))",
    wip_supply_locator               "trim(replace(replace(:wip_supply_locator,chr(9),''),chr(13),''))"
    Sample data from csv file.
    "9901-0001-35","390000","JMKL16 Pipe bend 16 mm","","JMKL16 Putkikaari 16 mm","","AI","FJE","Ea","","","","","","","","","","","","","","","","","","","","","","","","","21-21100-22200-00000-00000-00-00000-00000","0","21-11100-22110-00000-00000-00-00000-00000","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","0.1","Pull","AFTER PROD","","","Locator for Production"
The load errors out especially on two columns:
1) std_cost_price_scala
2) list_price_per_unit
Both are number columns.
When data is provided in them, the load errors out; if they hold null values, the records go through fine.
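For reference, Oracle's DUMP function lists every character code in a value in one call, which is quicker than one ascii(substr(...)) per position for finding whichever character survives the replace chain. A minimal diagnostic sketch; paste the raw failing value from the SQL*Loader .bad file between the quotes ('0.21' below is only a placeholder, since the hidden characters do not survive forum posting):

-- The output (e.g. Typ=96 Len=7: 48,46,50,49,...) shows one code per character.
-- Any code other than the digits (48-57) and the dot (46) is a culprit;
-- chr(160), a non-breaking space, is a common one that a chr(32) cleanup misses.
SELECT DUMP('0.21') AS raw_bytes FROM dual;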

  • Importing CSV file and parsing it

    First of all I am very new to writing powershell code.  Therefore, my question could be very rudimentary, but I cannot find an answer, so please help.
I'm trying to read a CSV file and parse it. I cannot figure out how to access the nth element without hardcoding its name.
    $data = Import-Csv $file   #import CSV file
    # read column headers (manually read the first row of the data file, or import it from other source, or ...)
    $file_dump = Get-Content $file  #OK, I'm sure there is another way to get just the first line, but that's not relevant
    $name_list = $file_dump[0].split(",")
    # access element
    $temp = $data[$i].Name  # works - but that's HARDCODING the column name into the script - what if someone changes it?
    #but what I want to do is
    $temp = $data[$i].$name_list[0]
    How do I do this in PowerShell?

So you're asking how to get the first data point from the first column, no matter what the header is? In PowerShell you can use an expression as a property name by wrapping it in parentheses after the dot, so $data[$i].($name_list[0]) does what your last line intends.
Why won't you know what your input file looks like?
You can always drop the first line of the file to remove the existing headers and then use the -Header parameter of Import-Csv to give yourself known headers to reference (this will only work if you know how many columns to expect, etc.).
    http://ss64.com/ps/import-csv.html

  • Mav Contacts does not import csv file

Trying to import my Outlook Exchange contacts into Mavericks Contacts. I exported from Outlook to Excel and saved the Excel file in CSV format, then tried to import the CSV file into Mavericks Contacts: the error message says the CSV file is not compatible. I also imported the CSV file into Numbers and saved it again as another CSV file; same error message. Suggestions?

    The following was copied from the Contacts Help.
    Import contact information saved or exported from other apps. The files can be in LDAP Interchange Format (LDIF) or text format (tab-delimited or comma-separated value (CSV)).
    If you’re importing a tab-delimited or CSV file, make sure it’s formatted correctly using a text editor such as TextEdit.
    Remove any line breaks within a contact’s information.
    Make sure that fields in a tab-delimited file are separated by a tab, instead of by another character. Don’t include spaces before or after tabs.
    Make sure that fields in a CSV file are separated by a comma, instead of by another character. Don’t include spaces before or after commas.
    Make sure all addresses have the same number of fields. Add empty fields as needed.
    In Contacts, choose File > Import, select the file, change the encoding if necessary, then click Open.
    If you’re importing a text file, review the field labels.
    If the first card contains headers, make sure the headers are correctly labeled or marked “Do not import.” Any changes you make to this card are made to all cards in the file. To not import the headers card, select “Ignore first card.”
    To change a label, click the arrows next to the label and choose a new label. If you don’t want to import a field, choose “Do not import.”

  • Issue loading CSV file into HANA

    Hi,
For the last couple of weeks I have been trying to load my CSV file into a HANA table, but I am unable to succeed.
I am getting the error "Cannot open Control file, /dropbox/P1005343/CRM_OBJ_ID.CTL". I have followed each and every step on SDN, and still I could not load data into my HANA table.
    FTP: /dropbox/P1005343
    SQL Command:
    IMPORT FROM '/dropbox/P1005343/crm_obj_id.ctl'
    Error:
    Could not execute 'IMPORT FROM '/dropbox/P1005343/crm_obj_id.ctl''
    SAP DBTech JDBC: [2]: general error: Cannot open Control file, /dropbox/P1005343/crm_obj_id.ctl
    Please help me on this
    Regards,
    Praneeth

    Hi All,
I have successfully loaded the file into the HANA database in folder P443348, but while importing the file I am getting the following error message:
    SAP DBTECH JDBC: [2]  (at 13) : general error: Cannot open Control file, "/P443348/shop_facts.ctl"
    This is happening while I am executing the following import statement
    IMPORT FROM '/P443348/shop_facts.ctl';
I have tried several options, including changing the permissions of the folders and files, with no success. As of now my folder has full access (777).
Any help would be greatly appreciated so that I can proceed further.
    Thanks
    Vamsidhar
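For reference, the control file that IMPORT FROM executes is a plain text file containing an IMPORT DATA statement. A minimal sketch, assuming a hypothetical table "MYSCHEMA"."SHOP_FACTS" and both files sitting in the server-side directory /dropbox/P443348; the paths must match the files' real location and case exactly (the HANA host's Linux file system is case-sensitive), and the <sid>adm OS user needs read access to every directory in the path:

-- contents of /dropbox/P443348/shop_facts.ctl
IMPORT DATA
INTO TABLE "MYSCHEMA"."SHOP_FACTS"
FROM '/dropbox/P443348/shop_facts.csv'
RECORD DELIMITED BY '\n'
FIELD DELIMITED BY ','
OPTIONALLY ENCLOSED BY '"'
ERROR LOG '/dropbox/P443348/shop_facts.err'

It is then executed from the SQL console with:

IMPORT FROM '/dropbox/P443348/shop_facts.ctl';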

  • Is it possible to monitor State change of a .CSV file using powershell scripting ?

    Hi All,
I just would like to know: is it possible to monitor a state change in a .CSV file using PowerShell scripting? We have the SCOM tool, which has that capability, but there are some drawbacks in it that keep us from using it, so I would like
to know whether this is possible using PowerShell.
If there is any number above 303 in the .CSV file, then I need an email alert / notification for the same.
    Gautam.75801

    Hi Jrv,
    Thank you very much. I modified the above and it worked.
Import-Csv C:\SCOM_Tasks\GCC2010Capacitymanagement\CapacityMgntData.csv | ?{$_.Mailboxes -gt 303} | Export-Csv -Path C:\SCOM_Tasks\Mbx_Above303.csv
Send-MailMessage -Attachments "C:\SCOM_Tasks\Mbx_Above303.csv" -To "[email protected]" -From "abc@xyz" -SmtpServer [email protected] -Subject "Mailboxes are above 303 in Exchange databases" -Body "Mailboxes are above 303 in Exchange databases"
Mailboxes is the column I want to monitor for values above 303. The first command extracts all rows above 303 into another CSV file, and the second is a mail command to email me the same with the extracted file attached.
    Gautam.75801

  • Tips for editing VOB files using Pinnacle Studio

Summary: Have trouble importing VOB files into Pinnacle Studio for further editing? If so, follow this quick-start guide to learn how to prepare VOB videos for editing in Pinnacle Studio without quality loss.
I normally receive my material in a format I can use in Pinnacle Studio, but VOBs are not compatible. What I want to do with the current project is take the VOB I have and convert it for use with Pinnacle Studio. What I need is a VOB converter I can rely on.
After reading some reviews online and doing multiple rounds of testing, I found the best and easiest way to import VOB files into Pinnacle Studio, with the help of Brorsoft Video Converter.
It is an optimal VOB converter, which can help you convert VOB into a different format like MPEG-2 or AVI with the least quality loss. You should then be able to import the new file into Pinnacle Studio and edit smoothly without any trouble. It ensures a reliable VOB importing, playing and editing workflow with Pinnacle Studio 9/11/12/14/15.
How to convert VOB to a Pinnacle Studio editable format
1. Download, install and run the VOB to Pinnacle Studio converter; click the "Add Videos" icon to load your source .vob videos.
2. Click the "Format" bar and choose "Adobe Premiere/Sony Vegas > MPEG-2 (*.mpg)" as the output format from the drop-down menu. Of course, you can also choose AVI, MP4 or WMV from "Common Video" as the output format.
3. Click the "Settings" button if you'd like to customize advanced audio and video parameters like video codec, aspect ratio, bit rate, frame rate, audio codec, sample rate and audio channels.
4. Click the convert button under the preview window; the converter will start encoding the VOB for importing into Pinnacle Studio. As soon as the conversion is finished, just click the "Open" button to get the generated files for editing in Pinnacle Studio 14/15/16.
Source: How to split large VOB files for Pinnacle Studio
movies-videos-convert-tips.overblog.com/2014/02/how-to-make-video-object-.vob-format-is-it-editable-friendly-in-pinnacle-studio.html

If you have the VOB files, are they on a pre-existing DVD that will play on a set-top box (i.e. a VIDEO_TS folder with the VOB of your film inside it)?
Then you could use Toast or similar to make a copy of it.
Otherwise you can use a program like MPEG Streamclip to demux the VOB back to an M2V and AC3 (video and audio file) and use DVD-SP to make a new playable DVD. There would be no loss in quality if you are merely demuxing.

  • Adding row into existing CSV file using C#

How do I add a row to an existing CSV file using .NET code? The file should not be overwritten; another row of data needs to be appended. How do I implement this scenario?

    Hi BizQ,
If you just want to write some data to a CSV file, please follow A.Zaied's and Magnus's replies. In general, we use CSV files to import or export data; see the following thread and a good article on CodeProject:
    Convert a CSV file to Excel using C#
    Writing a DataTable to a CSV file
    Best regards,
    Kristin

  • How to get Document Set property values in a SharePoint library in to a CSV file using Powershell

    Hi,
    How to get Document Set property values in a SharePoint library into a CSV file using Powershell?
    Any help would be greatly appreciated.
    Thank you.
    AA.

    Hi,
According to your description, my understanding is that you want to get Document Set property values in a SharePoint library and then export them to a CSV file using PowerShell.
I suggest you get the Document Set properties with a PowerShell script like the one below:
# Load the SharePoint server object model
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$siteurl = "http://sp2013sps/sites/test"
$listname = "Documents"
$mysite = New-Object Microsoft.SharePoint.SPSite($siteurl)
$myweb = $mysite.OpenWeb()
$list = $myweb.Lists[$listname]
# Print the title of every empty Document Set in the list
foreach ($item in $list.Items)
{
    if ($item.ContentType.Name -eq "Document Set")
    {
        if ($item.Folder.ItemCount -eq 0)
        {
            Write-Host $item.Title
        }
    }
}
    Then you can use Export-Csv PowerShell Command to export to a CSV file.
    More information:
    Powershell for document sets
    How to export data to CSV in PowerShell?
    Using the Export-Csv Cmdlet
    Thanks
    Best Regards
    TechNet Community Support
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
    [email protected]

  • How to get DocSet property values in a SharePoint library into a CSV file using Powershell

    Hi,
    How to get DocSet property values in a SharePoint library into a CSV file using Powershell?
    Any help would be greatly appreciated.
    Thank you.
    AA.

    Hi AOK,
Would you please post your current script and the issue for more efficient support?
In addition, to manage Document Sets in SharePoint, please refer to this script to start:
### Load SharePoint snap-in
if ((Get-PSSnapin "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue) -eq $null)
{
    Add-PSSnapin Microsoft.SharePoint.PowerShell
}
### Load SharePoint object model
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

### Get web and list
$web = Get-SPWeb http://myweb
$list = $web.Lists["List with Document Sets"]

### Get Document Set content type from list
$cType = $list.ContentTypes["Document Set Content Type Name"]

### Create Document Set properties hashtable (+= adds entries instead of overwriting)
[Hashtable]$docsetProperties = @{"DocumentSetDescription"="A Document Set"}
$docsetProperties += @{"CustomColumn1"="Value 1"}
$docsetProperties += @{"CustomColumn2"="Value2"}
### Add all your columns for your Document Set

### Create new Document Set
$newDocumentSet = [Microsoft.Office.DocumentManagement.DocumentSets.DocumentSet]::Create($list.RootFolder,"Document Set Title",$cType.Id,$docsetProperties)
$web.Dispose()
    http://www.letssharepoint.com/2011/06/document-sets-und-powershell.html
    If there is anything else regarding this issue, please feel free to post back.
    Best Regards,
    Anna Wang
