Distinct on a subset of columns

Hi, I have a working query such as:
select distinct t1.col1, t2.col1, t2.col2, t3.col1
from table1 t1, table2 t2, table3 t3
where t1.some_id = t2.some_id and t2.some_val = t3.some_val
order by t1.col1
This goes through some code that outputs the same query but adds some fields at the beginning of the select statement (select count(1) over () fq_reserved_total_cnt,). This breaks my query, since the distinct is pushed away from the start of the select list. Without changing that code (the added prefix), how can I fix this distinct issue? The query already takes a while to run (I actually have more tables), so I'm looking for a solution that won't impact run time much:
select count(1) over () fq_reserved_total_cnt, distinct t1.col1, t2.col1, t2.col2, t3.col1
from table1 t1, table2 t2, table3 t3
where t1.some_id = t2.some_id and t2.some_val = t3.some_val
order by t1.col1
Thanks,

If
select count(1) over () fq_reserved_total_cnt, distinct t1.col1, t2.col1, t2.col2, t3.col1 
from table1 t1, table2 t2, table3 t3 
where t1.some_id = t2.some_id and t2.some_val = t3.some_val 
order by t1.col1
is the query your code generates, can it be tweaked as
select distinct col1_t1, col1_t2, col2_t2, col1_t3
from (
select count(1) over () fq_reserved_total_cnt, t1.col1 col1_t1, t2.col1 col1_t2, t2.col2 col2_t2, t3.col1 col1_t3
from table1 t1, table2 t2, table3 t3
where t1.some_id = t2.some_id and t2.some_val = t3.some_val
)
order by col1_t1
Thanks,
Vivek
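
An alternative that keeps the generated fq_reserved_total_cnt column intact is to phrase the part of the query you control as an inline view, so that the code's fixed prefix lands in front of a plain column list. This is only a sketch, assuming the wrapping code simply prepends its text to whatever select body you supply:

select count(1) over () fq_reserved_total_cnt, q.*
from (
select distinct t1.col1, t2.col1 col1_t2, t2.col2, t3.col1 col1_t3
from table1 t1, table2 t2, table3 t3
where t1.some_id = t2.some_id and t2.some_val = t3.some_val
) q
order by q.col1

Because the distinct now happens inside the inline view, the window count is taken over the deduplicated rows, and the extra view should cost next to nothing at run time.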

Similar Messages

  • Is it possible to give a user read access to an SAP table but to restrict it to a subset of columns?

    Hi,
    is it possible to give a user read access to an SAP table but to restrict it to a subset of columns?
    Thanks,
    Digesh

    Hi Digesh,
    If your requirement is to restrict access to specific rows, you can use S_TABU_LIN, but it works only for tables that contain org units, like plant, company code, etc.
    Please search for S_TABU_LIN if this is your requirement.
    Otherwise please follow Alex's suggestion.
    BR,
    Mangesh

  • Find table column pattern in a variable

    I have a variable @a. There is a table B with columns A and C. I need to find those values of column A where the column C value is contained in @a.
    For example @a = '123456abcd'
    Table B
    A     C
    1      abcd
    2      xyzt
    In the above case I need to get A = 1, since the C value is contained within @a.

    Hi John_nn,
    I’m confused about the desired result. Do you want to find all matched values of column A where the column C value is contained in @a, or all matched values of column A where @a is contained in the column C value? For the first scenario,
    you can refer to the first reply of Prashanth and the second reply of Tom; for the second scenario, the first reply of Tom and the second reply of Prashanth.
    If there is any misunderstanding, please elaborate on the issue for further investigation.
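    Those replies are not quoted here, but for the first scenario (the C value contained in @a) the test can be a simple LIKE; a minimal sketch using the sample data above:
    DECLARE @a VARCHAR(20) = '123456abcd';
    -- returns A = 1: 'abcd' appears inside @a, 'xyzt' does not
    SELECT A
    FROM B
    WHERE @a LIKE '%' + RTRIM(C) + '%';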
    Regards,
    Katherine Xiong
    TechNet Community Support

  • Flat File Import, Ignore Missing Columns?

    The text files I'm importing always contain either the fixed full set of columns (60 in total) or a subset of those columns (some csv files contain 40 columns, some 30 or 20 or any other number). I would like to import
    these csv files based on the column headers inside each one; when a file carries only a subset of the full column set, the missing columns should be ignored and filled with null.
    At the moment in SQL 2012, if I import a csv file with a subset of columns, the data doesn't import... I assume because the actual file doesn't include every column defined in the flat file source object?
    Is it possible to accomplish this without dynamically selecting the columns, or using a script component?
    Thanks for the help.
    Sea Cloud

    If the columns coming in are dynamic, then you might have to first load each row into a staging table with a single column holding the entire row contents, and then parse out the column information using string-parsing logic as below
    http://visakhm.blogspot.in/2010/02/parsing-delimited-string.html
    This will help you to understand what columns are present based on which you can do the insertion to your table.
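    As a rough T-SQL sketch of that staging idea (the file path is illustrative, and STRING_SPLIT needs SQL Server 2016+; on 2012 you would use a splitter function like the one in the linked post):
    -- land every raw line of the csv in a single wide column
    CREATE TABLE #staging (line NVARCHAR(MAX));
    BULK INSERT #staging
    FROM 'C:\import\file1.csv'
    WITH (ROWTERMINATOR = '\n');
    -- split the header row to see which of the 60 columns this file carries
    -- (a real version needs a reliable way to identify the header row, e.g. a line number)
    DECLARE @header NVARCHAR(MAX) = (SELECT TOP (1) line FROM #staging);
    SELECT value AS present_column
    FROM STRING_SPLIT(@header, ',');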
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Can you specify the columns that get written to a spreadsheet when using the query attribute of cfspreadsheet?

    Hello, I wanted to see if you can control which columns from a query get written to an Excel spreadsheet. I have some items in my query that I do not want in the spreadsheet but need for my query, if that makes sense.
    Thanks.
    Steve

    There are no parameters in the cfSpreadsheet tag to choose specific columns, however you can use a query of queries (About Query of Queries - ColdFusion English Documentation - Adobe Learning Resources) to get a query object with a subset of columns to pass to the tag.
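    As a rough sketch of that approach (qAll, col1 and col2 are placeholder names, and the output path is illustrative):
    <!--- re-query the original query object, keeping only the columns the spreadsheet needs --->
    <cfquery name="qSubset" dbtype="query">
        SELECT col1, col2
        FROM qAll
    </cfquery>
    <!--- write just the subset to the spreadsheet --->
    <cfspreadsheet action="write" filename="#expandPath('report.xlsx')#" query="qSubset" overwrite="true">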

  • Row and Column Level Select Permission

    Hello Friends,
    I am using Oracle9i Enterprise Edition Release 9.2.0.1.0 on Windows XP. I have two questions. How do I set:
    1. Row-level select permission?
    2. Column-level select permission?
    1. I have a table with 100 records in it. I don't want all users to see them: if user1, user2 and user3 run select * from mytable they should get all the rows, while other users (including sys) should not get all rows, only those from the 11th record onward.
    Though it can be managed by using another table, I am looking for a different solution.
    2. Likewise, if I don't want to allow all the columns to be fetched; suppose column4 holds confidential info and should be visible only to user1, user2 and user3, not to any other user. What should I do?
    Please guide and help me.
    Regards

    You would need to use Virtual Private Database (VPD) / row-level security (RLS) to apply row-level security policies to the table. The DBMS_RLS package is used for this:
    http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_rls.htm#sthref6168
    Unfortunately, column-level security wasn't available in 9.2. You would need to upgrade to Oracle 10g to get that functionality. Before that, you would have to create views that select appropriate subsets of columns and grant permissions on those views to different users.
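    For the row-level part, a minimal DBMS_RLS sketch (assuming an ID column to define "the 11th record", and illustrative schema/user names):
    -- policy function: privileged users see everything, everyone else sees id > 10
    CREATE OR REPLACE FUNCTION hide_first_ten (p_schema IN VARCHAR2, p_object IN VARCHAR2)
      RETURN VARCHAR2
    AS
    BEGIN
      IF SYS_CONTEXT('USERENV', 'SESSION_USER') IN ('USER1', 'USER2', 'USER3') THEN
        RETURN NULL;      -- no predicate: these users see all rows
      END IF;
      RETURN 'id > 10';   -- others only see from the 11th record onward
    END;
    /
    BEGIN
      DBMS_RLS.ADD_POLICY(
        object_schema   => 'SCOTT',
        object_name     => 'MYTABLE',
        policy_name     => 'ROWS_FROM_11',
        function_schema => 'SCOTT',
        policy_function => 'HIDE_FIRST_TEN',
        statement_types => 'SELECT');
    END;
    /
    Note that SYS is exempt from VPD policies by default, so the "including sys" part of the requirement cannot be met this way.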
    Justin

  • Inserting value in the single column table where the only column is a identity column

    Hi,
    Following is my table definition:
    create table test1 (col1 int primary key identity(1,1))
    Now here, the only column col1 in the table is an identity column. Now I want to add explicit values to this column
    without SET Identity_Insert test1 On.
    How?

    You have a fundamental misconception. IDENTITY is not a column; it is a table property and it is totally non-relational. This is a left-over from UNIX file systems, which were built on magnetic tape files. It is a sequential data
    model which counts the insertion attempts to get a physical record number. By definition IDENTITY cannot be a valid key; it is not a valid data type (all data types in SQL are NULL-able); keys are a subset of columns in a table. 
    Have you ever read a single book on RDBMS? You can start with MANGA GUIDE TO DATABASE, it is the clearest intro book I know. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • Document List columns

    Hi All,
    I have spaces up and running as well as UCM. It is all connected fine and works.
    The only trouble is, in UCM I have an advanced metadata model, and when I go to choose the columns to display there is only a small subset of columns and I cannot choose the metadata fields I have created.
    This happens on all the widgets in the content management functionality, e.g. document explorer, document list viewer etc. I can see how to use any metadata field to form the basis of a search, but not as a column to display.
    Does anyone know how I can add one of my metadata fields as a column?
    Thanks
    John

    I don't know for sure, but you have to check with Oracle sales. Maybe you can use OHS under a restricted license for WebCenter, just as you have UCM with a restricted license. But I'm not sure if this is also the case for OHS...
    The reason why you need this is that when you configure OHS so that both WebCenter and UCM point to the same host/port, a lot of these "advanced" features of UCM are enabled. If you do not configure an OHS, WebCenter will treat UCM as a standard document service and you will not be able to see custom metadata fields in the advanced tab.
    When you enable OHS and configure it for WebCenter/UCM, you are able to use profiles in UCM. This means that when you check in content and assign it to a profile, all the metadata fields that are bound to the profile will be visible in the advanced section of the metadata in WebCenter.
    I don't know if this would work with a regular Apache configured for redirecting. There is a free mod for Apache so you can configure it so that /webcenter points to localhost:8888/webcenter and /cs points to localhost:16200/cs, but I don't know if this will do the same as using OHS, because OHS uses a custom mod from Oracle...
    If you don't want to pay for OHS and you want to display the metadata, then you can also create your own custom task flows or content presenter templates. You won't be able to show the metadata in the default task flows like the document manager, folder view, list view and so on, but you have the tools to create it for yourself. You can use the RIDC API, which allows you to do anything you like. You can also use the content presenter to show documents and their metadata.
    If you really need to show metadata and the Apache route does not work and you don't want to pay for OHS, then I suggest you create a custom "document manager" task flow with the RIDC API from UCM. Together with the content presenter you should be able to achieve what you want.

  • How to refresh/apply column value default setting on current files or folders

    Hi All
    I have set-up default column data per folder in my library (via
    Library Settings > Column default settings) and it works great for new documents or folders that are added to the library.
    But what do I do if I have an existing Library with folders and files and need to apply default column data to each? Is there a way of "refreshing" the default columns so that the data is populated through a specific folder and/or its sub-folders?
    (I really hope this is an easy fix or just a setting that I over-looked somewhere!)
    Thank you!

    I had to do this as well recently, and remembered your post.
    Here is the function I wrote; it worked for text, choice, and metadata columns.
    It is pretty slow and could be optimized and broken up into more functions, but I had to do several things:
    1. I mass-updated the content types in a library
    2. On Library settings: set default values and also different defaults per folder
    3. For each file I then needed to:
    3.a. either copy the value from a column in the old content type to the new, or
    3.b. set the column to the default
    So this function does step 3. Like I said, it works for certain types of columns and can be sped up (I used it to update 700 files in a couple of minutes), and it makes some assumptions about the environment, but it is at least a starting point.
    As Alex said you may want to change SystemUpdate($true) to just Update(), depending on your requirements.
    <#
    .SYNOPSIS
    Resets columns in a document library to defaults for blank columns. Use this
    after changing the content types or adding columns to a doc lib with existing files
    .DESCRIPTION
    Resets columns in a doc lib to their defaults. Will only set them if the columns are blank (unless overridden)
    Will also copy some values from one column to another while you are there.
    Can restrict the update to a subset of columns, or have it look for all columns with defaults.
    Will use the list defaults as well as folder defaults.
    All names of columns passed in should use InternalName.
    This has ONLY been tested on Text, Choice, Metadata, and multi-Choice and multi-Metadata columns
    Pass in a list and it will recursively travel down the list to update all items with the defaults for the items in that folder.
    If you call it on a folder, it will travel up the tree of folders to find the proper defaults
    Author:
    Chris Buchholz
    [email protected]
    @plutosdad
    .PARAMETER list
    The document library to update. Using this parameter it will update all files in the doc lib
    .PARAMETER folder
    The folder containing files to update. Function will update all files in this folder and subfolders.
    .PARAMETER ParentFolderDefaults
    Hashtable of internal field names as KEY, and value VALUE, summing up all the parent folders or list defaults.
    If not supplied, then the function will travel up the tree of folders to the parent doclib to determine
    the correct defaults to apply.
    If the field is managed metadata, then the value is a string
    Currently only tested for string and metadata values, not lookup or date
    .PARAMETER termstore
    The termstore to use if you are going to update managed metadata columns, this assumes we are only using the one termstore for all columns to update
    If you are using the site collection specific termstore for some columns you want to update, and
    the central termstore for others, then you should call this method twice, once with each termstore,
    and specify the respective columns in fieldsToUpdate
    .PARAMETER fieldsToCopy
    Hashtable of internal field names, where KEY is the "to" field, and VALUE is the "from" field
    Use this to copy values from one field to another for the item.
    These override the defaults, and also cause the "from" (Value) fields to NOT be overwritten with defaults even if
    they are in the fieldsToUpdate array.
    Example: @{"MyNewColumn" = "My_x0020_Old_x0020_Column"}
    .PARAMETER fieldsToUpdate
    If supplied then the method will update only the fields in this array to their default values, if null then it will update
    all fields that have defaults.
    If you pass in an empty array, then this method will only copy fields in the fieldtocopy and not
    apply any defaults
    Example: @() - to only copy and not set any fields to default
    Example2: @('UpdateField1','UpdateField2') will update only those two fields to their default values
    .EXAMPLE
    Set-SPListItemValuesToDefaults -list $list -fieldsToCopy @{"MyNewColumn" = "My_x0020_Old_x0020_Column"} -fieldsToUpdate @() -overwrite -termStore $termStore
    This will not set any defaults, but instead only set MyNewColumn to non null values of My_x0020_Old_x0020_Column
    It will overwrite any values of MyNewColumn
    .EXAMPLE
    Set-SPListItemValuesToDefaults -list $list -overwrite
    This will set all columns to their default values even if they are filled in already
    .EXAMPLE
    Set-SPListItemValuesToDefaults -folder $list.RootFolder.SubFolder[3].SubFolder[5]
    This will set all columns to their defaults in the given subfolder of a library
    .EXAMPLE
    Set-SPListItemValuesToDefaults -list $list -fieldsToUpdate @('ColumnOneInternalName','ColumnTwoInternalName')
    This will set columns ColumnOneInternalName and ColumnTwoInternalName to their defaults for all items where they are currently null
    .EXAMPLE
    Set-SPListItemValuesToDefaults -list $list -fieldsToCopy @{"MyNewColumn" = "My_x0020_Old_x0020_Column"} -fieldsToUpdate @("MyNewColumn") -termStore $termStore
    This will set all MyNewColumn values to their default, and then also copy the values of My_x0020_Old_x0020_Column to MyNewColumn where the old column is not null,
    but both of these will only happen for items where MyNewColumn is null
    .EXAMPLE
    Set-SPListItemValuesToDefaults -list $list -fieldsToCopy @{"MyNewColumn" = "My_x0020_Old_x0020_Column"} -termStore $termStore
    This will set ALL columns with defaults to the default value (if the item's value is null),
    except for My_x0020_Old_x0020_Column which will not be modified even if it has a default value, and will also set MyNewColumn to the
    value of My_x0020_Old_x0020_Column if the old value is not null
    #>
    function Set-SPListItemValuesToDefaults {
        [CmdletBinding(SupportsShouldProcess=$true)]
        param(
            [Parameter(Mandatory=$true,ValueFromPipeline=$true,ParameterSetName="List")][Microsoft.SharePoint.SPList]$list,
            [Parameter(Mandatory=$true,ValueFromPipeline=$true,ParameterSetName="Folder")][Microsoft.SharePoint.SPFolder]$folder,
            [Parameter(Mandatory=$false,ParameterSetName="Folder")][HashTable]$ParentFolderDefaults,
            [Parameter(Mandatory=$false)][HashTable]$fieldsToCopy,
            [Parameter(Mandatory=$false)][Array]$fieldsToUpdate,
            [Parameter(Mandatory=$false)][Microsoft.SharePoint.Taxonomy.TermStore]$termStore,
            [Switch]$overwrite,
            [Switch]$overwriteFromFields
        )
        begin {
            #one or both can be null, but if both empty, then nothing to do
            if ($null -ne $fieldsToUpdate -and $fieldsToUpdate.Count -eq 0 -and
                ( $null -eq $fieldsToCopy -or $fieldsToCopy.Count -eq 0)) {
                Write-Warning "No fields to update OR copy"
                return
            }
            if ($PSCmdlet.ParameterSetName -eq "Folder") {
                $list = $folder.DocumentLibrary
            }
            if ($null -eq $termStore ) {
                $taxonomySession = Get-SPTaxonomySession -site $list.ParentWeb.Site
                $termStores = $taxonomySession.TermStores
                $termStore = $termStores[0]
            }
            #if we did not pass in the parent folder defaults then we must go backward up tree
            if ($PSCmdlet.ParameterSetName -eq "Folder" -and $null -eq $ParentFolderDefaults ) {
                $ParentFolderDefaults = @{}
                #column default settings for the library (assumes the SharePoint
                #Microsoft.Office.DocumentManagement.MetadataDefaults class, which the
                #original post used but never showed being created)
                $columnDefaults = New-Object Microsoft.Office.DocumentManagement.MetadataDefaults($list)
                if ($null -eq $fieldsToUpdate -or $fieldsToUpdate.Count -gt 0) {
                    write-Debug "ParentFolderDefaults is null"
                    $tempfolder = $folder.ParentFolder
                    while ($tempfolder.ParentListId -ne [Guid]::Empty) {
                        Write-Debug "at folder $($tempfolder.Url)"
                        $pairs = $columnDefaults.GetDefaultMetadata($tempfolder)
                        foreach ($pair in $pairs) {
                            if (!$ParentFolderDefaults.ContainsKey($pair.First)) {
                                Write-Debug "Folder $($tempfolder.Name) default: $($pair.First) = $($pair.Second)"
                                $ParentFolderDefaults.Add($pair.First,$pair.Second)
                            }
                        }
                        $tempfolder = $tempfolder.ParentFolder
                    }
                    #listdefaults
                    Write-Debug "at list"
                    foreach ($field in $folder.DocumentLibrary.Fields) {
                        if ($field.InternalName -eq "_ModerationStatus") { continue }
                        if (![String]::IsNullOrEmpty($field.DefaultValue)) {
                            #Write-Verbose "List default found key $($field.InternalName)"
                            if (!$ParentFolderDefaults.ContainsKey($field.InternalName)) {
                                Write-Debug "List Default $($field.InternalName) = $($field.DefaultValue)"
                                $ParentFolderDefaults.Add($field.InternalName,$field.DefaultValue)
                            }
                        }
                    }
                }
            }
        }
        process {
            Write-Debug "Calling with $($PSCmdlet.ParameterSetName)"
            Write-Debug "Parent folder hash has $($ParentFolderDefaults.Count) items"
            if ($PSCmdlet.ParameterSetName -eq "List" ) {
                $folder = $list.RootFolder
                $ParentFolderDefaults = @{}
                if ($null -eq $fieldsToUpdate -or $fieldsToUpdate.Count -gt 0) {
                    foreach ($field in $list.Fields) {
                        if ($field.InternalName -eq "_ModerationStatus") { continue }
                        if (![String]::IsNullOrEmpty($field.DefaultValue)) {
                            Write-Debug "List Default $($field.InternalName) = $($field.DefaultValue)"
                            $ParentFolderDefaults.Add($field.InternalName,$field.DefaultValue)
                        }
                    }
                }
            }
            #same assumption as in begin: the column default settings live in MetadataDefaults
            $columnDefaults = New-Object Microsoft.Office.DocumentManagement.MetadataDefaults($list)
            Write-Verbose "At folder $($folder.Url)"
            $FolderDefaults = @{}
            $FolderDefaults += $ParentFolderDefaults
            if ($null -eq $fieldsToUpdate -or $fieldsToUpdate.Count -gt 0) {
                $pairs = $columnDefaults.GetDefaultMetadata($folder)
                foreach ($pair in $pairs) {
                    if ($FolderDefaults.ContainsKey($pair.First)) {
                        $FolderDefaults.Remove($pair.First)
                    }
                    Write-Debug "Folder $($folder.Name) default: $($pair.First) = $($pair.Second)"
                    $FolderDefaults.Add($pair.First,$pair.Second)
                }
            }
            #set values
            foreach ($file in $folder.Files) {
                if ($file.CheckOutType -ne [Microsoft.SharePoint.SPFile+SPCheckOutType]::None) {
                    Write-Warning "File $($file.Url) CheckOutType = $($file.CheckOutType) ... skipping"
                    continue
                }
                $item = $file.Item
                $ItemDefaults = @{}
                $ItemDefaults += $FolderDefaults
                #if we only want certain fields then remove the others
                if ($null -ne $fieldsToUpdate ) {
                    $ItemDefaults2 = @{}
                    foreach ($fieldInternalName in $fieldsToUpdate) {
                        try {
                            $ItemDefaults2.Add($fieldInternalName,$ItemDefaults[$fieldInternalName])
                        } catch { } #who cares if not in list
                    }
                    $ItemDefaults = $ItemDefaults2
                }
                #do not overwrite already filled in values unless specified
                if (!$overwrite) {
                    $keys = @($ItemDefaults.Keys)   #copy, so we can remove while iterating
                    for ($i = $keys.Count - 1; $i -ge 0; $i-- ) {
                        $key = $keys[$i]
                        try {
                            $val = $item[$item.Fields.GetFieldByInternalName($key)]
                            if ($val -ne $null) {
                                $ItemDefaults.Remove($key)
                            }
                        } catch {} #if fieldname does not exist then ignore, we should check for this earlier
                    }
                }
                #do not overwrite FROM fields in copy list unless specified
                if (!$overwriteFromFields) {
                    if ($null -ne $fieldsToCopy -and $fieldsToCopy.Count -gt 0) {
                        foreach ($value in $fieldsToCopy.Values) {
                            try {
                                $ItemDefaults.Remove($value)
                            } catch {} #who cares if not in list
                        }
                    }
                }
                #do not overwrite TO fields in copy list if we're going to copy instead
                if (!$overwriteFromFields) {
                    if ($null -ne $fieldsToCopy -and $fieldsToCopy.Count -gt 0) {
                        foreach ($key in $fieldsToCopy.Keys) {
                            $fromfield = $item.Fields.GetFieldByInternalName($fieldsToCopy[$key])
                            try {
                                if ($null -ne $item[$fromfield]) {
                                    $ItemDefaults.Remove($key)
                                }
                            } catch {} #who cares if not in list
                        }
                    }
                }
                Write-Verbose $item.Url
                $namestr = [String]::Empty
                if ($ItemDefaults.Count -eq 0) {
                    write-Verbose "No defaults, copy only"
                } else {
                    $str = $ItemDefaults | Out-String
                    $namestr += $str
                    Write-Verbose $str
                }
                if ($null -ne $fieldsToCopy -and $fieldsToCopy.Count -gt 0) {
                    $str = $fieldsToCopy | Out-String
                    $namestr += $str
                }
                if ($PSCmdlet.ShouldProcess($item.Url,"Set Values: $namestr")) {
                    #defaults
                    if ($null -ne $ItemDefaults -and $ItemDefaults.Count -gt 0) {
                        foreach ($key in $ItemDefaults.Keys) {
                            $tofield = $item.Fields.GetFieldByInternalName($key)
                            if ($tofield.TypeAsString -like "TaxonomyFieldType*") {
                                $taxfield = [Microsoft.SharePoint.Taxonomy.TaxonomyField]$tofield
                                $taxfieldValue = New-Object Microsoft.SharePoint.Taxonomy.TaxonomyFieldValue($tofield)
                                $lookupval = $ItemDefaults[$key]
                                $termval = $lookupval.Substring( $lookupval.IndexOf('#')+1)
                                $taxfieldValue.PopulateFromLabelGuidPair($termval)
                                if ($tofield.TypeAsString -eq "TaxonomyFieldType") {
                                    $taxfield.SetFieldValue($item,$taxfieldValue)
                                } else {
                                    #multi
                                    $taxfieldValues = New-Object Microsoft.SharePoint.Taxonomy.TaxonomyFieldValueCollection $tofield
                                    $taxfieldValues.Add($taxfieldValue)
                                    $taxfield.SetFieldValue($item,$taxfieldValues)
                                }
                            } else {
                                $item[$tofield] = $ItemDefaults[$key]
                            }
                        }
                    }
                    #copyfields
                    if ($null -ne $fieldsToCopy -and $fieldsToCopy.Count -gt 0) {
                        #$fieldsToCopy | Out-String | Write-Verbose
                        foreach ($key in $fieldsToCopy.Keys) {
                            $tofield = $item.Fields.GetFieldByInternalName($key)
                            $fromfield = $item.Fields.GetFieldByInternalName($fieldsToCopy[$key])
                            if ($null -eq $item[$fromfield] -or ( !$overwrite -and $null -ne $item[$tofield] )) {
                                continue
                            }
                            if ($tofield.TypeAsString -eq "TaxonomyFieldType" -and
                                $fromfield.TypeAsString -notlike "TaxonomyFieldType*" ) {
                                #non taxonomy to taxonomy
                                $taxfield = [Microsoft.SharePoint.Taxonomy.TaxonomyField]$tofield
                                $termSet = $termStore.GetTermSet($taxfield.TermSetId)
                                [String]$fromval = $item[$fromfield]
                                $vals = $fromval -split ';#' | where {![String]::IsNullOrEmpty($_)}
                                if ($null -ne $vals -and $vals.Count -ge 0 ) {
                                    $val = $vals[0]
                                    if ($vals.Count -gt 1) {
                                        write-Warning "$($item.Url) Found more than one value in $($fromfield.InternalName)"
                                        continue
                                    }
                                    $terms = $termSet.GetTerms($val,$true)
                                    if ($null -ne $terms -and $terms.Count -gt 0) {
                                        $term = $terms[0]
                                        $taxfield.SetFieldValue($item,$term)
                                        Write-Verbose "$($tofield.InternalName) = $($term.Name)"
                                    } else {
                                        Write-Warning "Could not determine term for $($fromfield.InternalName) for $($item.Url)"
                                        continue
                                    }
                                }
                            } elseif ($tofield.TypeAsString -eq "TaxonomyFieldTypeMulti" -and
                                      $fromfield.TypeAsString -notlike "TaxonomyFieldType*" ) {
                                Write-Debug "we are here: $($item.Name): $($fromfield.TypeAsString) to $($tofield.TypeAsString )"
                                #non taxonomy to multi taxonomy
                                $taxfield = [Microsoft.SharePoint.Taxonomy.TaxonomyField]$tofield
                                $termSet = $termStore.GetTermSet($taxfield.TermSetId)
                                $taxfieldValues = New-Object Microsoft.SharePoint.Taxonomy.TaxonomyFieldValueCollection $tofield
                                [String]$fromval = $item[$fromfield]
                                $vals = $fromval -split ';#' | where {![String]::IsNullOrEmpty($_)}
                                foreach ($val in $vals){
                                    $terms = $termSet.GetTerms($val,$true)
                                    if ($null -ne $terms -and $terms.Count -gt 0) {
                                        $term = $terms[0]
                                        $taxfieldValue = New-Object Microsoft.SharePoint.Taxonomy.TaxonomyFieldValue($tofield)
                                        $taxfieldValue.TermGuid = $term.Id.ToString()
                                        $taxfieldValue.Label = $term.Name
                                        $taxfieldValues.Add($taxfieldValue)
                                    } else {
                                        Write-Warning "Could not determine term for $($fromfield.InternalName) for $($item.Url)"
                                        continue
                                    }
                                }
                                #,[Microsoft.SharePoint.Taxonomy.StringMatchOption]::ExactMatch,
                                $taxfield.SetFieldValue($item,$taxfieldValues)
                                $valsAsString = $taxfieldValues | Out-String
                                Write-Debug "$($tofield.InternalName) = $valsAsString"
                            } elseif ($tofield.TypeAsString -eq "TaxonomyFieldTypeMulti" -and
                                      $fromfield.TypeAsString -eq "TaxonomyFieldType" ) {
                                #single taxonomy to multi
                                $taxfieldValues = New-Object Microsoft.SharePoint.Taxonomy.TaxonomyFieldValueCollection $tofield
                                $taxfield = [Microsoft.SharePoint.Taxonomy.TaxonomyField]$tofield
                                $taxfieldValues.Add($item[$fromfield])
                                $taxfield.SetFieldValue($item,$taxfieldValues)
                                $valsAsString = $taxfieldValues | Out-String
                                Write-Verbose "$($tofield.InternalName) = $valsAsString"
                            } elseif ($tofield.TypeAsString -eq "TaxonomyFieldType" -and
                                      $fromfield.TypeAsString -eq "TaxonomyFieldTypeMulti" ) {
                                #multi taxonomy to single taxonomy
                                Write-Warning "multi to non multi - what to do here"
                                continue
                            } elseif ($tofield.TypeAsString -eq "Lookup" -and
                                      $fromfield.TypeAsString -ne "Lookup" ) {
                                #non lookup to lookup
                                Write-Warning "non lookup to lookup - still todo"
                                continue
                            } else {
                                #straight copy
                                $item[$tofield] = $item[$fromfield]
                            }
                        }
                    }
                    $item.SystemUpdate($false)
                }
            }
            #recurse into subfolders, passing this folder's defaults down
            $folders = $folder.SubFolders | where { $_.Name -ne "Forms" }
            $folders | Set-SPListItemValuesToDefaults -ParentFolderDefaults $FolderDefaults -fieldsToCopy $fieldsToCopy -fieldsToUpdate $fieldsToUpdate -overwrite:$overwrite -overwriteFromFields:$overwriteFromFields -termStore $termStore
        }
    }

  • SQL Query to re-order the columns

    Hello All,
    I want to know if it is possible to re-order the columns of a table once the rows has been inserted in all the columns.
    So for e.g I have initially a table containing 3 columns COL1, COL2 & COL3.
    Now after data has been inserted into the table, I want to re-order the columns like COL3, COL2, COL1 or maybe COL2, COL1, COL3 etc., keeping the data intact.
    Cheers,
    Parag

    Parag Kalra wrote:
    I want to know if it is possible to re-order the columns of a table once the rows have been inserted in all the columns.
    So for e.g I have initially a table containing 3 columns COL1, COL2 & COL3.
    Now after data has been inserted into the table, I want to re-order the columns like COL3, COL2, COL1 or maybe COL2, COL1, COL3 etc., keeping the data intact.
    Why? What is your reason for wanting to do this? What do you want to achieve by it? If we understand the actual problem, then we may be able to provide some usable suggestions.
    The reason why your request makes very little sense is that the physical sequence of columns in a row in a datablock has no impact on you as a developer writing code.
    Why? Because you control the order in which you want to select columns. You control the order in which you insert columns. You control the order in which you update columns.
    SQL allows you, the programmer, to specify the sequence of columns, and the subset of columns, that you want to use in your SQL.
    Why would you want to change the physical table definition, and rewrite the entire table on disk (using very expensive I/O), just to reorder the physical column sequence?
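    That said, if the order only matters for presentation, a view gives you any column order you like without rewriting a single block on disk (a minimal sketch using the example's columns):
    create or replace view mytable_reordered as
    select col3, col2, col1
    from mytable;
    Queries against the view (including select *) see the columns in the new order, while the table itself is untouched.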

  • Parse column with csv string into table with one row per item

    I have a table (which has less than 100 rows), ifs_tables, that has two columns: localtable and fields. Localtable is a table name and fields contains a subset of columns from that table as a comma-delimited list, e.g. 'Fname,Lname'. It looks like
    this:
    localtable         fields
    =========  =============
    customertable   fname,lname
    accounttable     type,accountnumber
    Want to end up with a new table that has one row per column. It should look like this:
    TableName             ColumnName
    ============ ==========
    CustomerTable        Fname
    CustomerTable        Lname
    AccountTable          Type
    AccountTable          AccountNumber
    Tried this code but have two issues: (1) my query using the SplitFields function gets "Subquery returned more than 1 value"; (2) some of my fields values have hundreds of columns in the comma-delimited list, so it returns "Msg 530, Level 16, State
    1, Line 8. The statement terminated. The maximum recursion 100 has been exhausted before statement completion." and I need a maxrecursion greater than 100. Tried adding OPTION (MAXRECURSION 0) in the Split function on the SELECT statement that calls the CTE, but
    the syntax is not correct.
    Can someone help me to get this sorted out? Thanks
    DROP FUNCTION [dbo].[SplitFields]
    go
    CREATE FUNCTION [dbo].[SplitFields]
    (
    @String NVARCHAR(4000),
    @Delimiter NCHAR(1)
    )
    RETURNS TABLE
    AS
    RETURN
    (
    WITH Split(stpos,endpos)
    AS(
    SELECT 0 AS stpos, CHARINDEX(@Delimiter,@String) AS endpos
    UNION ALL
    SELECT endpos+1, CHARINDEX(@Delimiter,@String,endpos+1)
    FROM Split
    WHERE endpos > 0
    )
    SELECT 'Id' = ROW_NUMBER() OVER (ORDER BY (SELECT 1)),
    'Data' = SUBSTRING(@String,stpos,COALESCE(NULLIF(endpos,0),LEN(@String)+1)-stpos)
    FROM Split
    -- OPTION (MAXRECURSION 0) is not allowed inside an inline TVF;
    -- the hint must go on the outer statement that selects from the function
    )
    GO
    IF OBJECT_ID('tempdb..#ifs_tables') IS NOT NULL DROP TABLE #ifs_tables
    SELECT *
    INTO #ifs_tables
    FROM (
    SELECT 'CustomerTable' , 'Lname,Fname' UNION ALL
    SELECT 'AccountTable' , 'Type,AccountNumber'
    ) d (dLocalTable,dFields)
    IF OBJECT_ID('tempdb..#tempFieldsCheck') IS NOT NULL DROP TABLE #tempFieldsCheck
    SELECT * INTO #tempFieldsCheck
    FROM
    ( --SELECT dLocaltable, dFields from #ifs_tables
    SELECT dLocaltable, (SELECT [Data] FROM dbo.SplitFields(dFields, ',') ) from #ifs_tables
    ) t (tLocalTable, tfields) -- as Data FROM #ifs_tables
    SELECT * FROM #tempFieldsCheck

    Try this
    DECLARE @DemoTable table
    (
    localtable char(100),
    fields varchar(200)
    )
    INSERT INTO @DemoTable values('customertable','fname,lname')
    INSERT INTO @DemoTable values('accounttable','type,accountnumber')
    select * from @DemoTable
    SELECT A.localtable ,
    Split.a.value('.', 'VARCHAR(100)') AS Dept
    FROM (SELECT localtable,
    CAST ('<M>' + REPLACE(fields, ',', '</M><M>') + '</M>' AS XML) AS String
    FROM @DemoTable) AS A CROSS APPLY String.nodes ('/M') AS Split(a);
    Refer:-https://sqlpowershell.wordpress.com/2015/01/09/sql-split-delimited-columns-using-xml-or-udf-function/
    CREATE FUNCTION ParseValues
    (@String varchar(8000), @Delimiter varchar(10) )
    RETURNS @RESULTS TABLE (ID int identity(1,1), Val varchar(8000))
    AS
    BEGIN
    DECLARE @Value varchar(100)
    WHILE @String is not null
    BEGIN
    SELECT @Value=CASE WHEN PATINDEX('%'+@Delimiter+'%',@String) >0 THEN LEFT(@String,PATINDEX('%'+@Delimiter+'%',@String)-1) ELSE @String END, @String=CASE WHEN PATINDEX('%'+@Delimiter+'%',@String) >0 THEN SUBSTRING(@String,PATINDEX('%'+@Delimiter+'%',@String)+LEN(@Delimiter),LEN(@String)) ELSE NULL END
    INSERT INTO @RESULTS (Val)
    SELECT @Value
    END
    RETURN
    END
    SELECT localtable ,f.Val
    FROM @DemoTable t
    CROSS APPLY dbo.ParseValues(t.fields,',')f
    --Prashanth

  • Getting an error: column ambiguously defined

    DECLARE
    p_temptablename  VARCHAR2(30);
    p_loadtablename  VARCHAR2(30);
    p_retval  number;
    BEGIN
       p_retval := 0;
       MERGE INTO TEMP_MED_PARTIAL_RECORDS_0002 Tmpr
            USING (SELECT callstart,
                          seqno,
                          totduration,
                          callreleasetime mplcallreleasetime,
                          connectedcallingnumber mplconnectedcallingnumber,
                          mplimsi,
                          mplchargingid,
                          msisdn mplsisdn,
                          FILEID mplfileid,
                          FILENAME mplfilename,
                          SLNO mplslno,
                          IMEI mplimei,
                          UTCTIMEOFFSET mplutctimeoffset,
                          CAUSEFORTERMINATION mplcausefortermination,
                          CALLTYPE  mplcalltype,
                          SERVICETYPE mplservicetype,
                          SERVICECODE  mplservicecode,
                          SUPPLSERVICECODE  mplsupplservicecode,
                          DIALLEDDIGITS mpldialleddigits,
                          CONNECTEDCALLINGNUMBER mplconnectedcallingnumber,
                          THIRDPARTYNUMBER mplthirdpartynumber,
                          RECORDINGENTITYIDENTIFICATION   mrecordingentityidentification,
                          CALLREFERENCE  mplcallreference,
                          ACCESSPOINTNAMENI   mplaccesspointnameni,
                          ACCESSPOINTNAMEOI     mplaccesspointnameoi,
                          SGSNADDRESS    mplsgsnaddress ,
                          GGSNADDRESS   mplggsnaddress,
                          CHARGEAMOUNT  mplchargeamount,
                          MSISDN  mmsisdn,
                          PDPADDRESS    mplpdpaddress,
                          PLMNID  mplplmnid  ,
                          CELLID   mplcellid ,
                          LOCATIONAREACODE  mpllocationareacode ,
                          RES_1 mplres1,
                          RES_2  mplres2,
                          RES_3  mplres3,
                          RES_4 mplres4,
                          RES_5 mplres5,
                          CALLRELEASETIME mplcallreleasetime
                     FROM (
                     SELECT MIN (CALLEVENTSTARTTIMESTAMP) callstart,
                                     MAX (sequence_number) seqno,
                                     SUM (CALLEVENTDURATION) totduration,
                                     imsi mplimsi,
                                    SUM(DATAVOLUMEINCOMING) download,
                                    SUM(DATAVOLUMEOUTGOING) upload,
                                    chargingid mplchargingid
                                FROM MED_PARTIAL_RECORDS_0002_LOAD
                            GROUP BY chargingid,imsi
                            ) subset ,
                            (select FILEID                   ,
      FILENAME                ,
      SLNO                    ,
      IMEI                          ,
      UTCTIMEOFFSET                 ,
      CAUSEFORTERMINATION           ,
      CALLTYPE                      ,
      SERVICETYPE                   ,
      SERVICECODE                   ,
      SUPPLSERVICECODE              ,
      DIALLEDDIGITS                 ,
      CONNECTEDCALLINGNUMBER        ,
      THIRDPARTYNUMBER              ,
      RECORDINGENTITYIDENTIFICATION ,
      CALLREFERENCE     ,
      ACCESSPOINTNAMENI ,
      ACCESSPOINTNAMEOI ,
      SGSNADDRESS             ,
      GGSNADDRESS             ,
      CHARGINGID       ,
      CHARGEAMOUNT     ,
      MSISDN           ,
      PDPADDRESS       ,
      PLMNID           ,
      CELLID           ,
      LOCATIONAREACODE ,
      RES_1   ,
      RES_2   ,
      RES_3   ,
      RES_4   ,
      RES_5,
      callreleasetime,
      sequence_number,
      imsi,
      chargingid
    from MED_PARTIAL_RECORDS_0002_LOAD
    ) subsetinfo
    where
    SUBSETINFO.IMSI=subset.mplimsi     ----------- column ambiguously defined
    and
    subsetinfo.chargingid=subset.mplchargingid ----------- column ambiguously defined
    and
    subsetinfo.sequence_number=subset.seqno
    mpl
    on
    (tmpr.imsi=mplimsi
    and
    tmpr.chargingid=mplchargingid
    WHEN MATCHED THEN
    update SET sequence_number=seqno,
    --CALLEVENTSTARTTIMESTAMP=callstart,
    CALLEVENTDURATION= totduration,
    datavolumeincoming =downloabytes,
    datavolumeoutgoing=uploadbytes
    WHEN NOT MATCHED THEN
    INSERT (FILEID,
    FILENAME,SLNO,IMSI,IMEI,CALLEVENTSTARTTIMESTAMP,UTCTIMEOFFSET,CALLEVENTDURATION,CAUSEFORTERMINATION,CALLTYPE,
    SERVICETYPE,SERVICECODE,SUPPLSERVICECODE,DIALLEDDIGITS,            
      CONNECTEDCALLINGNUMBER,THIRDPARTYNUMBER,RECORDINGENTITYIDENTIFICATION,CALLREFERENCE,ACCESSPOINTNAMENI,ACCESSPOINTNAMEOI,DATAVOLUMEINCOMING,
      DATAVOLUMEOUTGOING,SGSNADDRESS,                  
      GGSNADDRESS,CHARGINGID,CHARGEAMOUNT,MSISDN,PDPADDRESS,PLMNID,CELLID,LOCATIONAREACODE,RES_1,RES_2,RES_3,RES_4,RES_5,CALLRELEASETIME,SEQUENCE_NUMBER                
    values
    (mplFILEID,mplFILENAME,mplSLNO,mplIMSI,mplIMEI,callstart,mplUTCTIMEOFFSET,totduration,mplCAUSEFORTERMINATION,mplCALLTYPE,mplSERVICETYPE,mplSERVICECODE,mplSUPPLSERVICECODE,
    mplDIALLEDDIGITS,mplCONNECTEDCALLINGNUMBER,mplTHIRDPARTYNUMBER,mRECORDINGENTITYIDENTIFICATION,mplCALLREFERENCE,mplACCESSPOINTNAMENI,mplACCESSPOINTNAMEOI,mplDATAVOLUMEINCOMING,mplDATAVOLUMEOUTGOING,mplSGSNADDRESS,                  
      mplGGSNADDRESS,mplCHARGINGID,mplCHARGEAMOUNT,mplMSISDN,mplPDPADDRESS,mplPLMNID,mplCELLID,mplLOCATIONAREACODE,mplRES1,mplRES2,mplRES3,mplRES4,mplRES5,callend,seqno);
    --commit
    exception
    when others then
    --ROLLBACK
    p_retval:=-1;
    p3_errorlog('partial_stiching',SQLERRM);
    --COMMIT;*/
    end;
    /


  • Space occupied by clustered index Vs non-clustered index

    I am trying to understand indexes. Does a clustered index occupy more space than a non-clustered index, because it carries information about the rest of the columns as well? Could you guys please help me understand this. Thanks in advance.
    svk

    Hi czarvk,
    A clustered index in SQL Server takes up more space than a non-clustered index.
    A clustered index determines how the records are stored in the table, putting them in (key, value) order; all the data are sorted on the values of the index key.
    A non-clustered index is a completely separate object in the table, containing only a subset of columns and a row locator pointing to the table's rows or to the clustered index's key, which is why it is smaller.
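    To see the difference on a concrete table, you can compare per-index page counts from the catalog views (a minimal sketch; dbo.MyTable is a placeholder name):
    SELECT i.name AS index_name,
           i.type_desc,
           SUM(ps.used_page_count) * 8 AS size_kb
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id
     AND i.index_id = ps.index_id
    WHERE ps.object_id = OBJECT_ID('dbo.MyTable')
    GROUP BY i.name, i.type_desc;
    The clustered index row will normally dwarf the non-clustered ones, because the clustered index is the table.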
    If you have any question, please feel free to let me know.
    Regards,
    Donghui Li

  • Data access from Application Server - Seeking Opinion

    I am working on a fairly large-scale ERP application that is written in Java both on the front end and the middle tier (using a JBoss application server).
    All of the database access happens in my app server, and I wanted to get people's opinions on the best way to extract and pass data to and from the database from the app server, and of course to and from the client.
    I am using JDBC to make database calls and extract data from the database. This is of course pretty trivial. I first started writing POJOs that represent tables in my database. I then wrote a fairly lengthy method that uses reflection to call the setter methods and pass in the objects returned from the database result set. This of course becomes tricky when you start to join multiple tables together. And often you only need a small subset of columns in a table and don't require an object with all the columns from the table (some set, and some as null objects).
    So then I decided to use a combination of ArrayList objects and HashMaps to store the data. If a result set returns multiple rows, it returns an ArrayList of HashMap objects. Each hash map contains the data for one row, and the hash map keys are the column names.
    This seems to work pretty well and resolves the problem of joining multiple tables together and keeping track of which columns get stored in which objects.
    Does anyone have a different solution or idea as to how to handle this? Any thoughts or ideas would be greatly appreciated.

    bryano wrote:
    Let me pose a quick Hibernate question as well.
    Let me recommend that you not be so thin-skinned about responses.
    bryano wrote:
    If you had a table that had 40 columns in it and say 1000 rows, and you needed to run a query that returned all 1000 rows but you only needed two out of the 40 columns, would it be better to extract those two columns into the HashMap / ArrayList collection I mentioned in my original post? Or would using something like Hibernate and a class that mapped all of the columns in the table be okay?
    Why not just map the columns you need in Hibernate? Who said you had to map all 40 and have them be null?
    bryano wrote:
    My concern is on efficiency, and I was wondering if building 1000 objects that each have 40 members of which only 2 are populated is the most efficient way of extracting the data.
    Doesn't sound very efficient.
    bryano wrote:
    I will admit, my knowledge of Hibernate is limited at best, so I may be missing a component of Hibernate that would allow you to only extract the columns you required, but you are still working with an object that has 38 null value objects for the columns you didn't require.
    I don't believe you're required to map every column in a table.
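    Since mapping every column isn't required, a Hibernate/JPA entity can map just the subset the query needs. A minimal sketch with hypothetical names (WIDE_TABLE and its columns are placeholders; unmapped columns must be nullable or have defaults if you ever insert through this mapping):
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;
    // maps only the id plus the two columns the query needs;
    // the other ~37 columns of the table are simply not mapped
    @Entity
    @Table(name = "WIDE_TABLE")
    public class WideRowSlim {
        @Id
        @Column(name = "ID")
        private Long id;
        @Column(name = "COL_A")
        private String colA;
        @Column(name = "COL_B")
        private String colB;
        public Long getId() { return id; }
        public String getColA() { return colA; }
        public String getColB() { return colB; }
    }
    Alternatively, an HQL projection such as select w.colA, w.colB from WideRowSlim w returns just those values without instantiating fully populated entities.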

  • SSRS 2008, can I use exec sp and Select combo in dataset query pane for DataSet

    Hi, I'm trying to use this combo for my dataset, i.e. call an sp and then use the table resulting from this sp, and it gives me an error
    (procedure or function has too many arguments specified) while generating the report in Preview, but it runs OK from the query designer; I'm totally lost.
    Can it be done??
    create table #temp (c1....c2)
    insert #temp
    exec sp_1000_Get_Mir
    select c1, c2 from #temp

    You need to use it like below if you want to select a subset of columns from the sp output:
    http://beyondrelational.com/modules/2/blogs/70/posts/10812/select-columns-from-exec-procedurename-is-this-possible.aspx
    But I would still recommend using a wrapper procedure that creates a temp table with the same structure as your sp output and then selects the required columns from it,
    i.e. like below:
    CREATE PROC WrapperProc
    AS
    create table #temp (c1....c2)
    insert #temp
    exec sp_1000_Get_Mir
    select c1, c2 from #temp
    go
    Then call the WrapperProc from your report
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
