I have an Excel sheet that contains more than 8,000 IDs. I have a table in SQL Server that contains those IDs and related entries. What would be the best way to get those rows? The way I am doing it right now is to use the export data
function for the specific table with the query:
select * from table_name where uID in (ALL 8K IDs)
Since this has to be done multiple times, I suggest using BULK INSERT from the CSV file into a temporary SQL table and then joining against that table.
Assuming your CSV file contains the IDs in a single row (i.e. 1,34,345,....), something like this should do the trick:
-- create the temporary table
CREATE TABLE #CSVData
(
IdValue int
)
-- create a clustered index on this table (note: it does not need to be unique)
CREATE CLUSTERED INDEX IX_CSVData on #CSVData (IdValue )
-- insert the csv data to the table
BULK INSERT #CSVData
FROM 'c:\csvData.txt'
WITH
(
ROWTERMINATOR = ','
)
-- select the data
SELECT T.*
FROM table_name T
INNER JOIN #CSVData C ON T.uID = C.IdValue
-- cleanup (the index will be dropped with the table)
DROP TABLE #CSVData
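If the IDs are instead exported one per line (the usual layout when saving a single Excel column as CSV), only the row terminator changes; the rest of the script stays the same. The file path below is the same placeholder as above:

```sql
-- one ID per line: rows end with a line break instead of a comma
BULK INSERT #CSVData
FROM 'c:\csvData.txt'
WITH
(
    ROWTERMINATOR = '\n'
)
```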
One more link to look at is this article by Pinal Dave on SQLAuthority.
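On SQL Server 2016 or later, a quick alternative for ad-hoc runs is to paste the comma-separated list straight into STRING_SPLIT and join on its result, skipping the file and temporary table entirely. This is only a sketch reusing the table_name and uID names from the question, with a shortened ID list in place of the full 8,000:

```sql
SELECT T.*
FROM table_name T
INNER JOIN STRING_SPLIT('1,34,345', ',') S
    ON T.uID = CAST(S.value AS int)
```

For 8,000 IDs run repeatedly, the indexed temporary table above will usually perform better, since STRING_SPLIT produces an unindexed result set.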