
Table 10 Inference time for processing different numbers of images across different cluster architecture configurations

From: Instance segmentation on distributed deep learning big data cluster

| Num | Num workers | Cores per driver | Cores per worker | Memory driver | Memory worker | Num images | Num partitions | Stage-2 time (s) | Stage-3 time | Patch size (images) | Failed tasks |
|-----|-------------|------------------|------------------|---------------|---------------|------------|----------------|------------------|--------------|---------------------|--------------|
| 1  | 1 | 4  | 4  | 14  | 14  | 70   | 8(R)   | 8   | 6.2 min | 8/11  | –  |
| 2  | 2 | 4  | 4  | 14  | 14  | 70   | 8      | 0.5 | 2.1 min | 7/9   | –  |
| 3  | 2 | 4  | 4  | 28  | 14  | 128  | 16(R)  | 8   | 8.5 min | 8     | 4  |
| 4  | 2 | 4  | 4  | 14  | 28  | 128  | 8      | 0.5 | 3.7 min | 17    | –  |
| 5  | 2 | 4  | 4  | 14  | 28  | 128  | 16(R)  | 9   | 7.6 min | 8     | –  |
| 6  | 2 | 4  | 16 | 14  | 64  | 320  | 31     | 0.5 | 3.8 min | 10    | –  |
| 7  | 3 | 8  | 16 | 28  | 64  | 500  | 46     | 0.5 | 3.3 min | 11    | –  |
| 8  | 3 | 8  | 32 | 56  | 128 | 500  | 84     | 0.5 | 2.3 min | 6     | –  |
| 9  | 3 | 8  | 32 | 56  | 128 | 1000 | 91     | 0.5 | 5.4 min | 11    | –  |
| 10 | 4 | 16 | 32 | 56  | 128 | 1355 | 124    | 0.5 | 4.8 min | 11    | –  |
| 11 | 4 | 16 | 32 | 56  | 128 | 500  | 125    | 0.5 | 2.0 min | 4     | –  |
| 12 | 4 | 16 | 32 | 56  | 128 | 500  | 256(R) | 11  | 5.3 min | 4/5   | –  |
| 13 | 4 | 16 | 32 | 56  | 128 | 70   | 70     | 0.5 | 47 sec  | 1     | –  |
| 14 | 3 | 8  | 32 | 28  | 128 | 128  | 64     | 0.5 | 1.4 min | 2     | –  |
| 15 | 3 | 8  | 32 | 28  | 128 | 320  | 80     | 0.5 | 1.6 min | 4     | –  |
| 16 | 3 | 8  | 32 | 28  | 128 | 320  | 96(R)  | 9   | 1.3 min | 8     | –  |
| 17 | 5 | 16 | 32 | 64  | 128 | 1700 | 155    | 0.5 | 9.3 min | 10/11 | 95 |
| 18 | 5 | 16 | 32 | 64  | 128 | 1700 | 288(R) | 15  | 5.3 min | 7/6   | 32 |
| 19 | 6 | 16 | 32 | 56  | 128 | 1700 | 288(R) | 15  | 4.7 min | 5/6   | 32 |
| 20 | 6 | 16 | 32 | 56  | 128 | 1700 | 320(R) | 15  | 4.5 min | 5/6   | –  |
| 21 | 6 | 16 | 32 | 112 | 128 | 1700 | 352(R) | 15  | 5.8 min | 4/5   | 63 |
| 22 | 6 | 16 | 32 | 112 | 128 | 2000 | 352(R) | 15  | 5.7 min | 5/6   | –  |
| 23 | 6 | 16 | 32 | 112 | 128 | 1355 | 288(R) | 8   | 4.3 min | 5/6   | –  |
| 24 | 6 | 16 | 32 | 112 | 128 | 1355 | 170    | 0.5 | 3.3 min | 7/8   | –  |
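The "Patch size (images)" column appears to track how many images each partition (and hence each task) receives: in most rows it matches the number of images divided by the number of partitions, with ranges like 10/11 reflecting an uneven split. A minimal sketch of that relationship, assuming an even-as-possible distribution of images over partitions (the function name `images_per_partition` is illustrative, not from the paper):

```python
import math

def images_per_partition(num_images: int, num_partitions: int) -> tuple[int, int]:
    """Return the (min, max) number of images a single partition holds
    when num_images are spread as evenly as possible over num_partitions."""
    lo = num_images // num_partitions          # smallest partition
    hi = math.ceil(num_images / num_partitions)  # largest partition
    return lo, hi

# Checking a few rows of the table:
print(images_per_partition(70, 70))    # row 13: patch size 1
print(images_per_partition(128, 64))   # row 14: patch size 2
print(images_per_partition(1000, 91))  # row 9: patch size 10/11
```

Under this reading, rows marked (R) with larger partition counts trade smaller patches for more scheduling overhead, which is consistent with their longer Stage-2 times.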